Dataset columns: id (int64, 39 to 79M), url (string, 32 to 168 chars), text (string, 7 to 145k chars), source (string, 2 to 105 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items)
347,694
https://en.wikipedia.org/wiki/List%20of%20algebraic%20topology%20topics
This is a list of algebraic topology topics. Homology (mathematics) Simplex Simplicial complex Polytope Triangulation Barycentric subdivision Simplicial approximation theorem Abstract simplicial complex Simplicial set Simplicial category Chain (algebraic topology) Betti number Euler characteristic Genus Riemann–Hurwitz formula Singular homology Cellular homology Relative homology Mayer–Vietoris sequence Excision theorem Universal coefficient theorem Cohomology List of cohomology theories Cocycle class Cup product Cohomology ring De Rham cohomology Čech cohomology Alexander–Spanier cohomology Intersection cohomology Lusternik–Schnirelmann category Poincaré duality Fundamental class Applications Jordan curve theorem Brouwer fixed point theorem Invariance of domain Lefschetz fixed-point theorem Hairy ball theorem Degree of a continuous mapping Borsuk–Ulam theorem Ham sandwich theorem Homology sphere Homotopy theory Homotopy Path (topology) Fundamental group Homotopy group Seifert–van Kampen theorem Pointed space Winding number Simply connected Universal cover Monodromy Homotopy lifting property Mapping cylinder Mapping cone (topology) Wedge sum Smash product Adjunction space Cohomotopy Cohomotopy group Brown's representability theorem Eilenberg–MacLane space Fibre bundle Möbius strip Line bundle Canonical line bundle Vector bundle Associated bundle Fibration Hopf bundle Classifying space Cofibration Homotopy groups of spheres Plus construction Whitehead theorem Weak equivalence Hurewicz theorem H-space Further developments Künneth theorem De Rham cohomology Obstruction theory Characteristic class Chern class Chern–Simons form Pontryagin class Pontryagin number Stiefel–Whitney class Poincaré conjecture Cohomology operation Steenrod algebra Bott periodicity theorem K-theory Topological K-theory Adams operation Algebraic K-theory Whitehead torsion Twisted K-theory Cobordism Thom space Suspension functor Stable homotopy theory Spectrum (homotopy theory) Morava K-theory Hodge conjecture Weil conjectures Directed algebraic topology Applied topology Example: DE-9IM Homological algebra Chain complex Commutative diagram Exact sequence Five lemma Short five lemma Snake lemma Splitting lemma Extension problem Spectral sequence Abelian category Group cohomology Sheaf Sheaf cohomology Grothendieck topology Derived category History Combinatorial topology See also Glossary of algebraic topology topology glossary List of topology topics List of general topology topics List of geometric topology topics Publications in topology Topological property Mathematics-related lists Outlines of mathematics and logic Outlines
List of algebraic topology topics
[ "Mathematics" ]
533
[ "Fields of abstract algebra", "Topology", "nan", "Algebraic topology" ]
347,838
https://en.wikipedia.org/wiki/Nuclear%20medicine
Nuclear medicine (nuclear radiology, nucleology) is a medical specialty involving the application of radioactive substances in the diagnosis and treatment of disease. Nuclear imaging is, in a sense, radiology done inside out, because it records radiation emitted from within the body rather than radiation that is transmitted through the body from external sources like X-ray generators. In addition, nuclear medicine scans differ from radiology, as the emphasis is not on imaging anatomy but on function. For this reason, it is called a physiological imaging modality. Single photon emission computed tomography (SPECT) and positron emission tomography (PET) scans are the two most common imaging modalities in nuclear medicine. Diagnostic medical imaging Diagnostic In nuclear medicine imaging, radiopharmaceuticals are taken internally, for example, through inhalation, intravenously, or orally. Then, external detectors (gamma cameras) capture and form images from the radiation emitted by the radiopharmaceuticals. This process is unlike a diagnostic X-ray, where external radiation is passed through the body to form an image. There are several techniques of diagnostic nuclear medicine. 2D: Scintigraphy ("scint") is the use of internal radionuclides to create two-dimensional images. 3D: SPECT is a 3D tomographic technique that uses gamma camera data from many projections and can be reconstructed in different planes. Positron emission tomography (PET) uses coincidence detection to image functional processes. Nuclear medicine tests differ from most other imaging modalities in that nuclear medicine scans primarily show the physiological function of the system being investigated, as opposed to traditional anatomical imaging such as CT or MRI. Nuclear medicine imaging studies are generally more organ-, tissue- or disease-specific (e.g. lung scan, heart scan, bone scan, brain scan, tumor, infection, Parkinson's, etc.) than those in conventional radiology imaging, which focus on a particular section of the body (e.g. chest X-ray, abdomen/pelvis CT scan, head CT scan, etc.). In addition, there are nuclear medicine studies that allow imaging of the whole body based on certain cellular receptors or functions. Examples are whole-body PET scans or PET/CT scans, gallium scans, indium white blood cell scans, MIBG scans and octreotide scans. While the ability of nuclear medicine to image disease processes from differences in metabolism is unsurpassed, it is not unique. Certain techniques such as fMRI image tissues (particularly cerebral tissues) by blood flow and thus show metabolism. Also, contrast-enhancement techniques in both CT and MRI show regions of tissue that are handling pharmaceuticals differently, due to an inflammatory process. Diagnostic tests in nuclear medicine exploit the way that the body handles substances differently when there is disease or pathology present. The radionuclide introduced into the body is often chemically bound to a complex that acts characteristically within the body; this is commonly known as a tracer. In the presence of disease, a tracer will often be distributed around the body and/or processed differently. For example, the ligand methylene diphosphonate (MDP) can be preferentially taken up by bone. By chemically attaching technetium-99m to MDP, radioactivity can be transported and attached to bone via the hydroxyapatite for imaging. Any increased physiological function, such as that due to a fracture in the bone, will usually mean increased concentration of the tracer. 
This often results in the appearance of a "hot spot", which is a focal increase in radio-accumulation, or a general increase in radio-accumulation throughout the physiological system. Some disease processes result in the exclusion of a tracer, resulting in the appearance of a "cold spot". Many tracer complexes have been developed to image or treat many different organs, glands, and physiological processes. Hybrid scanning techniques In some centers, the nuclear medicine scans can be superimposed, using software or hybrid cameras, on images from modalities such as CT or MRI to highlight the part of the body in which the radiopharmaceutical is concentrated. This practice is often referred to as image fusion or co-registration, for example SPECT/CT and PET/CT. The fusion imaging technique in nuclear medicine provides information about the anatomy and function that would otherwise be unavailable or would require a more invasive procedure or surgery. Practical concerns in nuclear imaging Although the risks of low-level radiation exposures are not well understood, a cautious approach has been universally adopted that all human radiation exposures should be kept As Low As Reasonably Practicable ("ALARP"). (Originally, this was known as "As Low As Reasonably Achievable" (ALARA), but this has changed in modern draftings of the legislation to add more emphasis on the "Reasonably" and less on the "Achievable".) Working with the ALARP principle, before a patient is exposed for a nuclear medicine examination, the benefit of the examination must be identified. This needs to take into account the particular circumstances of the patient in question, where appropriate. For instance, if a patient is unlikely to be able to tolerate a sufficient amount of the procedure to achieve a diagnosis, then it would be inappropriate to proceed with injecting the patient with the radioactive tracer. When the benefit does justify the procedure, then the radiation exposure (the amount of radiation given to the patient) should also be kept ALARP. This means that the images produced in nuclear medicine should never be better than required for confident diagnosis. Giving larger radiation exposures can reduce the noise in an image and make it more photographically appealing, but if the clinical question can be answered without this level of detail, then this is inappropriate. As a result, the radiation dose from nuclear medicine imaging varies greatly depending on the type of study. The effective radiation dose can be lower than, comparable to, or far in excess of the general day-to-day environmental annual background radiation dose. Likewise, it can also be less than, in the range of, or higher than the radiation dose from an abdomen/pelvis CT scan. Some nuclear medicine procedures require special patient preparation before the study to obtain the most accurate result. Pre-imaging preparations may include dietary preparation or the withholding of certain medications. Patients are encouraged to consult with the nuclear medicine department prior to a scan. Analysis The result of the nuclear medicine imaging process is a dataset comprising one or more images. In multi-image datasets the array of images may represent a time sequence (i.e. cine or movie), often called a "dynamic" dataset, a cardiac-gated time sequence, or a spatial sequence where the gamma camera is moved relative to the patient. 
SPECT (single photon emission computed tomography) is the process by which images acquired from a rotating gamma camera are reconstructed to produce an image of a "slice" through the patient at a particular position. A collection of parallel slices forms a slice-stack, a three-dimensional representation of the distribution of radionuclide in the patient. The nuclear medicine computer may require millions of lines of source code to provide quantitative analysis packages for each of the specific imaging techniques available in nuclear medicine. Time sequences can be further analysed using kinetic models such as multi-compartment models or a Patlak plot (see the sketch at the end of this article). Interventional nuclear medicine Radionuclide therapy can be used to treat conditions such as hyperthyroidism, thyroid cancer, skin cancer and blood disorders. In nuclear medicine therapy, the radiation treatment dose is administered internally (e.g. by intravenous or oral routes) or externally, directly above the area to be treated, in the form of a compound (e.g. in the case of skin cancer). The radiopharmaceuticals used in nuclear medicine therapy emit ionizing radiation that travels only a short distance, thereby minimizing unwanted side effects and damage to noninvolved organs or nearby structures. Most nuclear medicine therapies can be performed as outpatient procedures, since there are few side effects from the treatment and the radiation exposure to the general public can be kept within a safe limit. In some centers the nuclear medicine department may also use implanted capsules of isotopes (brachytherapy) to treat cancer. History The history of nuclear medicine contains contributions from scientists across different disciplines in physics, chemistry, engineering, and medicine. The multidisciplinary nature of nuclear medicine makes it difficult for medical historians to determine the birthdate of nuclear medicine. This can probably be best placed between the discovery of artificial radioactivity in 1934 and the production of radionuclides by Oak Ridge National Laboratory for medicine-related use, in 1946. The origins of this medical idea date back as far as the mid-1920s in Freiburg, Germany, when George de Hevesy performed experiments with radionuclides administered to rats, thus displaying metabolic pathways of these substances and establishing the tracer principle. Possibly, the genesis of this medical field took place in 1936, when John Lawrence, known as "the father of nuclear medicine", took a leave of absence from his faculty position at Yale Medical School to visit his brother Ernest Lawrence at his new radiation laboratory (now known as the Lawrence Berkeley National Laboratory) in Berkeley, California. Later on, John Lawrence made the first application in patients of an artificial radionuclide when he used phosphorus-32 to treat leukemia. Many historians consider the discovery of artificially produced radionuclides by Frédéric Joliot-Curie and Irène Joliot-Curie in 1934 the most significant milestone in nuclear medicine. In February 1934, they reported the first artificial production of radioactive material in the journal Nature, after discovering radioactivity in aluminum foil that was irradiated with a polonium preparation. Their work built upon earlier discoveries by Wilhelm Konrad Roentgen for X-rays, Henri Becquerel for radioactive uranium salts, and Marie Curie (mother of Irène Joliot-Curie) for radioactive thorium and polonium and for coining the term "radioactivity". Taro Takemi studied the application of nuclear physics to medicine in the 1930s. 
The history of nuclear medicine will not be complete without mentioning these early pioneers. Nuclear medicine gained public recognition as a potential specialty on May 11, 1946, when an article by Massachusetts General Hospital's Dr. Saul Hertz and Massachusetts Institute of Technology's Dr. Arthur Roberts, describing the successful treatment of Graves' disease with radioactive iodine (RAI), was published in the Journal of the American Medical Association (JAMA). Additionally, Sam Seidlin brought further development in the field, describing a successful treatment of a patient with thyroid cancer metastases using radioiodine (I-131). These articles are considered by many historians the most important articles ever published in nuclear medicine. Although the earliest use of I-131 was devoted to therapy of thyroid cancer, its use was later expanded to include imaging of the thyroid gland, quantification of thyroid function, and therapy for hyperthyroidism. Among the many radionuclides that were discovered for medical use, none was as important as the discovery and development of technetium-99m. It was first discovered in 1937 by C. Perrier and E. Segrè as an artificial element to fill space number 43 in the periodic table. The development in the 1960s of a generator system to produce technetium-99m made it practical for medical use. Today, technetium-99m is the most utilized element in nuclear medicine and is employed in a wide variety of nuclear medicine imaging studies. Widespread clinical use of nuclear medicine began in the early 1950s, as knowledge expanded about radionuclides, detection of radioactivity, and using certain radionuclides to trace biochemical processes. Pioneering works by Benedict Cassen in developing the first rectilinear scanner and Hal O. Anger's scintillation camera (Anger camera) broadened the young discipline of nuclear medicine into a full-fledged medical imaging specialty. By the early 1960s, in southern Scandinavia, Niels A. Lassen, David H. Ingvar, and Erik Skinhøj developed techniques that provided the first blood flow maps of the brain, which initially involved xenon-133 inhalation; an intra-arterial equivalent was developed soon after, enabling measurement of the local distribution of cerebral activity for patients with neuropsychiatric disorders such as schizophrenia. Later versions would have 254 scintillators so that a two-dimensional image could be produced on a color monitor. This allowed them to construct images reflecting brain activation from speaking, reading, visual or auditory perception and voluntary movement. The technique was also used to investigate, e.g., imagined sequential movements, mental calculation and mental spatial navigation. By the 1970s most organs of the body could be visualized using nuclear medicine procedures. In 1971, the American Medical Association officially recognized nuclear medicine as a medical specialty. In 1972, the American Board of Nuclear Medicine was established, and in 1974, the American Osteopathic Board of Nuclear Medicine was established, cementing nuclear medicine as a stand-alone medical specialty. In the 1980s, radiopharmaceuticals were designed for use in the diagnosis of heart disease. The development of single photon emission computed tomography (SPECT) around the same time led to three-dimensional reconstruction of the heart and the establishment of the field of nuclear cardiology. More recent developments in nuclear medicine include the invention of the first positron emission tomography (PET) scanner. 
The concept of emission and transmission tomography, later developed into single photon emission computed tomography (SPECT), was introduced by David E. Kuhl and Roy Edwards in the late 1950s. Their work led to the design and construction of several tomographic instruments at the University of Pennsylvania. Tomographic imaging techniques were further developed at the Washington University School of Medicine. These innovations led to fusion imaging with SPECT and CT by Bruce Hasegawa from the University of California, San Francisco (UCSF), and the first PET/CT prototype by D. W. Townsend from the University of Pittsburgh in 1998. PET and PET/CT imaging experienced slower growth in their early years owing to the cost of the modality and the requirement for an on-site or nearby cyclotron. However, an administrative decision to approve medical reimbursement of limited PET and PET/CT applications in oncology has led to phenomenal growth and widespread acceptance over the last few years, which was also facilitated by the establishment of 18F-labelled tracers for standard procedures, allowing work at sites not equipped with a cyclotron. A fully integrated MRI/PET scanner has been on the market since early 2011. Sources of radionuclides 99mTc is normally supplied to hospitals through a radionuclide generator containing the parent radionuclide molybdenum-99. 99Mo is typically obtained as a fission product of 235U in nuclear reactors; however, global supply shortages have led to the exploration of other methods of production. About a third of the world's supply, and most of Europe's supply, of medical isotopes is produced at the Petten nuclear reactor in the Netherlands. Another third of the world's supply, and most of North America's supply, was produced at the Chalk River Laboratories in Chalk River, Ontario, Canada, until its permanent shutdown in 2018. The most commonly used radioisotope in PET, 18F, is not produced in a nuclear reactor, but rather in a circular accelerator called a cyclotron. The cyclotron is used to accelerate protons to bombard the stable heavy isotope of oxygen 18O. The 18O constitutes about 0.20% of ordinary oxygen (mostly oxygen-16), from which it is extracted. The 18F is then typically used to make FDG. A typical nuclear medicine study involves administration of a radionuclide into the body by intravenous injection in liquid or aggregate form, ingestion while combined with food, inhalation as a gas or aerosol, or, rarely, injection of a radionuclide that has undergone micro-encapsulation. Some studies require the labeling of a patient's own blood cells with a radionuclide (leukocyte scintigraphy and red blood cell scintigraphy). Most diagnostic radionuclides emit gamma rays either directly from their decay or indirectly through electron–positron annihilation, while the cell-damaging properties of beta particles are used in therapeutic applications. Refined radionuclides for use in nuclear medicine are derived from fission or fusion processes in nuclear reactors, which produce radionuclides with longer half-lives, or cyclotrons, which produce radionuclides with shorter half-lives, or take advantage of natural decay processes in dedicated generators, i.e. molybdenum/technetium or strontium/rubidium. The most commonly used intravenous radionuclides are technetium-99m, iodine-123, iodine-131, thallium-201, gallium-67, fluorine-18 fluorodeoxyglucose, and indium-111 labeled leukocytes. 
The most commonly used gaseous/aerosol radionuclides are xenon-133, krypton-81m, and (aerosolised) technetium-99m. Policies and procedures Radiation dose A patient undergoing a nuclear medicine procedure will receive a radiation dose. Under present international guidelines it is assumed that any radiation dose, however small, presents a risk. The radiation dose delivered to a patient in a nuclear medicine investigation, though unproven, is generally accepted to present a very small risk of inducing cancer. In this respect it is similar to the risk from X-ray investigations, except that the dose is delivered internally rather than from an external source such as an X-ray machine, and dosage amounts are typically significantly higher than those of X-rays. The radiation dose from a nuclear medicine investigation is expressed as an effective dose with units of sieverts (usually given in millisieverts, mSv). The effective dose resulting from an investigation is influenced by the amount of radioactivity administered in megabecquerels (MBq), the physical properties of the radiopharmaceutical used, its distribution in the body and its rate of clearance from the body. Effective doses can range from 6 μSv (0.006 mSv) for a 3 MBq chromium-51 EDTA measurement of glomerular filtration rate to 11.2 mSv (11,200 μSv) for an 80 MBq thallium-201 myocardial imaging procedure. The common bone scan with 600 MBq of technetium-99m MDP has an effective dose of approximately 2.9 mSv (2,900 μSv). Formerly, the units of measurement were: the curie (Ci), equal to 3.7 × 10¹⁰ Bq and also to the activity of 1.0 gram of radium (Ra-226); the rad (radiation absorbed dose), now replaced by the gray; and the rem (Röntgen equivalent man), now replaced by the sievert. The rad and rem are essentially equivalent for almost all nuclear medicine procedures; only alpha radiation will produce a higher rem or Sv value, due to its much higher relative biological effectiveness (RBE). Alpha emitters are nowadays rarely used in nuclear medicine, but were used extensively before the advent of nuclear reactor and accelerator produced radionuclides. The concepts involved in radiation exposure to humans are covered by the field of health physics; the development and practice of safe and effective nuclear medicinal techniques is a key focus of medical physics. Regulatory frameworks and guidelines Different countries around the world maintain regulatory frameworks that are responsible for the management and use of radionuclides in different medical settings. For example, in the US, the Nuclear Regulatory Commission (NRC) and the Food and Drug Administration (FDA) have guidelines in place for hospitals to follow. The NRC regulates only radioactive materials; modalities that do not involve them, such as X-rays, are instead regulated by the individual states. International organizations, such as the International Atomic Energy Agency (IAEA), have regularly published different articles and guidelines for best practices in nuclear medicine as well as reporting on emerging technologies in nuclear medicine. Other factors that are considered in nuclear medicine include a patient's medical history as well as post-treatment management. Groups like the International Commission on Radiological Protection have published information on how to manage the release of patients from a hospital with unsealed radionuclides. 
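The dose figures quoted in the Radiation dose section above amount to a simple per-MBq dose coefficient for each radiopharmaceutical, which is how effective dose is estimated in practice: administered activity multiplied by a radiopharmaceutical-specific coefficient. A minimal Python sketch, with coefficients back-calculated from the example figures above (illustrative only, not reference dosimetry data):

```python
# Effective dose (mSv) = administered activity (MBq) x dose coefficient (mSv/MBq).
# Coefficients below are back-calculated from the examples in the text and are
# illustrative only, not reference dosimetry values.
DOSE_COEFF_MSV_PER_MBQ = {
    "Cr-51 EDTA (GFR)":    0.006 / 3,   # 6 uSv at 3 MBq
    "Tc-99m MDP (bone)":   2.9 / 600,   # ~2.9 mSv at 600 MBq
    "Tl-201 (myocardial)": 11.2 / 80,   # 11.2 mSv at 80 MBq
}

def effective_dose_msv(activity_mbq: float, coeff_msv_per_mbq: float) -> float:
    return activity_mbq * coeff_msv_per_mbq

for study, coeff in DOSE_COEFF_MSV_PER_MBQ.items():
    print(f"{study}: {coeff:.4f} mSv/MBq")

# Example: a hypothetical 500 MBq Tc-99m MDP bone scan
print(effective_dose_msv(500, DOSE_COEFF_MSV_PER_MBQ["Tc-99m MDP (bone)"]))  # ~2.4 mSv

# Legacy unit conversion mentioned above: 1 Ci = 3.7e10 Bq
print(600e6 / 3.7e10, "Ci in 600 MBq")  # ~0.016 Ci
```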
See also Human subject research List of Nuclear Medicine Societies Nuclear medicine physician Nuclear pharmacy Nuclear technology Radiographer References Further reading External links Solving the Medical Isotope Crisis Hearing before the Subcommittee on Energy and Environment of the Committee on Energy and Commerce, House of Representatives, One Hundred Eleventh Congress, First Session, September 9, 2009 Radiology Medicinal radiochemistry
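The Patlak plot mentioned under Analysis above reduces to a linear regression: for an irreversibly trapped tracer, tissue activity divided by plasma activity is linear in the running integral of plasma activity divided by plasma activity, with slope equal to the influx constant Ki. A minimal sketch with synthetic data; the input function and parameter values are assumptions for illustration, not published values:

```python
# Patlak plot sketch: for an irreversibly trapped tracer,
# C_tissue(t)/C_plasma(t) = Ki * integral(C_plasma)/C_plasma(t) + V0,
# so a straight-line fit recovers the influx constant Ki and volume term V0.
import numpy as np

t = np.linspace(0.1, 60.0, 120)                   # minutes
c_plasma = 10.0 * np.exp(-0.1 * t) + 1.0          # synthetic input function
int_cp = np.cumsum(c_plasma) * (t[1] - t[0])      # running integral of C_plasma

Ki_true, V0_true = 0.05, 0.3                      # assumed ground truth
c_tissue = Ki_true * int_cp + V0_true * c_plasma  # irreversible-uptake model

x = int_cp / c_plasma                             # "Patlak time"
y = c_tissue / c_plasma
Ki_est, V0_est = np.polyfit(x, y, 1)              # linear fit: slope = Ki
print(Ki_est, V0_est)                             # ~0.05, ~0.3
```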
Nuclear medicine
[ "Chemistry" ]
4,364
[ "Medicinal radiochemistry", "Medicinal chemistry" ]
14,661,792
https://en.wikipedia.org/wiki/Nicotinate-nucleotide%E2%80%94dimethylbenzimidazole%20phosphoribosyltransferase
In enzymology, a nicotinate-nucleotide—dimethylbenzimidazole phosphoribosyltransferase (EC 2.4.2.21) is an enzyme that catalyzes the chemical reaction

beta-nicotinate D-ribonucleotide + 5,6-dimethylbenzimidazole ⇌ nicotinate + alpha-ribazole 5'-phosphate

Thus, the two substrates of this enzyme are beta-nicotinate D-ribonucleotide and 5,6-dimethylbenzimidazole, whereas its two products are nicotinate and alpha-ribazole 5'-phosphate. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is nicotinate-nucleotide:5,6-dimethylbenzimidazole phospho-D-ribosyltransferase. Other names in common use include CobT, nicotinate mononucleotide-dimethylbenzimidazole phosphoribosyltransferase, nicotinate ribonucleotide:benzimidazole (adenine) phosphoribosyltransferase, nicotinate-nucleotide:dimethylbenzimidazole phospho-D-ribosyltransferase, and nicotinate mononucleotide (NaMN):5,6-dimethylbenzimidazole phosphoribosyltransferase. This enzyme is part of the biosynthetic pathway to cobalamin (vitamin B12) in bacteria. Function This enzyme plays a central role in the synthesis of alpha-ribazole-5'-phosphate, an intermediate for the lower ligand of cobalamin. It is one of the enzymes of the anaerobic pathway of cobalamin biosynthesis, and one of the four proteins (CobU, CobT, CobC, and CobS) involved in the synthesis of the lower ligand and the assembly of the nucleotide loop. Biosynthesis of cobalamin Vitamin B12 (cobalamin) is used as a cofactor in a number of enzyme-catalysed reactions in bacteria, archaea and eukaryotes. The biosynthetic pathway to adenosylcobalamin from its five-carbon precursor, 5-aminolaevulinic acid, can be divided into three sections: (1) the biosynthesis of uroporphyrinogen III from 5-aminolaevulinic acid; (2) the conversion of uroporphyrinogen III into the ring-contracted, deacylated intermediate precorrin 6 or cobalt-precorrin 6; and (3) the transformation of this intermediate to form adenosylcobalamin. Cobalamin is synthesised by bacteria and archaea via two alternative routes that differ primarily in the steps of section 2 that lead to the contraction of the macrocycle and excision of the extruded carbon atom (and its attached methyl group). One pathway (exemplified by Pseudomonas denitrificans) incorporates molecular oxygen into the macrocycle as a prerequisite to ring contraction, and has consequently been termed the aerobic pathway. The alternative, anaerobic, route (exemplified by Salmonella typhimurium) takes advantage of a chelated cobalt ion, in the absence of oxygen, to set the stage for ring contraction. Structural studies As of late 2007, 28 structures had been solved for this class of enzymes and deposited in the Protein Data Bank. References Further reading Protein domains EC 2.4.2 Enzymes of known structure
Nicotinate-nucleotide—dimethylbenzimidazole phosphoribosyltransferase
[ "Biology" ]
819
[ "Protein domains", "Protein classification" ]
14,662,101
https://en.wikipedia.org/wiki/Biositemap
A Biositemap is a way for a biomedical research institution or organisation to show how biological information is distributed throughout their Information Technology systems and networks. This information may be shared with other organisations and researchers. The Biositemap enables web browsers, crawlers and robots to easily access and process the information to use in other systems, media and computational formats. Biositemaps protocols provide clues for the Biositemap web harvesters, allowing them to find resources and content across the whole interlinked Biositemap system. This means that human or machine users can access any relevant information on any topic across all organisations throughout the Biositemap system and bring it to their own systems for assimilation or analysis. File framework The information is normally stored in a biositemap.rdf or biositemap.xml file which contains lists of information about the data, software, tools, material and services provided or held by that organisation. Information is presented in metafields and can be created online through sites such as the Biositemaps online editor. The information is a blend of sitemaps and RSS feeds and is created using the Information Model (IM) and Biomedical Resource Ontology (BRO). The IM is responsible for defining the data held in the metafields and the BRO controls the terminology of the data held in the resource_type field. The BRO is critical in aiding the interactivity of both the other organisations and third parties to search and refine those searches. Data formats The Biositemaps Protocol allows scientists, engineers, centers and institutions engaged in modeling, software tool development and analysis of biomedical and informatics data to broadcast and disseminate to the world the information about their latest computational biology resources (data, software tools and web services). The biositemap concept is based on ideas from Efficient, Automated Web Resource Harvesting and Crawler-friendly Web Servers, and it integrates the features of sitemaps and RSS feeds into a decentralized mechanism for computational biologists and bio-informaticians to openly broadcast and retrieve meta-data about biomedical resources. These site-, institution-, or investigator-specific biositemap descriptions are published in RDF format online and are searched, parsed, monitored and interpreted by web search engines, web applications specific to biositemaps and ontologies, and other applications interested in discovering updated or novel resources for bioinformatics and biomedical research investigations. The biositemap mechanism separates the providers of biomedical resources (investigators or institutions) from the consumers of resource content (researchers, clinicians, news media, funding agencies, educational and research initiatives). A Biositemap is an RDF file that lists the biomedical and bioinformatics resources for a specific research group or consortium. It allows developers of biomedical resources to describe the functionality and usability of each of their software tools, databases or web services. Biositemaps supplement and do not replace the existing frameworks for dissemination of data, tools and services. Using a biositemap does not guarantee that resources will be included in search indexes, nor does it influence the way that tools are ranked or perceived by the community. 
What the Biositemaps protocol will do is provide clues, information and directives to all Biositemap web harvesters that point to the existence and content of biomedical resources at different sites. Biositemap Information Model The Biositemap protocol relies on an extensible information model that includes specific properties that are commonly used and necessary for characterizing biomedical resources: Name, Description, URL, Stage of development, Organization, Resource Ontology Label, Keywords, and License. Up-to-date documentation on the information model is available at the Biositemaps website. A sketch of what one such entry might look like in RDF is shown below. See also Information visualization ITools Resourceome Sitemaps References External links Biomedical Resource Ontology Biositemaps online editor Domain-specific knowledge representation languages Biological techniques and tools Bioinformatics
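Since the article names the fields of the Information Model but not their formal URIs, the following Python sketch uses the rdflib library to serialize one hypothetical entry; the namespace, property names, and the tool described are illustrative stand-ins, not the official BRO/IM schema.

```python
# A minimal sketch of serializing one Biositemap entry as RDF with rdflib.
# The namespace URI and property names are hypothetical stand-ins for the
# Information Model fields named above, not the official schema.
from rdflib import RDF, Graph, Literal, Namespace, URIRef

BIO = Namespace("http://example.org/biositemaps#")  # hypothetical namespace

g = Graph()
g.bind("bio", BIO)

resource = URIRef("http://example.org/tools/segmenter")  # hypothetical tool URI
g.add((resource, RDF.type, BIO.Resource))
g.add((resource, BIO.name, Literal("Example Segmentation Tool")))
g.add((resource, BIO.description, Literal("Segments anatomical structures in MRI volumes.")))
g.add((resource, BIO.stageOfDevelopment, Literal("beta")))
g.add((resource, BIO.organization, Literal("Example University")))
g.add((resource, BIO.resourceOntologyLabel, Literal("Image segmentation software")))
g.add((resource, BIO.keywords, Literal("MRI, segmentation, neuroimaging")))
g.add((resource, BIO.license, Literal("BSD-3-Clause")))

# Publish as biositemap.rdf (RDF/XML), the file format named in the article.
print(g.serialize(format="xml"))
```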
Biositemap
[ "Engineering", "Biology" ]
798
[ "Bioinformatics", "Biological engineering", "nan" ]
14,668,195
https://en.wikipedia.org/wiki/Gel%20point
In polymer chemistry, the gel point is an abrupt change in the viscosity of a solution containing polymerizable components. At the gel point, a solution undergoes gelation, as reflected in a loss of fluidity. After the monomer/polymer solution has passed the gel point, internal stress builds up in the gel phase, which can lead to volume shrinkage. Gelation is characteristic of polymerizations that include crosslinkers that can form 2- or 3-dimensional networks. For example, the condensation of a dicarboxylic acid and a triol will give rise to a gel, whereas the same dicarboxylic acid and a diol will not. The gel is often a small percentage of the mixture, even though it greatly influences the properties of the bulk. Mathematical definition An infinite polymer network appears at the gel point. Assuming that it is possible to measure the extent of reaction, $p$, defined as the fraction of monomers that appear in cross-links, the gel point can be determined. For the crosslinking of pre-formed chains of $N$ monomers each, the critical extent of reaction $p_c$ at which a gel first forms is given by:

$p_c = \frac{1}{N-1} \approx \frac{1}{N}$

For example, a polymer with N ≈ 200 is able to reach the gel point with only 0.5% of monomers reacting. This shows the ease with which polymers are able to form infinite networks. The critical extent of reaction for gelation can also be determined, via Flory–Stockmayer theory, as a function of the properties of the monomer mixture: the stoichiometric ratio $r$ of the two types of functional groups and their functionalities $f$ and $g$:

$p_c = \frac{1}{\sqrt{r\,(f-1)(g-1)}}$

See also Pour point Cold filter plugging point Petroleum References Further reading Polymer physics Chemical properties
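A small numeric sketch of the two estimates above, assuming the Flory–Stockmayer form reconstructed in the text; the functionalities match the article's dicarboxylic acid (f = 2) plus triol (g = 3) example:

```python
# Gelation estimates: Flory-Stockmayer p_c = 1/sqrt(r*(f-1)*(g-1)) for a
# two-monomer mixture, and p_c ~ 1/N for crosslinking pre-formed chains.
# Both formulas are the reconstructions assumed in the text above.
import math

def flory_stockmayer_pc(r: float, f: int, g: int) -> float:
    """Critical extent of reaction for monomers of functionality f and g
    mixed at stoichiometric ratio r."""
    return 1.0 / math.sqrt(r * (f - 1) * (g - 1))

# Dicarboxylic acid (f = 2) + triol (g = 3) at stoichiometric balance (r = 1):
print(flory_stockmayer_pc(1.0, 2, 3))  # ~0.707: gel at ~71% conversion

# Crosslinking pre-formed chains of N ~ 200 monomers:
N = 200
print(1.0 / N)                         # 0.005: gel with only 0.5% reacted
```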
Gel point
[ "Chemistry", "Materials_science" ]
310
[ "Polymer physics", "Polymer chemistry", "nan" ]
14,670,096
https://en.wikipedia.org/wiki/Regulator%20of%20G%20protein%20signaling
Regulators of G protein signaling (RGS) are protein structural domains, or the proteins that contain these domains, that function to activate the GTPase activity of heterotrimeric G-protein α-subunits. RGS proteins are multi-functional, GTPase-accelerating proteins that promote GTP hydrolysis by the α-subunit of heterotrimeric G proteins, thereby inactivating the G protein and rapidly switching off G protein-coupled receptor signaling pathways. Upon activation by receptors, G proteins exchange GDP for GTP, are released from the receptor, and dissociate into a free, active GTP-bound α-subunit and βγ-dimer, both of which activate downstream effectors. The response is terminated upon GTP hydrolysis by the α-subunit, which can then re-bind the βγ-dimer and the receptor. RGS proteins markedly reduce the lifespan of GTP-bound α-subunits by stabilising the G protein transition state. Whereas receptors stimulate GTP binding, RGS proteins stimulate GTP hydrolysis. RGS proteins have been conserved in evolution. The first to be identified was Sst2 ("SuperSensiTivity to pheromone") in yeast (Saccharomyces cerevisiae). All RGS proteins contain an RGS-box (or RGS domain), which is required for activity. Some small RGS proteins such as RGS1 and RGS4 are little more than an RGS domain, while others also contain additional domains that confer further functionality. RGS domains in the G protein-coupled receptor kinases (GRKs) are able to bind to Gq family α-subunits, but do not accelerate their GTP hydrolysis. Instead, GRKs appear to reduce Gq signaling by sequestering the active α-subunits away from effectors such as phospholipase C-β. Plants have RGS proteins but do not have canonical G protein-coupled receptors. Thus G proteins and GTPase-accelerating proteins appear to have evolved before any known G protein activator. RGS domains can be found within the same protein in combination with a variety of other domains, including: DEP for membrane targeting, PDZ for binding to GPCRs, PTB for phosphotyrosine binding, RBD for Ras binding, GoLoco for guanine nucleotide inhibitor activity, PX for phosphoinositide binding, PXA that is associated with PX, PH for phosphatidylinositol binding, and GGL (G protein gamma subunit-like) for binding G protein beta subunits. Those RGS proteins that contain GGL domains can interact with G protein beta subunits to form novel dimers that prevent G protein gamma subunit binding and G protein alpha subunit association, thereby preventing heterotrimer formation. Examples Human proteins containing this domain include: AXIN1, AXIN2 GRK1, GRK2, GRK3, GRK4, GRK5, GRK6, GRK7 RGS1, RGS2, RGS3, RGS4, RGS5, RGS6, RGS7, RGS8, RGS9, RGS10, RGS11, RGS12, RGS13, RGS14, RGS16, RGS17, RGS18, RGS19, RGS20, RGS21 SNX13 See also GTP-binding protein regulators: GEF GAP References Further reading External links in PROSITE G proteins Protein domains Peripheral membrane proteins
Regulator of G protein signaling
[ "Chemistry", "Biology" ]
772
[ "G proteins", "Protein domains", "Protein classification", "Signal transduction" ]
14,670,825
https://en.wikipedia.org/wiki/Mark%E2%80%93Houwink%20equation
The Mark–Houwink equation, also known as the Mark–Houwink–Sakurada equation, the Kuhn–Mark–Houwink–Sakurada equation or the Landau–Kuhn–Mark–Houwink–Sakurada equation, gives a relation between intrinsic viscosity $[\eta]$ and molecular weight $M$:

$[\eta] = K M^a$

From this equation the molecular weight of a polymer can be determined from data on the intrinsic viscosity and vice versa. The values of the Mark–Houwink parameters, $K$ and $a$, depend on the particular polymer-solvent system as well as temperature. A value of $a = 0.5$ is indicative of a theta solvent. A value of $a = 0.8$ is typical for good solvents. For most flexible polymers, $0.5 \le a \le 0.8$. For semi-flexible polymers, $a \ge 0.8$. For polymers with an absolutely rigid rod shape, such as Tobacco mosaic virus, $a = 2.0$. It is named after Herman F. Mark and Roelof Houwink. Applications The Mark–Houwink equation is used in size-exclusion chromatography (SEC) to construct the so-called universal calibration curve, which can be used to determine the molecular weight of a polymer A using a calibration done with polymer B. In SEC molecules are separated based on hydrodynamic volume, i.e. the size of the coil a given polymer forms in solution. The hydrodynamic volume, however, cannot simply be related to molecular weight (compare comb-like polystyrene vs. linear polystyrene). This means that the molecular weight associated with a given retention volume is substance-specific and that in order to determine the molecular weight of a given polymer a molecular-weight size marker of the same substance must be available. However, the product of the intrinsic viscosity and the molecular weight, $[\eta] M$, is proportional to the hydrodynamic volume and therefore independent of substance. It follows that

$[\eta]_A M_A = [\eta]_B M_B$

is true at any given retention volume. Substitution of $[\eta]$ using the Mark–Houwink equation gives:

$K_A M_A^{1+a_A} = K_B M_B^{1+a_B}$

which can be used to relate the molecular weight of any two polymers using their Mark–Houwink constants (i.e. "universally" applicable for calibration). For example, if narrow molar mass distribution standards are available for polystyrene, these can be used to construct a calibration curve (typically $\log M$ vs. retention volume) in e.g. toluene at 40 °C. This calibration can then be used to determine the "polystyrene equivalent" molecular weight of a polyethylene sample if the Mark–Houwink parameters for both substances are known in this solvent at this temperature. A numeric sketch of this conversion is given below. References Polymer chemistry
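As a sketch of the universal-calibration algebra above, the following converts a molecular weight measured against polymer B standards into the molecular weight of polymer A eluting at the same retention volume; the Mark–Houwink constants are illustrative placeholders, not recommended literature values for any specific solvent or temperature:

```python
# Universal calibration: at equal retention volume [eta]_A*M_A = [eta]_B*M_B,
# so with the Mark-Houwink equation K_B*M_B^(1+a_B) = K_A*M_A^(1+a_A).
# The constants below are illustrative placeholders only.

def universal_calibration_mw(m_b: float, k_b: float, a_b: float,
                             k_a: float, a_a: float) -> float:
    """Molecular weight of polymer A with the same hydrodynamic volume as
    polymer B of molecular weight m_b."""
    return (k_b * m_b ** (1 + a_b) / k_a) ** (1 / (1 + a_a))

K_B, a_B = 1.4e-4, 0.70  # hypothetical constants for the standard (polymer B)
K_A, a_A = 4.0e-4, 0.73  # hypothetical constants for the sample (polymer A)

print(universal_calibration_mw(1.0e5, K_B, a_B, K_A, a_A))  # ~4.5e4, sample MW
```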
Mark–Houwink equation
[ "Chemistry", "Materials_science", "Engineering" ]
538
[ "Materials science", "Polymer chemistry" ]
13,525,027
https://en.wikipedia.org/wiki/Dual%20norm
In functional analysis, the dual norm is a measure of size for a continuous linear function defined on a normed vector space. Definition Let $X$ be a normed vector space with norm $\|\cdot\|$ and let $X^*$ denote its continuous dual space. The dual norm of a continuous linear functional $f$ belonging to $X^*$ is the non-negative real number defined by any of the following equivalent formulas:

$\|f\| = \sup\{\,|f(x)| : \|x\| \le 1\,\} = \sup\{\,|f(x)| : \|x\| < 1\,\} = \inf\{\,c \in [0, \infty) : |f(x)| \le c\,\|x\| \text{ for all } x \in X\,\} = \sup\{\,|f(x)| : \|x\| = 1\,\} = \sup\left\{\,\tfrac{|f(x)|}{\|x\|} : x \ne 0\,\right\}$

where $\sup$ and $\inf$ denote the supremum and infimum, respectively. The constant $0$ map is the origin of the vector space $X^*$ and it always has norm $\|0\| = 0$. If $X = \{0\}$ then the only linear functional on $X$ is the constant $0$ map and moreover, the sets in the last two rows will both be empty and consequently, their supremums will equal $\sup \varnothing = -\infty$ instead of the correct value of $0$. Importantly, a linear function $f$ is not, in general, guaranteed to achieve its norm $\|f\| = \sup\{|f(x)| : \|x\| \le 1\}$ on the closed unit ball $\{x : \|x\| \le 1\}$, meaning that there might not exist any vector $u$ of norm $\|u\| \le 1$ such that $\|f\| = |f(u)|$ (if such a vector does exist and if $f \ne 0$, then $u$ would necessarily have unit norm $\|u\| = 1$). R.C. James proved James's theorem in 1964, which states that a Banach space is reflexive if and only if every bounded linear function achieves its norm on the closed unit ball. It follows, in particular, that every non-reflexive Banach space has some bounded linear functional that does not achieve its norm on the closed unit ball. However, the Bishop–Phelps theorem guarantees that the set of bounded linear functionals that achieve their norm on the unit sphere of a Banach space is a norm-dense subset of the continuous dual space. The map $f \mapsto \|f\|$ defines a norm on $X^*$. (See Theorems 1 and 2 below.) The dual norm is a special case of the operator norm defined for each (bounded) linear map between normed vector spaces. Since the ground field of $X$ ($\mathbb{R}$ or $\mathbb{C}$) is complete, $X^*$ is a Banach space. The topology on $X^*$ induced by $\|\cdot\|$ turns out to be stronger than the weak-* topology on $X^*$. The double dual of a normed linear space The double dual (or second dual) $X^{**}$ of $X$ is the dual of the normed vector space $X^*$. There is a natural map $\varphi : X \to X^{**}$. Indeed, for each $w^*$ in $X^*$ define $\varphi(v)(w^*) := w^*(v)$. The map $\varphi$ is linear, injective, and distance preserving. In particular, if $X$ is complete (i.e. a Banach space), then $\varphi$ is an isometry onto a closed subspace of $X^{**}$. In general, the map $\varphi$ is not surjective. For example, if $X$ is the Banach space $L^\infty$ consisting of bounded functions on the real line with the supremum norm, then the map $\varphi$ is not surjective. (See ba space.) If $\varphi$ is surjective, then $X$ is said to be a reflexive Banach space. If $1 < p < \infty$, then the space $L^p$ is a reflexive Banach space. Examples Dual norm for matrices The Frobenius norm, defined by

$\|A\|_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^2} = \sqrt{\operatorname{trace}(A^* A)}$

is self-dual, i.e., its dual norm is $\|\cdot\|_F$. The spectral norm, a special case of the induced norm when $p = 2$, is defined by the maximum singular value of a matrix, that is, $\|A\|_2 = \sigma_{\max}(A)$. It has the nuclear norm as its dual norm, which is defined by $\|B\|_* = \sum_i \sigma_i(B)$ for any matrix $B$, where $\sigma_i(B)$ denote the singular values. If $p, q \in [1, \infty]$ satisfy $\tfrac{1}{p} + \tfrac{1}{q} = 1$, the Schatten $p$-norm on matrices is dual to the Schatten $q$-norm. Finite-dimensional spaces Let $\|\cdot\|$ be a norm on $\mathbb{R}^n$. The associated dual norm, denoted $\|\cdot\|_*$, is defined as

$\|z\|_* = \sup\{\,z^\top x : \|x\| \le 1\,\}.$

(This can be shown to be a norm.) The dual norm can be interpreted as the operator norm of $z^\top$, interpreted as a $1 \times n$ matrix, with the norm $\|\cdot\|$ on $\mathbb{R}^n$, and the absolute value on $\mathbb{R}$:

$\|z\|_* = \sup\{\,|z^\top x| : \|x\| \le 1\,\}.$

From the definition of dual norm we have the inequality $z^\top x \le \|x\|\, \|z\|_*$, which holds for all $x$ and $z$. The dual of the dual norm is the original norm: we have $\|x\|_{**} = \|x\|$ for all $x$. (This need not hold in infinite-dimensional vector spaces.) The dual of the Euclidean norm is the Euclidean norm, since $\sup\{\,z^\top x : \|x\|_2 \le 1\,\} = \|z\|_2$. (This follows from the Cauchy–Schwarz inequality; for nonzero $z$, the value of $x$ that maximises $z^\top x$ over $\|x\|_2 \le 1$ is $\tfrac{z}{\|z\|_2}$.) The dual of the $\ell_\infty$-norm is the $\ell_1$-norm: $\sup\{\,z^\top x : \|x\|_\infty \le 1\,\} = \sum_{i=1}^n |z_i| = \|z\|_1$, and the dual of the $\ell_1$-norm is the $\ell_\infty$-norm. 
More generally, Hölder's inequality shows that the dual of the $\ell_p$-norm is the $\ell_q$-norm, where $q$ satisfies $\tfrac{1}{p} + \tfrac{1}{q} = 1$, that is, $q = \tfrac{p}{p-1}$. As another example, consider the $\ell_2$- or spectral norm on $\mathbb{R}^{m \times n}$. The associated dual norm is $\|Z\|_{2,*} = \sum_i \sigma_i(Z)$, which turns out to be the sum of the singular values, where $\sigma_i(Z)$ denote the singular values of $Z$. This norm is sometimes called the nuclear norm. Lp and ℓp spaces For $p \in [1, \infty]$, the $p$-norm (also called $\ell_p$-norm) of a vector $x = (x_1, \ldots, x_n)$ is

$\|x\|_p = \left(\sum_{i=1}^n |x_i|^p\right)^{1/p}.$

If $p, q \in [1, \infty]$ satisfy $\tfrac{1}{p} + \tfrac{1}{q} = 1$ then the $\ell_p$ and $\ell_q$ norms are dual to each other and the same is true of the $L^p$ and $L^q$ norms, where $(X, \Sigma, \mu)$ is some measure space. In particular the Euclidean norm is self-dual since $p = q = 2$. For the quadratic norm $\sqrt{x^\top Q x}$, the dual norm is $\sqrt{z^\top Q^{-1} z}$, with $Q$ positive definite. For $p = 2$, the $\ell_2$-norm is even induced by a canonical inner product $\langle \cdot, \cdot \rangle$, meaning that $\|x\|_2 = \sqrt{\langle x, x \rangle}$ for all vectors $x$. This inner product can be expressed in terms of the norm by using the polarization identity. On $\ell^2$, this is the Euclidean inner product defined by

$\langle (x_n)_n, (y_n)_n \rangle_{\ell^2} = \sum_n x_n \overline{y_n},$

while for the space $L^2(X, \mu)$ associated with a measure space $(X, \Sigma, \mu)$, which consists of all square-integrable functions, this inner product is

$\langle f, g \rangle_{L^2} = \int_X f(x) \overline{g(x)}\, \mathrm{d}\mu(x).$

The norms of the continuous dual spaces of $\ell^2$ and $L^2$ satisfy the polarization identity, and so these dual norms can be used to define inner products. With this inner product, this dual space is also a Hilbert space. Properties Given normed vector spaces $X$ and $Y$, let $B(X, Y)$ be the collection of all bounded linear mappings (or operators) of $X$ into $Y$. Then $B(X, Y)$ can be given a canonical norm. A subset of a normed space is bounded if and only if it lies in some multiple of the unit sphere; thus $\|f\| < \infty$ for every $f \in B(X, Y)$. If $\alpha$ is a scalar, then $(\alpha f)(x) = \alpha\, f(x)$ so that $\|\alpha f\| = |\alpha|\, \|f\|$. The triangle inequality in $Y$ shows that

$\|(f_1 + f_2)\, x\| \le \|f_1 x\| + \|f_2 x\| \le \left(\|f_1\| + \|f_2\|\right) \|x\| \le \|f_1\| + \|f_2\|$

for every $x \in X$ satisfying $\|x\| \le 1$. This fact together with the definition of $\|\cdot\|$ implies the triangle inequality: $\|f_1 + f_2\| \le \|f_1\| + \|f_2\|$. Since $\{\, \|f x\| : \|x\| \le 1 \,\}$ is a non-empty set of non-negative real numbers, $\|f\| = \sup\{\, \|f x\| : \|x\| \le 1 \,\}$ is a non-negative real number. If $f \ne 0$ then $f x_0 \ne 0$ for some $x_0 \in X$, which implies that $\|f x_0\| > 0$ and consequently $\|f\| > 0$. This shows that $\left(B(X, Y), \|\cdot\|\right)$ is a normed space. Assume now that $Y$ is complete and we will show that $\left(B(X, Y), \|\cdot\|\right)$ is complete. Let $f_1, f_2, \ldots$ be a Cauchy sequence in $B(X, Y)$, so by definition $\|f_n - f_m\| \to 0$ as $n, m \to \infty$. This fact together with the relation

$\|f_n x - f_m x\| = \|(f_n - f_m)\, x\| \le \|f_n - f_m\|\, \|x\|$

implies that $(f_n x)_n$ is a Cauchy sequence in $Y$ for every $x \in X$. It follows that for every $x \in X$ the limit $\lim_n f_n x$ exists in $Y$ and so we will denote this (necessarily unique) limit by $f x$, that is: $f x = \lim_n f_n x$. It can be shown that $f$ is linear. If $\varepsilon > 0$, then $\|f_n - f_m\|\, \|x\| \le \varepsilon\, \|x\|$ for all sufficiently large integers $n$ and $m$. It follows that $\|f x - f_m x\| \le \varepsilon\, \|x\|$ for all sufficiently large $m$. Hence $\|f x\| \le \left(\|f_m\| + \varepsilon\right) \|x\|$, so that $f \in B(X, Y)$ and $\|f - f_m\| \le \varepsilon$. This shows that $f_m \to f$ in the norm topology of $B(X, Y)$. This establishes the completeness of $B(X, Y)$. When $Y$ is a scalar field (i.e. $Y = \mathbb{C}$ or $Y = \mathbb{R}$), $B(X, Y)$ is the dual space $X^*$ of $X$. Let $B = \{x \in X : \|x\| \le 1\}$ denote the closed unit ball of a normed space $X$, and $B^*$ the closed unit ball of its dual $X^*$. When $Y$ is the scalar field, then $B(X, Y) = X^*$, so part (a) is a corollary of Theorem 1. Fix $x \in X$. There exists $y^* \in B^*$ such that $\langle x, y^* \rangle = \|x\|$ but $|\langle x, x^* \rangle| \le \|x\|$ for every $x^* \in B^*$. (b) follows from the above. Since the open unit ball $U$ of $X$ is dense in $B$, the definition of $\|x^*\|$ shows that $x^* \in B^*$ if and only if $|\langle x, x^* \rangle| \le 1$ for every $x \in U$. The proof for (c) now follows directly. As usual, let $d(x, y) := \|x - y\|$ denote the canonical metric induced by the norm on $X$, and denote the distance from a point $x$ to the subset $S \subseteq X$ by $d(x, S) := \inf_{s \in S} d(x, s)$. If $f$ is a bounded linear functional on a normed space $X$, then for every vector $x \in X$,

$|f(x)| = \|f\|\, d(x, \ker f),$

where $\ker f = \{\, k \in X : f(k) = 0 \,\}$ denotes the kernel of $f$. See also Notes References External links Notes on the proximal mapping by Lieven Vandenberghe Functional analysis Linear algebra Mathematical optimization Linear functionals
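A quick numerical check of the Hölder duality stated above (a sanity check, not a proof); the vector and exponent are arbitrary examples, and the maximizer formula is the standard equality case of Hölder's inequality:

```python
# Numerical check that the dual of the l_p norm is the l_q norm with
# 1/p + 1/q = 1. The maximizer x_i ~ sign(z_i)*|z_i|^(q-1) is the equality
# case of Hölder's inequality; z and p are arbitrary example values.
import numpy as np

z = np.array([1.0, -2.0, 3.0])
p = 3.0
q = p / (p - 1)  # Hölder conjugate of p

# Candidate maximizer of z.x over the l_p unit ball:
x_star = np.sign(z) * np.abs(z) ** (q - 1)
x_star /= np.linalg.norm(x_star, ord=p)

print(x_star @ z)                # achieved value of z.x on the l_p unit ball
print(np.linalg.norm(z, ord=q))  # ||z||_q, the predicted dual norm: equal
```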
Dual norm
[ "Mathematics" ]
1,432
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical objects", "Mathematical relations", "Linear algebra", "Algebra", "Mathematical optimization" ]
13,527,566
https://en.wikipedia.org/wiki/Heine%27s%20identity
In mathematical analysis, Heine's identity, named after Heinrich Eduard Heine, is a Fourier expansion of a reciprocal square root which Heine presented as

$\frac{1}{\sqrt{z - \cos\psi}} = \frac{\sqrt{2}}{\pi} \sum_{m=-\infty}^{\infty} Q_{m-\frac{1}{2}}(z)\, e^{im\psi}$

where $Q_{m-\frac{1}{2}}$ is a Legendre function of the second kind, which has degree $m - \tfrac{1}{2}$, a half-integer, and argument $z$ real and greater than one. This expression can be generalized to arbitrary half-integer powers of $z - \cos\psi$, with the coefficients then expressed in terms of the Gamma function $\Gamma$. References Special functions Mathematical identities
Heine's identity
[ "Mathematics" ]
90
[ "Special functions", "Combinatorics", "Mathematical problems", "Mathematical identities", "Mathematical theorems", "Algebra" ]
13,528,437
https://en.wikipedia.org/wiki/Benzyl%20cyanide
Benzyl cyanide (abbreviated BnCN) is an organic compound with the chemical formula C6H5CH2CN. This colorless, oily, aromatic liquid is an important precursor to numerous compounds in organic chemistry. It is also an important pheromone in certain species. Preparation Benzyl cyanide can be produced via Kolbe nitrile synthesis between benzyl chloride and sodium cyanide, and by oxidative decarboxylation of phenylalanine. Benzyl cyanides can also be prepared by arylation of silyl-substituted acetonitriles. Reactions Benzyl cyanide undergoes many reactions characteristic of nitriles. It can be hydrolyzed to give phenylacetic acid, or it can be used in the Pinner reaction to yield phenylacetic acid esters. Hydrogenation gives β-phenethylamine. The compound contains an "active methylene unit": bromination gives PhCHBrCN, and a variety of base-induced reactions result in the formation of new carbon-carbon bonds. Uses Benzyl cyanide is used as a solvent and as a starting material in the synthesis of fungicides (e.g. fenapanil), fragrances (phenethyl alcohol), antibiotics, and other pharmaceuticals. The partial hydrolysis of BnCN gives 2-phenylacetamide. Pharmaceuticals Benzyl cyanide is a useful precursor to numerous pharmaceuticals. Examples include: antiarrhythmics (e.g. disopyramide); antidepressants (e.g. milnacipran and lomevactone); antihistamines (e.g. levocabastine (para-fluoro), pheniramine and azatadine); antitussives (e.g. isoaminile, oxeladin, butethamate, pentapiperide, and pentoxyverine); diuretics (e.g. triamterene); hypnotics (e.g. alonimid and phenobarbital) and phenglutarimide; spasmolytics (e.g. pentapiperide and drofenine); stimulants (e.g. methylphenidate); and opioids (e.g. ethoheptazine, pethidine, phenoperidine, and methadone). Regulation Because benzyl cyanide is a useful precursor to numerous drugs with recreational use potential, many countries strictly regulate the compound. United States Benzyl cyanide is regulated in the United States as a DEA List I chemical. China Benzyl cyanide has been regulated in the People's Republic of China as a Class III drug precursor since 7 June 2021. Safety Benzyl cyanide, like related benzyl derivatives, is an irritant to the skin and eyes. See also Bromobenzyl cyanide References External links EPA Chemical Profile for phenylacetonitrile Nitriles Benzyl compounds
Benzyl cyanide
[ "Chemistry" ]
641
[ "Highly-toxic chemical substances", "Nitriles", "Harmful chemical substances", "Functional groups" ]
13,529,591
https://en.wikipedia.org/wiki/Fossilworks
Fossilworks was a portal which provided query, download, and analysis tools to facilitate access to the Paleobiology Database, a large relational database assembled by hundreds of paleontologists from around the world. History Fossilworks was created in 1998 by John Alroy and housed at Macquarie University. It included many analysis and data visualization tools formerly included in the Paleobiology Database. References External links Paleontology websites Biological databases
Fossilworks
[ "Biology" ]
86
[ "Bioinformatics", "Biological databases" ]
13,530,734
https://en.wikipedia.org/wiki/NPAS2
Neuronal PAS domain protein 2 (NPAS2), also known as member of PAS protein 4 (MOP4), is a transcription factor protein that in humans is encoded by the NPAS2 gene. NPAS2 is paralogous to CLOCK, and both are key proteins involved in the maintenance of circadian rhythms in mammals. In the brain, NPAS2 functions as a generator and maintainer of mammalian circadian rhythms. More specifically, NPAS2 is an activator of transcription and translation of core clock and clock-controlled genes through its role in a negative feedback loop in the suprachiasmatic nucleus (SCN), the brain region responsible for the control of circadian rhythms. Discovery The mammalian and mouse Npas2 gene was first sequenced and characterized in 1997 in Dr. Steven McKnight's lab and published by Yu-Dong Zhou et al. The gene's cDNAs encoding mouse and human forms of NPAS2 were isolated and sequenced. RNA blotting assays were used to demonstrate the selective presence of the gene in brain and spinal cord tissues of mice. In situ hybridization indicated that the pattern of Npas2 mRNA distribution in mouse brain is broad and complex, and is largely non-overlapping with that of Npas1. Using immunohistochemistry of human testis, Ramasamy et al. (2015) found the presence of NPAS2 protein both in germ cells within the tubules of the testes and in the interstitial Leydig cells. Structure In humans The Npas2 gene resides on chromosome 2 at the band q13. The gene is 176,679 bases long and contains 25 exons. The predicted 824-amino acid human NPAS2 protein shares 87% sequence identity with mouse Npas2. In mice The Npas2 gene has been found to reside on chromosome 1 at 17.98 centimorgans and is 169,505 bases long. Function In the brain The NPAS2 protein is a member of the basic helix-loop-helix (bHLH)-PAS transcription factor family and is expressed in the SCN. NPAS2 is a PAS domain-containing protein, which binds other proteins via their own protein-protein (PAS) binding domains. Like its paralogue CLOCK (another PAS domain-containing protein), the NPAS2 protein can dimerize with the BMAL1 protein and engage in a transcription/translation negative feedback loop (TTFL) to activate transcription of the mammalian Per and Cry core clock genes. NPAS2 has been shown to form a heterodimer with BMAL1 in both the brain and in cell lines, suggesting its similarity in function to the CLOCK protein in this TTFL. Compensation is a key feature of TTFLs that regulate circadian rhythms. BMAL1 compensates for CLOCK in that if CLOCK is absent, BMAL1 will upregulate to maintain the mammalian circadian rhythms. NPAS2 has been shown to be analogous in function to CLOCK in CLOCK-deficient mice. In Clock knockout mice, NPAS2 is upregulated to keep the rhythms intact. Npas2-mutant mice, which do not express functional NPAS2 protein, still maintain robust circadian rhythms in locomotion. However, like CLOCK-deficient mice in the CLOCK/BMAL1 TTFL, Npas2-mutant mice (in the NPAS2/BMAL1 TTFL) still have small defects in their circadian rhythms, such as a shortened circadian period and an altered response to changes in the typical light-dark cycle. In addition, Npas2 knockout mice show sleep disturbances and have decreased expression of mPer2 in their forebrains. Mice without functional alleles of both Clock and Npas2 became arrhythmic once placed in constant darkness, suggesting that both genes have overlapping roles in maintaining circadian rhythms. 
In both wild-type and Clock knockout mice, Npas2 expression is observed at the same levels, confirming that Npas2 plays a role in maintaining these rhythms in the absence of Clock. In other tissues Npas2 is expressed everywhere in the periphery of the body. Special focus has been given to its function in liver tissues, and its mRNA is upregulated in Clock-mutant mice. However, studies have shown that Npas2 alone is unable to maintain circadian rhythms in peripheral tissues in the absence of CLOCK protein, unlike in the SCN. One theory to explain this observation is that neurons in the brain are characterized by intercellular coupling and can thus respond to deficiencies in key clock proteins in nearby neurons to maintain rhythms. In peripheral tissues such as the liver and lung, however, the lack of intercellular coupling does not allow for this compensatory mechanism to occur. A second theory as to why NPAS2 can maintain rhythms in CLOCK-deficient SCNs but not in CLOCK-deficient peripheral tissues is that there exists an additional unknown factor in the SCN that is not present in peripheral tissues. Non-circadian function NPAS2-deficient mice have been shown to have long-term memory deficits, suggesting that the protein may play a key role in the acquisition of such memories. This hypothesis was tested by inserting a reporter gene (lacZ) that resulted in the production of an NPAS2 protein lacking the bHLH domain. These mice were then given several tests, including the cued and contextual fear task, and showed long-term memory deficits in both tasks. Interactions NPAS2 has been shown to interact with: ARNTL (also known as BMAL1). Like Clock, Npas2 mRNA cycles with a similar phase to that of Bmal1, with both peaking 8 hours before the peak of Per2 mRNA expression. This is consistent with the observation that NPAS2 forms a heterodimer with BMAL1 to drive Per2 expression. EP300. NPAS2 and EP300 interact in a time-dependent, synchronized manner. EP300 is recruited to NPAS2 as a coactivator of clock gene expression. Retinoic acid receptor alpha (RARα) and retinoid X receptor alpha (RXRα). In peripheral clocks, RARα and RXRα interact with NPAS2 by inhibiting the NPAS2:BMAL1 heterodimer-mediated expression of clock genes. This interaction depends upon humoral signaling by retinoic acid and serves to phase-shift the clock. Small heterodimer partner (SHP). In the liver circadian clock, NPAS2 and SHP engage in a TTFL: NPAS2 controls the circadian rhythms of SHP by rhythmically binding to its promoter, while SHP inhibits transcription of Npas2 when present. Clinical significance Npas2 genotypes can be determined through tissue samples from which genomic DNA is extracted and assayed. The assay is performed under PCR conditions and can be used to determine specific mutations and polymorphisms. Polymorphisms and tumorigenesis Mounting evidence suggests that the NPAS2 protein and other circadian genes are involved in tumorigenesis and tumor growth, possibly through their control of cancer-related biologic pathways. A missense polymorphism in NPAS2 (Ala394Thr) has been shown to be associated with risk of human tumors including breast cancer. These findings provide evidence suggesting a possible role for the circadian Npas2 gene in cancer prognosis. These results have been confirmed in both breast and colorectal cancers. NPAS2 and mood disorders Current research has revealed an association between seasonal affective disorder (SAD) and general mood disorder related to NPAS2, ARNTL, and CLOCK polymorphisms. 
These genes may influence seasonal variations through metabolic factors such as body weight and appetite. In line with this connection to mood disorders, NPAS2 has also been found to be involved in dopamine degradation. This was first suggested by the observation that the clock components BMAL1 and NPAS2 transcriptionally activated a luciferase reporter driven by the murine monoamine oxidase A (MAOA) promoter in a circadian fashion, suggesting that these two clock components directly regulate MAOA transcription. Subsequent work demonstrated positive transcriptional regulation of BMAL1/NPAS2 by PER2. In mice lacking PER2, both MAOA mRNA and MAOA protein levels were decreased; therefore, dopamine degradation was reduced, and dopamine levels in the nucleus accumbens were increased. These findings indicate that degradation of monoamines is regulated by the circadian clock. The described clock-mediated regulation of monoamines is very likely relevant for humans, because single-nucleotide polymorphisms in Per2, Bmal1, and Npas2 are associated in an additive fashion with seasonal affective disorder or winter depression. See also Clock (gene) Bmal1/Arntl (gene) Suprachiasmatic nucleus (SCN) Per (gene) Steven McKnight (scientist) References External links Steven McKnight, the first scientist to implicate Npas2 as a contributor to circadian rhythms Transcription factors PAS-domain-containing proteins
NPAS2
[ "Chemistry", "Biology" ]
1,918
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
13,530,757
https://en.wikipedia.org/wiki/RAR-related%20orphan%20receptor%20beta
RAR-related orphan receptor beta (ROR-beta), also known as NR1F2 (nuclear receptor subfamily 1, group F, member 2) is a nuclear receptor that in humans is encoded by the RORB gene. Function The protein encoded by this gene is a member of the NR1 subfamily of nuclear hormone receptors. It is a DNA-binding protein that can bind as a monomer or as a homodimer to hormone response elements upstream of several genes to enhance the expression of those genes. The specific functions of this protein are not known, but it has been shown to interact with NM23-2, a nucleoside-diphosphate kinase involved in organogenesis and differentiation. In the brain, ROR-beta is concentrated in layer 4 of the cerebral cortex, where it plays a role in the development of structures such as barrel columns. A mutation in this gene also results in the loss of spinal cord interneurons and of saltatorial locomotion, a type of hopping gait that in mammals can be found in rabbits, hares, kangaroos, and some species of rodents. Interactions RAR-related orphan receptor beta has been shown to interact with NME1. See also RAR-related orphan receptor References Further reading External links Intracellular receptors Transcription factors
RAR-related orphan receptor beta
[ "Chemistry", "Biology" ]
267
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
13,530,817
https://en.wikipedia.org/wiki/PER3
The PER3 gene encodes the period circadian protein homolog 3 protein in humans. PER3 is a paralog of the PER1 and PER2 genes. It is a circadian gene associated with delayed sleep phase syndrome in humans. History The Per3 gene was independently cloned by two research groups (Kobe University School of Medicine and Harvard Medical School), who both published their discovery in June 1998. The mammalian Per3 was discovered by searching for cDNA sequences homologous to Per2. The amino acid sequence of the mouse PERIOD3 protein (mPER3) is 37–56% similar to the other two PER proteins. Function This gene is a member of the Period family of genes. It is expressed in a circadian pattern in the suprachiasmatic nucleus (SCN), the primary circadian pacemaker in the mammalian brain. Genes in this family encode components of the circadian rhythms of locomotor activity, metabolism, and behavior. Circadian expression in the SCN continues in constant darkness, and a shift in the light/dark cycle evokes a proportional shift of gene expression in the SCN. PER1 and PER2 are necessary for molecular timekeeping and light responsiveness in the master circadian clock in the SCN, but comparatively little is known about the specific function of PER3. PER3 has been found to be important for endogenous timekeeping in specific tissues, and tissue-specific changes in endogenous period result in internal misalignment of circadian clocks in Per3 knockout (-/-) mice. PER3 may have a stabilizing effect on PER1 and PER2, and this stabilizing effect may be reduced in the PER3-P415A/H417R polymorphism. Role in chronobiology The RNA levels of mPer3 oscillate with a circadian rhythm in both the SCN and the eyes, as well as in peripheral tissues, including the liver, skeletal muscle, and testis. Unlike Per1 and Per2, whose mRNA is induced in response to light, Per3 mRNA in the SCN does not respond to light. This suggests that Per3 may be regulated differently than either Per1 or Per2. The mPER3 protein contains a PAS domain, similar to mPER1 and mPER2. mPER3 likely binds to other proteins using this domain. However, while PER1/2 have been shown to be important in the transcription-translation feedback loop involved in the intracellular circadian clock, the influence of PER3 in this loop has not yet been fully elucidated, given that mPER3 does not appear to be functionally redundant with mPER1 and mPER2. mPer3 may not be a member of the core clock loop at all. Animal studies While the Per3 gene is a paralog of the PER1 and PER2 genes, studies in animals generally show that it does not contribute significantly to circadian rhythms. Functional Per3-/- animals experience only small changes in free-running period and do not respond significantly differently to light pulses. Per1-/- and Per2-/- animals experience a significant change in free-running period; however, knocking out Per3 in addition to either Per1 or Per2 has little effect on free-running rhythms. Furthermore, Per1-/-Per2-/- mice are completely arrhythmic, indicating that these two genes have much more importance to the biological clock than Per3. Per3 knockout mice experience a slightly shortened period of locomotor activity (by 0.5 hr) and are less sensitive to light, in that they entrain more slowly to changes in the light-dark cycle. PER3 may be involved in the suppression of behavioral activity in response to light, although mPer3 expression is not necessary for circadian rhythms.
Clinical significance The PER3 "length" polymorphism in the 54-bp repeat sequence in exon 18 (GenBank accession no. AB047686) is a structural polymorphism due to an insertion or deletion of 18 amino acids in a region encoding a putative phosphorylation domain. The polymorphism has been associated with diurnal preference and delayed sleep phase syndrome. The longer allele is associated with "morningness" and the shorter allele with "eveningness"; the short allele is also associated with delayed sleep phase syndrome. The length polymorphism has also been shown to inhibit adipogenesis, and Per3 knockout mice were shown to have increased adipose tissue and decreased muscle tissue compared to wild type. Additionally, the presence of the length polymorphism has been shown to be associated with type 2 diabetes mellitus (T2DM) in patients as compared to non-diabetic controls. The PER3-P415A/H417R polymorphism has been linked to familial advanced sleep phase syndrome in humans, as well as to seasonal affective disorder, though when knocked into mice, the polymorphism causes a delayed sleep phase. Gene Orthologs The following is a list of some orthologs of the PER3 gene in other species: PER3 (P. troglodytes) PER3 (M. mulatta) PER3 (C. lupus) PER3 (H. sapiens) PER3 (B. taurus) Per3 (M. musculus) Per3 (R. norvegicus) PER3 (G. gallus) per3 (X. tropicalis) per3 (D. rerio) Paralogs PER1 PER2 Gene location The human PER3 gene is located on chromosome 1 at the following location: Start: 7,784,320 bp Finish: 7,845,181 bp Length: 60,862 bases Exons: 25 PER3 has 19 transcripts (splice variants). Protein structure The PER3 protein has been identified to have the following features: Size: 1201 amino acids Molecular mass: 131888 Da Quaternary structure: Homodimer Post-translational modifications The following are some known post-translational modifications of the PER3 protein: Phosphorylation by CSNK1E is weak and appears to require association with PER1 and translocation to the nucleus. Ubiquitinated. Modification sites at PhosphoSitePlus Modification sites at neXtProt References External links Transcription factors Circadian rhythm PAS-domain-containing proteins
PER3
[ "Chemistry", "Biology" ]
1,346
[ "Behavior", "Transcription factors", "Gene expression", "Signal transduction", "Circadian rhythm", "Sleep", "Induced stem cells" ]
13,530,834
https://en.wikipedia.org/wiki/Rev-ErbA%20alpha
Rev-Erb alpha (Rev-Erbα), also known as nuclear receptor subfamily 1 group D member 1 (NR1D1), is one of two Rev-Erb proteins in the nuclear receptor (NR) family of intracellular transcription factors. In humans, REV-ERBα is encoded by the NR1D1 gene, which is highly conserved across animal species. Rev-Erbα plays an important role in regulation of the core circadian clock through repression of the positive clock element Bmal1. It also regulates several physiological processes under circadian control, including metabolic and immune pathways. Rev-Erbα mRNA demonstrates circadian oscillation in its expression, and it is highly expressed in mammals in the brain and metabolic tissues such as skeletal muscle, adipose tissue, and liver. Discovery Rev-Erbα was discovered in 1989 by Nobuyuki Miyajima and colleagues, who identified two erbA homologs on human chromosome 17 that were transcribed from opposite DNA strands in the same locus. One of the genes encoded a protein that was highly similar to the chicken thyroid hormone receptor, and the other, which they termed ear-1, would later be described as Rev-Erbα. The protein was first referenced by the name Rev-Erbα in 1990 by Mitchell A. Lazar, Karen E. Jones, and William W. Chin, who isolated Rev-Erbα complementary DNA from a human fetal skeletal muscle library. Similar to the gene in rats, they found that human Rev-Erbα was transcribed from the strand opposite human thyroid hormone receptor alpha (THRA, c-erbAα). Rev-Erbα was first implicated in circadian control in 1998, when Aurelio Balsalobre, Francesca Damiola, and Ueli Schibler demonstrated that expression of Rev-Erbα in rat fibroblasts showed daily rhythms. Rev-Erbα was first identified as a key player in the transcription-translation feedback loop (TTFL) in 2002, when experiments demonstrated that Rev-Erbα acted to repress transcription of the Bmal1 gene, and that Rev-Erbα expression was controlled by other TTFL components. This established Rev-Erbα as the link between the positive and negative loops of the TTFL. Genetics and evolution The NR1D1 (nuclear receptor subfamily 1 group D member 1) gene, located on chromosome 17, encodes the protein REV-ERBα in humans. It is transcribed from the opposite strand of the human thyroid hormone receptor alpha (THRA, c-erbAα), so that NR1D1 and THRA cDNA are complementary over 269 bases. The gene consists of 7,797 bases with 8 exons, forming only one splice variant. The NR1D1 promoter itself contains a REV-ERB response element (RevRE), which allows for regulation of gene expression both through autoregulation and through regulation by retinoic acid receptor-related orphan receptor alpha (RORα), another nuclear receptor transcription factor. NR1D1 also contains an E-box at its promoter, which allows for regulation by BMAL1. In humans, NR1D1 (REV-ERBα) is highly expressed in the brain and metabolic tissues, including skeletal muscle, adipose tissue, and the liver. Genomic analysis suggests that the NR1D1 gene was present in the most recent common ancestor of all animals, with orthologs present in 378 species tested, including chimpanzees, dogs, mice, rats, chickens, zebrafish, frogs, and fruit flies. Comparison to the rat ortholog, Nr1d1, indicates high conservation in the DNA-binding and carboxy-terminal domains, as well as conservation of transcription of c-erbA alpha-2 and Rev-Erbα on opposite strands. In humans, NR1D1 has only one paralog, NR1D2 (REV-ERBβ), which is located on chromosome 3 and likely arose from a duplication event.
However, both NR1D1 and NR1D2 are members of the nuclear receptor family, indicating they share common ancestry. As such, NR1D1 is functionally related to other nuclear receptor genes, such as peroxisome proliferator activated receptor delta (PPARD) and retinoic acid receptor alpha (RARA). Furthermore, studies have shown that the NR1D1/THRA genetic locus is genetically linked to the RARA gene. Protein structure The human NR1D1 gene produces a protein product (REV-ERBα) of 614 amino acids. REV-ERBα has three major functional domains: a DNA-binding domain (DBD), a ligand-binding domain (LBD) at the C-terminus, and an N-terminal domain which allows for activity modulation. These three domains are a common feature of nuclear receptor proteins. The Rev-Erb proteins are unique among nuclear receptors in that they do not have the C-terminal helix that is necessary for coactivator recruitment and activation by nuclear receptors via their LBD. Instead, Rev-Erbα interacts via its LBD with Nuclear Receptor Co-Repressor (NCoR) and another closely related co-repressor, Silencing Mediator of Retinoid and Thyroid Receptors (SMRT), although the interaction with NCoR is stronger due to its structural compatibility. Heme, an endogenous ligand of Rev-Erbα, further stabilizes the interaction with NCoR. The repression by Rev-Erbα also requires interaction with the class I histone deacetylase 3 (HDAC3)–NCoR complex. The catalytic activity of HDAC3 is activated only when it complexes with NCoR or SMRT, so Rev-Erbα must interact with this complex in order for gene repression to occur via histone deacetylation. It is still unknown whether other HDACs play a role in the function of Rev-Erbα. Rev-Erbα recruits the NCoR-HDAC3 complex through binding a specific DNA sequence commonly referred to as RORE due to its interaction with the transcriptional activator Retinoic Acid Receptor-related Orphan Receptor (ROR). This sequence consists of an "AGGTCA" half-site preceded by an A/T-rich sequence. Rev-Erbα binds in the major groove of this sequence via its DBD domain, which contains two C4-type zinc fingers. Rev-Erbα can repress gene activation as a monomer through competitive binding at this RORE site, but two Rev-Erbα molecules are required for interaction with NCoR and active gene repression. This can occur by two Rev-Erbα molecules binding separate ROREs or, as a stronger interaction, through binding a response element that is a direct repeat of the RORE (RevDR2). In mice, it has been shown that the N-terminal regulatory domain contains an important site for phosphorylation by casein kinase 1 epsilon (Csnk1e), which aids in proper localization of Rev-Erbα, and furthermore, that this domain is necessary for activation of the gap junction protein 1 (GJA1) gene. Function Circadian oscillator Rev-Erbα has been proposed to coordinate circadian metabolic responses. Circadian rhythms are driven by interlocking transcription/translation feedback regulatory loops (TTFLs) that generate and maintain these daily rhythms, and Rev-Erbα is involved in a secondary TTFL in mammals. The primary TTFL features the transcriptional activator proteins CLOCK and BMAL1, which contribute to the rhythmic expression of genes within this loop, notably per and cry. The products of these genes then act through negative feedback to inhibit CLOCK:BMAL1-mediated transcription. The secondary TTFL, featuring Rev-Erbα working in conjunction with Rev-Erbβ and the orphan receptor RORα, is thought to strengthen this primary TTFL by further regulating BMAL1.
RORα shares the same response elements as Rev-Erbα but exerts opposite effects on gene transcription; BMAL1 expression is repressed by Rev-Erbα and activated by RORα. CLOCK:BMAL1 expression activates the transcription of NR1D1, encoding the Rev-Erbα protein. Increased Rev-Erbα expression, in turn, represses transcription of BMAL1, stabilizing the loop. The oscillating expression of RORα and Rev-Erbα in the suprachiasmatic nucleus, the principal circadian timekeeper in mammals, leads to the circadian pattern of BMAL1 expression. The occupancy of the BMAL1 promoter by these two receptors is key for proper timing of the core clock machinery in mammals. Metabolism Rev-erbα plays a role in the regulation of whole-body metabolism through controlling lipid metabolism, bile acid metabolism, and glucose metabolism. Rev-Erbα relays circadian signals into metabolic and inflammatory regulatory responses and vice versa, although the precise mechanisms underlying this relationship are not entirely understood. Rev-erbα regulates the expression of liver apolipoproteins, sterol regulatory element binding protein, and the fatty acid elongase elovl3 through its repressive activity. In addition, the silencing of Rev-erbα is associated with the reduction of fatty acid synthase, a key regulator of lipogenesis. Rev-erbα-deficient mice exhibit dyslipidemia due to elevated triglyceride levels, and Rev-erbα polymorphisms in humans have been associated with obesity. Rev-erbα also regulates adipogenesis of white and brown adipocytes. Rev-Erbα transcription is induced during the adipogenic process, and over-expression of Rev-erbα enhances adipogenesis. Researchers have proposed that Rev-erbα's role in adipocyte function may affect the timing of processes such as lipid storage and lipolysis, contributing to long-term issues with BMI control. Rev-erbα also regulates bile acid metabolism by indirectly down-regulating Cyp7A1, which encodes the first and rate-controlling enzyme of the major bile acid biosynthetic pathway. Rev-erbα plays both indirect and direct roles in glucose metabolism. BMAL1 heavily influences glucose production and glycogen synthesis; thus, through the regulation of BMAL1, Rev-erbα indirectly regulates glucose synthesis. More directly, Rev-erbα's expression in the pancreas regulates the function of α-cells and β-cells, which produce glucagon and insulin, respectively. Muscle and cartilage Rev-erbα plays a role in myogenesis through interaction with the transcription complex Nuclear Factor-T. It also represses the expression of genes involved in muscle cell differentiation and is expressed in a circadian manner in mouse skeletal muscle. Loss of Rev-erbα function reduces mitochondrial content and function, leading to an impaired exercise capacity; over-expression leads to improvement. This protein has also been implicated in the integrity of cartilage. Of all known nuclear receptors, Rev-erbα is the most highly expressed in osteoarthritic cartilage. One study found that cartilage from patients with osteoarthritis has reduced Rev-erbα levels compared to normal cartilage. Research on rheumatoid arthritis (RA) has suggested the potential for treating RA patients with Rev-erbα agonists due to their suppression of bone and cartilage destruction. Immune system Rev-erbα contributes to the inflammatory response in mammals. In mouse smooth muscle cells, the protein up-regulates expression of interleukin 6 (IL-6) and cyclooxygenase-2.
In humans, it controls the lipopolysaccharide (LPS)-induced endotoxic response through repressing toll-like receptor 4 (TLR-4), which triggers the immune response to LPS. In the brain, Rev-erbα deletion causes a disruption in the oscillation of microglial activation and increases the expression of pro-inflammatory transcripts. Many immune and inflammatory proteins exhibit circadian oscillatory behavior, and research has shown that Rev-erbα-deficient mice no longer exhibit these oscillations, notably in IL-6, IL-12, CCL5, CXCL1, and CCL2. Rev-erbα has also been implicated in the development of group 3 innate lymphoid cells (ILC3), which play a role in regulating intestinal health and are responsible for lymphoid development. REV-ERBα promotes RORγt expression, and RORγt is required for ILC3 expression. Rev-erbα is highly expressed in ILC3 subsets. Mood and behavior Rev-erbα has been implicated in the regulation of memory and mood. Rev-erbα knockout mice are deficient in short-term, long-term, and contextual memory, showing deficits in the function of their hippocampus. In addition, Rev-erbα has been proposed to play a role in the regulation of midbrain dopamine production and mood-related behavior in mice through repression of tyrosine hydroxylase gene transcription. Dopamine-related dysfunction is associated with mood disorders, notably major depressive disorder, seasonal affective disorder, and bipolar disorder. Genetic variations in human NR1D1 loci are also associated with bipolar disorder onset. Rev-erbα has been proposed as a target in the treatment of bipolar disorder with lithium, which indirectly regulates the protein at a post-translational level. Lithium inhibits glycogen synthase kinase 3β (GSK 3β), an enzyme that phosphorylates and stabilizes Rev-erbα; lithium binding to GSK 3β thus destabilizes and alters the function of Rev-erbα. This research has implications for the development of therapeutic agents for affective disorders, such as lithium for bipolar disorder. References Further reading External links Intracellular receptors Transcription factors
Rev-ErbA alpha
[ "Chemistry", "Biology" ]
2,975
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
13,535,375
https://en.wikipedia.org/wiki/Mass%20spectrometry%20imaging
Mass spectrometry imaging (MSI) is a technique used in mass spectrometry to visualize the spatial distribution of molecules, such as biomarkers, metabolites, peptides, or proteins, by their molecular masses. After collecting a mass spectrum at one spot, the sample is moved to reach another region, and so on, until the entire sample is scanned. By choosing a peak in the resulting spectra that corresponds to the compound of interest, the MS data is used to map its distribution across the sample. This results in pictures of the spatially resolved distribution of a compound pixel by pixel. Each data set contains a veritable gallery of pictures, because any peak in each spectrum can be spatially mapped. Although MSI has generally been considered a qualitative method, the signal generated by this technique is proportional to the relative abundance of the analyte, so quantification is possible once its challenges are overcome. Although widely used traditional methodologies like radiochemistry and immunohistochemistry achieve the same goal as MSI, they are limited in their ability to analyze multiple samples at once, and can prove to be lacking if researchers do not have prior knowledge of the samples being studied. The most common ionization technologies in the field of MSI are DESI imaging, MALDI imaging, secondary ion mass spectrometry imaging (SIMS imaging), and nanoscale SIMS (NanoSIMS). History More than 50 years ago, MSI was introduced by Castaing and Slodzian, who used secondary ion mass spectrometry (SIMS) to study semiconductor surfaces. However, it was the pioneering work of Richard Caprioli and colleagues in the late 1990s, demonstrating how matrix-assisted laser desorption/ionization (MALDI) could be applied to visualize large biomolecules (such as proteins and lipids) in cells and tissue, to reveal the function of these molecules and how function is changed by diseases like cancer, which led to the widespread use of MSI. Nowadays, different ionization techniques are used, including SIMS, MALDI, and desorption electrospray ionization (DESI), as well as other technologies. Still, MALDI is the current dominant technology with regard to clinical and biological applications of MSI. Operation principle MSI is based on resolving the spatial distribution of analytes within the sample. Therefore, the operation principle depends on the technique that is used to obtain the spatial information. The two techniques used in MSI are the microprobe and the microscope. Microprobe This technique uses a focused ionization beam to analyze a specific region of the sample by generating a mass spectrum. The mass spectrum is stored along with the spatial coordinates where the measurement took place. Then, a new region is selected and analyzed by moving the sample or the ionization beam. These steps are repeated until the entire sample has been scanned. By coupling all the individual mass spectra, a distribution map of intensities as a function of x and y locations can be plotted. As a result, reconstructed molecular images of the sample are obtained.
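The pixel-by-pixel image reconstruction described above can be sketched in a few lines of Python. This is a hypothetical minimal example with synthetic data: the array shapes, the function name ion_image, and the m/z window are assumptions for illustration and are not tied to any particular instrument or file format.

import numpy as np

# One mass spectrum per (x, y) raster position; an ion image is built by
# mapping the intensity of one chosen m/z peak across all positions.
rng = np.random.default_rng(0)
nx, ny, n_channels = 64, 64, 1000             # raster size, spectrum length
mz_axis = np.linspace(100, 1100, n_channels)  # m/z value of each channel
spectra = rng.random((nx, ny, n_channels))    # stand-in for measured data

def ion_image(spectra, mz_axis, mz, tol=0.5):
    """Sum the intensities in the window mz +/- tol for every pixel."""
    window = (mz_axis >= mz - tol) & (mz_axis <= mz + tol)
    return spectra[:, :, window].sum(axis=2)

img = ion_image(spectra, mz_axis, mz=760.5)   # any peak of interest
# 'img' is an nx-by-ny map of the chosen compound's relative abundance;
# every other peak in the same data set can be mapped the same way.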
Microscope In this technique, a 2D position-sensitive detector is used to measure the spatial origin of the ions generated at the sample surface by the ion optics of the instrument. The resolution of the spatial information will depend on the magnification of the microscope, the quality of the ion optics, and the sensitivity of the detector. A new region still needs to be scanned, but the number of positions is drastically reduced. The limitation of this mode is the finite depth of vision present with all microscopes. Ion source dependence The ionization techniques available for MSI are suited to different applications. Some of the criteria for choosing the ionization method are the sample preparation requirements and the parameters of the measurement, such as resolution, mass range, and sensitivity. Based on that, the most commonly used ionization methods are MALDI, SIMS, and DESI, which are described below. Other, less common techniques include laser ablation electrospray ionization (LAESI), laser ablation inductively coupled plasma (LA-ICP), and nanospray desorption electrospray ionization (nano-DESI). SIMS and NanoSIMS imaging Secondary ion mass spectrometry (SIMS) is used to analyze solid surfaces and thin films by sputtering the surface with a focused primary ion beam and collecting and analyzing the ejected secondary ions. There are many different sources for a primary ion beam. However, the primary ion beam must contain ions that are at the higher end of the energy scale. Some common sources are Cs+, O2+, O−, Ar+, and Ga+. SIMS imaging is performed in a manner similar to electron microscopy; the primary ion beam is emitted across the sample while secondary mass spectra are recorded. SIMS proves to be advantageous in providing the highest image resolution, but only over small sample areas. Moreover, this technique is widely regarded as one of the most sensitive forms of mass spectrometry, as it can detect elements in concentrations as small as 10^12–10^16 atoms per cubic centimeter. Multiplexed ion beam imaging (MIBI) is a SIMS method that uses metal isotope labeled antibodies to label compounds in biological samples. Developments within SIMS: Some chemical modifications have been made within SIMS to increase the efficiency of the process. There are currently two separate techniques being used to help increase the overall efficiency by increasing the sensitivity of SIMS measurements: matrix-enhanced SIMS (ME-SIMS) - This has the same sample preparation as MALDI, as it simulates the chemical ionization properties of MALDI. ME-SIMS does not sample nearly as much material. However, if the analyte being tested has a low mass value, it can produce a spectrum similar in appearance to a MALDI spectrum. ME-SIMS has been so effective that it has been able to detect low-mass chemicals at subcellular levels, which was not possible prior to the development of the ME-SIMS technique. The second technique being used is called sample metallization (Meta-SIMS) - This is the process of adding gold or silver to the sample. This forms a layer of gold or silver around the sample, normally no more than 1-3 nm thick. Using this technique has resulted in an increase of sensitivity for larger-mass samples. The addition of the metallic layer also allows for the conversion of insulating samples to conducting samples, so charge compensation within SIMS experiments is no longer required. Subcellular (50 nm) resolution is enabled by NanoSIMS, allowing for absolute quantitative analysis at the organelle level. MALDI imaging Matrix-assisted laser desorption/ionization can be used as a mass spectrometry imaging technique for relatively large molecules. It has recently been shown that ionic matrices are the most effective type of matrix for MALDI imaging of tissue. In this version of the technique, the sample, typically a thin tissue section, is moved in two dimensions while the mass spectrum is recorded.
Although MALDI has the benefit of being able to record the spatial distribution of larger molecules, it comes at the cost of lower resolution than the SIMS technique. The limit for the lateral resolution of most modern instruments using MALDI is 20 μm. MALDI experiments commonly use either an Nd:YAG (355 nm) or N2 (337 nm) laser for ionization. Pharmacodynamics and toxicodynamics in tissue have been studied by MALDI imaging. DESI imaging Desorption electrospray ionization is a less destructive technique, which couples simplicity and rapid analysis of the sample. The sample is sprayed with an electrically charged solvent mist at an angle that causes the ionization and desorption of various molecular species. Then, two-dimensional maps of the abundance of the selected ions on the surface of the sample in relation to the spatial distribution are generated. This technique is applicable to solid, liquid, frozen, and gaseous samples. Moreover, DESI allows analyzing a wide range of organic and biological compounds, such as animal and plant tissues and cell culture samples, without complex sample preparation. Although this technique has the poorest resolution among the others, it can create high-quality images from a large area scan, such as a whole-body section scan. Comparison between the ionization techniques Combination of various MSI techniques and other imaging techniques Combining various MSI techniques can be beneficial, since each particular technique has its own advantage. For example, when information on both proteins and lipids is needed in the same tissue section, one can perform DESI to analyze the lipids, follow with MALDI to obtain information about the peptides, and finally apply a stain (haematoxylin and eosin) for medical diagnosis of the structural characteristics of the tissue. On the side of combining MSI with other imaging techniques, fluorescence staining with MSI and magnetic resonance imaging (MRI) with MSI can be highlighted. Fluorescence staining can give information on the appearance of some proteins present in a process inside a tissue, while MSI may give information about the molecular changes present in that process. Combining both techniques, multimodal pictures or even 3D images of the distribution of different molecules can be generated. In contrast, MRI with MSI combines the continuous 3D representation of MRI with the detailed molecular information from MSI. Even though MSI itself can generate 3D images, the picture is only part of the reality due to the depth limitation of the analysis, while MRI provides, for example, detailed organ shape with additional anatomical information. This coupled technique can be beneficial for precise cancer diagnosis and for neurosurgery. Data processing Standard data format for mass spectrometry imaging datasets The imzML format was proposed to exchange data in a standardized XML file based on the mzML format. Several imaging MS software tools support it. The advantage of this format is the flexibility to exchange data between different instruments and data analysis software. Software There are many free software packages available for visualization and mining of imaging mass spectrometry data. Converters from the Thermo Fisher, Analyze, GRD, and Bruker formats to imzML were developed by the Computis project.
Some software modules are also available for viewing mass spectrometry images in imzML format: Biomap (Novartis, free), Datacube Explorer (AMOLF, free), EasyMSI (CEA), Mirion (JLU), MSiReader (NCSU, free) and SpectralAnalysis. For processing .imzML files with the free statistical and graphics language R, a collection of R scripts is available, which permits parallel processing of large files on a local computer, a remote cluster, or the Amazon cloud. Another free statistical package for processing imzML and Analyze 7.5 data in R exists, Cardinal. SPUTNIK is an R package containing various filters to remove peaks characterized by a spatial distribution uncorrelated with the sample location, or by spatial randomness. Applications A remarkable ability of MSI is to determine the localization of biomolecules in tissues even when there is no prior information about them. This feature has made MSI a unique tool for clinical and pharmacological research. It provides information about biomolecular changes related to diseases by tracking proteins, lipids, and cell metabolism. For example, identifying biomarkers by MSI can support detailed cancer diagnosis. In addition, low-cost imaging for pharmaceutical studies can be acquired, such as images of molecular signatures that would be indicative of treatment response for a specific drug or the effectiveness of a particular drug delivery method. Ion colocalization has been studied as a way to infer local interactions between biomolecules. Similarly to colocalization in microscopy imaging, correlation has been used to quantify the similarity between ion images and generate network models. Advantages, challenges and limitations The main advantage of MSI for studying the localization and distribution of molecules within tissue is that this analysis can provide greater selectivity, more information, or more accuracy than other techniques. Moreover, this tool requires less investment of time and resources for similar results. The table below shows a comparison of advantages and disadvantages of some available techniques, including MSI, as applied to drug distribution analysis. Notes Further reading "Imaging Trace Metals in Biological Systems" pp 81–134 in "Metals, Microbes and Minerals: The Biogeochemical Side of Life" (2021) pp xiv + 341. Authors Yu, Jyao; Harankhedkar, Shefali; Nabatilan, Arielle; Fahrni, Christopher; Walter de Gruyter, Berlin. Editors Kroneck, Peter M.H. and Sosa Torres, Martha. DOI 10.1515/9783110589771-004 References Mass spectrometry
Mass spectrometry imaging
[ "Physics", "Chemistry" ]
2,625
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
13,535,930
https://en.wikipedia.org/wiki/Iteron
Iterons are directly repeated DNA sequences that play an important role in the regulation of plasmid copy number in bacterial cells. They are one of the three negative regulatory elements found in plasmids that control copy number; the others are antisense RNAs and ctRNAs. Iterons complex with cognate replication (Rep) initiator proteins to achieve the required regulatory effect. Regulation of replication Iterons have an important role in plasmid replication. An iteron-containing plasmid origin of replication typically contains about five iterons, each about 20 base pairs in length. These iterons provide a saturation site for initiator proteins and promote replication, thus increasing plasmid copy number in a given cell. Limiting factors of initiation Four main limiting factors prevent initiation of replication at iteron-containing origins: Transcriptional autorepression Initiator dimerization Initiator titration Handcuffing Transcriptional auto-repression is thought to reduce initiator synthesis by repressing the formation of the Rep proteins. Since these proteins work to promote binding of the replication machinery, replication can be halted in this way. Another factor used to stop replication is dimerization, in which the Rep proteins dimerize, so that monomers of these proteins are no longer at a high enough concentration to initiate replication. Another limiting factor, titration, occurs after replication and works to prevent saturation by distributing monomers to daughter origins so that none are fully saturated. Finally, handcuffing refers to the pairing of origins, leading to inactivation; this is mediated by monomers, and inactivation is due to steric hindrance between the origins. Another, less prevalent, limitation thought to operate at these iterons is the presence of extra repeats. If a plasmid contains an extra supply of iterons outside of the saturation site, this can decrease plasmid copy number; in contrast, removing these extra iterons will increase copy number. Replicon structure Plasmids under the control of iterons are known to have very similar structures. This structure consists of an origin of replication upstream of a gene that codes for a replication initiator protein. The iterons themselves are known to cover about half of the origin of replication. Usually, iterons on the same plasmid are highly conserved, whereas iterons on different plasmids still exhibit homology but are less conserved. This suggests that iterons could be evolutionarily related. Replication initiator proteins The replication initiator protein (Rep) plays a key role in the initiation of replication in plasmids. In its monomer form, Rep binds an iteron and promotes replication. The protein itself is known to contain two independent N-terminal and C-terminal globular domains that bind to two domains of the iteron. The dimer version of the protein is generally inactive in iteron binding; however, it is known to bind to the repE operator. This operator contains half of the iteron sequence, making it able to bind the dimer and promote gene expression. Plasmids containing iterons are all organized very similarly in structure: the gene for Rep proteins is usually found directly downstream of the origin of replication. This means that the iterons themselves regulate the synthesis of the Rep proteins. References Genetics techniques Molecular biology
Iteron
[ "Chemistry", "Engineering", "Biology" ]
720
[ "Genetics techniques", "Biochemistry", "Genetic engineering", "Molecular biology" ]
3,084,295
https://en.wikipedia.org/wiki/Linear%20network%20coding
In computer networking, linear network coding is a technique in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations. Linear network coding may be used to improve a network's throughput, efficiency, and scalability, as well as to reduce attacks and eavesdropping. The nodes of a network take several packets and combine them for transmission. This process may be used to attain the maximum possible information flow in a network. It has been proven that, theoretically, linear coding is enough to achieve the upper bound in multicast problems with one source. However, linear coding is not sufficient in general, even for more general versions of linearity such as convolutional coding and filter-bank coding. Finding optimal coding solutions for general network problems with arbitrary demands is a hard problem, which can be NP-hard and even undecidable. Encoding and decoding In a linear network coding problem, a group of nodes are involved in moving the data from source nodes to sink nodes. Each node generates new packets which are linear combinations of past received packets, multiplying them by coefficients chosen from a finite field, typically of size 2^s. More formally, each node with indegree In(k) generates a message X_k from the linear combination of received messages M_k^1, ..., M_k^In(k) by the formula X_k = Σ_{i=1..In(k)} g_k^i · M_k^i, where the values g_k^i are coefficients selected from GF(2^s). Since operations are computed in a finite field, the generated message is of the same length as the original messages. Each node forwards the computed value X_k along with the coefficients g_k^i used at level k. Sink nodes receive these network-coded messages and collect them in a matrix. The original messages can be recovered by performing Gaussian elimination on the matrix. In reduced row echelon form, decoded packets correspond to rows of the form [0 ... 0 1 0 ... 0 | m_i], i.e. a unit coefficient vector followed by the recovered message m_i. Background A network is represented by a directed graph G = (V, E, C), where V is the set of nodes or vertices, E is the set of directed links (or edges), and C gives the capacity of each link of E. Let T(s, t) be the maximum possible throughput from node s to node t. By the max-flow min-cut theorem, T(s, t) is upper bounded by the minimum capacity of all cuts between these two nodes, which is the sum of the capacities of the edges on a cut. Karl Menger proved that there is always a set of edge-disjoint paths achieving the upper bound in a unicast scenario, known as the max-flow min-cut theorem. Later, the Ford–Fulkerson algorithm was proposed to find such paths in polynomial time. Then, Edmonds proved in the paper "Edge-Disjoint Branchings" that the upper bound in the broadcast scenario is also achievable, and proposed a polynomial-time algorithm. However, the situation in the multicast scenario is more complicated, and in fact, such an upper bound can't be reached using traditional routing ideas. Ahlswede et al. proved that it can be achieved if additional computing tasks (incoming packets are combined into one or several outgoing packets) can be done in the intermediate nodes. The Butterfly Network The butterfly network is often used to illustrate how linear network coding can outperform routing. Two source nodes (at the top of the picture) have information A and B that must be transmitted to the two destination nodes (at the bottom). Each destination node wants to know both A and B. Each edge can carry only a single value (we can think of an edge transmitting a bit in each time slot). If only routing were allowed, then the central link would be able to carry only A or B, but not both.
Suppose we send A through the center; then the left destination would receive A twice and not know B at all. Sending B through the center poses a similar problem for the right destination. We say that routing is insufficient because no routing scheme can transmit both A and B to both destinations simultaneously. Meanwhile, it takes four time slots in total for both destination nodes to know A and B. Using a simple code, as shown, A and B can be transmitted to both destinations simultaneously by sending the sum of the symbols through the two relay nodes – encoding A and B using the formula "A+B". The left destination receives A and A + B, and can calculate B by subtracting the two values. Similarly, the right destination will receive B and A + B, and will also be able to determine both A and B. Therefore, with network coding, it takes only three time slots and improves the throughput.
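Over GF(2), the "A + B" operation is a bitwise XOR, and subtraction is the same operation, so each destination can recover the missing message locally. A minimal sketch in Python (the bit values are arbitrary, chosen only for illustration):

# Butterfly-network coding over GF(2): "+" is XOR, and so is "-".
A, B = 0b1011, 0b0110      # messages from the two sources
center = A ^ B             # the single central link carries A + B

# Left sink receives A directly plus A + B; right sink receives B plus A + B.
assert (A ^ center) == B   # left destination recovers B = A + (A + B)
assert (B ^ center) == A   # right destination recovers A = B + (A + B)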
Random Linear Network Coding Random linear network coding (RLNC) is a simple yet powerful encoding scheme, which in broadcast transmission schemes allows close to optimal throughput using a decentralized algorithm. Nodes transmit random linear combinations of the packets they receive, with coefficients chosen randomly with a uniform distribution from a Galois field. If the field size is sufficiently large, the probability that the receiver(s) will obtain linearly independent combinations (and therefore obtain innovative information) approaches 1. It should, however, be noted that although random linear network coding has excellent throughput performance, if a receiver obtains an insufficient number of packets, it is extremely unlikely that it can recover any of the original packets. This can be addressed by sending additional random linear combinations until the receiver obtains the appropriate number of packets. Operation and key parameters There are three key parameters in RLNC. The first one is the generation size. In RLNC, the original data transmitted over the network is divided into packets. The source and intermediate nodes in the network can combine and recombine the set of original and coded packets. The original packets form a block, usually called a generation. The number of original packets combined and recombined together is the generation size. The second parameter is the packet size. Usually, the size of the original packets is fixed. In the case of unequally sized packets, these can be zero-padded if they are shorter or split into multiple packets if they are longer. In practice, the packet size can be the size of the maximum transmission unit (MTU) of the underlying network protocol. For example, it can be around 1500 bytes in an Ethernet frame. The third key parameter is the Galois field used. In practice, the most commonly used Galois fields are binary extension fields, and the most commonly used field sizes are the binary field GF(2) and the so-called binary-8 field GF(2^8). In the binary field, each element is one bit long, while in binary-8, it is one byte long. Since the packet size is usually larger than the field size, each packet is seen as a set of elements from the Galois field (usually referred to as symbols) appended together. The packets have a fixed number of symbols (Galois field elements), and since all the operations are performed over Galois fields, the size of the packets does not change with subsequent linear combinations. The sources and the intermediate nodes can combine any subset of the original and previously coded packets by performing linear operations. To form a coded packet in RLNC, the original and previously coded packets are multiplied by randomly chosen coefficients and added together. Since each packet is just an appended set of Galois field elements, the operations of multiplication and addition are performed symbol-wise over each of the individual symbols of the packets, as illustrated in the example below. To preserve the statelessness of the code, the coding coefficients used to generate the coded packets are appended to the packets transmitted over the network. Therefore, each node in the network can see what coefficients were used to generate each coded packet. One novelty of linear network coding over traditional block codes is that it allows the recombination of previously coded packets into new and valid coded packets. This process is usually called recoding. After a recoding operation, the size of the appended coding coefficients does not change. Since all the operations are linear, the state of the recoded packet can be preserved by applying the same operations of addition and multiplication to the payload and the appended coding coefficients. The following example illustrates this process. Any destination node must collect enough linearly independent coded packets to be able to reconstruct the original data. Each coded packet can be understood as a linear equation where the coefficients are known, since they are appended to the packet. In these equations, each of the original packets is an unknown. To solve the linear system of equations, the destination needs at least as many linearly independent equations (packets) as there are original packets in the generation. Example In the figure, we can see an example of two packets linearly combined into a new coded packet. In the example, we have two packets, P1 and P2. The generation size of the example is two; we know this because each packet has two coding coefficients (c1, c2) appended. The appended coefficients can take any value from the Galois field. However, an original, uncoded data packet would have the coefficients (1, 0) or (0, 1) appended, which means that it is constructed by a linear combination of zero times one of the packets plus one times the other packet. Any coded packet would have other coefficients appended. Since network coding can be applied at any layer of the communication protocol stack, these packets can have a header from the other layers, which is ignored in the network coding operations. Now, let us assume that a network node wants to produce a new coded packet combining P1 and P2. In RLNC, it will randomly choose two coding coefficients, say α and β. The node will multiply each symbol of P1 by α, and each symbol of P2 by β. Then, it will add the results symbol-wise to produce the new coded data. It will perform the same operations of multiplication and addition on the coding coefficients of the coded packets.
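The example above can be turned into a compact, runnable sketch of RLNC encoding and Gaussian-elimination decoding over the binary field GF(2). The generation size, packet contents, and function names are made up for illustration; production systems typically use GF(2^8) with table-driven arithmetic, but the structure is the same.

import random

G = 4                                   # generation size
originals = [[random.randint(0, 1) for _ in range(8)] for _ in range(G)]

def encode(packets):
    """One random linear combination: returns (coefficients, payload)."""
    coeffs = [random.randint(0, 1) for _ in packets]
    payload = [0] * len(packets[0])
    for c, p in zip(coeffs, packets):
        if c:                            # multiply by 1 then add = XOR in
            payload = [a ^ b for a, b in zip(payload, p)]
    return coeffs, payload

def decode(coded):
    """Gaussian elimination over GF(2) on [coefficients | payload] rows."""
    rows = [c + p for c, p in coded]
    for col in range(G):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None                  # not enough independent packets yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [row[G:] for row in rows[:G]] # identity part maps rows to originals

coded = [encode(originals) for _ in range(G + 2)]  # a couple of spare packets
result = decode(coded)
assert result is None or result == originals

Recoding falls out of the same structure: a relay holding coded rows can XOR any subset of them (coefficients and payloads together) to emit a new, equally valid coded packet without ever decoding.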
Misconceptions Linear network coding is still a relatively new subject. However, the topic has been vastly researched over the last twenty years. Nevertheless, there are still some misconceptions that are no longer valid: Decoding computational complexity: Network coding decoders have been improved over the years. Nowadays, the algorithms are highly efficient and parallelizable. In 2016, with Intel Core i5 processors with SIMD instructions enabled, the decoding goodput of network coding was 750 MB/s for a generation size of 16 packets and 250 MB/s for a generation size of 64 packets. Furthermore, today's algorithms can be vastly parallelized, increasing the encoding and decoding goodput even further. Transmission overhead: It is usually thought that the transmission overhead of network coding is high due to the need to append the coding coefficients to each coded packet. In reality, this overhead is negligible in most applications. The overhead due to coding coefficients can be computed as follows. Each packet has one appended coding coefficient per original packet in the generation. The size of each coefficient is the number of bits needed to represent one element of the Galois field. In practice, most network coding applications use a generation size of no more than 32 packets per generation and Galois fields of 256 elements (binary-8). With these numbers, each packet needs 32 × 1 byte = 32 bytes of appended overhead. If each packet is 1500 bytes long (i.e. the Ethernet MTU), then 32 bytes represent an overhead of only 2%. Overhead due to linear dependencies: Since the coding coefficients are chosen randomly in RLNC, there is a chance that some transmitted coded packets are not beneficial to the destination because they are formed using a linearly dependent combination of packets. However, this overhead is negligible in most applications. The linear dependencies depend on the Galois field's size and are practically independent of the generation size used. We can illustrate this with the following example. Let us assume we are using a Galois field of q elements and a generation size of g packets. If the destination has not received any coded packet, we say it has g degrees of freedom, and then almost any coded packet will be useful and innovative. In fact, only the zero packet (only zeroes in the coding coefficients) will be non-innovative. The probability of generating the zero packet is equal to the probability of each of the g coding coefficients being equal to the zero element of the Galois field, i.e., the probability of a non-innovative packet is q^−g. With each successive innovative transmission, it can be shown that the exponent of the probability of a non-innovative packet is reduced by one. When the destination has received g − 1 innovative packets (i.e., it needs only one more packet to fully decode the data), the probability of a non-innovative packet is q^−1. We can use this knowledge to calculate the expected number of linearly dependent packets per generation, as the short computation below illustrates. In the worst-case scenario, when the Galois field used contains only two elements (q = 2), the expected number of linearly dependent packets per generation is about 1.6 extra packets. If our generation size is 32 or 64 packets, this represents an overhead of 5% or 2.5%, respectively. If we use the binary-8 field (q = 2^8), then the expected number of linearly dependent packets per generation is practically zero. Since the last packets of a generation are the major contributors to the overhead due to linear dependencies, there are RLNC-based protocols, such as tunable sparse network coding, that exploit this knowledge. These protocols introduce sparsity (zero elements) in the coding coefficients at the beginning of the transmission to reduce the decoding complexity, and reduce the sparsity at the end of the transmission to reduce the overhead due to linear dependencies.
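The 1.6-packet figure quoted above can be reproduced with a short computation. This is a sketch of the geometric waiting-time argument in the text, not code from any cited implementation.

# With k of g degrees of freedom still missing, a uniformly random coded
# packet is non-innovative with probability q**(-k), so each step is a
# geometric trial; summing the expected wasted draws over all steps gives
# the expected number of linearly dependent packets per generation.
def expected_dependent(q, g):
    return sum(1.0 / (1.0 - q ** (-k)) - 1.0 for k in range(1, g + 1))

print(expected_dependent(2, 32))       # ~1.6 extra packets for GF(2)
print(expected_dependent(2 ** 8, 32))  # ~0.004 for GF(2^8): practically zero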
Applications Over the years, multiple researchers and companies have integrated network coding solutions into their applications. We can list some of the applications of network coding in different areas: VoIP: The performance of streaming services such as VoIP over wireless mesh networks can be improved with network coding by reducing the network delay and jitter. Video and audio streaming and conferencing: The performance of MPEG-4 traffic in terms of delay, packet loss, and jitter over wireless networks prone to packet erasures can be improved with RLNC. In the case of audio streaming over wireless mesh networks, the packet delivery ratio, latency, and jitter performance of the network can be significantly increased when using RLNC instead of packet-forwarding-based protocols such as simplified multicast forwarding and partial dominant pruning. The performance improvements of network coding for video conferencing are not only theoretical. In 2016, researchers built a real-world testbed of 15 wireless Android devices to evaluate the feasibility of network-coding-based video conference systems. Their results showed large improvements in packet delivery ratio and overall user experience, especially over poor-quality links, compared to multicasting technologies based on packet forwarding. Software-defined wide area networks (SD-WAN): Large industrial IoT wireless networks can benefit from network coding. Researchers showed that network coding and its channel bundling capabilities improved the performance of SD-WANs with a large number of nodes with multiple cellular connections. Nowadays, companies such as Barracuda are employing RLNC-based solutions due to their advantages in low latency, small footprint on computing devices, and low overhead. Channel bundling: Due to the statelessness characteristics of RLNC, it can be used to efficiently perform channel bundling, i.e., the transmission of information through multiple network interfaces. Since the coded packets are randomly generated, and the state of the code traverses the network together with the coded packets, a source can achieve bundling without much planning just by sending coded packets through all its network interfaces. The destination can decode the information once enough coded packets arrive, irrespective of the network interface. A video demonstrating the channel bundling capabilities of RLNC is available online. 5G private networks: RLNC can be integrated into the 5G NR standard to improve the performance of video delivery over 5G systems. In 2018, a demo presented at the Consumer Electronics Show demonstrated a practical deployment of RLNC with NFV and SDN technologies to improve video quality against packet loss due to congestion at the core network. Remote collaboration. Augmented reality remote support and training. Remote vehicle driving applications. Connected cars networks. Gaming applications such as low latency streaming and multiplayer connectivity. Healthcare applications. Industry 4.0. Satellite networks. Agricultural sensor fields. In-flight entertainment networks. Major security and firmware updates for mobile product families. Smart city infrastructure. Information-centric networking and named data networking: Linear network coding can improve the network efficiency of information-centric networking solutions by exploiting the multi-source multicast nature of such systems. It has been shown that RLNC can be integrated into distributed content delivery networks such as IPFS to increase data availability while reducing storage resources.
Alternative to forward error correction and automatic repeat requests in traditional and wireless networks with packet loss, such as Coded TCP and Multi-user ARQ. Protection against network attacks such as snooping, eavesdropping, replay, or data corruption. Digital file distribution and P2P file sharing, e.g. the Avalanche filesystem from Microsoft. Distributed storage. Throughput increase in wireless mesh networks, e.g.: COPE, CORE, Coding-aware routing, and B.A.T.M.A.N. Buffer and delay reduction in spatial sensor networks: Spatial buffer multiplexing. Wireless broadcast: RLNC can reduce the number of packet transmissions for a single-hop wireless multicast network, and hence improve network bandwidth. Distributed file sharing. Low-complexity video streaming to mobile devices. Device-to-device extensions. See also Secret sharing protocol Homomorphic signatures for network coding Triangular network coding References Fragouli, C.; Le Boudec, J. & Widmer, J. "Network coding: An instant primer" in Computer Communication Review, 2006. https://doi.org/10.1145/1111322.1111337 Ali Farzamnia, Sharifah K. Syed-Yusof, Norsheila Fisa "Multicasting Multiple Description Coding Using p-Cycle Network Coding", KSII Transactions on Internet and Information Systems, Vol 7, No 12, 2013. External links Network Coding Homepage A network coding bibliography Raymond W. Yeung, Information Theory and Network Coding, Springer 2008, http://iest2.ie.cuhk.edu.hk/~whyeung/book2/ Raymond W. Yeung et al., Network Coding Theory, now Publishers, 2005, http://iest2.ie.cuhk.edu.hk/~whyeung/netcode/monograph.html Christina Fragouli et al., Network Coding: An Instant Primer, ACM SIGCOMM 2006, http://infoscience.epfl.ch/getfile.py?mode=best&recid=58339. Avalanche Filesystem, http://research.microsoft.com/en-us/projects/avalanche/default.aspx Random Network Coding, https://web.archive.org/web/20060618083034/http://www.mit.edu/~medard/coding1.htm Digital Fountain Codes, http://www.icsi.berkeley.edu/~luby/ Coding-Aware Routing, https://web.archive.org/web/20081011124616/http://arena.cse.sc.edu/papers/rocx.secon06.pdf MIT offers a course: Introduction to Network Coding Network coding: Networking's next revolution? Coding-aware protocol design for wireless networks: http://scholarcommons.sc.edu/etd/230/ Coding theory Information theory Finite fields Network performance Wireless sensor network
Linear network coding
[ "Mathematics", "Technology", "Engineering" ]
4,081
[ "Discrete mathematics", "Coding theory", "Telecommunications engineering", "Applied mathematics", "Wireless networking", "Wireless sensor network", "Computer science", "Information theory" ]
3,087,385
https://en.wikipedia.org/wiki/Scalar%20theories%20of%20gravitation
Scalar theories of gravitation are field theories of gravitation in which the gravitational field is described using a scalar field, which is required to satisfy some field equation.
Note: This article focuses on relativistic classical field theories of gravitation. The best known relativistic classical field theory of gravitation, general relativity, is a tensor theory, in which the gravitational interaction is described using a tensor field.
Newtonian gravity
The prototypical scalar theory of gravitation is Newtonian gravitation. In this theory, the gravitational interaction is completely described by the potential $\Phi$, which is required to satisfy the Poisson equation, with the mass density acting as the source of the field. To wit:
$\nabla^2 \Phi = 4\pi G \rho$,
where $G$ is the gravitational constant and $\rho$ is the mass density. This field theory formulation leads directly to the familiar law of universal gravitation, $F = G m_1 m_2 / r^2$.
Nordström's theories of gravitation
The first attempts to present a relativistic (classical) field theory of gravitation were also scalar theories. Gunnar Nordström created two such theories. Nordström's first idea (1912) was to simply replace the Laplacian in the field equation of Newtonian gravity with the d'Alembertian operator $\Box$. This gives the field equation
$\Box \phi = 4\pi G \rho$.
However, several theoretical difficulties with this theory quickly arose, and Nordström dropped it. A year later, Nordström tried again, presenting the field equation
$\phi \, \Box \phi = -4\pi G \, T$,
where $T$ is the trace of the stress–energy tensor. Solutions of Nordström's second theory are conformally flat Lorentzian spacetimes. That is, the metric tensor can be written as $g_{\mu\nu} = \phi^2 \, \eta_{\mu\nu}$, where $\eta_{\mu\nu}$ is the Minkowski metric and $\phi$ is a scalar which is a function of position. This suggestion signifies that the inertial mass should depend on the scalar field.
Nordström's second theory satisfies the weak equivalence principle. However:
The theory fails to predict any deflection of light passing near a massive body (contrary to observation).
The theory predicts an anomalous perihelion precession of Mercury, but this disagrees in both sign and magnitude with the observed anomalous precession (the part which cannot be explained using Newtonian gravitation).
Despite these disappointing results, Einstein's critiques of Nordström's second theory played an important role in his development of general relativity.
Einstein's scalar theory
In 1913, Einstein (erroneously) concluded from his hole argument that general covariance was not viable. Inspired by Nordström's work, he proposed his own scalar theory. This theory employs a massless scalar field coupled to the stress–energy tensor, which is the sum of two terms. The first represents the stress–momentum–energy of the scalar field itself; the second represents the stress–momentum–energy of any matter which may be present, as measured by an observer with velocity vector $u^\mu$ (the tangent vector to the observer's world line). (Einstein made no attempt, in this theory, to take account of possible gravitational effects of the field energy of the electromagnetic field.)
Unfortunately, this theory is not diffeomorphism covariant. This is an important consistency condition, so Einstein dropped this theory in late 1914. Associating the scalar field with the metric leads to Einstein's later conclusions that the theory of gravitation he sought could not be a scalar theory. Indeed, the theory he finally arrived at in 1915, general relativity, is a tensor theory, not a scalar theory, with a 2-tensor, the metric, as the potential.
Unlike his 1913 scalar theory, it is generally covariant, and it does take into account the field energy–momentum–stress of the electromagnetic field (or any other nongravitational field).
Additional variations
Kaluza–Klein theory involves the use of a scalar gravitational field in addition to the electromagnetic field potential in an attempt to create a five-dimensional unification of gravity and electromagnetism. Its generalization with a 5th variable component of the metric that leads to a variable gravitational constant was first given by Pascual Jordan.
Brans–Dicke theory is a scalar–tensor theory, not a scalar theory, meaning that it represents the gravitational interaction using both a scalar field and a tensor field. We mention it here because one of the field equations of this theory involves only the scalar field and the trace of the stress–energy tensor, as in Nordström's theory. Moreover, the Brans–Dicke theory is equivalent to the independently derived theory of Jordan (hence it is often referred to as the Jordan–Brans–Dicke or JBD theory). The Brans–Dicke theory couples a scalar field with the curvature of space-time, is self-consistent and, assuming appropriate values for a tunable constant, has not been ruled out by observation. The Brans–Dicke theory is generally regarded as a leading competitor of general relativity, which is a pure tensor theory. However, observations require the Brans–Dicke coupling parameter to be very large, in which limit the theory's predictions approach those of general relativity.
Zee combined the idea of the Brans–Dicke theory with the Higgs mechanism of symmetry breaking for mass generation, which led to a scalar–tensor theory with the Higgs field as the scalar field, in which the scalar field is massive (short-ranged). An example of this approach was proposed by H. Dehnen and H. Frommert in 1991, starting from the nature of the Higgs field, which interacts gravitationally and in a Yukawa (long-ranged) fashion with the particles that acquire mass through it.
The Watt–Misner theory (1999) is a recent example of a scalar theory of gravitation. It is not intended as a viable theory of gravitation (since, as Watt and Misner point out, it is not consistent with observation), but as a toy theory which can be useful in testing numerical relativity schemes. It also has pedagogical value.
See also
Nordström's theory of gravitation
Theories of gravity
Scalar theories of gravitation
[ "Physics" ]
1,336
[ "Theoretical physics", "Theories of gravity" ]
3,087,602
https://en.wikipedia.org/wiki/Topological%20order
In physics, topological order is a kind of order in the zero-temperature phase of matter (also known as quantum matter). Macroscopically, topological order is defined and described by robust ground state degeneracy and quantized non-abelian geometric phases of degenerate ground states. Microscopically, topological orders correspond to patterns of long-range quantum entanglement. States with different topological orders (or different patterns of long range entanglements) cannot change into each other without a phase transition.
Various topologically ordered states have interesting properties, such as (1) topological degeneracy and fractional statistics or non-abelian group statistics that can be used to realize a topological quantum computer; (2) perfect conducting edge states that may have important device applications; (3) emergent gauge field and Fermi statistics that suggest a quantum information origin of elementary particles; (4) topological entanglement entropy that reveals the entanglement origin of topological order, etc. Topological order is important in the study of several physical systems such as spin liquids and the quantum Hall effect, along with potential applications to fault-tolerant quantum computation.
Topological insulators and topological superconductors (beyond 1D) do not have topological order as defined above, their entanglements being only short-ranged, but are examples of symmetry-protected topological order.
Background
Matter composed of atoms can have different properties and appear in different forms, such as solid, liquid, superfluid, etc. These various forms of matter are often called states of matter or phases. According to condensed matter physics and the principle of emergence, the different properties of materials generally arise from the different ways in which the atoms are organized in the materials. Those different organizations of the atoms (or other particles) are formally called the orders in the materials.
Atoms can organize in many ways which lead to many different orders and many different types of materials. Landau symmetry-breaking theory provides a general understanding of these different orders. It points out that different orders really correspond to different symmetries in the organizations of the constituent atoms. As a material changes from one order to another order (i.e., as the material undergoes a phase transition), what happens is that the symmetry of the organization of the atoms changes.
For example, atoms have a random distribution in a liquid, so a liquid remains the same as we displace atoms by an arbitrary distance. We say that a liquid has a continuous translation symmetry. After a phase transition, a liquid can turn into a crystal. In a crystal, atoms organize into a regular array (a lattice). A lattice remains unchanged only when we displace it by a particular distance (integer times a lattice constant), so a crystal has only discrete translation symmetry. The phase transition between a liquid and a crystal is a transition that reduces the continuous translation symmetry of the liquid to the discrete symmetry of the crystal. Such a change in symmetry is called symmetry breaking. The essence of the difference between liquids and crystals is therefore that the organizations of atoms have different symmetries in the two phases. Landau symmetry-breaking theory has been a very successful theory.
For a long time, physicists believed that Landau theory described all possible orders in materials, and all possible (continuous) phase transitions.
Discovery and characterization
However, since the late 1980s, it has become gradually apparent that Landau symmetry-breaking theory may not describe all possible orders. In an attempt to explain high-temperature superconductivity, the chiral spin state was introduced. At first, physicists still wanted to use Landau symmetry-breaking theory to describe the chiral spin state. They identified the chiral spin state as a state that breaks the time reversal and parity symmetries, but not the spin rotation symmetry. This should be the end of the story according to Landau's symmetry-breaking description of orders. However, it was quickly realized that there are many different chiral spin states that have exactly the same symmetry, so symmetry alone was not enough to characterize different chiral spin states. This means that the chiral spin states contain a new kind of order that is beyond the usual symmetry description. The proposed, new kind of order was named "topological order". The name "topological order" is motivated by the low energy effective theory of the chiral spin states, which is a topological quantum field theory (TQFT). New quantum numbers, such as ground state degeneracy (which can be defined on a closed space or an open space with gapped boundaries, including both Abelian topological orders and non-Abelian topological orders) and the non-Abelian geometric phase of degenerate ground states, were introduced to characterize and define the different topological orders in chiral spin states. More recently, it was shown that topological orders can also be characterized by topological entropy.
But experiments soon indicated that chiral spin states do not describe high-temperature superconductors, and the theory of topological order became a theory with no experimental realization. However, the similarity between chiral spin states and quantum Hall states allows one to use the theory of topological order to describe different quantum Hall states. Just like chiral spin states, different quantum Hall states all have the same symmetry and are outside the Landau symmetry-breaking description. One finds that the different orders in different quantum Hall states can indeed be described by topological orders, so the topological order does have experimental realizations.
The fractional quantum Hall (FQH) state was discovered in 1982, before the introduction of the concept of topological order in 1989. But the FQH state is not the first experimentally discovered topologically ordered state. The superconductor, discovered in 1911, is the first experimentally discovered topologically ordered state; it has Z2 topological order.
Although topologically ordered states usually appear in strongly interacting boson/fermion systems, a simple kind of topological order can also appear in free fermion systems. This kind of topological order corresponds to the integral quantum Hall state, which can be characterized by the Chern number of the filled energy band if we consider the integer quantum Hall state on a lattice. Theoretical calculations have proposed that such Chern numbers can be measured for a free fermion system experimentally. It is also well known that such a Chern number can be measured (maybe indirectly) by edge states.
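The Chern number of a filled lattice band can be computed numerically from the Bloch wave functions alone. Below is a minimal sketch using the Fukui–Hatsugai–Suzuki lattice method; the two-band model (a Qi–Wu–Zhang-type Hamiltonian), the grid size, and the parameter values are illustrative assumptions rather than anything specified in this article:

```python
import numpy as np

# Pauli matrices.
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_h(kx, ky, m):
    """Bloch Hamiltonian of an illustrative two-band lattice model."""
    return np.sin(kx) * SX + np.sin(ky) * SY + (m + np.cos(kx) + np.cos(ky)) * SZ

def chern_number(m, n=40):
    """Chern number of the lower band on an n x n Brillouin-zone grid."""
    ks = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(bloch_h(kx, ky, m))
            u[i, j] = vecs[:, 0]              # eigenvector of the lower band
    total = 0.0
    for i in range(n):                        # lattice field strength per plaquette
        for j in range(n):
            u1, u2 = u[i, j], u[(i + 1) % n, j]
            u3, u4 = u[(i + 1) % n, (j + 1) % n], u[i, (j + 1) % n]
            total += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                              * np.vdot(u3, u4) * np.vdot(u4, u1))
    return round(total / (2.0 * np.pi))

print(chern_number(m=-1.0))   # gapped topological phase: prints +1 or -1, convention-dependent
print(chern_number(m=-3.0))   # trivial phase: prints 0
```

The product of overlaps around each plaquette is gauge invariant, so the arbitrary phases attached to the eigenvectors by the eigensolver cancel, and for a gapped band the sum converges to an exact integer already on modest grids.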
The most important characterization of topological orders would be the underlying fractionalized excitations (such as anyons) and their fusion statistics and braiding statistics (which can go beyond the quantum statistics of bosons or fermions). Current research shows that loop and string-like excitations exist for topological orders in 3+1 dimensional spacetime, and their multi-loop/string-braiding statistics are the crucial signatures for identifying 3+1 dimensional topological orders. The multi-loop/string-braiding statistics of 3+1 dimensional topological orders can be captured by the link invariants of particular topological quantum field theories in 4 spacetime dimensions.
Mechanism
A large class of 2+1D topological orders is realized through a mechanism called string-net condensation. This class of topological orders can have a gapped edge and is classified by unitary fusion category (or monoidal category) theory. One finds that string-net condensation can generate infinitely many different types of topological orders, which may indicate that there are many different new types of materials remaining to be discovered.
The collective motions of condensed strings give rise to excitations above the string-net condensed states. Those excitations turn out to be gauge bosons. The ends of strings are defects which correspond to another type of excitation. Those excitations are the gauge charges and can carry Fermi or fractional statistics.
The condensations of other extended objects such as "membranes", "brane-nets", and fractals also lead to topologically ordered phases and "quantum glassiness".
Mathematical formulation
We know that group theory is the mathematical foundation of symmetry-breaking orders. What is the mathematical foundation of topological order? It was found that a subclass of 2+1D topological orders—Abelian topological orders—can be classified by a K-matrix approach. String-net condensation suggests that tensor category theory (such as fusion category or monoidal category theory) is part of the mathematical foundation of topological order in 2+1D. More recent research suggests that (up to invertible topological orders that have no fractionalized excitations):
2+1D bosonic topological orders are classified by unitary modular tensor categories.
2+1D bosonic topological orders with symmetry G are classified by G-crossed tensor categories.
2+1D bosonic/fermionic topological orders with symmetry G are classified by unitary braided fusion categories over a symmetric fusion category that have modular extensions; the symmetric fusion category is Rep(G) for bosonic systems and sRep(G) for fermionic systems.
Topological order in higher dimensions may be related to n-category theory. Quantum operator algebra is a very important mathematical tool in studying topological orders. Some also suggest that topological order is mathematically described by extended quantum symmetry.
Applications
The materials described by Landau symmetry-breaking theory have had a substantial impact on technology. For example, ferromagnetic materials that break spin rotation symmetry can be used as the media of digital information storage. A hard drive made of ferromagnetic materials can store gigabytes of information. Liquid crystals that break the rotational symmetry of molecules find wide application in display technology. Crystals that break translation symmetry lead to well-defined electronic bands which in turn allow us to make semiconducting devices such as transistors.
Different types of topological orders are even richer than different types of symmetry-breaking orders. This suggests their potential for exciting, novel applications.
One theorized application would be to use topologically ordered states as media for quantum computing, in a technique known as topological quantum computing. A topologically ordered state is a state with complicated non-local quantum entanglement. The non-locality means that the quantum entanglement in a topologically ordered state is distributed among many different particles. As a result, the pattern of quantum entanglements cannot be destroyed by local perturbations. This significantly reduces the effect of decoherence. This suggests that if we use different quantum entanglements in a topologically ordered state to encode quantum information, the information may last much longer. The quantum information encoded by the topological quantum entanglements can also be manipulated by dragging the topological defects around each other. This process may provide a physical apparatus for performing quantum computations. Therefore, topologically ordered states may provide natural media for both quantum memory and quantum computation. Such realizations of quantum memory and quantum computation may potentially be made fault tolerant.
Topologically ordered states in general have a special property: they contain non-trivial boundary states. In many cases, those boundary states become perfect conducting channels that can conduct electricity without generating heat. This can be another potential application of topological order in electronic devices.
Similarly to topological order, topological insulators also have gapless boundary states. The boundary states of topological insulators play a key role in the detection and the application of topological insulators. This observation naturally leads to a question: are topological insulators examples of topologically ordered states? In fact, topological insulators are different from the topologically ordered states defined in this article. Topological insulators only have short-ranged entanglements and have no topological order, while the topological order defined in this article is a pattern of long-range entanglement. Topological order is robust against any perturbations. It has emergent gauge theory, emergent fractional charge and fractional statistics. In contrast, topological insulators are robust only against perturbations that respect time-reversal and U(1) symmetries. Their quasi-particle excitations have no fractional charge and fractional statistics. Strictly speaking, the topological insulator is an example of symmetry-protected topological (SPT) order, where the first example of SPT order is the Haldane phase of the spin-1 chain. But the Haldane phase of the spin-2 chain has no SPT order.
Potential impact
Landau symmetry-breaking theory is a cornerstone of condensed matter physics. It is used to define the territory of condensed matter research. The existence of topological order appears to indicate that nature is much richer than Landau symmetry-breaking theory has so far indicated. So topological order opens up a new direction in condensed matter physics—a new direction of highly entangled quantum matter. We realize that quantum phases of matter (i.e. the zero-temperature phases of matter) can be divided into two classes: long range entangled states and short range entangled states.
Topological order is the notion that describes the long-range entangled states: topological order = pattern of long-range entanglements. Short-range entangled states are trivial in the sense that they all belong to one phase. However, in the presence of symmetry, even short-range entangled states are nontrivial and can belong to different phases. Those phases are said to contain SPT order. SPT order generalizes the notion of topological insulator to interacting systems.
Some suggest that topological order (or more precisely, string-net condensation) in local bosonic (spin) models has the potential to provide a unified origin for photons, electrons and other elementary particles in our universe.
See also
AKLT model
Fractionalization
Herbertsmithite
Implicate order
Quantum topology
Spin liquid
String-net liquid
Symmetry-protected topological order
Topological defect
Topological degeneracy
Topological entropy in physics
Topological quantum field theory
Topological quantum number
Topological string theory
Quantum phases Condensed matter physics Statistical mechanics
Topological order
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,592
[ "Quantum phases", "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Statistical mechanics", "Matter" ]
17,481,271
https://en.wikipedia.org/wiki/Fluorine
Fluorine is a chemical element; it has symbol F and atomic number 9. It is the lightest halogen and exists at standard conditions as a pale yellow diatomic gas. Fluorine is extremely reactive, as it reacts with all other elements except for the light inert gases. It is highly toxic.
Among the elements, fluorine ranks 24th in cosmic abundance and 13th in crustal abundance. Fluorite, the primary mineral source of fluorine, which gave the element its name, was first described in 1529; as it was added to metal ores to lower their melting points for smelting, the Latin verb fluo, meaning "to flow", gave the mineral its name. Proposed as an element in 1810, fluorine proved difficult and dangerous to separate from its compounds, and several early experimenters died or sustained injuries from their attempts. Only in 1886 did French chemist Henri Moissan isolate elemental fluorine using low-temperature electrolysis, a process still employed for modern production. Industrial production of fluorine gas for uranium enrichment, its largest application, began during the Manhattan Project in World War II.
Owing to the expense of refining pure fluorine, most commercial applications use fluorine compounds, with about half of mined fluorite used in steelmaking. The rest of the fluorite is converted into hydrogen fluoride en route to various organic fluorides, or into cryolite, which plays a key role in aluminium refining. The carbon–fluorine bond is usually very stable. Organofluorine compounds are widely used as refrigerants, electrical insulation, and PTFE (Teflon). Pharmaceuticals such as atorvastatin and fluoxetine contain C−F bonds. The fluoride ion from dissolved fluoride salts inhibits dental cavities and so finds use in toothpaste and water fluoridation. Global fluorochemical sales amount to more than US$15 billion a year.
Fluorocarbon gases are generally greenhouse gases with global-warming potentials 100 to 23,500 times that of carbon dioxide, and SF6 has the highest global warming potential of any known substance. Organofluorine compounds often persist in the environment due to the strength of the carbon–fluorine bond. Fluorine has no known metabolic role in mammals; a few plants and marine sponges synthesize organofluorine poisons (most often monofluoroacetates) that help deter predation.
Characteristics
Electron configuration
Fluorine atoms have nine electrons, one fewer than neon, and electron configuration 1s²2s²2p⁵: two electrons in a filled inner shell and seven in an outer shell requiring one more to be filled. The outer electrons are ineffective at nuclear shielding, and experience a high effective nuclear charge of 9 − 2 = 7; this affects the atom's physical properties. Fluorine's first ionization energy is third-highest among all elements, behind helium and neon, which complicates the removal of electrons from neutral fluorine atoms. It also has a high electron affinity, second only to chlorine, and tends to capture an electron to become isoelectronic with the noble gas neon; it has the highest electronegativity of any reactive element. Fluorine atoms have a small covalent radius of around 60 picometers, similar to those of its period neighbors oxygen and neon.
Reactivity
The bond energy of difluorine is much lower than that of either Cl2 or Br2 and similar to the easily cleaved peroxide bond; this, along with high electronegativity, accounts for fluorine's easy dissociation, high reactivity, and strong bonds to non-fluorine atoms.
Conversely, bonds to other atoms are very strong because of fluorine's high electronegativity. Unreactive substances like powdered steel, glass fragments, and asbestos fibers react quickly with cold fluorine gas; wood and water spontaneously combust under a fluorine jet.
Reactions of elemental fluorine with metals require varying conditions. Alkali metals cause explosions and alkaline earth metals display vigorous activity in bulk; to prevent passivation from the formation of metal fluoride layers, most other metals such as aluminium and iron must be powdered, and noble metals require pure fluorine gas at 300–450 °C. Some solid nonmetals (sulfur, phosphorus) react vigorously in liquid fluorine. Hydrogen sulfide and sulfur dioxide combine readily with fluorine, the latter sometimes explosively; sulfuric acid exhibits much less activity, requiring elevated temperatures.
Hydrogen, like some of the alkali metals, reacts explosively with fluorine. Carbon, as lamp black, reacts at room temperature to yield tetrafluoromethane. Graphite combines with fluorine above 400 °C to produce non-stoichiometric carbon monofluoride; higher temperatures generate gaseous fluorocarbons, sometimes with explosions. Carbon dioxide and carbon monoxide react at or just above room temperature, whereas paraffins and other organic chemicals generate strong reactions: even completely substituted haloalkanes such as carbon tetrachloride, normally incombustible, may explode. Although nitrogen trifluoride is stable, nitrogen requires an electric discharge at elevated temperatures for reaction with fluorine to occur, due to the very strong triple bond in elemental nitrogen; ammonia may react explosively. Oxygen does not combine with fluorine under ambient conditions, but can be made to react using electric discharge at low temperatures and pressures; the products tend to disintegrate into their constituent elements when heated. Heavier halogens react readily with fluorine, as does the noble gas radon; of the other noble gases, only xenon and krypton react, and only under special conditions. Argon does not react with fluorine gas; however, it does form a compound with fluorine, argon fluorohydride.
Phases
At room temperature, fluorine is a gas of diatomic molecules, pale yellow when pure (sometimes described as yellow-green). It has a characteristic halogen-like pungent and biting odor detectable at 20 ppb. Fluorine condenses into a bright yellow liquid at −188 °C, a transition temperature similar to those of oxygen and nitrogen.
Fluorine has two solid forms, α- and β-fluorine. The latter crystallizes at −220 °C and is transparent and soft, with the same disordered cubic structure of freshly crystallized solid oxygen, unlike the orthorhombic systems of other solid halogens. Further cooling to −228 °C induces a phase transition into opaque and hard α-fluorine, which has a monoclinic structure with dense, angled layers of molecules. The transition from β- to α-fluorine is more exothermic than the condensation of fluorine, and can be violent.
Isotopes
Only one isotope of fluorine occurs naturally in abundance, the stable isotope 19F. It has a high magnetogyric ratio and exceptional sensitivity to magnetic fields; because it is also the only stable isotope, it is used in magnetic resonance imaging. Eighteen radioisotopes with mass numbers 13–31 have been synthesized, of which 18F is the most stable, with a half-life of 109.734 minutes.
18F is a natural trace radioisotope produced by cosmic ray spallation of atmospheric argon as well as by reaction of protons with natural oxygen: 18O + p → 18F + n. Other radioisotopes have half-lives less than 70 seconds; most decay in less than half a second. The isotopes 17F and 18F undergo β+ decay and electron capture, lighter isotopes decay by proton emission, and those heavier than 19F undergo β− decay (the heaviest ones with delayed neutron emission). Two metastable isomers of fluorine are known, 18mF, with a half-life of 162(7) nanoseconds, and 26mF, with a half-life of 2.2(1) milliseconds.
Occurrence
Universe
Among the lighter elements, fluorine's abundance value of 400 ppb (parts per billion) – 24th among elements in the universe – is exceptionally low: other elements from carbon to magnesium are twenty or more times as common. This is because stellar nucleosynthesis processes bypass fluorine, and any fluorine atoms otherwise created have high nuclear cross sections, allowing collisions with hydrogen or helium to generate oxygen or neon respectively.
Beyond this transient existence, three explanations have been proposed for the presence of fluorine: during type II supernovae, bombardment of neon atoms by neutrinos could transmute them to fluorine; the solar wind of Wolf–Rayet stars could blow fluorine away from any hydrogen or helium atoms; or fluorine is borne out on convection currents arising from fusion in asymptotic giant branch stars.
Earth
Fluorine is the 13th most abundant element in Earth's crust at 600–700 ppm (parts per million) by mass. Though believed not to occur naturally, elemental fluorine has been shown to be present as an occlusion in antozonite, a variant of fluorite. Most fluorine exists as fluoride-containing minerals. Fluorite, fluorapatite and cryolite are the most industrially significant. Fluorite (CaF2), also known as fluorspar, abundant worldwide, is the main source of fluoride, and hence fluorine. China and Mexico are the major suppliers. Fluorapatite (Ca5(PO4)3F), which contains most of the world's fluoride, is an inadvertent source of fluoride as a byproduct of fertilizer production. Cryolite (Na3AlF6), used in the production of aluminium, is the most fluorine-rich mineral. Economically viable natural sources of cryolite have been exhausted, and most is now synthesised commercially. Other minerals such as topaz contain fluorine. Fluorides, unlike other halides, are insoluble and do not occur in commercially favorable concentrations in saline waters. Trace quantities of organofluorines of uncertain origin have been detected in volcanic eruptions and geothermal springs. The existence of gaseous fluorine in crystals, suggested by the smell of crushed antozonite, is contentious; a 2012 study reported the presence of 0.04% F2 by weight in antozonite, attributing these inclusions to radiation from the presence of tiny amounts of uranium.
History
Early discoveries
In 1529, Georgius Agricola described fluorite as an additive used to lower the melting point of metals during smelting. He penned the Latin word fluorēs (fluor, flow) for fluorite rocks. The name later evolved into fluorspar (still commonly used) and then fluorite. The composition of fluorite was later determined to be calcium difluoride.
Hydrofluoric acid was used in glass etching from 1720 onward. Andreas Sigismund Marggraf first characterized it in 1764 when he heated fluorite with sulfuric acid, and the resulting solution corroded its glass container.
Swedish chemist Carl Wilhelm Scheele repeated the experiment in 1771, and named the acidic product fluss-spats-syran (fluorspar acid). In 1810, the French physicist André-Marie Ampère suggested that hydrogen and an element analogous to chlorine constituted hydrofluoric acid. He also proposed, in a letter to Sir Humphry Davy dated August 26, 1812, that this then-unknown substance may be named fluorine from fluoric acid and the -ine suffix of other halogens. This word, often with modifications, is used in most European languages; however, Greek, Russian, and some others, following Ampère's later suggestion, use the name ftor or derivatives, from the Greek φθόριος (phthorios, destructive). The New Latin name fluorum gave the element its current symbol F; Fl was used in early papers.
Isolation
Initial studies on fluorine were so dangerous that several 19th-century experimenters were deemed "fluorine martyrs" after misfortunes with hydrofluoric acid. Isolation of elemental fluorine was hindered by the extreme corrosiveness of both elemental fluorine itself and hydrogen fluoride, as well as the lack of a simple and suitable electrolyte. Edmond Frémy postulated that electrolysis of pure hydrogen fluoride to generate fluorine was feasible and devised a method to produce anhydrous samples from acidified potassium bifluoride; instead, he discovered that the resulting (dry) hydrogen fluoride did not conduct electricity. Frémy's former student Henri Moissan persevered, and after much trial and error found that a mixture of potassium bifluoride and dry hydrogen fluoride was a conductor, enabling electrolysis. To prevent rapid corrosion of the platinum in his electrochemical cells, he cooled the reaction to extremely low temperatures in a special bath, forged cells from a more resistant mixture of platinum and iridium, and used fluorite stoppers. In 1886, after 74 years of effort by many chemists, Moissan isolated elemental fluorine. In 1906, two months before his death, Moissan received the Nobel Prize in Chemistry, cited for his investigation and isolation of the element fluorine.
Later uses
The Frigidaire division of General Motors (GM) experimented with chlorofluorocarbon refrigerants in the late 1920s, and Kinetic Chemicals was formed as a joint venture between GM and DuPont in 1930, hoping to market Freon-12 (CCl2F2) as one such refrigerant. It replaced earlier and more toxic compounds, increased demand for kitchen refrigerators, and became profitable; by 1949 DuPont had bought out Kinetic and marketed several other Freon compounds. Polytetrafluoroethylene (Teflon) was serendipitously discovered in 1938 by Roy J. Plunkett while working on refrigerants at Kinetic, and its superlative chemical and thermal resistance lent it to accelerated commercialization and mass production by 1941.
Large-scale production of elemental fluorine began during World War II. Germany used high-temperature electrolysis to make tons of the planned incendiary chlorine trifluoride, and the Manhattan Project used huge quantities to produce uranium hexafluoride for uranium enrichment. Since UF6 is as corrosive as fluorine, gaseous diffusion plants required special materials: nickel for membranes, fluoropolymers for seals, and liquid fluorocarbons as coolants and lubricants. This burgeoning nuclear industry later drove post-war fluorochemical development.
Compounds
Fluorine has a rich chemistry, encompassing organic and inorganic domains. It combines with metals, nonmetals, metalloids, and most noble gases.
Fluorine's high electron affinity results in a preference for ionic bonding; when it forms covalent bonds, these are polar, and almost always single.
Oxidation states
In compounds, fluorine almost exclusively assumes an oxidation state of −1. Fluorine in F2 is defined to have oxidation state 0. The unstable species F2− and F3−, which decompose at around 40 K, have intermediate oxidation states; a few related species are predicted to be stable.
Metals
Alkali metals form ionic and highly soluble monofluorides; these have the cubic arrangement of sodium chloride and analogous chlorides. Alkaline earth difluorides possess strong ionic bonds but are insoluble in water, with the exception of beryllium difluoride, which also exhibits some covalent character and has a quartz-like structure. Rare earth elements and many other metals form mostly ionic trifluorides. Covalent bonding first comes to prominence in the tetrafluorides: those of zirconium, hafnium and several actinides are ionic with high melting points, while those of titanium, vanadium, and niobium are polymeric, melting or decomposing at no more than 350 °C. Pentafluorides continue this trend with their linear polymers and oligomeric complexes. Thirteen metal hexafluorides are known, all octahedral; most are volatile solids, but MoF6 and ReF6 are liquids and WF6 is a gas. Rhenium heptafluoride, the only characterized metal heptafluoride, is a low-melting molecular solid with pentagonal bipyramidal molecular geometry. Metal fluorides with more fluorine atoms are particularly reactive.
Hydrogen
Hydrogen and fluorine combine to yield hydrogen fluoride, in which discrete molecules form clusters by hydrogen bonding, resembling water more than hydrogen chloride. It boils at a much higher temperature than heavier hydrogen halides and, unlike them, is miscible with water. Hydrogen fluoride readily hydrates on contact with water to form aqueous hydrogen fluoride, also known as hydrofluoric acid. Unlike the other hydrohalic acids, which are strong, hydrofluoric acid is a weak acid at low concentrations. However, it can attack glass, something the other acids cannot do.
Other reactive nonmetals
Binary fluorides of metalloids and p-block nonmetals are generally covalent and volatile, with varying reactivities. Period 3 and heavier nonmetals can form hypervalent fluorides. Boron trifluoride is planar and possesses an incomplete octet. It functions as a Lewis acid and combines with Lewis bases like ammonia to form adducts. Carbon tetrafluoride is tetrahedral and inert; its group analogues, silicon and germanium tetrafluoride, are also tetrahedral but behave as Lewis acids. The pnictogens form trifluorides that increase in reactivity and basicity with higher molecular weight, although nitrogen trifluoride resists hydrolysis and is not basic. The pentafluorides of phosphorus, arsenic, and antimony are more reactive than their respective trifluorides, with antimony pentafluoride the strongest neutral Lewis acid known apart from gold pentafluoride.
Chalcogens have diverse fluorides: unstable difluorides have been reported for oxygen (the only known compound with oxygen in an oxidation state of +2), sulfur, and selenium; tetrafluorides and hexafluorides exist for sulfur, selenium, and tellurium. The latter are stabilized by more fluorine atoms and lighter central atoms, so sulfur hexafluoride is especially inert.
Chlorine, bromine, and iodine can each form mono-, tri-, and pentafluorides, but only iodine heptafluoride has been characterized among possible interhalogen heptafluorides. Many of them are powerful sources of fluorine atoms, and industrial applications using chlorine trifluoride require precautions similar to those using fluorine.
Noble gases
Noble gases, having complete electron shells, defied reaction with other elements until 1962, when Neil Bartlett reported synthesis of xenon hexafluoroplatinate; xenon difluoride, tetrafluoride, hexafluoride, and multiple oxyfluorides have been isolated since then. Among other noble gases, krypton forms a difluoride, and radon and fluorine generate a solid suspected to be radon difluoride. Binary fluorides of lighter noble gases are exceptionally unstable: argon and hydrogen fluoride combine under extreme conditions to give argon fluorohydride. Helium has no long-lived fluorides, and no neon fluoride has ever been observed; helium fluorohydride has been detected for milliseconds at high pressures and low temperatures.
Organic compounds
The carbon–fluorine bond is organic chemistry's strongest, and gives stability to organofluorines. It is almost non-existent in nature, but is used in artificial compounds. Research in this area is usually driven by commercial applications; the compounds involved are diverse and reflect the complexity inherent in organic chemistry.
Discrete molecules
The substitution of hydrogen atoms in an alkane by progressively more fluorine atoms gradually alters several properties: melting and boiling points are lowered, density increases, solubility in hydrocarbons decreases and overall stability increases. Perfluorocarbons, in which all hydrogen atoms are substituted, are insoluble in most organic solvents, reacting at ambient conditions only with sodium in liquid ammonia.
The term perfluorinated compound is used for what would otherwise be a perfluorocarbon if not for the presence of a functional group, often a carboxylic acid. These compounds share many properties with perfluorocarbons such as stability and hydrophobicity, while the functional group augments their reactivity, enabling them to adhere to surfaces or act as surfactants. Fluorosurfactants, in particular, can lower the surface tension of water more than their hydrocarbon-based analogues. Fluorotelomers, which have some unfluorinated carbon atoms near the functional group, are also regarded as perfluorinated.
Polymers
Polymers exhibit the same stability increases afforded by fluorine substitution (for hydrogen) in discrete molecules; their melting points generally increase too. Polytetrafluoroethylene (PTFE), the simplest fluoropolymer and perfluoro analogue of polyethylene with structural unit –CF2–, demonstrates this change as expected, but its very high melting point makes it difficult to mold. Various PTFE derivatives are less temperature-tolerant but easier to mold: fluorinated ethylene propylene replaces some fluorine atoms with trifluoromethyl groups, perfluoroalkoxy alkanes do the same with trifluoromethoxy groups, and Nafion contains perfluoroether side chains capped with sulfonic acid groups. Other fluoropolymers retain some hydrogen atoms; polyvinylidene fluoride has half the fluorine atoms of PTFE and polyvinyl fluoride has a quarter, but both behave much like perfluorinated polymers.
Production
Elemental fluorine and virtually all fluorine compounds are produced from hydrogen fluoride or its aqueous solution, hydrofluoric acid.
Hydrogen fluoride is produced in kilns by the endothermic reaction of fluorite (CaF2) with sulfuric acid:
CaF2 + H2SO4 → 2 HF(g) + CaSO4
The gaseous HF can then be absorbed in water or liquefied. About 20% of manufactured HF is a byproduct of fertilizer production, which produces hexafluorosilicic acid (H2SiF6), which can be degraded to release HF thermally and by hydrolysis:
H2SiF6 → 2 HF + SiF4
SiF4 + 2 H2O → 4 HF + SiO2
Industrial routes to F2
Moissan's method is used to produce industrial quantities of fluorine, via the electrolysis of a potassium bifluoride/hydrogen fluoride mixture: hydrogen ions are reduced at a steel container cathode and fluoride ions are oxidized at a carbon block anode, under 8–12 volts, to generate hydrogen and fluorine gas respectively. Temperatures are elevated, KF•2HF melting at about 70 °C and being electrolyzed at 70–130 °C. KF, which acts to provide electrical conductivity, is essential, since pure HF cannot be electrolyzed because it is virtually non-conductive. Fluorine can be stored in steel cylinders that have passivated interiors, at temperatures below 200 °C; otherwise nickel can be used. Regulator valves and pipework are made of nickel, the latter possibly using Monel instead. Frequent passivation, along with the strict exclusion of water and greases, must be undertaken. In the laboratory, glassware may carry fluorine gas under low pressure and anhydrous conditions; some sources instead recommend nickel-Monel-PTFE systems.
Laboratory routes
While preparing for a 1986 conference to celebrate the centennial of Moissan's achievement, Karl O. Christe reasoned that chemical fluorine generation should be feasible, since some metal fluoride anions have no stable neutral counterparts; their acidification potentially triggers oxidation instead. He devised a method which evolves fluorine at high yield and atmospheric pressure:
2 KMnO4 + 2 KF + 10 HF + 3 H2O2 → 2 K2MnF6 + 8 H2O + 3 O2↑
2 K2MnF6 + 4 SbF5 → 4 KSbF6 + 2 MnF3 + F2↑
Christe later commented that the reactants "had been known for more than 100 years and even Moissan could have come up with this scheme." As late as 2008, some references still asserted that fluorine was too reactive for any chemical isolation.
Industrial applications
Fluorite mining, which supplies most global fluorine, peaked in 1989, when 5.6 million metric tons of ore were extracted. Chlorofluorocarbon restrictions lowered this to 3.6 million tons in 1994; production has since been increasing. Around 4.5 million tons of ore and revenue of US$550 million were generated in 2003; later reports estimated 2011 global fluorochemical sales at $15 billion and predicted 2016–18 production figures of 3.5 to 5.9 million tons, and revenue of at least $20 billion. Froth flotation separates mined fluorite into two main metallurgical grades of equal proportion: 60–85% pure metspar is almost all used in iron smelting, whereas 97%+ pure acidspar is mainly converted to the key industrial intermediate hydrogen fluoride.
At least 17,000 metric tons of fluorine are produced each year. Fluorine costs only $5–8 per kilogram as uranium hexafluoride or sulfur hexafluoride, but many times more as an element because of handling challenges. Most processes using free fluorine in large amounts employ in situ generation under vertical integration.
The largest application of fluorine gas, consuming up to 7,000 metric tons annually, is in the preparation of UF6 for the nuclear fuel cycle. Fluorine is used to fluorinate uranium tetrafluoride, itself formed from uranium dioxide and hydrofluoric acid.
Fluorine is monoisotopic, so any mass differences between UF6 molecules are due to the presence of 235U or 238U, enabling uranium enrichment via gaseous diffusion or gas centrifuge. About 6,000 metric tons per year go into producing the inert dielectric SF6 for high-voltage transformers and circuit breakers, eliminating the need for hazardous polychlorinated biphenyls associated with oil-filled devices. Several fluorine compounds are used in electronics: rhenium and tungsten hexafluoride in chemical vapor deposition, tetrafluoromethane in plasma etching and nitrogen trifluoride in cleaning equipment. Fluorine is also used in the synthesis of organic fluorides, but its reactivity often necessitates conversion first to the gentler ClF3, BrF3, or IF5, which together allow calibrated fluorination. Fluorinated pharmaceuticals use sulfur tetrafluoride instead.
Inorganic fluorides
As with other iron alloys, around 3 kg of metspar is added to each metric ton of steel; the fluoride ions lower its melting point and viscosity. Alongside its role as an additive in materials like enamels and welding rod coats, most acidspar is reacted with sulfuric acid to form hydrofluoric acid, which is used in steel pickling, glass etching and alkane cracking. One-third of HF goes into synthesizing cryolite and aluminium trifluoride, both fluxes in the Hall–Héroult process for aluminium extraction; replenishment is necessitated by their occasional reactions with the smelting apparatus. Each metric ton of aluminium requires about 23 kg of flux. Fluorosilicates consume the second largest portion, with sodium fluorosilicate used in water fluoridation and laundry effluent treatment, and as an intermediate en route to cryolite and silicon tetrafluoride. Other important inorganic fluorides include those of cobalt, nickel, and ammonium.
Organic fluorides
Organofluorides consume over 20% of mined fluorite and over 40% of hydrofluoric acid, with refrigerant gases dominating and fluoropolymers increasing their market share. Surfactants are a minor application but generate over $1 billion in annual revenue. Because direct hydrocarbon–fluorine reactions are dangerously vigorous at all but very low temperatures, industrial fluorocarbon production is indirect, mostly through halogen exchange reactions such as Swarts fluorination, in which chlorocarbon chlorines are substituted for fluorines by hydrogen fluoride under catalysts. Electrochemical fluorination subjects hydrocarbons to electrolysis in hydrogen fluoride, and the Fowler process treats them with solid fluorine carriers like cobalt trifluoride.
Refrigerant gases
Halogenated refrigerants, termed Freons in informal contexts, are identified by R-numbers that denote the amount of fluorine, chlorine, carbon, and hydrogen present. Chlorofluorocarbons (CFCs) like R-11, R-12, and R-114 once dominated organofluorines, peaking in production in the 1980s. Used for air conditioning systems, propellants and solvents, their production was below one-tenth of this peak by the early 2000s, after widespread international prohibition. Hydrochlorofluorocarbons (HCFCs) and hydrofluorocarbons (HFCs) were designed as replacements; their synthesis consumes more than 90% of the fluorine in the organic industry. Important HCFCs include R-22, chlorodifluoromethane, and R-141b. The main HFC is R-134a, with a new type of molecule, the hydrofluoroolefin (HFO) HFO-1234yf, coming to prominence owing to its global warming potential of less than 1% that of HFC-134a.
Polymers
About 180,000 metric tons of fluoropolymers were produced in 2006 and 2007, generating over $3.5 billion revenue per year.
The global market was estimated at just under $6 billion in 2011. Fluoropolymers can be formed only by polymerizing free radicals.
Polytetrafluoroethylene (PTFE), sometimes called by its DuPont name Teflon, represents 60–80% by mass of the world's fluoropolymer production. The largest application is in electrical insulation, since PTFE is an excellent dielectric. It is also used in the chemical industry where corrosion resistance is needed, in coating pipes, tubing, and gaskets. Another major use is in PTFE-coated fiberglass cloth for stadium roofs. The major consumer application is for non-stick cookware. Jerked PTFE film becomes expanded PTFE (ePTFE), a fine-pored membrane sometimes referred to by the brand name Gore-Tex and used for rainwear, protective apparel, and filters; ePTFE fibers may be made into seals and dust filters. Other fluoropolymers, including fluorinated ethylene propylene, mimic PTFE's properties and can substitute for it; they are more moldable, but also more costly and have lower thermal stability. Films from two different fluoropolymers replace glass in solar cells.
The chemically resistant (but expensive) fluorinated ionomers are used as electrochemical cell membranes, of which the first and most prominent example is Nafion. Developed in the 1960s, it was initially deployed as fuel cell material in spacecraft and then replaced mercury-based chloralkali process cells. Recently, the fuel cell application has reemerged with efforts to install proton exchange membrane fuel cells into automobiles. Fluoroelastomers such as Viton are crosslinked fluoropolymer mixtures mainly used in O-rings; perfluorobutane (C4F10) is used as a fire-extinguishing agent.
Surfactants
Fluorosurfactants are small organofluorine molecules used for repelling water and stains. Although expensive (comparable to pharmaceuticals at $200–2000 per kilogram), they yielded over $1 billion in annual revenues by 2006; Scotchgard alone generated over $300 million in 2000. Fluorosurfactants are a minority in the overall surfactant market, most of which is taken up by much cheaper hydrocarbon-based products. Applications in paints are burdened by compounding costs; this use was valued at only $100 million in 2006.
Agrichemicals
About 30% of agrichemicals contain fluorine, most of them herbicides and fungicides with a few crop regulators. Fluorine substitution, usually of a single atom or at most a trifluoromethyl group, is a robust modification with effects analogous to fluorinated pharmaceuticals: increased biological stay time, membrane crossing, and altering of molecular recognition. Trifluralin is a prominent example, with large-scale use in the U.S. as a weedkiller, but it is a suspected carcinogen and has been banned in many European countries. Sodium monofluoroacetate (1080) is a mammalian poison in which one hydrogen of sodium acetate is replaced with fluorine; it disrupts cell metabolism by replacing acetate in the citric acid cycle. First synthesized in the late 19th century, it was recognized as an insecticide in the early 20th century, and was later deployed in its current use. New Zealand, the largest consumer of 1080, uses it to protect kiwis from the invasive Australian common brushtail possum. Europe and the U.S. have banned 1080.
Medicinal applications
Dental care
Population studies from the mid-20th century onwards show that topical fluoride reduces dental caries.
This was first attributed to the conversion of tooth enamel hydroxyapatite into the more durable fluorapatite, but studies on pre-fluoridated teeth refuted this hypothesis, and current theories involve fluoride aiding enamel growth in small caries. After studies of children in areas where fluoride was naturally present in drinking water, controlled public water supply fluoridation to fight tooth decay began in the 1940s and is now applied to water supplying 6 percent of the global population, including two-thirds of Americans. Reviews of the scholarly literature in 2000 and 2007 associated water fluoridation with a significant reduction of tooth decay in children. Despite such endorsements and evidence of no adverse effects other than mostly benign dental fluorosis, opposition still exists on ethical and safety grounds. The benefits of fluoridation have lessened, possibly due to other fluoride sources, but are still measurable in low-income groups. Sodium monofluorophosphate and sometimes sodium or tin(II) fluoride are often found in fluoride toothpastes, first introduced in the U.S. in 1955 and now ubiquitous in developed countries, alongside fluoridated mouthwashes, gels, foams, and varnishes.
Pharmaceuticals
Twenty percent of modern pharmaceuticals contain fluorine. One of these, the cholesterol-reducer atorvastatin (Lipitor), made more revenue than any other drug until it became generic in 2011. The combination asthma prescription Seretide, a top-ten revenue drug in the mid-2000s, contains two active ingredients, one of which – fluticasone – is fluorinated. Many drugs are fluorinated to delay inactivation and lengthen dosage periods because the carbon–fluorine bond is very stable. Fluorination also increases lipophilicity because the bond is more hydrophobic than the carbon–hydrogen bond, and this often helps in cell membrane penetration and hence bioavailability.
Tricyclics and other pre-1980s antidepressants had several side effects due to their non-selective interference with neurotransmitters other than the serotonin target; the fluorinated fluoxetine was selective and one of the first to avoid this problem. Many current antidepressants receive this same treatment, including the selective serotonin reuptake inhibitors: citalopram, its enantiomer escitalopram, and fluvoxamine and paroxetine. Quinolones are artificial broad-spectrum antibiotics that are often fluorinated to enhance their effects. These include ciprofloxacin and levofloxacin. Fluorine also finds use in steroids: fludrocortisone is a blood pressure-raising mineralocorticoid, and triamcinolone and dexamethasone are strong glucocorticoids. The majority of inhaled anesthetics are heavily fluorinated; the prototype halothane is much more inert and potent than its contemporaries. Later compounds such as the fluorinated ethers sevoflurane and desflurane are better than halothane and are almost insoluble in blood, allowing faster waking times.
PET scanning
Fluorine-18 is often found in radioactive tracers for positron emission tomography, as its half-life of almost two hours is long enough to allow for its transport from production facilities to imaging centers. The most common tracer is fluorodeoxyglucose which, after intravenous injection, is taken up by glucose-requiring tissues such as the brain and most malignant tumors; computer-assisted tomography can then be used for detailed imaging.
Oxygen carriers Liquid fluorocarbons can hold large volumes of oxygen or carbon dioxide, more so than blood, and have attracted attention for their possible uses in artificial blood and in liquid breathing. Because fluorocarbons do not normally mix with water, they must be mixed into emulsions (small droplets of perfluorocarbon suspended in water) to be used as blood. One such product, Oxycyte, has been through initial clinical trials. These substances can aid endurance athletes and are banned from sports; one cyclist's near death in 1998 prompted an investigation into their abuse. Applications of pure perfluorocarbon liquid breathing (which uses pure perfluorocarbon liquid, not a water emulsion) include assisting burn victims and premature babies with deficient lungs. Partial and complete lung filling have been considered, though only the former has had any significant tests in humans. An Alliance Pharmaceuticals effort reached clinical trials but was abandoned because the results were not better than normal therapies. Biological role Fluorine is not essential for humans and other mammals, but small amounts are known to be beneficial for the strengthening of dental enamel (where the formation of fluorapatite makes the enamel more resistant to attack from acids produced by bacterial fermentation of sugars). Small amounts of fluorine may be beneficial for bone strength, but this has not been definitively established. Both the WHO and the Institute of Medicine of the US National Academies publish recommended daily allowance (RDA) and upper tolerated intake of fluorine, which varies with age and gender. Natural organofluorines have been found in microorganisms, plants and, recently, animals. The most common is fluoroacetate, which is used as a defense against herbivores by at least 40 plants in Africa, Australia and Brazil. Other examples include terminally fluorinated fatty acids, fluoroacetone, and 2-fluorocitrate. An enzyme that binds fluorine to carbon – adenosyl-fluoride synthase – was discovered in bacteria in 2002. Toxicity Elemental fluorine is highly toxic to living organisms. Its effects in humans start at concentrations lower than hydrogen cyanide's 50 ppm and are similar to those of chlorine: significant irritation of the eyes and respiratory system as well as liver and kidney damage occur above 25 ppm, which is the immediately dangerous to life and health value for fluorine. The eyes and nose are seriously damaged at 100 ppm, and inhalation of 1,000 ppm fluorine will cause death in minutes, compared to 270 ppm for hydrogen cyanide. Hydrofluoric acid Hydrofluoric acid is the weakest of the hydrohalic acids, having a pKa of 3.2 at 25 °C. Pure hydrogen fluoride is a volatile liquid due to the presence of hydrogen bonding, while the other hydrogen halides are gases. It is able to attack glass, concrete, metals, and organic matter. Hydrofluoric acid is a contact poison with greater hazards than many strong acids like sulfuric acid even though it is weak: it remains largely un-ionized (neutral) in aqueous solution and thus penetrates tissue faster, whether through inhalation, ingestion or the skin, and at least nine U.S. workers died in such accidents from 1984 to 1994. It reacts with calcium and magnesium in the blood leading to hypocalcemia and possible death through cardiac arrhythmia. Insoluble calcium fluoride formation triggers strong pain, and burns larger than 160 cm2 (25 in2) can cause serious systemic toxicity.
Exposure may not be evident for eight hours for 50% HF, rising to 24 hours for lower concentrations, and a burn may initially be painless as hydrogen fluoride affects nerve function. If skin has been exposed to HF, damage can be reduced by rinsing it under a jet of water for 10–15 minutes and removing contaminated clothing. Calcium gluconate is often applied next, providing calcium ions to bind with fluoride; skin burns can be treated with 2.5% calcium gluconate gel or special rinsing solutions. Hydrofluoric acid absorption requires further medical treatment; calcium gluconate may be injected or administered intravenously. Using calcium chloride – a common laboratory reagent – in lieu of calcium gluconate is contraindicated, and may lead to severe complications. Excision or amputation of affected parts may be required. Fluoride ion Soluble fluorides are moderately toxic: 5–10 g sodium fluoride, or 32–64 mg fluoride ions per kilogram of body mass, represents a lethal dose for adults. One-fifth of the lethal dose can cause adverse health effects, and chronic excess consumption may lead to skeletal fluorosis, which affects millions in Asia and Africa, and, in children, to reduced intelligence. Ingested fluoride forms hydrofluoric acid in the stomach which is easily absorbed by the intestines, where it crosses cell membranes, binds with calcium and interferes with various enzymes, before urinary excretion. Exposure limits are determined by urine testing of the body's ability to clear fluoride ions. Historically, most cases of fluoride poisoning have been caused by accidental ingestion of insecticides containing inorganic fluorides. Most current calls to poison control centers for possible fluoride poisoning come from the ingestion of fluoride-containing toothpaste. Malfunctioning water fluoridation equipment is another cause: one incident in Alaska affected almost 300 people and killed one person. Dangers from toothpaste are aggravated for small children, and the Centers for Disease Control and Prevention recommends supervising children below six brushing their teeth so that they do not swallow toothpaste. One regional study examined a year of pre-teen fluoride poisoning reports totaling 87 cases, including one death from ingesting insecticide. Most had no symptoms, but about 30% had stomach pains. A larger study across the U.S. had similar findings: 80% of cases involved children under six, and there were few serious cases. Environmental concerns Atmosphere The Montreal Protocol, signed in 1987, set strict regulations on chlorofluorocarbons (CFCs) and bromofluorocarbons due to their ozone damaging potential (ODP). The high stability which suited them to their original applications also meant that they were not decomposing until they reached higher altitudes, where liberated chlorine and bromine atoms attacked ozone molecules. Even with the ban, and early indications of its efficacy, predictions warned that several generations would pass before full recovery. With one-tenth the ODP of CFCs, hydrochlorofluorocarbons (HCFCs) are the current replacements, and are themselves scheduled for substitution by 2030–2040 by hydrofluorocarbons (HFCs) with no chlorine and zero ODP. In 2007 this date was brought forward to 2020 for developed countries; the Environmental Protection Agency had already prohibited one HCFC's production and capped those of two others in 2003. 
Fluorocarbon gases are generally greenhouse gases with global-warming potentials (GWPs) of about 100 to 10,000; sulfur hexafluoride has a value of around 20,000. An outlier is HFO-1234yf, which is a new type of refrigerant called a hydrofluoroolefin (HFO) and has attracted global demand due to its GWP of less than 1, compared to 1,430 for the current refrigerant standard HFC-134a. Biopersistence Organofluorines exhibit biopersistence due to the strength of the carbon–fluorine bond. Perfluoroalkyl acids (PFAAs), which are sparingly water-soluble owing to their acidic functional groups, are noted persistent organic pollutants; perfluorooctanesulfonic acid (PFOS) and perfluorooctanoic acid (PFOA) are most often researched. PFAAs have been found in trace quantities worldwide from polar bears to humans, with PFOS and PFOA known to reside in breast milk and the blood of newborn babies. A 2013 review showed a slight correlation between groundwater and soil PFAA levels and human activity; there was no clear pattern of one chemical dominating, and higher amounts of PFOS were correlated to higher amounts of PFOA. In the body, PFAAs bind to proteins such as serum albumin; they tend to concentrate within humans in the liver and blood before excretion through the kidneys. Dwell time in the body varies greatly by species, with half-lives of days in rodents, and years in humans. High doses of PFOS and PFOA cause cancer and death in newborn rodents but human studies have not established an effect at current exposure levels. See also Argon fluoride laser Electrophilic fluorination Fluoride selective electrode, which measures fluoride concentration Fluorine absorption dating Fluorous chemistry, a process used to separate reagents from organic solvents Krypton fluoride laser Radical fluorination Notes Sources Citations Indexed references External links Chemical elements Halogens Reactive nonmetals Diatomic nonmetals Fluorinating agents Oxidizing agents Industrial gases Gases with color
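The global-warming potentials quoted above lend themselves to a simple worked example. The following Python sketch is purely illustrative: the 0.5 kg refrigerant charge is an assumption (roughly the scale of a car air-conditioning system), and the GWP values are the rounded figures cited in the text, not authoritative data.

```python
# Illustrative CO2-equivalent arithmetic: CO2e = mass x GWP.
GWP_100YR = {
    "HFC-134a": 1430,    # current refrigerant standard, per the text
    "HFO-1234yf": 1,     # text says "less than 1"; rounded up here
    "SF6": 20000,        # sulfur hexafluoride, approximate value
}

def co2_equivalent_kg(gas: str, mass_kg: float) -> float:
    """Return the CO2-equivalent mass in kilograms."""
    return mass_kg * GWP_100YR[gas]

# Assumed 0.5 kg charge for comparison between old and new refrigerants
for gas in ("HFC-134a", "HFO-1234yf"):
    print(gas, co2_equivalent_kg(gas, 0.5), "kg CO2e")
```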
Fluorine
[ "Physics", "Chemistry", "Materials_science" ]
9,864
[ "Chemical elements", "Redox", "Reactive nonmetals", "Diatomic nonmetals", "Nonmetals", "Oxidizing agents", "Fluorinating agents", "Industrial gases", "Reagents for organic chemistry", "Chemical process engineering", "Atoms", "Matter" ]
17,482,912
https://en.wikipedia.org/wiki/Relations%20between%20heat%20capacities
In thermodynamics, the heat capacity at constant volume, $C_V$, and the heat capacity at constant pressure, $C_P$, are extensive properties that have the dimensions of energy divided by temperature. Relations The laws of thermodynamics imply the following relations between these two heat capacities (Gaskell 2003:23): $C_P - C_V = \frac{V T \alpha^2}{\beta_T}$ and $\frac{C_P}{C_V} = \frac{\beta_T}{\beta_S}$. Here $\alpha$ is the thermal expansion coefficient: $\alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P$, $\beta_T$ is the isothermal compressibility (the inverse of the bulk modulus): $\beta_T = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T$, and $\beta_S$ is the isentropic compressibility: $\beta_S = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_S$. A corresponding expression for the difference in specific heat capacities (intensive properties) at constant volume and constant pressure is: $c_p - c_v = \frac{T \alpha^2}{\rho \beta_T}$, where ρ is the density of the substance under the applicable conditions. The corresponding expression for the ratio of specific heat capacities remains the same, since the system-size-dependent quantities, whether on a per-mass or per-mole basis, cancel out in the ratio because specific heat capacities are intensive properties. Thus: $\frac{c_p}{c_v} = \frac{\beta_T}{\beta_S}$. The difference relation allows one to obtain the heat capacity for solids at constant volume, which is not readily measured, in terms of quantities that are more easily measured. The ratio relation allows one to express the isentropic compressibility in terms of the heat capacity ratio. Derivation If an infinitesimally small amount of heat $\delta Q$ is supplied to a system in a reversible way then, according to the second law of thermodynamics, the entropy change of the system is given by: $dS = \frac{\delta Q}{T}$. Since $\delta Q = C\,dT$, where C is the heat capacity, it follows that: $T\,dS = C\,dT$. The heat capacity depends on how the external variables of the system are changed when the heat is supplied. If the only external variable of the system is the volume, then we can write: $dS = \left(\frac{\partial S}{\partial T}\right)_V dT + \left(\frac{\partial S}{\partial V}\right)_T dV$. From this follows: $C_V = T\left(\frac{\partial S}{\partial T}\right)_V$. Expressing dS in terms of dT and dP similarly as above leads to the expression: $C_P = T\left(\frac{\partial S}{\partial T}\right)_P$. One can find the above expression for $C_P - C_V$ by expressing dV in terms of dP and dT in the above expression for dS. $dV = \left(\frac{\partial V}{\partial T}\right)_P dT + \left(\frac{\partial V}{\partial P}\right)_T dP$ results in $dS = \left[\left(\frac{\partial S}{\partial T}\right)_V + \left(\frac{\partial S}{\partial V}\right)_T \left(\frac{\partial V}{\partial T}\right)_P\right] dT + \left(\frac{\partial S}{\partial V}\right)_T \left(\frac{\partial V}{\partial P}\right)_T dP$ and it follows: $\left(\frac{\partial S}{\partial T}\right)_P = \left(\frac{\partial S}{\partial T}\right)_V + \left(\frac{\partial S}{\partial V}\right)_T \left(\frac{\partial V}{\partial T}\right)_P$. Therefore, $C_P - C_V = T\left(\frac{\partial S}{\partial V}\right)_T \left(\frac{\partial V}{\partial T}\right)_P$. The partial derivative $\left(\frac{\partial S}{\partial V}\right)_T$ can be rewritten in terms of variables that do not involve the entropy using a suitable Maxwell relation. These relations follow from the fundamental thermodynamic relation: $dE = T\,dS - P\,dV$. It follows from this that the differential of the Helmholtz free energy $F = E - TS$ is: $dF = -S\,dT - P\,dV$. This means that $S = -\left(\frac{\partial F}{\partial T}\right)_V$ and $P = -\left(\frac{\partial F}{\partial V}\right)_T$. The symmetry of second derivatives of F with respect to T and V then implies $\left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial P}{\partial T}\right)_V$, allowing one to write: $C_P - C_V = T\left(\frac{\partial P}{\partial T}\right)_V \left(\frac{\partial V}{\partial T}\right)_P$. The r.h.s. contains a derivative at constant volume, which can be difficult to measure. It can be rewritten as follows. In general, $dV = \left(\frac{\partial V}{\partial T}\right)_P dT + \left(\frac{\partial V}{\partial P}\right)_T dP$. Since the partial derivative $\left(\frac{\partial P}{\partial T}\right)_V$ is just the ratio of dP and dT for dV = 0, one can obtain this by putting dV = 0 in the above equation and solving for this ratio: $\left(\frac{\partial P}{\partial T}\right)_V = -\frac{(\partial V/\partial T)_P}{(\partial V/\partial P)_T}$, which yields the expression: $C_P - C_V = -T\,\frac{(\partial V/\partial T)_P^2}{(\partial V/\partial P)_T} = \frac{V T \alpha^2}{\beta_T}$. The expression for the ratio of the heat capacities can be obtained as follows: $\frac{C_P}{C_V} = \frac{(\partial S/\partial T)_P}{(\partial S/\partial T)_V}$. The partial derivative in the numerator can be expressed as a ratio of partial derivatives of the pressure w.r.t. temperature and entropy. If in the relation $dP = \left(\frac{\partial P}{\partial T}\right)_S dT + \left(\frac{\partial P}{\partial S}\right)_T dS$ we put $dP = 0$ and solve for the ratio $dS/dT$ we obtain $(\partial S/\partial T)_P$. Doing so gives: $\left(\frac{\partial S}{\partial T}\right)_P = -\frac{(\partial P/\partial T)_S}{(\partial P/\partial S)_T}$. One can similarly rewrite the partial derivative $(\partial S/\partial T)_V$ by expressing dV in terms of dS and dT, putting dV equal to zero and solving for the ratio $dS/dT$. When one substitutes that expression in the heat capacity ratio expressed as the ratio of the partial derivatives of the entropy above, it follows: $\frac{C_P}{C_V} = \frac{(\partial P/\partial T)_S}{(\partial P/\partial S)_T} \cdot \frac{(\partial V/\partial S)_T}{(\partial V/\partial T)_S}$. Taking together the two derivatives at constant S: $\frac{(\partial P/\partial T)_S}{(\partial V/\partial T)_S} = \left(\frac{\partial P}{\partial V}\right)_S$. Taking together the two derivatives at constant T: $\frac{(\partial V/\partial S)_T}{(\partial P/\partial S)_T} = \left(\frac{\partial V}{\partial P}\right)_T$. From this one can write: $\frac{C_P}{C_V} = \left(\frac{\partial P}{\partial V}\right)_S \left(\frac{\partial V}{\partial P}\right)_T = \frac{\beta_T}{\beta_S}$. Ideal gas This is a derivation to obtain an expression for $C_P - C_V$ for an ideal gas.
An ideal gas has the equation of state: $PV = nRT$, where P = pressure, V = volume, n = number of moles, R = universal gas constant, and T = temperature. The ideal gas equation of state can be arranged to give: $V = \frac{nRT}{P}$ or $P = \frac{nRT}{V}$. The following partial derivatives are obtained from the above equation of state: $\left(\frac{\partial V}{\partial T}\right)_P = \frac{nR}{P}$ and $\left(\frac{\partial V}{\partial P}\right)_T = -\frac{nRT}{P^2}$. The following simple expressions are obtained for the thermal expansion coefficient $\alpha$: $\alpha = \frac{1}{V}\cdot\frac{nR}{P} = \frac{1}{T}$, and for the isothermal compressibility $\beta_T$: $\beta_T = -\frac{1}{V}\cdot\left(-\frac{nRT}{P^2}\right) = \frac{1}{P}$. One can now calculate $C_P - C_V$ for ideal gases from the previously obtained general formula: $C_P - C_V = \frac{V T \alpha^2}{\beta_T} = \frac{V T (1/T)^2}{1/P} = \frac{PV}{T}$. Substituting from the ideal gas equation gives finally: $C_P - C_V = nR$, where n = number of moles of gas in the thermodynamic system under consideration and R = universal gas constant. On a per mole basis, the expression for the difference in molar heat capacities becomes simply R for ideal gases as follows: $C_{P,m} - C_{V,m} = \frac{C_P - C_V}{n} = R$. This result would be consistent if the specific difference were derived directly from the general expression for $c_p - c_v$. See also Heat capacity ratio References David R. Gaskell (2008), Introduction to the thermodynamics of materials, Fifth Edition, Taylor & Francis. Thermodynamics
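As a numerical illustration of the relations above, the following Python sketch (not part of the original article; the state values are arbitrary assumptions) estimates α and β_T for an ideal gas by finite differences and confirms that VTα²/β_T reproduces nR:

```python
# Minimal numerical check that C_P - C_V = V*T*alpha^2/beta_T equals n*R
# for an ideal gas; the state point (n, T, P) is an arbitrary assumption.
R = 8.314462618               # J/(mol K), universal gas constant
n, T, P = 2.0, 300.0, 1.0e5   # mol, K, Pa

V = n * R * T / P             # equation of state PV = nRT
hT, hP = 1e-3, 10.0           # finite-difference steps for T (K) and P (Pa)

# alpha = (1/V)(dV/dT)_P via a central difference; analytically 1/T
alpha = (n * R * (T + hT) / P - n * R * (T - hT) / P) / (2 * hT) / V

# beta_T = -(1/V)(dV/dP)_T via a central difference; analytically 1/P
beta_T = -(n * R * T / (P + hP) - n * R * T / (P - hP)) / (2 * hP) / V

print(V * T * alpha**2 / beta_T)   # ~16.63 J/K
print(n * R)                       # 16.63 J/K, i.e. C_P - C_V = nR
```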
Relations between heat capacities
[ "Physics", "Chemistry", "Mathematics" ]
906
[ "Thermodynamics", "Dynamical systems" ]
17,486,518
https://en.wikipedia.org/wiki/Neutron%20electric%20dipole%20moment
The neutron electric dipole moment (nEDM), denoted dn, is a measure for the distribution of positive and negative charge inside the neutron. A nonzero electric dipole moment can only exist if the centers of the negative and positive charge distributions inside the particle do not coincide. So far, no neutron EDM has been found. The current best measured limit for dn is $|d_n| < 1.8\times 10^{-26}\,e\cdot\mathrm{cm}$. Theory A permanent electric dipole moment of a fundamental particle violates both parity (P) and time reversal symmetry (T). These violations can be understood by examining the neutron's magnetic dipole moment and hypothetical electric dipole moment. Under time reversal, the magnetic dipole moment changes its direction, whereas the electric dipole moment stays unchanged. Under parity, the electric dipole moment changes its direction but not the magnetic dipole moment. As the resulting system under P and T is not symmetric with respect to the initial system, these symmetries are violated in the case of the existence of an EDM. Having also CPT symmetry, the combined symmetry CP is violated as well. Standard Model prediction As depicted above, in order to generate a nonzero nEDM one needs processes that violate CP symmetry. CP violation has been observed in weak interactions and is included in the Standard Model of particle physics via the CP-violating phase in the CKM matrix. However, the amount of CP violation is very small and therefore so is the contribution to the nEDM, which is predicted to be of the order of $10^{-32}$–$10^{-31}\,e\cdot\mathrm{cm}$. Matter–antimatter asymmetry From the asymmetry between matter and antimatter in the universe, one suspects that there must be a sizeable amount of CP-violation. Measuring a neutron electric dipole moment at a much higher level than predicted by the Standard Model would therefore directly confirm this suspicion and improve our understanding of CP-violating processes. Strong CP problem As the neutron is built up of quarks, it is also susceptible to CP violation stemming from strong interactions. Quantum chromodynamics – the theoretical description of the strong force – naturally includes a term that breaks CP-symmetry. The strength of this term is characterized by the angle θ. The current limit on the nEDM constrains this angle to be less than 10−10 radians. This fine-tuning of the angle θ, which is naturally expected to be of order 1, is the strong CP problem. SUSY CP problem Supersymmetric extensions to the Standard Model, such as the Minimal Supersymmetric Standard Model, generally lead to a large CP-violation. Typical predictions for the neutron EDM arising from such theories range between $10^{-28}$ and $10^{-25}\,e\cdot\mathrm{cm}$. As in the case of the strong interaction, the limit on the neutron EDM is already constraining the CP-violating phases. The fine-tuning is, however, not as severe yet. Experimental technique In order to extract the neutron EDM, one measures the Larmor precession of the neutron spin in the presence of parallel and antiparallel magnetic and electric fields. The precession frequency for each of the two cases is given by $h\nu = 2\mu_n B \pm 2 d_n E$, the addition or subtraction of the frequencies stemming from the precession of the magnetic moment around the magnetic field and the precession of the electric dipole moment around the electric field. From the difference of those two frequencies one readily obtains a measure of the neutron EDM: $d_n = \frac{h\,\Delta\nu}{4E}$. The biggest challenge of the experiment (and at the same time the source of the biggest systematic false effects) is to ensure that the magnetic field does not change during these two measurements.
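To give a feel for the precision this technique demands, the following Python sketch (illustrative only; the 10 kV/cm field is an assumed, typical order of magnitude, not a figure from a specific experiment) evaluates the frequency shift Δν = 4dₙE/h for an EDM at the current upper limit:

```python
# Hedged, illustrative estimate of the Larmor frequency shift implied by a
# hypothetical neutron EDM, using delta_nu = 4*d_n*E/h from the text above.
h = 4.135667696e-15   # Planck constant, eV*s

d_n = 1.8e-26         # e*cm -- roughly the current upper limit
E = 1.0e4             # V/cm -- assumed 10 kV/cm field, a typical scale

# d_n [e*cm] * E [V/cm] gives an energy in eV, so delta_nu comes out in Hz
delta_nu = 4 * d_n * E / h
print(f"{delta_nu:.1e} Hz")   # ~1.7e-7 Hz: why field stability is critical
```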
History The first experiments searching for the electric dipole moment of the neutron used beams of thermal (and later cold) neutrons to conduct the measurement. It started with the experiment by James Smith, Purcell, and Ramsey in 1951 (published in 1957) at ORNL's Graphite Reactor (as the three researchers were from Harvard University, this experiment is often referred to as the ORNL/Harvard experiment), obtaining a limit of $|d_n| < 5\times 10^{-20}\,e\cdot\mathrm{cm}$. Beams of neutrons were used until 1977 for nEDM experiments. At this point, systematic effects related to the high velocities of the neutrons in the beam became insurmountable. The final limit obtained with a neutron beam amounts to $|d_n| < 3\times 10^{-24}\,e\cdot\mathrm{cm}$. After that, experiments with ultracold neutrons (UCN) took over. It started in 1980 with an experiment at the Leningrad Nuclear Physics Institute (LNPI) obtaining a limit on the order of $10^{-24}\,e\cdot\mathrm{cm}$. This experiment, and especially the experiment starting in 1984 at the Institut Laue-Langevin (ILL), pushed the limit down by another two orders of magnitude, yielding the best upper limit of $|d_n| < 2.9\times 10^{-26}\,e\cdot\mathrm{cm}$ in 2006, revised to $3.0\times 10^{-26}\,e\cdot\mathrm{cm}$ in 2015. During these 70 years of experiments, six orders of magnitude have been covered, thereby putting stringent constraints on theoretical models. The latest best limit of $|d_n| < 1.8\times 10^{-26}\,e\cdot\mathrm{cm}$ was published in 2020 by the nEDM collaboration at the Paul Scherrer Institute (PSI). Current experiments Currently, there are at least six experiments aiming at improving the current limit (or measuring for the first time) on the neutron EDM, with sensitivities down to about $10^{-28}\,e\cdot\mathrm{cm}$ over the next 10 years, thereby covering the range of predictions coming from supersymmetric extensions to the Standard Model. n2EDM of the nEDM collaboration, under construction at the UCN source at the Paul Scherrer Institute. In February 2022 the apparatus was being set up at PSI, with commissioning with neutrons expected in late 2022. The apparatus is expected to reach a sensitivity of $1\times 10^{-27}\,e\cdot\mathrm{cm}$ after 500 days of operation. TUCAN, a UCN nEDM experiment under construction at TRIUMF. nEDM@SNS, an experiment under construction (as of 2022) at the Spallation Neutron Source. The PNPI nEDM experiment, awaiting operation approval at the Institut Laue-Langevin. The PanEDM experiment, being built at the Institut Laue-Langevin. LANL Electric Dipole Moment (LANL nEDM) at Los Alamos National Laboratory. Beam EDM at the University of Bern, Switzerland. The Cryogenic neutron EDM experiment, or CryoEDM, was under development at the Institut Laue-Langevin but its activities were stopped in 2013/2014. See also Anomalous electric dipole moment Anomalous magnetic dipole moment Axion – a hypothetical particle proposed to explain the strong force's unexpected preservation of CP Electric dipole spin resonance Electron electric dipole moment – another electric dipole which should exist, but also should be too small to have yet been measured Electron magnetic moment Nucleon magnetic moment – the corresponding magnetic property, which has been measured References Electric dipole moment Electromagnetism Particle physics
Neutron electric dipole moment
[ "Physics", "Mathematics" ]
1,326
[ "Physical phenomena", "Electromagnetism", "Electric dipole moment", "Physical quantities", "Quantity", "Fundamental interactions", "Particle physics", "Moment (physics)" ]
16,234,982
https://en.wikipedia.org/wiki/3D%20reconstruction
In computer vision and computer graphics, 3D reconstruction is the process of capturing the shape and appearance of real objects. This process can be accomplished either by active or passive methods. If the model is allowed to change its shape in time, this is referred to as non-rigid or spatio-temporal reconstruction. Motivation and applications 3D reconstruction has always been a difficult research goal. Using 3D reconstruction one can determine any object's 3D profile, as well as the 3D coordinates of any point on the profile. The 3D reconstruction of objects is a general scientific problem and core technology of a wide variety of fields, such as Computer Aided Geometric Design (CAGD), computer graphics, computer animation, computer vision, medical imaging, computational science, virtual reality, digital media, etc. For instance, the lesion information of patients can be presented in 3D on the computer, which offers a new and accurate approach to diagnosis and thus has vital clinical value. Digital elevation models can be reconstructed using methods such as airborne laser altimetry or synthetic aperture radar. Active methods Active methods, i.e. range data methods, given the depth map, reconstruct the 3D profile by a numerical approximation approach and build the object in the scenario based on the model. These methods actively interfere with the reconstructed object, either mechanically or radiometrically using rangefinders, in order to acquire the depth map, e.g. structured light, laser range finders and other active sensing techniques. A simple example of a mechanical method would use a depth gauge to measure a distance to a rotating object put on a turntable. More applicable radiometric methods emit radiance towards the object and then measure its reflected part. Examples range from moving light sources, colored visible light, and time-of-flight lasers to microwaves or 3D ultrasound. See 3D scanning for more details. Passive methods Passive methods of 3D reconstruction do not interfere with the reconstructed object; they only use a sensor to measure the radiance reflected or emitted by the object's surface to infer its 3D structure through image understanding. Typically, the sensor is an image sensor in a camera sensitive to visible light and the input to the method is a set of digital images (one, two or more) or video. In this case we talk about image-based reconstruction and the output is a 3D model. By comparison to active methods, passive methods can be applied to a wider range of situations. Monocular cues methods Monocular cues methods refer to using one or more images from one viewpoint (camera) to proceed to 3D construction. They make use of 2D characteristics (e.g. silhouettes, shading and texture) to measure 3D shape, which is why this family is also named Shape-From-X, where X can be silhouettes, shading, texture, etc. 3D reconstruction through monocular cues is simple and quick, and only one appropriate digital image is needed, thus only one camera is adequate. Technically, it avoids stereo correspondence, which is fairly complex. Shape-from-shading By analyzing the shading information in the image under an assumed Lambertian reflectance model, the surface normal (and hence depth) information of the object surface is recovered for reconstruction. Photometric Stereo This approach is more sophisticated than the shape-from-shading method. Images taken under different lighting conditions are used to solve for the depth information, as sketched below. It is worth mentioning that more than one image is required by this approach.
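Under the Lambertian model, photometric stereo reduces to a per-pixel linear least-squares problem: stacking the k image intensities gives I = L g with g = albedo·n. The following numpy sketch is a minimal illustration; the light directions, synthetic test data, and function name are assumptions for the example, not part of any particular system.

```python
import numpy as np

# Minimal photometric-stereo sketch under the Lambertian assumption.
# Light directions (one per image) are assumed known and calibrated.
def photometric_stereo(images, lights):
    """images: (k, h, w) intensities; lights: (k, 3) unit light directions.
    Returns per-pixel unit surface normals, shape (h, w, 3)."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                        # (k, h*w)
    # Lambertian model: I = L @ (albedo * n); solve for g = albedo * n
    g, *_ = np.linalg.lstsq(lights, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(g, axis=0) + 1e-12
    return (g / albedo).T.reshape(h, w, 3)

# Toy usage with synthetic data (assumed values, for illustration only)
lights = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], float)
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
true_n = np.array([0.0, 0.0, 1.0])
imgs = (lights @ true_n).reshape(3, 1, 1) * np.ones((3, 4, 4))
print(photometric_stereo(imgs, lights)[0, 0])        # ~ [0, 0, 1]
```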
Shape-from-texture Suppose an object with a smooth surface is covered by replicated texture units, and its projection from 3D to 2D causes distortion and perspective effects. The distortion and perspective measured in 2D images provide hints for inversely solving the depth and normal information of the object surface. Machine Learning Based Solutions Machine learning enables learning the correspondence between the subtle features in the input and the respective 3D equivalent. Deep neural networks have been shown to be highly effective for 3D reconstruction from a single color image. This works even for non-photorealistic input images such as sketches. Thanks to the high level of accuracy in the reconstructed 3D features, deep learning based methods have been employed for biomedical engineering applications to reconstruct CT imagery from X-rays. Stereo vision Stereo vision obtains the 3-dimensional geometric information of an object from multiple images based on research into the human visual system. The results are presented in the form of depth maps. Images of an object acquired by two cameras simultaneously from different viewing angles, or by one single camera at different times from different viewing angles, are used to restore its 3D geometric information and reconstruct its 3D profile and location. This is more direct than monocular methods such as shape-from-shading. The binocular stereo vision method requires two identical cameras with parallel optical axes observing the same object, acquiring two images from different points of view. In terms of trigonometric relations, depth information can be calculated from disparity. The binocular stereo vision method is well developed and stably contributes to favorable 3D reconstruction, leading to better performance when compared to other 3D reconstruction methods. Unfortunately, it is computationally intensive; besides, it performs rather poorly when the baseline distance is large. Problem statement and basics The approach of using binocular stereo vision to acquire an object's 3D geometric information is based on visual disparity. The following picture provides a simple schematic diagram of horizontally sighted binocular stereo vision, where b is the baseline between the projective centers of the two cameras. The origin of the camera's coordinate system is at the optical center of the camera's lens as shown in the figure. Actually, the camera's image plane is behind the optical center of the camera's lens. However, to simplify the calculation, images are drawn in front of the optical center of the lens by f. The u-axis and v-axis of the image's coordinate system are in the same directions as the x-axis and y-axis of the camera's coordinate system respectively. The origin of the image's coordinate system is located at the intersection of the imaging plane and the optical axis. Suppose a world point $P$ whose corresponding image points are $p_l(u_l, v_l)$ and $p_r(u_r, v_r)$ respectively on the left and right image planes. Assume the two cameras are in the same plane; then the y-coordinates of $p_l$ and $p_r$ are identical, i.e., $v_l = v_r$. According to trigonometric relations, $u_l = \frac{f\,x_P}{z_P}$, $u_r = \frac{f\,(x_P - b)}{z_P}$, and $v_l = v_r = \frac{f\,y_P}{z_P}$, where $(x_P, y_P, z_P)$ are the coordinates of $P$ in the left camera's coordinate system and $f$ is the focal length of the camera. Visual disparity is defined as the difference in image point location of a certain world point acquired by the two cameras, $d = u_l - u_r$, based on which the coordinates of $P$ can be worked out: $z_P = \frac{b f}{d}$, $x_P = \frac{b\,u_l}{d}$, $y_P = \frac{b\,v_l}{d}$. Therefore, once the coordinates of the image points are known, besides the parameters of the two cameras, the 3D coordinates of the point can be determined.
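A minimal sketch of this triangulation in Python follows; the baseline and focal length are assumed calibration values, and image coordinates are taken relative to the principal point:

```python
# Minimal sketch of the triangulation described above: recover 3D coordinates
# from the disparity d = u_l - u_r of a horizontally aligned stereo pair.
# b (baseline, meters) and f (focal length, pixels) are assumed values.
def reconstruct_point(u_l, u_r, v, b=0.12, f=700.0):
    d = u_l - u_r                 # visual disparity
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front")
    z = b * f / d                 # depth from similar triangles
    x = u_l * z / f               # back-project the image coordinates
    y = v * z / f
    return x, y, z

print(reconstruct_point(u_l=35.0, u_r=28.0, v=10.0))
# (0.6, 0.171..., 12.0): a point 12 m away for a 12 cm baseline
```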
The 3D reconstruction consists of the following sections: Image acquisition 2D digital image acquisition is the information source of 3D reconstruction. Commonly used 3D reconstruction is based on two or more images, although it may employ only one image in some cases. There are various types of methods for image acquisition that depend on the occasion and purpose of the specific application. Not only must the requirements of the application be met, but also the visual disparity, illumination, performance of the camera and the features of the scenario should be considered. Camera calibration Camera calibration in binocular stereo vision refers to the determination of the mapping relationship between the image points $p_l$ and $p_r$ and the space coordinates in the 3D scenario. Camera calibration is a basic and essential part of 3D reconstruction via binocular stereo vision. Feature extraction The aim of feature extraction is to obtain the characteristics of the images, on which the stereo correspondence proceeds. As a result, the characteristics of the images are closely linked to the choice of matching methods. There is no universally applicable theory for feature extraction, leading to a great diversity of stereo correspondence approaches in binocular stereo vision research. Stereo correspondence Stereo correspondence establishes the correspondence between primitive factors in the images, i.e. matching $p_l$ and $p_r$ from the two images. Certain interference factors in the scenario should be noticed, e.g. illumination, noise, surface physical characteristics, etc. Restoration According to the precise correspondence, combined with the camera location parameters, 3D geometric information can be recovered without difficulty. Because the accuracy of 3D reconstruction depends on the precision of the correspondence, the error of the camera location parameters and so on, the previous procedures must be done carefully to achieve relatively accurate 3D reconstruction. 3D Reconstruction of medical images Clinical routines of diagnosis, patient follow-up, computer assisted surgery, surgical planning etc. are facilitated by accurate 3D models of the desired part of human anatomy. The main motivations behind 3D reconstruction include: Improved accuracy due to multi-view aggregation. Detailed surface estimates. Can be used to plan, simulate, guide, or otherwise assist a surgeon in performing a medical procedure. The precise position and orientation of the patient's anatomy can be determined. Helps in a number of clinical areas, such as radiotherapy planning and treatment verification, spinal surgery, hip replacement, neurointerventions and aortic stenting. Applications: 3D reconstruction has applications in many fields. They include: Pavement engineering Medicine Free-viewpoint video reconstruction Robotic mapping City planning Tomographic reconstruction Gaming Virtual environments and virtual tourism Earth observation Archaeology Augmented reality Reverse engineering Motion capture 3D object recognition, gesture recognition and hand tracking Problem Statement: Most algorithms available for 3D reconstruction are extremely slow and cannot be used in real-time. Though the algorithms presented are still in their infancy, they have the potential for fast computation. Existing Approaches: Delaunay and alpha-shapes The Delaunay method involves extraction of tetrahedron surfaces from an initial point cloud. The idea of 'shape' for a set of points in space is given by the concept of alpha-shapes.
Given a finite point set S and the real parameter alpha, the alpha-shape of S is a polytope (the generalization to any dimension of a two-dimensional polygon and a three-dimensional polyhedron) which is neither convex nor necessarily connected. For a sufficiently large value of alpha, the alpha-shape is identical to the convex hull of S. The algorithm proposed by Edelsbrunner and Mücke eliminates all tetrahedrons which are delimited by a surrounding sphere smaller than α. The surface is then obtained from the external triangles of the resulting tetrahedrons. Another algorithm, called Tight Cocone, labels the initial tetrahedrons as interior and exterior; the triangles found between the interior and exterior generate the resulting surface. Both methods have recently been extended for reconstructing point clouds with noise. In these methods the quality of the points determines the feasibility of the method. For precise triangulation, since the whole point cloud set is used, points on the surface with error above the threshold will be explicitly represented in the reconstructed geometry. Zero set Methods Reconstruction of the surface is performed using a distance function which assigns to each point in the space a signed distance to the surface S. A contour algorithm is used to extract a zero-set, which is used to obtain a polygonal representation of the object. Thus, the problem of reconstructing a surface from a disorganized point cloud is reduced to the definition of an appropriate function f with a zero value for the sampled points and a nonzero value for the rest. The algorithm called marching cubes established the use of such methods. There are different variants of the given algorithm: some use a discrete function f, while others use a polyharmonic radial basis function to adjust the initial point set. Functions such as moving least squares, basis functions with local support, and functions based on the Poisson equation have also been used. Loss of geometric precision in areas with extreme curvature, i.e., corners and edges, is one of the main issues encountered. Furthermore, pretreatment of the information, by applying some kind of filtering technique, also affects the definition of the corners by softening them. There are several studies on post-processing techniques used in reconstruction for the detection and refinement of corners, but these methods increase the complexity of the solution. VR Technique The entire volume transparency of the object is visualized using the volume rendering (VR) technique. Images are formed by projecting rays through the volume data. Along each ray, opacity and color need to be calculated at every voxel. Then the information calculated along each ray is aggregated to a pixel on the image plane, as sketched below. This technique helps us see comprehensively the entire compact structure of the object. Since the technique needs an enormous amount of calculation, it requires powerful computers and is appropriate for low-contrast data. Two main methods for projecting rays can be considered, as follows: Object-order method: projecting rays go through the volume from back to front (from the volume to the image plane). Image-order or ray-casting method: projecting rays go through the volume from front to back (from the image plane to the volume). There exist some other methods to composite the image, and the appropriate method depends on the user's purpose. Some usual methods in medical imaging are MIP (maximum intensity projection), MinIP (minimum intensity projection), AC (alpha compositing) and NPVR (non-photorealistic volume rendering).
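Of these, alpha compositing is the easiest to make concrete. The following Python sketch shows front-to-back compositing along a single ray with early termination; the per-voxel colors and opacities are made-up values standing in for the output of a transfer function:

```python
# Illustrative front-to-back alpha compositing along one ray (the "AC"
# method mentioned above). Inputs are assumed precomputed per-voxel values.
def composite_ray(colors, alphas):
    """Front-to-back compositing: C = sum_i c_i * a_i * prod_{j<i}(1 - a_j)."""
    out_color, transmittance = 0.0, 1.0
    for c, a in zip(colors, alphas):
        out_color += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:      # early ray termination
            break
    return out_color

print(composite_ray([0.9, 0.5, 0.2], [0.3, 0.4, 0.8]))   # 0.4772
```

Front-to-back traversal is the common choice in ray casting precisely because the accumulated transmittance lets the loop stop early once further voxels can no longer contribute visibly.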
Voxel Grid In this filtering technique the input space is sampled using a grid of 3D voxels to reduce the number of points. For each voxel, a centroid is chosen as the representative of all the points. There are two approaches: selecting the voxel center, or selecting the centroid of the points lying within the voxel (a minimal code sketch of the latter is given below). Computing the average of the internal points has a higher computational cost, but offers better results. Thus, a subset of the input space is obtained that roughly represents the underlying surface. The Voxel Grid method presents the same problems as other filtering techniques: the impossibility of defining the final number of points that represent the surface, geometric information loss due to the reduction of the points inside a voxel, and sensitivity to noisy input spaces. See also 3D modeling 3D data acquisition and object reconstruction 3D reconstruction from multiple images 3D scanner 3D SEM surface reconstruction 4D reconstruction Depth map Kinect Photogrammetry Stereoscopy Structure from motion References External links Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes with Deep Generative Networks - Generate and reconstruct 3D shapes via modeling multi-view depth maps or silhouettes. http://www.nature.com/subjects/3d-reconstruction#news-and-comment http://6.869.csail.mit.edu/fa13/lectures/lecture11shapefromX.pdf http://research.microsoft.com/apps/search/default.aspx?q=3d+reconstruction https://research.google.com/search.html#q=3D%20reconstruction 3D computer graphics 3D imaging Computer vision
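A minimal numpy implementation of the centroid variant of the voxel-grid filter might look as follows; the voxel size and random test data are arbitrary assumptions:

```python
import numpy as np

# Sketch of the voxel-grid filter described above: points are bucketed into
# cubic voxels and each occupied voxel is replaced by the centroid of its
# points (the higher-cost, better-quality variant).
def voxel_grid_downsample(points, voxel_size):
    """points: (n, 3) array; returns one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)      # voxel indices
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    centroids = np.zeros((inverse.max() + 1, 3))
    counts = np.bincount(inverse).astype(float)
    for dim in range(3):
        centroids[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return centroids

pts = np.random.rand(1000, 3)
print(voxel_grid_downsample(pts, 0.25).shape)   # at most 4*4*4 = 64 centroids
```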
3D reconstruction
[ "Engineering" ]
3,000
[ "Artificial intelligence engineering", "Packaging machinery", "Computer vision" ]
16,239,927
https://en.wikipedia.org/wiki/Nimrod%20%28synchrotron%29
Nimrod (National Institute Machine Radiating on Downs, "the Mighty Hunter" Nimrod; name attributed to W. Galbraith) was a 7 GeV proton synchrotron operating at the Rutherford Appleton Laboratory in the United Kingdom between 1964 and 1978. Nimrod delivered its last particles at 17:00 hrs on 6 June 1978. Although roughly contemporary with the CERN PS, its conservative design used the "weak focussing" principle instead of the much more cost-effective "strong focussing" technique, which would have enabled a machine of the same cost to reach much higher energies. The design and construction of Nimrod were carried out at a capital cost of approximately £11 million. It was used for studies of nuclear and sub-nuclear phenomena. Nimrod was dismantled and the space it occupied was reused for the synchrotron of the ISIS neutron source. Magnet power supply The magnet power supply included two motor-alternator-flywheel sets. Each drive motor was 5,000 HP. Each flywheel was 30 tonnes. Each alternator was rated 60 MVA at 12.8 kV. Magnet currents would pulse at 10,550 A. References External links http://www.isis.stfc.ac.uk/about-isis/target-station-2/publications/issue-1-september-20038209.pdf Nuclear research institutes Particle physics facilities Research institutes in Oxfordshire Synchrotron radiation facilities Vale of White Horse
Nimrod (synchrotron)
[ "Materials_science", "Engineering" ]
301
[ "Nuclear research institutes", "Materials testing", "Nuclear organizations", "Synchrotron radiation facilities" ]
16,244,315
https://en.wikipedia.org/wiki/Polar%20Class
Polar Class (PC) refers to the ice class assigned to a ship by a classification society based on the Unified Requirements for Polar Class Ships developed by the International Association of Classification Societies (IACS). Seven Polar Classes are defined in the rules, ranging from PC 1 for year-round operation in all polar waters to PC 7 for summer and autumn operation in thin first-year ice. The IACS Polar Class rules should not be confused with International Code for Ships Operating in Polar Waters (Polar Code) by the International Maritime Organization (IMO). Background The development of the Polar Class rules began in the 1990s with an international effort to harmonize the requirements for marine operations in the polar waters in order to protect life, property and the environment. The guidelines developed by the International Maritime Organization (IMO), which were later incorporated in the Polar Code, made reference to the compliance with Unified Requirements for Polar Ships developed by the International Association of Classification Societies (IACS). In May 1996, an "Ad-Hoc Group to establish Unified Requirements for Polar Ships (AHG/PSR)" was established with one working group concentrating on the structural requirements and another working on machinery-related issues. The first IACS Polar Class rules were published in 2007. Prior to the development of the unified requirements, each classification society had their own set of ice class rules ranging from Baltic ice classes intended for operation in first-year ice to higher vessel categories, including icebreakers, intended for operations in polar waters. When developing the upper and lower boundaries for the Polar Classes, it was agreed that the highest Polar Class vessels (PC 1) should be capable of operating safely anywhere in the Arctic or the Antarctic waters at any time of the year while the lower boundary was set to existing tonnage operating during the summer season, most of which followed the Baltic ice classes with some upgrades and additions. The lowest Polar Class (PC 7) was thus set to the similar level with the Finnish-Swedish ice class 1A. The definition of operational conditions for each Polar Class was intentionally left vague due to the wide variety of ship operations carried out in polar waters. Definition Polar Class notations The IACS has established seven different Polar Class notations, ranging from PC 1 (highest) to PC 7 (lowest), with each level corresponding to operational capability and strength of the vessel. The description of ice conditions where ships of each Polar Class are intended to operate are based on World Meteorological Organization (WMO) Sea Ice Nomenclature. These definitions are intended to guide owners, designers and administrations in selecting the appropriate Polar Class to match the intended voyage or service of the vessel. Ships with sufficient power and strength to undertake "aggressive operations in ice-covered waters", such as escort and ice management operations, can be assigned an additional notation "Icebreaker". The two lowest Polar Classes (PC 6 and PC 7) are roughly equivalent to the two highest Finnish-Swedish ice classes (1A Super and 1A, respectively). However, unlike the Baltic ice classes intended for operation only in first-year sea ice, even the lowest Polar Classes consider the possibility of encountering multi-year ice ("old ice inclusions"). 
Requirements In the Polar Class rules, the hull of the vessel is divided longitudinally into four regions: "bow", "bow intermediate", "midbody" and "stern". All longitudinal regions except the bow are further divided vertically into "bottom", "lower" and "icebelt" regions. For each region, a design ice load is calculated based on the dimensions, hull geometry, and ice class of the vessel. This ice load is then used to determine the scantlings and steel grades of structural elements such as shell plating and frames in each location. The design scenario used to determine the ice loads is a glancing collision with a floating ice floe. In addition to structural details, the Polar Class rules have requirements for machinery systems such as the main propulsion, steering gear, and systems essential for the safety of the crew and survivability of the vessel. For example, propeller-ice interaction should be taken into account in the propeller design, cooling systems and sea water inlets should be designed to work also in ice-covered waters, and the ballast tanks should be provided with effective means of preventing freezing. Although the rules generally require the ships to have suitable hull form and sufficient propulsion power to operate independently and at continuous speed in ice conditions corresponding to their Polar Class, the ice-going capability requirements of the vessel are not clearly defined in terms of speed or ice thickness. In practice, this means that the Polar Class of the vessel may not reflect the actual icebreaking capability of the vessel. Polar Class ships The IACS Polar Class rules apply for ships contracted for construction on or after 1 July 2007. This means that while vessels built prior to this date may have an equivalent or even higher level of ice strengthening, they are not officially assigned a Polar Class and may not in fact fulfill all the requirements in the unified requirements. In addition, particularly Russian ships and icebreakers are assigned ice classes only according to the requirements of the Russian Maritime Register of Shipping, which maintains its own ice class rules parallel to the IACS Polar Class rules. Although numerous ships have been built to the two least hardened Polar Classes, PC6 and PC7, only a small number of ships have been assigned ice class PC5 or higher. Polar Class 5 A number of research vessels intended for scientific missions in the polar regions are built to PC5 rating: the South African S. A. Agulhas II in 2012, the American Sikuliaq in 2014, and the British RRS Sir David Attenborough in 2020. In addition, the PC5 Antarctic vessel Almirante Viel is under construction for the Chilean Navy. In 2012, the Royal Canadian Navy awarded a shipbuilding contract for the construction of six to eight Arctic Offshore Patrol Ships (AOPS) rated at PC5. HMCS Harry DeWolf and HMCS Margaret Brooke have entered service, HMCS Max Bernays is undergoing post-acceptance trials, and HMCS William Hall, HMCS Frédérick Rolette and HMCS Robert Hampton Gray are under construction. Two additional ships have been ordered for the Canadian Coast Guard. Four cruise ships have been built with PC5 rating: National Geographic Endurance (delivered in 2020) and National Geographic Resolution (2021) for Lindblad Expeditions, and SH Minerva (2021) and SH Vega (2022) for Swan Hellenic. Polar Class 4 The 2012-built drillship Stena IceMAX has a hull strengthened according to PC4 requirements.
However, the long and wide vessel does not feature an icebreaking hull and is designed to operate primarily in pre-broken ("managed") ice. The Canadian shipping company Fednav operates two PC4 rated bulk carriers, the 2014-built Nunavik and the 2021-built Arvik I. The 28,000-tonne vessels are primarily used to transport nickel ore from Raglan Mine in the Canadian Arctic. In 2015, the hull of the Finnish 1986-built icebreaker Otso was reinforced with additional steel to PC4 level to allow the vessel to support seismic surveys in the Arctic during the summer months. The Finnish LNG-powered icebreaker Polaris, built in 2016, is rated PC4 with an additional Lloyd's Register class notation "Icebreaker(+)". The latter part of the notation refers to additional structural strengthening based on analysis of the vessel's operational profile and potential ice loading scenarios. The interim icebreakers CCGS Captain Molly Kool, CCGS Jean Goodwill, and CCGS Vincent Massey, built in 2000–01 and acquired by the Canadian Coast Guard in 2018, will be upgraded to PC4 rating as part of the vessels' conversion to Canadian service. The new PC4 polar logistics vessel of the Argentine Navy, intended to complement the country's existing icebreaker ARA Almirante Irízar in Antarctica, is currently in the design stage. The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) is in the process of acquiring a new PC4 rated icebreaker for researching the Arctic region. The Swedish Maritime Administration is in the process of acquiring 2–3 new icebreakers rated PC4 Icebreaker(+). The first icebreaker is expected to enter service in 2027. The new Canadian Coast Guard Multi-Purpose Vessels (MPV) will be rated PC4 Icebreaker(+). Sixteen vessels will be built by Seaspan in the 2020s and 2030s, and the first vessel is expected to enter service in 2028. Polar Class 3 The first PC3 vessels were two heavy load carriers, Audax and Pugnax, built for the Netherlands-based ZPMC-Red Box Energy Services in 2016. The long and wide vessels, capable of breaking ice independently, were built for year-round transportation of LNG liquefaction plant modules to Sabetta. Although usually referred to by their Russian Maritime Register of Shipping ice class Arc7, the fifteen first-generation Yamalmax LNG carriers built in 2016–2019 as well as the arctic condensate tankers Boris Sokolov (built in 2018) and Yuriy Kuchiev (2019) serving the Yamal LNG project also have PC3 rating from Bureau Veritas. In April 2015, it was reported that Edison Chouest would build two PC3 anchor handling tug supply vessels (AHTS) for Alaskan operations. However, the construction of the vessels, due for delivery by the end of 2016, was later cancelled following Shell Oil's decision to halt Arctic oil exploration. Three polar research vessels have been built with PC3 rating: Kronprins Haakon for the Norwegian Polar Institute in 2018, Xue Long 2 for the Polar Research Institute of China in 2019, and Nuyina for the Australian Antarctic Division in 2021. Kronprins Haakon also has the additional notation "Icebreaker", while Nuyina's notation includes Lloyd's Register's "Icebreaker(+)" notation. The Finnish multipurpose icebreakers Fennica and Nordica, built in the early 1990s, were assigned PC3 rating as part of the vessels' Polar Code certification in 2019. There are currently no PC3 rated vessels under construction. Polar Class 2 The only PC2 rated vessel in service is the expedition cruise ship Le Commandant Charcot, operated by the French company Compagnie du Ponant.
The 270-passenger vessel, capable of breaking thick multi-year ice and taking passengers to the North Pole, was delivered in 2021. The United States Coast Guard has ordered two out of three planned PC2 rated heavy polar icebreakers referred to as Polar Security Cutters. Construction of the first vessel has been delayed by several years, and it is now not expected to be delivered to the U.S. Coast Guard until at least 2028. While the vessels these Polar Security Cutters are intended to replace, USCGC Polar Star and USCGC Polar Sea, are sometimes referred to as Polar-class icebreakers, these mid-1970s icebreakers do not carry a PC rating. The future Canadian Coast Guard polar icebreakers are designed to PC2 rating with an additional notation "Icebreaker(+)". While a single vessel was initially scheduled for delivery in 2017, the National Shipbuilding Strategy has since been revised to include two such icebreakers, the first of which is planned to enter service by December 2029. Germany signed the order for a replacement vessel for the 1982-built research icebreaker Polarstern in December 2024. While the old Polarstern was built to Germanischer Lloyd ice class ARC3, the replacement Polarstern 2 will be a PC2 ship. Polar Class 1 No ships have been built, are under construction, or are planned to PC1, the highest ice class specified by the IACS. Notes References External links Unified Requirements for Polar Class ships, International Association of Classification Societies (IACS) Shipbuilding Icebreakers Sea ice
Polar Class
[ "Physics", "Engineering" ]
2,424
[ "Physical phenomena", "Earth phenomena", "Sea ice", "Shipbuilding", "Marine engineering" ]
16,250,616
https://en.wikipedia.org/wiki/Epicentral%20distance
Epicentral distance refers to the ground distance from the epicenter to a specified point. Generally, for earthquakes of the same magnitude, the smaller the epicentral distance, the heavier the damage caused by the earthquake; conversely, as the epicentral distance increases, the damage caused by the earthquake gradually decreases. Due to the limitations of seismometers designed in the early years, some seismic magnitude scales began to show errors when the epicentral distance exceeded a certain range from the observation points. In seismology, the epicentral distance of distant earthquakes is usually given in degrees (°), while that of near earthquakes is given in km. Regardless of distance, Δ is used as the symbol for the epicentral distance. Measuring method S-P time difference method Even if the depth of focus of an earthquake is very deep, it can still have a very short epicentral distance. When measuring the epicentral distance of an earthquake with a small epicentral distance, first measure the arrival time of the initial P-wave motion, and then confirm the arrival of the S wave. The value of the epicentral distance Δ is found in the travel-time table according to the arrival time difference between the P wave and the S wave. Other methods If the source is very far away, that is, when the epicentral distance is greater than 105°, the epicentral distance cannot be determined by the S-P time difference method, so it must be determined using P, PKP, PP, SKS, PS, and other phases. Correlation with seismic measurement Definition of near earthquake magnitude In 1935, in the absence of mature seismic magnitude scales, two seismologists from the California Institute of Technology, Charles Francis Richter and Beno Gutenberg, designed the Richter magnitude scale to study the earthquakes that occurred in California, USA. In order to keep the results from being negative, Richter defined an earthquake with a maximum horizontal displacement of 1 μm (which is also the highest accuracy and precision of the Wood–Anderson torsion seismometer) recorded by the seismometer at an epicentral distance of 100 km as a magnitude 0 earthquake. According to this definition, if the amplitude of the seismic wave measured by the Wood–Anderson torsion seismometer at an epicentral distance of 100 km is 1 mm, then the magnitude is 3. Although Richter et al. attempted to make the results non-negative, modern precision seismographs often record earthquakes with negative magnitudes due to the lack of clear upper or lower limits on the near-earthquake magnitude scale. Moreover, due to the limitations of the Wood–Anderson torsion seismometer used in the original design of the Richter scale, the scale is not applicable if the local magnitude ML is greater than 6.8 or the epicentral distance exceeds about 600 km from the observation point. Calculation of surface wave magnitude The epicentral distance is one of the important parameters for calculating surface-wave magnitude. The equation for calculating surface wave magnitude is $M_s = \lg\left(\frac{A}{T}\right)_{\max} + \sigma(\Delta)$. In this equation, $A$ represents the maximum particle displacement in the surface wave (the vector sum of the two horizontal components), in micrometers; $T$ represents the corresponding period, in seconds; $\Delta$ is the epicentral distance, in degrees; and $\sigma(\Delta)$ is a gauge (calibration) function. Generally, the gauge function takes the form $\sigma(\Delta) = 1.66\,\lg\Delta + 3.5$. According to GB 17740-1999, the two horizontal displacements must be measured at the same time or within one-eighth of a period of each other. If the two displacements have different periods, weighted summation must be used.
In these expressions, AN represents the displacement in the north–south direction, in micrometers; AE represents the displacement in the east–west direction, in micrometers; TN represents the period corresponding to AN, in seconds; and TE represents the period corresponding to AE, in seconds. The surface-wave period value to be used differs with epicentral distance; generally, suitable period values can be selected by referring to a table. Rapid report of large earthquakes with surface wave magnitude In addition to the calculation of surface wave magnitude, studying the attenuation characteristics of body waves at short epicentral distances (Δ ≤ 15°) and a better conversion relationship between mB and MS are effective ways to improve the precision of rapid reports of large earthquakes based on the body wave magnitude mB. This is also meaningful quantitative work for research on the measurement of body wave magnitude mB recorded by the short-period instruments DD-1 and VGK. Correlation with epicenter Before the 20th century, the method of determining the epicenter was generally the geometric center method. Since the beginning of the 20th century, as the technology of seismometers and other instruments gradually matured, the single station measurement method and the network measurement method were developed. Comparing the three methods, due to the influence of uneven crustal structure on the propagation of seismic rays, the network measurement method has the highest accuracy, while the geometric center method has the lowest accuracy. Geometric center method Before the 20th century, in the absence of instrument records, the epicenter position of earthquakes was determined by the macroscopic epicenter based on the extent of damage, that is, the geometric center of the meizoseismal area (the area near the epicenter where the damage was most severe). Because the precise extent of the meizoseismal area could not be determined, errors were common. Single station measurement method Because the various seismic waves propagate at different speeds in different regions and depths, the waves with faster speeds arrive at the station first, followed by the other waves, resulting in a time difference. The epicentral distance, source depth, and time differences of the various recorded waves can be compiled into travel-time curves and travel-time tables suitable for local use. When an earthquake occurs somewhere, the analyst can measure the time differences of the various waves of the earthquake from the seismogram and calculate the epicentral distance by comparing them with the prepared travel-time table or applying the formula. Subsequently, it is necessary to determine the azimuth angle. By transforming the initial motion amplitudes in the two horizontal directions into ground motion displacements, the azimuth angle can be determined using trigonometric functions. After the azimuth and epicentral distance are calculated, the epicenter position can easily be found. This method is called the single station measurement method. Network measurement method When the epicentral distance is calculated by at least three seismic stations, the location of the epicenter can be determined by trilateration. This method of measuring epicenters through instruments, the result of which is commonly known as the microscopic epicenter, is called the network measurement method.
Specifically, a circle is drawn on the map around each of the three stations, with the station as its center and the calculated epicentral distance, scaled to the map, as its radius; the intersection points of each pair of circles are then connected, and the common intersection of the three chords gives the epicenter, from which the latitude and longitude are read off (geographic coordinate system); a minimal numerical sketch of this trilateration is given at the end of this article. Others Seismic classification Epicentral distance also plays a unique role in earthquake classification: the same earthquake is named differently when observed at near and far distances. According to epicentral distance, earthquakes can be divided into three categories: Local earthquake: Δ < 100 km Near earthquake: 100 km ≤ Δ ≤ 1000 km Distant earthquake: Δ > 1000 km Seismic phase study As the epicentral distance varies, the seismic phases appear as different patterns on the seismogram, owing to the combined effects of the source, the focal depth, and the propagation of seismic rays; the determination of seismic parameters therefore differs with epicentral distance. Given the epicentral distance from the observation point, it is easier to distinguish the various complex seismic phases, which are generally identified from the overall appearance of the records on the seismogram. Earthquakes of different sizes, distances, and depths leave distinct signatures: the closer the source, the shorter the duration of the recorded vibration; the farther the source, the longer the duration. Notes References Earthquakes Earthquakes in the United States Measurement Earth sciences Seismology
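To make the network measurement method concrete, the following is a minimal sketch of epicenter trilateration in a flat local coordinate system. It is not from the article: the station coordinates and distances are hypothetical, and a real network solves on the ellipsoid with more stations and least squares.

```python
import numpy as np

# Hypothetical station coordinates (km, local planar approximation)
# and their measured epicentral distances (km).
stations = np.array([[0.0, 0.0], [120.0, 10.0], [40.0, 150.0]])
distances = np.array([100.0, 80.0, 90.0])

# Subtracting the circle equation of station 0 from those of stations 1 and 2
# turns the three-circle intersection into a pair of linear equations A @ p = b.
x0, y0 = stations[0]
d0 = distances[0]
A = 2.0 * (stations[1:] - stations[0])
b = (np.sum(stations[1:] ** 2, axis=1) - x0**2 - y0**2
     - distances[1:] ** 2 + d0**2)

epicenter = np.linalg.solve(A, b)
print("Estimated epicenter (km):", epicenter)
```

With exactly consistent distances this point is the common intersection of the three chords described above; with noisy data one would instead solve the overdetermined system in the least-squares sense.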
Epicentral distance
[ "Physics", "Mathematics" ]
1,630
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
4,213,750
https://en.wikipedia.org/wiki/Salt%20water%20chlorination
Salt water chlorination is a process that uses dissolved salt (1000–4000 ppm or 1–4 g/L) for the chlorination of swimming pools and hot tubs. The chlorine generator (also known as salt cell, salt generator, salt chlorinator, or SWG) uses electrolysis in the presence of dissolved salt to produce chlorine gas or its dissolved forms, hypochlorous acid and sodium hypochlorite, which are already commonly used as sanitizing agents in pools. Hydrogen gas is also produced as a byproduct. Distinction from traditional pool chlorination The presence of chlorine in traditional swimming pools can be described as a combination of free available chlorine (FAC) and combined available chlorine (CAC). While FAC is composed of the free chlorine that is available for disinfecting the water, the CAC includes chloramines, which are formed by the reaction of FAC with amines (introduced into the pool by human perspiration, saliva, mucus, urine, and other biologics, and by insects and other pests). Chloramines are responsible for the "chlorine smell" of pools, as well as skin and eye irritation. These problems are the result of insufficient levels of free available chlorine, and indicate a pool that must be "shocked" by the addition of 5–10 times the normal amount of chlorine. In saltwater pools, the generator uses electrolysis to continuously produce free chlorine. As such, a saltwater pool or hot tub is not actually chlorine-free; it simply utilizes added salt and a chlorine generator instead of direct addition of chlorine. It also burns off chloramines in the same manner as traditional shock (oxidizer). As with traditionally chlorinated pools, saltwater pools must be monitored in order to maintain proper water chemistry. Low chlorine levels can be caused by insufficient salt, an incorrect (low) chlorine-generation setting on the SWG unit, higher-than-normal chlorine demand, low stabilizer, sun exposure, insufficient pump speed, or mechanical issues with the chlorine generator. The salt count can be lowered by splash-out, backwashing, and dilution via rainwater. Health concerns Research has shown that because saltwater pools still use chlorine sanitization, they generate the same disinfection byproducts (DBPs) that are present in traditional pools. Of highest concern are haloketones and trihalomethanes (THMs), of which the predominant form is bromoform. Very high levels of bromoform, up to 1.3 mg per liter (13 times the World Health Organization's guideline value), have been found in some public saltwater swimming pools. Manufacturers have been producing saltwater chlorine generators in the United States since the early 1980s, and they first appeared commercially in New Zealand in the early 1970s (the Aquatech IG4500). Operation The chlorinator cell consists of parallel titanium plates coated with ruthenium and sometimes iridium. Older models make use of perforated (or mesh) plates rather than solid plates. Electrolysis naturally attracts calcium and other minerals to the plates. Thus, depending on water chemistry and extent of use, the cell will require periodic cleaning in a mild acid solution (1 part HCl to 15 parts water), which will remove the buildup of calcium compound crystals such as calcium carbonate. Excessive buildup can reduce the effectiveness of the cell. Running the chlorinator for long periods with insufficient salt in the pool can strip the coating off the cell, which then requires an expensive replacement; using too strong an acid wash can do the same.
Saltwater pools can also require stabilizer (cyanuric acid) to help stop the sun's UV rays from breaking down free chlorine in the pool. Usual levels are 20–50 ppm. They also require the pH to be kept between 7.2 and 7.8; the chlorine is more effective if the pH is kept closer to 7.2. The average salt levels are usually in the 3000–5000 ppm range, much less than the ocean, which has salt levels of around 35,000 ppm. In swimming pools, salt is typically poured across the bottom and swept with the pool brush until it dissolves; if concentrated brine is allowed into the return-water system, it can cause the chlorinator cell to malfunction due to overconductivity. Salt water chlorination produces an excess of hydroxide ions, and this requires the frequent addition of hydrochloric acid (HCl, also known as muriatic acid) to maintain pH. The initial chlorine chemistry is as follows. 4NaCl → 4Na+ + 4Cl− Salt dissolves in water. 4Na+ + 4Cl− → 4Na+ + 2Cl2 By electrolysis. 4Na+ + 4H2O → 4Na+ + 4OH− + 2H2 By electrolysis. 2Cl2 + 2H2O → 2HClO + 2H+ + 2Cl− Hydrolysis of aqueous chlorine gas. 2HClO → HClO + ClO− + H+ Dissociation of hypochlorous acid at pH 7.5 and 25 °C. 4NaCl + 3H2O → 4Na+ + HClO + ClO− + OH− + 2Cl− + 2H2 Net of all the above. Addition of hydrochloric acid to restore the pH to 7.5: HCl + 4Na+ + HClO + ClO− + OH− + 2Cl− + 2H2 → HClO + OCl− + H2O + 4Na+ + 3Cl− + 2H2. 4NaCl + HCl + 2H2O → HClO + OCl− + 4Na+ + 3Cl− + 2H2 Net of the last two. Benefits and disadvantages The benefits of salt systems in pools are the convenience and the constant delivery of pure chlorine-based sanitizer. The reduction of irritating chloramines versus traditional chlorinating methods and the "softening" effect of electrolysis reducing dissolved alkali minerals in the water are also perceived as benefits. For some people who have sensitivities to chlorine, these systems may be less offensive. Disadvantages are the initial cost of the system, maintenance, and the cost of replacement cells. Salt is corrosive and will damage some metals and some improperly sealed stone. However, as the ideal saline concentration of a salt-chlorinated pool is very low (< 3,500 ppm, the threshold for human perception of salt by taste; seawater is about ten times this concentration), damage usually occurs due to improperly maintained pool chemistry or improper maintenance of the electrolytic cell. Pool equipment manufacturers typically will not warrant stainless steel products damaged by saline pools. Calcium and other alkali precipitate buildup will occur naturally on the cathode plate, and sometimes in the pool itself as "scaling". Regular maintenance of the cell is necessary; failure to do so will reduce the effectiveness of the cell. Certain designs of saline chlorinators use a "reverse-polarity" method that regularly switches the roles of the two electrodes between anode and cathode, causing this calcium buildup to dissolve off the accumulating electrode. Such systems reduce but do not eliminate the need to clean the electrolytic cell and the occurrence of calcium scale in the water. As chlorine is generated, the pH will rise, causing the chlorine to be less effective. Many systems with chemistry automation can sense the rising pH and automatically introduce either CO2 or hydrochloric acid in order to bring the pH back to the target level. Automation systems will also manage levels of sanitizer by monitoring the ORP or redox level of the water. This allows only the needed amount of chlorine to be generated based on the demand.
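As a quick illustration of the salt-dosing arithmetic implied by the concentration figures above (the pool volume and targets below are hypothetical), 1 ppm corresponds to roughly 1 mg of salt per liter of water:

```python
def salt_needed_kg(pool_volume_liters: float,
                   current_ppm: float,
                   target_ppm: float) -> float:
    """Mass of salt (kg) needed to raise salinity from current_ppm to target_ppm.

    1 ppm is about 1 mg of salt per liter, so the dose is the pool volume (L)
    times the desired increase (mg/L), converted to kilograms.
    """
    increase_mg_per_liter = max(target_ppm - current_ppm, 0.0)
    return pool_volume_liters * increase_mg_per_liter / 1e6

# Example: a 50,000 L pool at 2500 ppm brought up to 3500 ppm
print(salt_needed_kg(50_000, 2500, 3500))  # -> 50.0 kg
```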
Sodium bromide can be used instead of sodium chloride, which produces a bromine pool. The benefits and downsides are the same as those of a salt system. It is not necessary to use a chloride-based acid to balance the pH. Also, bromine is only effective as a sanitizer, not as an oxidizer, leaving a need for adding a "shock" such as hydrogen peroxide or any chlorine-based shock to burn off inorganic waste and free up combined bromines. This extra step is not needed in a sodium chloride system, as chlorine is effective as both a sanitizer and an oxidizer. A user would only need to "super chlorinate" or increase chlorine production of the cell occasionally. That would normally be less than once a week or after heavy bather loads. References Swimming pools Water treatment Chlorine
Salt water chlorination
[ "Chemistry", "Engineering", "Environmental_science" ]
1,876
[ "Water technology", "Water treatment", "Water pollution", "Environmental engineering" ]
4,214,075
https://en.wikipedia.org/wiki/Intrinsic%20safety
Intrinsic safety (IS) is a protection technique for safe operation of electrical equipment in hazardous areas by limiting the energy, electrical and thermal, available for ignition. In signal and control circuits that can operate with low currents and voltages, the intrinsic safety approach simplifies circuits and reduces installation cost over other protection methods. Areas with dangerous concentrations of flammable gases or dust are found in applications such as petrochemical refineries and mines. As a discipline, it is an application of inherent safety in instrumentation. High-power circuits such as electric motors or lighting cannot use intrinsic safety methods for protection. Intrinsic safety devices can be subdivided into: Intrinsically safe apparatus Associated apparatus Intrinsically safe apparatus Intrinsically safe apparatuses are electrical devices in which all the circuits connected in the hazardous area are intrinsically safe circuits. Associated apparatus Associated apparatuses are electrical devices that have both intrinsically safe and non-intrinsically safe circuits and are designed in such a way that the non-intrinsically safe circuits cannot negatively affect the intrinsically safe circuits. The associated apparatus is normally installed in the non-hazardous area. Intrinsically safe circuit An intrinsically safe circuit is designed not to be capable of causing ignition of a given explosive atmosphere, by any spark or any thermal effect, under normal operation and specified fault conditions. Operating and design principles In normal use, electrical equipment often creates tiny electric arcs (internal sparks) in switches, motor brushes, connectors, and in other places. Compact electrical equipment generates heat as well, which under some circumstances can become an ignition source. There are multiple ways to make equipment safe for use in explosion-hazardous areas. Intrinsic safety (denoted by "i" in the ATEX and IECEx explosion classifications) is one of several available methods for electrical equipment; see Types of protection for more information. For handheld electronics, intrinsic safety is the only realistic method that allows a functional device to be explosion protected. A device which is termed "intrinsically safe" has been designed to be incapable of producing heat or spark sufficient to ignite an explosive atmosphere, even if the device has experienced deterioration or has been damaged. There are several considerations in designing intrinsically safe electronic devices: reducing or eliminating internal sparking; controlling component temperatures; and eliminating component spacings that would allow dust to short a circuit. Elimination of spark potential within components is accomplished by limiting the available energy in any given circuit and in the system as a whole. Temperature, under certain fault conditions such as an internal short in a semiconductor device, becomes an issue, as the temperature of a component can rise to a level that can ignite some explosive gases, even in normal use. Safeguards, such as current limiting by resistors and fuses, must be employed to ensure that in no circumstance can a component reach a temperature that could cause autoignition of a combustible atmosphere. In today's highly compact electronic devices, PCBs often have component spacings that create the possibility of an arc between components if dust or other particulate matter works into the circuitry; thus component spacing, siting, and isolation become important to the design.
The primary concept behind intrinsic safety is the restriction of the available electrical and thermal energy in the system so that ignition of a hazardous atmosphere (explosive gas or dust) cannot occur. This is achieved by ensuring that only low voltages and currents enter the hazardous area, and that no significant energy storage is possible. One of the most common methods of protection is to limit the electric current by using series resistors (of types that always fail open) and to limit the voltage with multiple Zener diodes (a simple numerical sizing sketch is given at the end of this article). In Zener barriers, dangerous incoming potentials are shunted to ground; with galvanic isolation barriers there is no direct connection between the safe-area and hazardous-area circuits, since a layer of insulation is interposed between the two. Certification standards for intrinsic safety designs (mainly IEC 60079-11 but since 2015 also IEC TS 60079-39) generally require that the barrier not exceed approved levels of voltage and current even with specified damage to the limiting components. Equipment or instrumentation for use in a hazardous area will be designed to operate with low voltage and current, and will be designed without any large capacitors or inductors that could discharge in a spark. The instrument will be connected, using approved wiring methods, back to a control panel in a non-hazardous area that contains safety barriers. The safety barriers ensure that, in normal operation and with the application of faults according to the equipment protection level (EPL), even if accidental contact occurs between the instrument circuit and other power sources, no more than the approved voltage and current enters the hazardous area. For example, during marine transfer operations, when flammable products are transferred between the marine terminal and tanker ships or barges, two-way radio communication needs to be constantly maintained in case the transfer needs to stop for unforeseen reasons such as a spill. The United States Coast Guard requires that the two-way radio be certified as intrinsically safe. Another example is intrinsically safe or explosion-proof mobile phones used in explosive atmospheres, such as refineries. Intrinsically safe mobile phones must meet special battery design criteria in order to achieve UL, ATEX directive, or IECEx certification for use in explosive atmospheres. Only properly designed battery-operated, self-contained devices can be intrinsically safe by themselves. Other field devices and wiring are intrinsically safe only when employed in a properly designed IS system. Requirements for intrinsically safe electrical systems are given in the IEC 60079 series of standards. Certifying agencies Standards for intrinsic protection are mainly developed by the International Electrotechnical Commission (IEC), but other agencies also develop standards for intrinsic safety. Agencies may be run by governments or may be composed of members from insurance companies, manufacturers, and industries with an interest in safety standards. Certifying agencies allow manufacturers to affix a label or mark to identify that the equipment has been designed to the relevant product safety standards. Examples of such agencies in North America are the Factory Mutual Research Corporation, which certifies radios; Underwriters Laboratories (UL), which certifies mobile phones; and, in Canada, the Canadian Standards Association.
In the EU, the standard for intrinsic safety certification is the CENELEC standard EN 60079-11, and equipment shall be certified according to the ATEX directive, while in other countries around the world the IEC standards are followed. To facilitate world trade, standards agencies around the world engage in harmonization activity so that intrinsically safe equipment manufactured in one country might eventually be approved for use in another without redundant, expensive testing and documentation. See also ATEX directive References Intrinsic safety on-line assessment tool IEC 60079-11:2023 Further reading Redding, R.J., Intrinsic Safety: Safe Use of Electronics in Hazardous Locations. McGraw-Hill European technical and industrial programme. 1971. Paul, V., "The earthing of intrinsically safe barriers on offshore transportable equipment". IMarEST. Proceedings of IMarEST - Part A - Journal of Marine Engineering and Technology, Volume 2009, Number 14, April 2009, pp. 3–17. Electrical safety Explosion protection Natural gas safety
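To illustrate the energy-limiting idea numerically, here is a sketch of the worst-case figures for a simple resistor-limited Zener barrier. The component values are hypothetical, and this is not a design procedure from IEC 60079-11 or any other standard:

```python
# Hypothetical Zener-barrier parameters
v_zener = 28.0    # V, clamping voltage of the Zener diodes
r_series = 300.0  # ohm, current-limiting series resistor

# Worst case: output terminals shorted -> maximum current into the hazardous area
i_short = v_zener / r_series

# Maximum power transfer into a matched resistive load is V^2 / (4R)
p_max = v_zener**2 / (4.0 * r_series)

print(f"max short-circuit current: {1000 * i_short:.1f} mA")  # ~93.3 mA
print(f"max matched-load power:    {p_max:.2f} W")            # ~0.65 W
```

A real certification additionally accounts for fault conditions, the stored energy in cable capacitance and inductance, and the ignition curves of the relevant gas group.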
Intrinsic safety
[ "Chemistry", "Engineering" ]
1,450
[ "Explosion protection", "Natural gas safety", "Combustion engineering", "Natural gas technology", "Explosions" ]
4,215,135
https://en.wikipedia.org/wiki/Atomic%20mirror
In physics, an atomic mirror is a device which reflects neutral atoms in a way similar to the way a conventional mirror reflects visible light. Atomic mirrors can be made of electric or magnetic fields, electromagnetic waves, or simply a silicon wafer; in the last case, atoms are reflected by the attractive tail of the van der Waals potential (see quantum reflection). Such reflection is efficient when the normal component of the wavenumber of the atoms is small, i.e., comparable to or smaller than the inverse of the effective range of the attractive potential (roughly, the distance at which the potential becomes comparable to the kinetic energy of the atom). To reduce the normal component, most atomic mirrors are operated at grazing incidence. At grazing incidence, the efficiency of the quantum reflection can be enhanced by a surface covered with ridges (ridged mirror). The set of narrow ridges reduces the van der Waals attraction of atoms to the surface and enhances the reflection. Each ridge blocks part of the wavefront, causing Fresnel diffraction. Such a mirror can be interpreted in terms of the Zeno effect: we may assume that the atom is "absorbed" or "measured" at the ridges, and frequent measuring (narrowly spaced ridges) suppresses the transition of the particle to the half-space with absorbers, causing specular reflection. At large separations between thin ridges, the reflectivity of the ridged mirror is determined by a single dimensionless momentum parameter and does not depend on the origin of the wave; therefore, it is suitable for the reflection of atoms. Applications Atomic interferometry See also Quantum reflection Ridged mirror Zeno effect Atomic nanoscope Atom laser References Atomic, molecular, and optical physics
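As a rough numerical illustration of why grazing incidence helps (not from the article; the atom species, velocity, and angle below are hypothetical), the normal component of the wavenumber follows from the de Broglie relation:

```python
import math

HBAR = 1.054571817e-34  # J*s
M_HE4 = 6.6464731e-27   # kg, mass of a helium-4 atom

v = 50.0       # m/s, hypothetical beam velocity
theta = 1e-3   # rad, grazing angle of incidence

k = M_HE4 * v / HBAR          # total de Broglie wavenumber, 1/m
k_perp = k * math.sin(theta)  # normal component, suppressed at grazing incidence

print(f"k      = {k:.3e} 1/m")       # ~3e9 1/m
print(f"k_perp = {k_perp:.3e} 1/m")  # ~3e6 1/m, three orders of magnitude smaller
```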
Atomic mirror
[ "Physics", "Chemistry" ]
335
[ "Atomic", " molecular", " and optical physics" ]
4,216,002
https://en.wikipedia.org/wiki/Near-field%20scanning%20optical%20microscope
Near-field scanning optical microscopy (NSOM) or scanning near-field optical microscopy (SNOM) is a microscopy technique for nanostructure investigation that breaks the far-field resolution limit by exploiting the properties of evanescent waves. In SNOM, the excitation laser light is focused through an aperture with a diameter smaller than the excitation wavelength, resulting in an evanescent field (or near-field) on the far side of the aperture. When the sample is scanned at a small distance below the aperture, the optical resolution of transmitted or reflected light is limited only by the diameter of the aperture. In particular, a lateral resolution of 6 nm and a vertical resolution of 2–5 nm have been demonstrated. As in optical microscopy, the contrast mechanism can be easily adapted to study different properties, such as refractive index, chemical structure, and local stress. Dynamic properties can also be studied at a sub-wavelength scale using this technique. NSOM/SNOM is a form of scanning probe microscopy. History Edward Hutchinson Synge is given credit for conceiving and developing the idea for an imaging instrument that would image by exciting and collecting diffraction in the near field. His original idea, proposed in 1928, was based upon the usage of intense nearly planar light from an arc under pressure behind a thin, opaque metal film with a small orifice of about 100 nm. The orifice was to remain within 100 nm of the surface, and information was to be collected by point-by-point scanning. He foresaw the illumination and the detector movement being the biggest technical difficulties. John A. O'Keefe also developed similar theories in 1956. He thought the moving of the pinhole or the detector when it is so close to the sample would be the most likely issue that could prevent the realization of such an instrument. It was Ash and Nicholls at University College London who, in 1972, first broke Abbe's diffraction limit using microwave radiation with a wavelength of 3 cm. A line grating was resolved with a resolution of λ0/60. A decade later, a patent on an optical near-field microscope was filed by Dieter Pohl, followed in 1984 by the first paper that used visible radiation for near-field scanning. The near-field optical (NFO) microscope involved a sub-wavelength aperture at the apex of a metal-coated, sharply pointed transparent tip, and a feedback mechanism to maintain a constant distance of a few nanometers between the sample and the probe. Lewis et al. were also aware of the potential of an NFO microscope at this time. They reported first results in 1986 confirming super-resolution. In both experiments, details below 50 nm (about λ0/10) in size could be recognized. Theory According to Abbe's theory of image formation, developed in 1873, the resolving capability of an optical component is ultimately limited by the spreading out of each image point due to diffraction. Unless the aperture of the optical component is large enough to collect all the diffracted light, the finer aspects of the image will not correspond exactly to the object. The minimum resolution (d) for the optical component is thus limited by its aperture size, and is expressed by the Rayleigh criterion: $d = \frac{0.61\,\lambda_0}{\mathrm{NA}}$. Here, λ0 is the wavelength in vacuum, and NA is the numerical aperture of the optical component (at most 1.3–1.4 for modern objectives with a very high magnification factor). Thus, the resolution limit is usually around λ0/2 for conventional optical microscopy.
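A small worked comparison of the far-field Rayleigh limit against an aperture-limited near-field probe; the wavelength, numerical aperture, and aperture size below are illustrative values:

```python
def rayleigh_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Far-field Rayleigh resolution limit d = 0.61 * lambda0 / NA."""
    return 0.61 * wavelength_nm / numerical_aperture

lam = 532.0  # nm, a green excitation laser
na = 1.4     # a high-end oil-immersion objective

print(f"far-field limit:        ~{rayleigh_limit_nm(lam, na):.0f} nm")  # ~232 nm

# In aperture NSOM the resolution is set by the aperture diameter instead,
# e.g. a 50 nm aperture resolves ~50 nm features regardless of wavelength.
aperture_nm = 50.0
print(f"NSOM, aperture-limited: ~{aperture_nm:.0f} nm")
```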
This treatment takes into account only the light diffracted into the far field that propagates without any restrictions. NSOM makes use of evanescent or non-propagating fields that exist only near the surface of the object. These fields carry the high-frequency spatial information about the object and have intensities that drop off exponentially with distance from the object (a numerical sketch of this decay is given at the end of this article). Because of this, the detector must be placed very close to the sample in the near-field zone, typically a few nanometers. As a result, near-field microscopy remains primarily a surface inspection technique. The detector is then rastered across the sample using a piezoelectric stage. The scanning can either be done at a constant height or with regulated height by using a feedback mechanism. Modes of operation Aperture and apertureless operation NSOM can be operated in a so-called aperture mode or in an apertureless mode. The tips used in the apertureless mode are very sharp and do not have a metal coating. Though there are many issues associated with apertured tips (heating, artifacts, contrast, sensitivity, topology, and interference among others), aperture mode remains more popular. This is primarily because apertureless mode is even more complex to set up and operate, and is not as well understood. There are five primary modes of apertured NSOM operation and four primary modes of apertureless NSOM operation. Some types of NSOM operation utilize a campanile probe, which has a square pyramid shape with two facets coated with a metal. Such a probe has a high signal collection efficiency (>90%) and no frequency cutoff. Another alternative is "active tip" schemes, where the tip is functionalized with active light sources such as a fluorescent dye or even a light-emitting diode that enables fluorescence excitation. The merits of aperture and apertureless NSOM configurations can be merged in a hybrid probe design, which contains a metallic tip attached to the side of a tapered optical fiber. In the visible range (400 nm to 900 nm), about 50% of the incident light can be focused to the tip apex, which is around 5 nm in radius. This hybrid probe can deliver the excitation light through the fiber to realize tip-enhanced Raman spectroscopy (TERS) at the tip apex, and collect the Raman signals through the same fiber. Lens-free fiber-in-fiber-out STM-NSOM-TERS has been demonstrated. Feedback mechanisms Feedback mechanisms are usually used to achieve high-resolution and artifact-free images, since the tip must be positioned within a few nanometers of the surface. Some of these mechanisms are constant-force feedback and shear-force feedback. Constant-force feedback mode is similar to the feedback mechanism used in atomic force microscopy (AFM). Experiments can be performed in contact, intermittent-contact, and non-contact modes. In shear-force feedback mode, a tuning fork is mounted alongside the tip and made to oscillate at its resonance frequency. The amplitude is closely related to the tip-surface distance, and is thus used as a feedback mechanism. Contrast It is possible to take advantage of the various contrast techniques available to optical microscopy through NSOM, but with much higher resolution.
By using the change in the polarization of light or the intensity of the light as a function of the incident wavelength, it is possible to make use of contrast-enhancing techniques such as staining, fluorescence, phase contrast, and differential interference contrast. It is also possible to provide contrast using the change in refractive index, reflectivity, local stress, and magnetic properties, amongst others. Instrumentation and standard setup The primary components of an NSOM setup are the light source, the feedback mechanism, the scanning tip, the detector, and the piezoelectric sample stage. The light source is usually a laser focused into an optical fiber through a polarizer, a beam splitter, and a coupler. The polarizer and the beam splitter serve to remove stray light from the returning reflected light. The scanning tip, depending upon the operation mode, is usually a pulled or stretched optical fiber coated with metal except at the tip, or just a standard AFM cantilever with a hole in the center of the pyramidal tip. Standard optical detectors, such as an avalanche photodiode, a photomultiplier tube (PMT), or a CCD, can be used. Highly specialized NSOM techniques, Raman NSOM for example, have much more stringent detector requirements. Near-field spectroscopy As the name implies, information is collected by spectroscopic means instead of imaging in the near-field regime. Through near-field spectroscopy (NFS), one can probe spectroscopically with sub-wavelength resolution. Raman SNOM and fluorescence SNOM are two of the most popular NFS techniques, as they allow for the identification of nanosized features with chemical contrast. Some of the common near-field spectroscopic techniques are below. Direct local Raman NSOM is based on Raman spectroscopy. Aperture Raman NSOM is limited by very hot and blunt tips, and by long collection times. However, apertureless NSOM can be used to achieve high Raman scattering efficiency factors (around 40). Topological artifacts make it hard to implement this technique for rough surfaces. Tip-enhanced Raman spectroscopy (TERS) is an offshoot of surface-enhanced Raman spectroscopy (SERS). This technique can be used in an apertureless shear-force NSOM setup, or by using an AFM tip coated with gold or silver. The Raman signal is found to be significantly enhanced under the AFM tip. This technique has been used to map local variations in the Raman spectra along a single-walled nanotube. A highly sensitive optoacoustic spectrometer must be used for the detection of the Raman signal. Fluorescence NSOM is a highly popular and sensitive technique which makes use of fluorescence for near-field imaging, and is especially suited for biological applications. The technique of choice here is apertureless excitation with the emission collected back through the fiber in constant shear-force mode. This technique uses merocyanine-based dyes embedded in an appropriate resin. Edge filters are used for removal of all primary laser light. Resolution as low as 10 nm can be achieved using this technique. Near-field infrared spectrometry and near-field dielectric microscopy use near-field probes to combine sub-micron microscopy with localized IR spectroscopy. The nano-FTIR method is a broadband nanoscale spectroscopy that combines apertureless NSOM with broadband illumination and FTIR detection to obtain a complete infrared spectrum at every spatial location. Sensitivity to a single molecular complex and nanoscale resolution up to 10 nm have been demonstrated with nano-FTIR.
The nanofocusing technique can create a nanometer-scale "white" light source at the tip apex, which can be used to illuminate a sample in the near field for spectroscopic analysis. The interband optical transitions in individual single-walled carbon nanotubes have been imaged this way, and a spatial resolution of around 6 nm has been reported. Artifacts NSOM can be vulnerable to artifacts that are not from the intended contrast mode. The most common roots of artifacts in NSOM are tip breakage during scanning, striped contrast, displaced optical contrast, local far-field light concentration, and topographic artifacts. In apertureless NSOM, also known as scattering-type SNOM or s-SNOM, many of these artifacts are eliminated or can be avoided by proper technique application. Limitations One limitation is the very short working distance and the extremely shallow depth of field. NSOM is normally limited to surface studies; however, it can be applied for subsurface investigations within the corresponding depth of field. Shear-force mode and other contact modes of operation are not conducive to studying soft materials. Scan times are long for high-resolution imaging of large sample areas. An additional limitation is the predominant orientation of the polarization state of the interrogating light in the near field of the scanning tip. Metallic scanning tips naturally orient the polarization state perpendicular to the sample surface. Other techniques, like anisotropic terahertz microspectroscopy, utilize in-plane polarimetry to study physical properties inaccessible to near-field scanning optical microscopes, including the spatial dependence of intramolecular vibrations in anisotropic molecules. See also Fluorescence spectroscopy Nano-optics Near-field optics References External links Scanning probe microscopy Cell imaging Laboratory equipment Microscopy Optical microscopy
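Referring back to the exponential decay of the evanescent field noted in the theory section, here is a minimal numerical sketch; the 50 nm decay length is a hypothetical illustrative value, not a property of any particular probe:

```python
import math

def evanescent_intensity(z_nm: float, decay_length_nm: float = 50.0) -> float:
    """Relative intensity of an evanescent field at height z above the surface.

    The field amplitude decays as exp(-z/d), so the intensity decays as
    exp(-2z/d); d ~ 50 nm is an illustrative sub-wavelength decay length.
    """
    return math.exp(-2.0 * z_nm / decay_length_nm)

for z in (0, 10, 25, 50, 100):
    print(f"z = {z:3d} nm -> I/I0 = {evanescent_intensity(z):.3f}")
```

The rapid fall-off is why the probe must sit within a few nanometers of the sample, and why NSOM remains primarily a surface technique.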
Near-field scanning optical microscope
[ "Chemistry", "Materials_science", "Biology" ]
2,466
[ "Optical microscopy", "Measuring instruments", "Microscopes", "Scanning probe microscopy", "Microscopy", "Nanotechnology", "Cell imaging" ]
4,217,326
https://en.wikipedia.org/wiki/Intracluster%20medium
In astronomy, the intracluster medium (ICM) is the superheated plasma that permeates a galaxy cluster. The gas consists mainly of ionized hydrogen and helium and accounts for most of the baryonic material in galaxy clusters. The ICM is heated to temperatures on the order of 10 to 100 megakelvins, emitting strong X-ray radiation. Composition The ICM is composed primarily of ordinary baryons, mainly ionized hydrogen and helium. This plasma is enriched with heavier elements, including iron. The average amount of heavier elements relative to hydrogen, known as metallicity in astronomy, ranges from a third to a half of the value in the sun. Studying the chemical composition of the ICM as a function of radius has shown that the cores of galaxy clusters are more metal-rich than their outer regions. In some clusters (e.g. the Centaurus cluster) the metallicity of the gas can rise to above that of the sun. Due to the gravitational field of clusters, metal-enriched gas ejected by supernovae remains gravitationally bound to the cluster as part of the ICM. By looking at varying redshifts, which correspond to different epochs of the evolution of the Universe, the ICM can provide a historical record of element production in galaxies. Roughly 15% of a galaxy cluster's mass resides in the ICM. The stars and galaxies contribute only around 5% to the total mass. It is theorized that most of the mass in a galaxy cluster consists of dark matter and not baryonic matter. For the Virgo Cluster, the ICM contains roughly 3 × 10^14 M☉ while the total mass of the cluster is estimated to be 1.2 × 10^15 M☉. Although the ICM on the whole contains the bulk of a cluster's baryons, it is not very dense, with typical values of 10^−3 particles per cubic centimeter. The mean free path of the particles is roughly 10^16 m, or about one light-year. The density of the ICM rises towards the centre of the cluster with a relatively strong peak. In addition, the temperature of the ICM typically drops to 1/2 or 1/3 of the outer value in the central regions. Once the density of the plasma reaches a critical value, interactions between the ions ensure cooling via X-ray radiation. Observing the intracluster medium As the ICM is at such high temperatures, it emits X-ray radiation, mainly by the bremsstrahlung process and X-ray emission lines from the heavy elements. These X-rays can be observed using an X-ray telescope, and through analysis of this data it is possible to determine the physical conditions, including the temperature, density, and metallicity of the plasma. Measurements of the temperature and density profiles in galaxy clusters allow for a determination of the mass distribution profile of the ICM through hydrostatic equilibrium modeling (a schematic example is given below). The mass distributions determined from these methods reveal masses that far exceed the luminous mass seen and are thus a strong indication of dark matter in galaxy clusters. Inverse Compton scattering of low-energy photons through interactions with the relativistic electrons in the ICM causes distortions in the spectrum of the cosmic microwave background radiation (CMB), known as the Sunyaev–Zel'dovich effect. These temperature distortions in the CMB can be used by telescopes such as the South Pole Telescope to detect dense clusters of galaxies at high redshifts. In December 2022, the James Webb Space Telescope was reported to be studying the faint light emitted in the intracluster medium, which a 2018 study found to be an "accurate luminous tracer of dark matter".
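As a schematic illustration of the hydrostatic mass estimate mentioned above, the enclosed mass follows from $M(<r) = -\frac{k_B T(r)\, r}{G \mu m_p}\left(\frac{d\ln n}{d\ln r} + \frac{d\ln T}{d\ln r}\right)$. The profile slopes and temperature below are hypothetical; a real analysis fits deprojected X-ray surface-brightness and spectral profiles:

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
K_B = 1.381e-23    # J/K
M_P = 1.673e-27    # kg, proton mass
MU = 0.6           # mean molecular weight of a fully ionized H/He plasma
M_SUN = 1.989e30   # kg
MPC = 3.086e22     # m

def hydrostatic_mass_msun(r_mpc, t_kelvin, dln_n_dln_r, dln_t_dln_r):
    """Enclosed mass (solar masses) from the hydrostatic equilibrium equation."""
    r = r_mpc * MPC
    m = -(K_B * t_kelvin * r) / (G * MU * M_P) * (dln_n_dln_r + dln_t_dln_r)
    return m / M_SUN

# Hypothetical cluster: T ~ 5e7 K at r = 1 Mpc, density slope -2, flat temperature
print(f"M(<1 Mpc) ~ {hydrostatic_mass_msun(1.0, 5e7, -2.0, 0.0):.1e} M_sun")  # ~3e14
```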
Cooling flows Plasma in regions of the cluster with a cooling time shorter than the age of the system should be cooling due to strong X-ray radiation, whose emission is proportional to the density squared. Since the density of the ICM is highest towards the center of the cluster, the radiative cooling time there drops by a significant amount. The central cooled gas can no longer support the weight of the external hot gas, and the pressure gradient drives what is known as a cooling flow, in which the hot gas from the external regions flows slowly towards the center of the cluster. This inflow would result in regions of cold gas and thus regions of new star formation. Recently, however, with the launch of new X-ray telescopes such as the Chandra X-ray Observatory, images of galaxy clusters with better spatial resolution have been taken. These new images do not indicate signs of new star formation on the order of what was historically predicted, motivating research into the mechanisms that would prevent the central ICM from cooling. Heating There are two popular explanations for the mechanisms that prevent the central ICM from cooling: feedback from active galactic nuclei through the injection of relativistic jets of plasma, and sloshing of the ICM plasma during mergers with subclusters. The relativistic jets of material from active galactic nuclei can be seen in images taken by telescopes with high angular resolution, such as the Chandra X-ray Observatory. See also Interstellar medium References Large-scale structure of the cosmos Extragalactic astronomy Outer space Space plasmas Intergalactic media
Intracluster medium
[ "Physics", "Astronomy" ]
1,072
[ "Space plasmas", "Galaxy clusters", "Outer space", "Intergalactic media", "Astrophysics", "Extragalactic astronomy", "Astronomical objects", "Astronomical sub-disciplines" ]
4,218,673
https://en.wikipedia.org/wiki/Spatial%20relation
A spatial relation specifies how some object is located in space in relation to some reference object. When the reference object is much bigger than the object to locate, the latter is often represented by a point. The reference object is often represented by a bounding box. In anatomy, it may be the case that a spatial relation is not fully applicable; a degree of applicability is therefore defined, which specifies from 0 to 100% how strongly the spatial relation holds. Often researchers concentrate on defining the applicability function for various spatial relations. In spatial databases and geospatial topology, spatial relations are used for spatial analysis and constraint specifications. Spatial relations also play a central role in cognitive development (for walking, catching objects, or understanding object behaviour), in robotic Natural Features Navigation, and in many other areas. Commonly used types of spatial relations are: topological, directional, and distance relations. Topological relations The DE-9IM model expresses important spatial relations which are invariant to rotation, translation, and scaling transformations. For any two spatial objects a and b, which can be points, lines, and/or polygonal areas, there are 9 relations derived from DE-9IM; the table of these relations is not reproduced here, but typical named predicates include equals, disjoint, intersects, touches, crosses, overlaps, within, and contains (see the sketch below). Directional relations Directional relations can again be differentiated into external directional relations and internal directional relations. An internal directional relation specifies where an object is located inside the reference object, while an external relation specifies where the object is located outside of the reference object. Examples for internal directional relations: left; on the back; athwart, abaft. Examples for external directional relations: on the right of; behind; in front of, abeam, astern. Distance relations Distance relations specify how far the object is away from the reference object. Examples are: at; nearby; in the vicinity; far away. Relations by class Reference objects represented by a bounding box or another kind of "spatial envelope" that encloses their borders can be denoted by the maximum number of dimensions of this envelope: '0' for punctual objects, '1' for linear objects, '2' for planar objects, '3' for volumetric objects. So, any object in a 2D modeling can be classified as a point, line, or area according to its delimitation. Then, a type of spatial relation can be expressed by the classes of the objects that participate in the relation: point-point, point-line, point-area, line-line, line-area, and area-area relations (the example relations originally listed for each class are not reproduced here). More complex modeling schemas can represent an object as a composition of simple sub-objects. Examples: represent in an astronomical map a star by a point and a binary star by two points; represent in a geographical map a river with a line, for its source stream, and with a strip area, for the rest of the river. These schemas can use the above classes, uniform composition classes (multi-point, multi-line and multi-area), and heterogeneous composition (points+lines as "objects of dimension 1", points+lines+areas as "objects of dimension 2"). Two internal components of a complex object can express (the above) binary relations between them, and ternary relations, using the whole object as a frame of reference. Some relations can be expressed by an abstract component, such as the center of mass of the binary star, or the center line of the river.
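A small sketch of how such topological predicates are queried in practice, using the Shapely library for Python; the geometries are hypothetical, and relate() returns the raw DE-9IM intersection matrix as a 9-character string:

```python
from shapely.geometry import Point, Polygon

# A reference object (unit square) and two objects to locate relative to it
square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
inside = Point(0.5, 0.5)
outside = Point(2.0, 2.0)

# Named topological predicates derived from DE-9IM
print(square.contains(inside))   # True
print(inside.within(square))     # True
print(square.disjoint(outside))  # True

# The raw DE-9IM matrix over the alphabet {0, 1, 2, F}
print(square.relate(inside))     # e.g. '0F2FF1FF2'
```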
Temporal references For human thinking, spatial relations include qualities like size, distance, volume, order, and also time. Stockdale and Possin discuss the many ways in which people with difficulty establishing spatial and temporal relationships can face problems in ordinary situations. See also Anatomical terms of location Dimensionally Extended nine-Intersection Model (DE-9IM) Water-level task Allen's interval algebra (temporal analog) Commonsense reasoning References Cognitive science Space
Spatial relation
[ "Physics", "Mathematics" ]
782
[ "Spacetime", "Space", "Geometry" ]
4,218,742
https://en.wikipedia.org/wiki/Q10%20%28temperature%20coefficient%29
The Q10 temperature coefficient is a measure of the temperature sensitivity of the rate of a chemical reaction or physiological process. The Q10 is calculated as $Q_{10} = \left(\frac{R_2}{R_1}\right)^{10/(T_2 - T_1)}$, where $R_1$ and $R_2$ are the rates measured at temperatures $T_1$ and $T_2$ respectively, with temperatures in degrees Celsius or kelvins. Rewriting this equation, the assumption behind Q10 is that the reaction rate R depends exponentially on temperature: $R_2 = R_1\, Q_{10}^{(T_2 - T_1)/10}$. Q10 is a unitless quantity, as it is the factor by which a rate changes for every 10 °C rise, and is a useful way to express the temperature dependence of a process. For most biological systems, the Q10 value is ~2 to 3. In muscle performance The temperature of a muscle has a significant effect on the velocity and power of the muscle contraction, with performance generally declining with decreasing temperatures and increasing with rising temperatures. The Q10 coefficient represents the degree of temperature dependence a muscle exhibits as measured by contraction rates. A Q10 of 1.0 indicates thermal independence of a muscle, whereas an increasing Q10 value indicates increasing thermal dependence. Values less than 1.0 indicate a negative or inverse thermal dependence, i.e., a decrease in muscle performance as temperature increases. Q10 values for biological processes vary with temperature. Decreasing muscle temperature results in a substantial decline of muscle performance, such that a 10 degree Celsius temperature decrease results in at least a 50% decline in muscle performance. Persons who have fallen into icy water may gradually lose the ability to swim or grasp safety lines due to this effect, although other effects such as atrial fibrillation are a more immediate cause of drowning deaths. At some minimum temperature biological systems do not function at all, but performance increases with rising temperature (Q10 of 2-4) to a maximum performance level and thermal independence (Q10 of 1.0-1.5). With continued increase in temperature, performance decreases rapidly (Q10 of 0.2-0.8) up to a maximum temperature at which all biological function again ceases. Within vertebrates, different skeletal muscle activities have correspondingly different thermal dependencies. The rates of muscle twitch contraction and relaxation are thermally dependent (Q10 of 2.0-2.5), whereas maximum contraction, e.g., tetanic contraction, is thermally independent. Muscles of some ectothermic species, e.g., sharks, show less thermal dependence at lower temperatures than those of endothermic species. See also Arrhenius equation Arrhenius plot Isotonic (exercise physiology) Isometric exercise Skeletal striated muscle Tetanic contraction References Ecological metrics Chemical kinetics
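A minimal sketch of the two formulas above; the rates and temperatures are illustrative numbers:

```python
def q10(rate1: float, rate2: float, temp1: float, temp2: float) -> float:
    """Temperature coefficient Q10 = (R2/R1) ** (10 / (T2 - T1))."""
    return (rate2 / rate1) ** (10.0 / (temp2 - temp1))

def rate_at(rate1: float, temp1: float, temp2: float, q10_value: float) -> float:
    """Extrapolate a rate to temp2 assuming exponential temperature dependence."""
    return rate1 * q10_value ** ((temp2 - temp1) / 10.0)

# Example: a process runs at 2.0 units/s at 20 C and 4.4 units/s at 30 C
q = q10(2.0, 4.4, 20.0, 30.0)
print(f"Q10 = {q:.2f}")  # 2.20
print(f"predicted rate at 37 C: {rate_at(2.0, 20.0, 37.0, q):.2f} units/s")
```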
Q10 (temperature coefficient)
[ "Chemistry", "Mathematics" ]
529
[ "Chemical reaction engineering", "Metrics", "Ecological metrics", "Quantity", "Chemical kinetics" ]
4,218,901
https://en.wikipedia.org/wiki/Pachytene
The pachytene stage (/ˈpækɪtiːn/ PAK-i-teen; from Greek words meaning "thick threads"), also known as pachynema, is the third stage of prophase I during meiosis, the specialized cell division that reduces chromosome number by half to produce haploid gametes. It follows the zygotene stage and is followed by the diplotene stage. Synapsed chromosomes During pachytene, the homologous chromosomes are fully synapsed along their lengths by the completed synaptonemal complex, the protein structure formed in the previous stages. This holds the homologs closely paired, allowing intimate DNA interactions. Chromosome condensation The chromosomes reach their highest level of condensation during pachytene. Each chromosome consists of two closely associated sister chromatids along their entire length. The chromosomes appear as distinct, well-defined threadlike structures under the microscope. Sex chromosomes, however, are not wholly identical, and exchange information only over a small region of homology called the pseudoautosomal region. Recombination nodules Multiple recombination nodules are distinctly visible along the paired homologous chromosomes. These proteinaceous structures mark the sites of genetic crossover events between the non-sister chromatids that were initiated during zygotene. Proteins like MLH1 and MLH3 stabilize the crossover events, ensuring at least one obligatory crossover per chromosome arm. This gives each chromosome a minimum of two crossover sites. Additional crossovers are also possible but regulated. DNA repair During pachytene, any unresolved DNA double-strand breaks from previous recombination events are repaired. Mismatch repair proteins help correct any errors in base pairing between the homologs. Treatment of male mice during meiosis with gamma radiation causes DNA damage. Homologous recombination is the principal mechanism of DNA repair acting during meiosis. From the leptotene to early pachytene stages of meiosis, such exogenous damage triggered the massive presence of gamma-H2AX (which forms when DNA double-strand breaks appear) throughout the nucleus, and this was associated with DNA repair mediated by the homologous recombination components DMC1 and RAD51. The meiotic sex checkpoint Pachytene is also a stage where a critical checkpoint operates to monitor proper chromosome synapsis and recombination. Errors detected at this stage can arrest the meiotic cell cycle and trigger apoptosis (programmed cell death) of the defective cell. Transition to diplotene Once crossover events are stabilized, the synaptonemal complex disassembles and chromosomes begin to gradually desynapse as the cell transitions into the diplotene stage. Importance The pachytene stage is essential for the extensive genetic recombination and accurate chromosome segregation in meiosis. Defects at this stage can lead to aneuploidy and nondisjunction. References Meiosis Cellular processes
Pachytene
[ "Biology" ]
628
[ "Molecular genetics", "Cellular processes", "Meiosis" ]
11,977,518
https://en.wikipedia.org/wiki/Gyrochronology
Gyrochronology is a method for estimating the age of a low-mass (cool) main sequence star (spectral class F8 V or later) from its rotation period. The term is derived from the Greek words gyros, chronos and logos, roughly translated as rotation, age, and study respectively. It was coined in 2003 by Sydney Barnes to describe the associated procedure for deriving stellar ages, and developed extensively in empirical form in 2007. Gyrochronology builds on the work of Andrew Skumanich, who found that the average value of (v sin i) for several open clusters was inversely proportional to the square root of the cluster's age. In the expression (v sin i), v is the velocity at the star's equator and i is the inclination angle of the star's axis of rotation, which is generally an unmeasurable quantity. The gyrochronology method depends on the relationship between the rotation period and the mass of low-mass main-sequence stars of the same age, which was verified by early work on the Hyades open cluster. The associated age estimate for a star is known as the gyrochronological age. Overview The basic idea underlying gyrochronology is that the rotation period P of a cool main-sequence star is a deterministic function of its age t and its mass M (or a suitable substitute such as color). Although main sequence stars of a given mass form with a range of rotation periods, their periods increase rapidly and converge to a well-defined value as they lose angular momentum through magnetically channelled stellar winds. Therefore, their periods converge to a certain function of age and mass, mathematically denoted by P = P(t, M). Consequently, cool stars do not occupy the entire 3-dimensional parameter space of (mass, age, period), but instead define a 2-dimensional surface in this P-t-M space. Therefore, measuring two of these variables yields the third. Of these quantities, the mass (color) and the rotation period are the easier variables to measure, providing access to the star's age, which is otherwise difficult to obtain. In order to determine the shape of this P = P(t, M) surface, the rotation periods and photometric colors (mass) of stars in clusters of known age are measured. Data have been accumulated from several clusters younger than one billion years (1 Gyr) of age and one cluster with an age of 2.5 Gyr. Another data point on the surface is from the Sun, with an age of 4.56 Gyr and a rotation period of 25 days. Using these results, the ages of a large number of cool galactic field stars can be derived with 10% precision. Magnetic stellar-wind braking increases the rotation period of a star and is important in stars with convective envelopes. Stars with a color index greater than about (B-V) = 0.47 mag (the Sun has a color index of 0.66 mag) have convective envelopes, while more massive stars have radiative envelopes. Also, these lower-mass stars spend a considerable amount of time on a pre-main-sequence Hayashi track, where they are nearly fully convective. See also Nucleocosmochronology References Further reading Concepts in astrophysics Space science
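As a sketch of how a gyrochronological age is extracted from a rotation period and a color, one can use the empirical power-law form $P(t, B{-}V) = t^{n}\, a\,(B{-}V - c)^{b}$ with P in days and t in Myr. The coefficients below follow the Barnes (2007) fit (n ≈ 0.5189, a ≈ 0.7725, b ≈ 0.601, c = 0.4) and should be treated as illustrative:

```python
# Barnes (2007) empirical fit coefficients; treat as illustrative values.
n_fit, a_fit, b_fit, c_fit = 0.5189, 0.7725, 0.601, 0.4

def gyro_age_myr(period_days: float, b_minus_v: float) -> float:
    """Invert P = t**n * a * (B-V - c)**b to estimate a stellar age in Myr."""
    if b_minus_v <= c_fit:
        raise ValueError("relation applies only to cool stars with B-V > 0.4")
    return (period_days / (a_fit * (b_minus_v - c_fit) ** b_fit)) ** (1.0 / n_fit)

# Roughly solar inputs (P ~ 25 d, B-V ~ 0.66) give an age of order 4 Gyr
print(f"gyro age: ~{gyro_age_myr(25.0, 0.66) / 1000.0:.1f} Gyr")
```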
Gyrochronology
[ "Physics", "Astronomy" ]
691
[ "Space science", "Outer space", "Concepts in astrophysics", "Astrophysics" ]
11,984,510
https://en.wikipedia.org/wiki/Integrated%20Ballistics%20Identification%20System
The Integrated Ballistics Identification System, or IBIS, is the brand of automated firearms identification system manufactured by Forensic Technology WAI, Inc., of Montreal, Canada. Use IBIS has been adopted as the platform of the National Integrated Ballistic Information Network (NIBIN) program, which is run by the United States Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF). NIBIN tracks about 100,000 guns used in crimes. The integration of the technology into about 220 sites across the continental US and its territories facilitates sharing of information between different law enforcement groups. The rapid dissemination of ballistics information, in turn, allows for tracking of gun-specific information and connection of a particular firearm to multiple crimes irrespective of geographic location. A National Research Council report has found that with the NIBIN dataset, a bullet retrieved from a crime scene will generate about 10 possible matches, with about a 75-95% chance of a successful match. While some groups have advocated laws requiring all firearms sold to be test-fired and registered in such a system, success has been mixed. In 2005, a Maryland State Police report recommended that a law requiring all handguns sold in the state to be registered in their IBIS system be repealed, as at a cost of $2.5 million the system had not produced "any meaningful hits". The Maryland system was shut down in 2015 due to its ineffectiveness. By 2008, the New York COBIS system, which costs $4 million per year, had not produced any hits leading to prosecutions in 7 years of operation. The system has been more successful when used to track guns used by and found on criminals. In television IBIS is frequently mentioned in modern television programs, fictional and otherwise, that use forensics to aid in solving crimes. These television shows include CSI: Crime Scene Investigation and its spinoffs, amongst others. Forensic Technology helped develop an interactive exhibit, 'CSI: The Experience', that showcased the company's technology. See also National Ballistics Intelligence Service, a similar body in the United Kingdom References External links 1. https://web.archive.org/web/20070711154331/http://www.nibin.gov/ is the official Web site for the NIBIN, the National Integrated Ballistics Information Network. 2. http://www.fti-ibis.com is the Web site for the developer and supporter of IBIS technology, Forensic Technology Incorporated. Ballistics Forensic software
Integrated Ballistics Identification System
[ "Physics" ]
506
[ "Applied and interdisciplinary physics", "Ballistics" ]
7,344,293
https://en.wikipedia.org/wiki/Zero%20sound
Zero sound is the name given by Lev Landau in 1957 to the unique quantum vibrations in quantum Fermi liquids. Zero sound can no longer be thought of as a simple wave of compression and rarefaction, but rather as a fluctuation in space and time of the quasiparticles' momentum distribution function. As the shape of the Fermi distribution function changes slightly (or largely), zero sound propagates as a deformation of the Fermi surface peaked along the direction of propagation, with no change of the density of the liquid. The prediction and subsequent experimental observation of zero sound was one of the key confirmations of the correctness of Landau's Fermi liquid theory. Derivation from Boltzmann transport equation The Boltzmann transport equation for general systems in the semiclassical limit gives, for a Fermi liquid, $\frac{\partial f}{\partial t} + \nabla_{\mathbf p}\varepsilon \cdot \nabla_{\mathbf r} f - \nabla_{\mathbf r}\varepsilon \cdot \nabla_{\mathbf p} f = \mathrm{St}\,f$, where $f(\mathbf p, \mathbf r, t)$ is the density of quasiparticles (here we ignore spin) with momentum $\mathbf p$ and position $\mathbf r$ at time $t$, $\varepsilon(\mathbf p, \mathbf r, t)$ is the energy of a quasiparticle of momentum $\mathbf p$, and $\mathrm{St}\,f$ is the collision functional ($f^0$ and $\varepsilon^0$ denote the equilibrium distribution and the energy in the equilibrium distribution). The semiclassical limit assumes that $f$ fluctuates with angular frequency $\omega$ and wavelength $\lambda = 2\pi/q$, which are much lower than $\varepsilon_F/\hbar$ and much longer than $\hbar/p_F$ respectively, where $\varepsilon_F$ and $p_F$ are the Fermi energy and momentum respectively, around which $f$ is nontrivial. To first order in the fluctuation from equilibrium, $\delta f = f - f^0$, the equation becomes $\frac{\partial\,\delta f}{\partial t} + \mathbf v_{\mathbf p}\cdot\nabla_{\mathbf r}\,\delta f - \frac{\partial f^0}{\partial \varepsilon}\,\mathbf v_{\mathbf p}\cdot\nabla_{\mathbf r}\,\delta\varepsilon = \mathrm{St}\,f$. When the quasiparticle's mean free path $\ell$ (equivalently, relaxation time $\tau$) satisfies $\omega\tau \ll 1$, ordinary sound waves ("first sound") propagate with little absorption. But at low temperatures $T$ (where $\tau$ and $\ell$ scale as $T^{-2}$), the mean free path exceeds $\lambda$, so that $\omega\tau \gg 1$, and as a result the collision functional $\mathrm{St}\,f \approx 0$. Zero sound occurs in this collisionless limit. In the Fermi liquid theory, the energy of a quasiparticle of momentum $\mathbf p$ is $\varepsilon(\mathbf p, \mathbf r, t) = \varepsilon^0(\mathbf p) + \sum_{\mathbf p'} F(\mathbf p, \mathbf p')\,\delta f(\mathbf p', \mathbf r, t)$, where $F(\mathbf p, \mathbf p')$ is the appropriately normalized Landau parameter, and $\delta f = f - f^0$. The approximated transport equation then has plane wave solutions $\delta f(\mathbf p, \mathbf r, t) = \delta(\varepsilon_{\mathbf p} - \varepsilon_F)\,\nu(\hat{\mathbf p})\,e^{i(\mathbf q\cdot\mathbf r - \omega t)}$, with $\nu(\hat{\mathbf p})$ given by $(\omega - \mathbf q\cdot\mathbf v_{\mathbf p})\,\nu(\hat{\mathbf p}) = \mathbf q\cdot\mathbf v_{\mathbf p} \int \frac{d\Omega'}{4\pi}\,F(\hat{\mathbf p}, \hat{\mathbf p}')\,\nu(\hat{\mathbf p}')$. This functional operator equation gives the dispersion relation for the zero sound waves with frequency $\omega$ and wave vector $\mathbf q$. The transport equation is valid in the regime where $\hbar\omega \ll \varepsilon_F$ and $\hbar q \ll p_F$. In many systems, $F(\hat{\mathbf p}, \hat{\mathbf p}')$ only slowly depends on the angle between $\hat{\mathbf p}$ and $\hat{\mathbf p}'$. If $F = F_0$ is an angle-independent constant with $F_0 > 0$ (note that this constraint is stricter than the Pomeranchuk instability bound $F_0 > -1$), then the wave has the form $\nu(\hat{\mathbf p}) \propto \frac{\cos\theta}{s - \cos\theta}$ and dispersion relation $\frac{s}{2}\ln\left(\frac{s+1}{s-1}\right) - 1 = \frac{1}{F_0}$, where $s = \omega/(q v_F)$ is the ratio of the zero sound phase velocity to the Fermi velocity and $\theta$ is the angle between $\hat{\mathbf p}$ and $\mathbf q$. If the first two Legendre components of the Landau parameter are significant, $F_0 > 0$ and $F_1 > 6$, the system also admits an asymmetric zero sound wave solution $\nu(\hat{\mathbf p}) \propto \frac{\sin\theta\,\cos\theta\, e^{i\phi}}{s - \cos\theta}$ (where $\phi$ and $\theta$ are the azimuthal and polar angle of $\hat{\mathbf p}$ about the propagation direction $\hat{\mathbf q}$) and a dispersion relation given by the analogous consistency condition in the $l = 1$ channel. See also Second sound Third sound References Further reading Statistical mechanics Condensed matter physics Lev Landau
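A minimal numerical sketch that solves the dispersion relation above for the speed ratio s as a function of F0; for s > 1 the left-hand side decreases monotonically from infinity to 0, so simple bisection suffices:

```python
import math

def lhs(s: float) -> float:
    """Left-hand side (s/2) ln((s+1)/(s-1)) - 1 of the dispersion relation."""
    return 0.5 * s * math.log((s + 1.0) / (s - 1.0)) - 1.0

def zero_sound_speed(f0: float, tol: float = 1e-12) -> float:
    """Solve lhs(s) = 1/F0 for s = omega / (q v_F) by bisection on s > 1."""
    lo, hi = 1.0 + 1e-15, 2.0
    while lhs(hi) > 1.0 / f0:  # expand the bracket until the root is enclosed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lhs(mid) > 1.0 / f0:
            lo = mid           # lhs too large -> root lies at larger s
        else:
            hi = mid
    return 0.5 * (lo + hi)

for f0 in (0.5, 1.0, 5.0, 10.0):
    print(f"F0 = {f0:4.1f} -> s = {zero_sound_speed(f0):.4f}")
```

For weak repulsion (F0 approaching 0 from above) s approaches 1, i.e., the mode skims the particle-hole continuum; for large F0 it grows roughly as sqrt(F0/3).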
Zero sound
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
546
[ "Phases of matter", "Materials science", "Condensed matter physics", "Statistical mechanics", "Matter" ]
7,344,637
https://en.wikipedia.org/wiki/Desert%20%28particle%20physics%29
In the Grand Unified Theory of particle physics (GUT), the desert refers to a theorized gap in energy scales, between approximately the electroweak energy scale, conventionally defined as roughly the vacuum expectation value (VEV) of the Higgs field (about 246 GeV), and the GUT scale, in which no unknown interactions appear. It can also be described as a gap in the lengths involved, with no new physics below 10⁻¹⁸ m (the currently probed length scale) and above 10⁻³¹ m (the GUT length scale). The idea of the desert was motivated by the observation of approximate, order-of-magnitude gauge coupling unification at the GUT scale. When the values of the gauge coupling constants of the weak nuclear, strong nuclear, and electromagnetic forces are plotted as a function of energy, the three values appear to nearly converge to a common single value at very high energies. This was one theoretical motivation for Grand Unified Theories themselves, and adding new interactions at any intermediate energy scale generally disrupts this gauge coupling unification. The disruption arises from the new quantum fields (the new forces and particles), which introduce new coupling constants and new interactions that modify the existing Standard Model coupling constants at higher energies. The fact that the convergence in the Standard Model is actually inexact, however, is one of the key theoretical arguments against the desert, since making the unification exact requires new physics below the GUT scale. Standard model particles All the Standard Model particles were discovered well below the energy scale of approximately 10¹² eV (1 TeV). The heaviest Standard Model particle is the top quark, with a mass of approximately 173 GeV. The desert Above these energies, desert theory with the assumption of supersymmetry predicts no particles will be discovered until reaching the scale of approximately 10²⁵ eV. According to the theory, measurements of TeV-scale physics at the Large Hadron Collider (LHC) and the near-future International Linear Collider (ILC) will allow extrapolation all the way up to the GUT scale. The particle desert's negative implication is that experimental physics will simply have nothing more fundamental to discover, over a very long period of time. Depending on the rate of the increase in experiment energies, this period might be a hundred years or more. Presumably, even if the energy achieved in the LHC, ~10¹³ eV, were increased by up to 12 orders of magnitude, this would only result in producing more copious amounts of the particles known today, with no underlying structure being probed. The aforementioned timespan might be shortened by observing the GUT scale through a radical development in accelerator physics, or by a non-accelerator observational technology, such as examining tremendously high energy cosmic ray events, or another, yet undeveloped technology. Alternatives to the desert exhibit particles and interactions unfolding with every few orders of magnitude increase in the energy scale. MSSM desert With the Minimal Supersymmetric Standard Model, adjustment of parameters can make the grand unification exact. This unification is not unique. Such exact gauge unification is a generic feature of supersymmetric models, and remains a major theoretical motivation for developing them. Such models automatically introduce new particles ("superpartners") at a new energy scale associated with the breaking of the new symmetry, ruling out the conventional energy desert. A rough one-loop sketch of the running that underlies these unification statements is given below.
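The following is a minimal illustration (not a precision analysis) of the one-loop running of the three Standard Model gauge couplings. The input values at M_Z and the one-loop beta coefficients are standard textbook numbers (GUT-normalised U(1)), but the exact outputs should be treated as approximate.

```python
# One-loop running of the Standard Model gauge couplings:
# alpha_i^-1(mu) = alpha_i^-1(M_Z) - b_i/(2*pi) * ln(mu / M_Z)
import math

M_Z = 91.19  # Z boson mass in GeV
alpha_inv_MZ = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}  # 1/alpha at M_Z
b = {"U(1)": 41 / 10, "SU(2)": -19 / 6, "SU(3)": -7.0}      # SM one-loop b_i

def alpha_inv(group, mu_GeV):
    """Inverse coupling of a gauge group at energy mu_GeV."""
    return alpha_inv_MZ[group] - b[group] / (2 * math.pi) * math.log(mu_GeV / M_Z)

for mu in (1e3, 1e9, 1e15):  # sampling energies across the "desert"
    line = ", ".join(f"{g}: {alpha_inv(g, mu):5.1f}" for g in b)
    print(f"mu = {mu:.0e} GeV -> 1/alpha = {line}")
# Near 1e15-1e16 GeV the three values drift toward each other but do not
# meet exactly: the "inexact convergence" mentioned above.
```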
Such models can, however, contain an analogous "desert" between the new energy scale and the GUT scale. Mirror matter desert Scenarios like the Katoptron model can also lead to exact unification after a similar energetic desert. If the known neutrino masses are due to a seesaw mechanism, the new heavy neutrino states must have masses below the GUT scale in order to produce the observed O(1 meV) masses. Indicative examples of the order of magnitude of the corresponding masses and fermion mixing parameters in accordance with experimental data have been calculated within the context of katoptrons. Evidence As of 2019, the LHC has excluded the existence of many new particles up to masses of a few TeV, or about 10 times the mass of the top quark. Other indirect evidence in favor of a large energy desert for a certain distance above the electroweak scale (or even no particles at all beyond this scale) includes: The absence of any observed proton decays, which has already ruled out many new physics models that can produce them up to (and beyond) the GUT scale. Precision measurements of known particles and processes, such as extremely rare particle decays, have already indirectly probed energy scales up to 1 PeV (10⁶ GeV) without finding any confirmed deviations from the Standard Model. This significantly constrains any new physics that might exist below those energies. Research from experimental data on the cosmological constant, LIGO noise, and pulsar timing suggests it is very unlikely that there are any new particles with masses much higher than those which can be found in the Standard Model or the Large Hadron Collider. However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics in the TeV range. The observed Higgs boson decay modes and rates are so far consistent with the Standard Model. Counter evidence So far there is no direct evidence of new fundamental particles with masses between the electroweak and GUT scale, consistent with the desert. However, there are some theories about why such particles might exist: The leading theoretical explanations of neutrino masses, the various seesaw models, all require new heavy neutrino states below the GUT scale. Both weakly interacting massive particle (WIMP) and axion models for dark matter require the new, long-lived particles to have masses far below the GUT scale. In the Standard Model, there is no physics which stabilizes the Higgs boson mass to its actual observed value. Since the actual value is far below the GUT scale, whatever new physics ultimately does stabilize it must become apparent at lower energies too. Precision measurements have produced several outstanding discrepancies with the Standard Model in recent years. These include anomalies in certain B meson decays and a discrepancy in the measured value of the muon g-2 (anomalous magnetic moment). Depending on the results of currently ongoing experiments, these effects may already indicate the existence of unknown new particles below about 100 TeV. References External links Grand Unified Theory Physics beyond the Standard Model
Desert (particle physics)
[ "Physics" ]
1,317
[ "Unsolved problems in physics", "Physics beyond the Standard Model", "Grand Unified Theory", "Particle physics" ]
7,349,102
https://en.wikipedia.org/wiki/Allegany%20Ballistics%20Laboratory
Allegany Ballistics Laboratory (ABL), located in Rocket Center, West Virginia, is a diverse industrial complex employing some 1,000 people. The facility is a member of the Federal Laboratory Consortium and is operated by Northrop Grumman (formerly Alliant Techsystems) under contract with the Naval Sea Systems Command (NAVSEA). Current operation The ABL facility is a manufacturer of advanced composite structures for the F-22 Raptor and other aerospace projects. ATK also operates 6 of 11 known advanced fiber placement machines. In addition the site produces about 80 military products, including 30mm shells for Apache helicopters, training grenades, fuze-proximity sensors, mortars and warheads, and tank ammunition. Also on the site are the Robert C. Byrd Hilltop Office Complex and the Robert C. Byrd Institute for Advanced Flexible Manufacturing. At the Robert C. Byrd Complex on the hill, companies have rented space to do secure research, among them IBM (which recently acquired National Interest Security Company), which is digitizing data on hurricane cleanup, avian influenza, and weather records. It also plays a significant role in continuity of government operations. History ABL was established in 1944 on the site of a former ammunition plant on land owned by the Army. After World War II, the plant was transferred to the Office of Scientific Research and Development and was involved in building propulsion devices and engines for the solid-rocket industry. Later in the decade, ownership of ABL was transferred to the Navy office of Naval Sea Systems Command. From 1946 it was operated by the Hercules Powder Company. In 1956, when it was producing Altair rocket stages for Vanguard rockets, ABL was "a subsidiary of the Navy operated by the Hercules Powder Company." The Navy now contracts out operation of the facility to ATK (Alliant Techsystems), a $3.4 billion corporation based in Edina, Minnesota. In 1998, ATK's Conventional Munitions Group was selected by Lockheed Martin Aeronautical Systems to produce the fiber-placed composite pivot shaft assembly for the F-22 Raptor air-dominance fighter. Work on the production program was performed at Alliant's automated fiber placement production facility at the Allegany Ballistics Laboratory before production of F-22 aircraft ended in 2012. The fiber placement facility was constructed as part of a $177 million renovation and restoration program funded by the U.S. Naval Sea Systems Command (NAVSEA), which owns the Allegany Ballistics Laboratory. Local perception As for the ecological impact, it is believed the facility contributes greatly to the pollution of the adjacent North Branch Potomac River. While this is unsupported, the company does have numerous runoff sites. Also, the groundwater in the surrounding community has been verified to contain many contaminants, although actions have since been taken to reduce these contaminants as part of a constant monitoring process. Companies The following privately owned ventures are located on the ABL site: See also Barton Business Park North Branch Industrial Complex Upper Potomac Industrial Park References External links ATK: Alliant Techsystems Robert C.
Byrd Institute FLC: Federal Laboratory Consortium FLC: Federal Laboratory Consortium: Mid-Atlantic National Interest Security Company Ballistics Science parks in the United States Business parks of the United States Buildings and structures in Mineral County, West Virginia Military installations in West Virginia Military Superfund sites Superfund sites in West Virginia Continuity of government in the United States
Allegany Ballistics Laboratory
[ "Physics" ]
705
[ "Applied and interdisciplinary physics", "Ballistics" ]
7,349,264
https://en.wikipedia.org/wiki/Weak%20n-category
In category theory, a weak n-category is a generalization of the notion of strict n-category where composition and identities are not strictly associative and unital, but only associative and unital up to coherent equivalence. This generalisation only becomes noticeable at dimensions two and above, where weak 2-, 3- and 4-categories are typically referred to as bicategories, tricategories, and tetracategories. The subject of weak n-categories is an area of ongoing research. History There is much work to determine what the coherence laws for weak n-categories should be. Weak n-categories have become the main object of study in higher category theory. There are basically two classes of theories: those in which the higher cells and higher compositions are realized algebraically (most remarkably Michael Batanin's theory of weak higher categories) and those in which more topological models are used (e.g. a higher category as a simplicial set satisfying some universality properties). In a terminology due to John Baez and James Dolan, an (n, k)-category is a weak n-category such that all h-cells for h > k are invertible. Some of the formalisms for (n, k)-categories are much simpler than those for general n-categories. In particular, several technically accessible formalisms of (infinity, 1)-categories are now known. Now the most popular such formalism centers on a notion of quasi-category; other approaches include a properly understood theory of simplicially enriched categories and the approach via Segal categories; a class of examples of stable (infinity, 1)-categories can be modeled (in the case of characteristic zero) also via pretriangulated A-infinity categories of Maxim Kontsevich. Quillen model categories are viewed as a presentation of an (infinity, 1)-category; however not all (infinity, 1)-categories can be presented via model categories. See also Bicategory Tricategory Tetracategory Infinity category Opetope Stabilization hypothesis External links n-Categories – Sketch of a Definition by John Baez Lectures on n-Categories and Cohomology by John Baez Tom Leinster, Higher operads, higher categories, math.CT/0305049 Jacob Lurie, Higher topos theory, math.CT/0608040 Higher category theory
Weak n-category
[ "Mathematics" ]
472
[ "Higher category theory", "Mathematical structures", "Category theory", "Category theory stubs" ]
5,615,284
https://en.wikipedia.org/wiki/Bretschneider%27s%20formula
In geometry, Bretschneider's formula is a mathematical expression for the area of a general quadrilateral. It works on both convex and concave quadrilaterals, whether cyclic or not. The formula also works on crossed quadrilaterals provided that directed angles are used. History The German mathematician Carl Anton Bretschneider discovered the formula in 1842. The formula was also derived in the same year by the German mathematician Karl Georg Christian von Staudt. Formulation Bretschneider's formula is expressed as:

K = √((s − a)(s − b)(s − c)(s − d) − abcd · cos²((α + γ)/2))

Here, a, b, c, d are the sides of the quadrilateral, s = (a + b + c + d)/2 is the semiperimeter, K is the area, and α and γ are any two opposite angles; the choice does not matter, since cos²((α + γ)/2) = cos²((β + δ)/2) as long as directed angles are used, so that the angle sum is 360° (or 720° when the quadrilateral is crossed). Proof Denote the area of the quadrilateral by K. Then we have

K = (ad sin α)/2 + (bc sin γ)/2.

Therefore

4K² = a²d² sin²α + b²c² sin²γ + 2abcd sin α sin γ.

The law of cosines implies that

a² + d² − 2ad cos α = b² + c² − 2bc cos γ,

because both sides equal the square of the length of the same diagonal. This can be rewritten as

(a² + d² − b² − c²)²/4 = a²d² cos²α + b²c² cos²γ − 2abcd cos α cos γ.

Adding this to the above formula for 4K² yields

4K² + (a² + d² − b² − c²)²/4 = a²d² + b²c² − 2abcd cos(α + γ).

Note that cos(α + γ) = 2 cos²((α + γ)/2) − 1 (a trigonometric identity true for all α and γ), so that

4K² + (a² + d² − b² − c²)²/4 = (ad + bc)² − 4abcd cos²((α + γ)/2).

Following the same steps as in Brahmagupta's formula, this can be written as

16K² = (a + b + c − d)(a + b − c + d)(a − b + c + d)(−a + b + c + d) − 16abcd cos²((α + γ)/2).

Introducing the semiperimeter s = (a + b + c + d)/2, the above becomes

16K² = 16(s − a)(s − b)(s − c)(s − d) − 16abcd cos²((α + γ)/2),

and Bretschneider's formula follows after taking the square root of both sides:

K = √((s − a)(s − b)(s − c)(s − d) − abcd cos²((α + γ)/2)).

The second form is given by using the cosine half-angle identity cos²((α + γ)/2) = (1 + cos(α + γ))/2, yielding

K = √((s − a)(s − b)(s − c)(s − d) − (abcd/2)(1 + cos(α + γ))).

Emmanuel García has used the generalized half angle formulas to give an alternative proof. Related formulae Bretschneider's formula generalizes Brahmagupta's formula for the area of a cyclic quadrilateral, which in turn generalizes Heron's formula for the area of a triangle. The trigonometric adjustment in Bretschneider's formula for non-cyclicality of the quadrilateral can be rewritten non-trigonometrically in terms of the sides and the diagonals p and q to give

K = √((s − a)(s − b)(s − c)(s − d) − ¼(ac + bd + pq)(ac + bd − pq)).

Notes References & further reading C. A. Bretschneider. Untersuchung der trigonometrischen Relationen des geradlinigen Viereckes. Archiv der Mathematik und Physik, Band 2, 1842, S. 225-261 (online copy, German) F. Strehlke: Zwei neue Sätze vom ebenen und sphärischen Viereck und Umkehrung des Ptolemaischen Lehrsatzes. Archiv der Mathematik und Physik, Band 2, 1842, S. 323-326 (online copy, German) External links Bretschneider's formula at proofwiki.org Bretschneider's Quadrilateral Area Formula & Brahmagupta's Formula at Dynamic Geometry Sketches, interactive geometry sketches. Theorems about quadrilaterals Area Articles containing proofs
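A direct numeric transcription of the formula can serve as a sanity check. This is a minimal sketch (angles in degrees; the function name is ours, purely illustrative):

```python
# Numeric check of Bretschneider's formula.
import math

def bretschneider_area(a, b, c, d, alpha_deg, gamma_deg):
    """Area from sides a, b, c, d and two opposite angles (in degrees)."""
    s = (a + b + c + d) / 2
    half = math.radians(alpha_deg + gamma_deg) / 2
    return math.sqrt((s - a) * (s - b) * (s - c) * (s - d)
                     - a * b * c * d * math.cos(half) ** 2)

# Unit square: the opposite angles sum to 180 degrees, so cos(90 deg) = 0,
# the correction term vanishes (the cyclic/Brahmagupta case) and the area is 1.
print(bretschneider_area(1, 1, 1, 1, 90, 90))  # 1.0
```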
Bretschneider's formula
[ "Physics", "Mathematics" ]
583
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Size", "Articles containing proofs", "Wikipedia categories named after physical quantities", "Area" ]
5,616,058
https://en.wikipedia.org/wiki/Ceteareth
The INCI names ceteareth-n (where n is a number) refer to polyoxyethylene ethers, of the general formula CH3(CH2)m(OCH2CH2)nOH, of a mixture of high molecular mass saturated fatty alcohols, mainly cetyl alcohol (m = 15) and stearyl alcohol (m = 17). The number n indicates the average number of ethylene oxide residues in the polyoxyethylene chain. These compounds are non-ionic surfactants that attract both water and oil at the same time, and are frequently used as emulsifiers in soaps and cosmetics. List of ceteareth compounds Ceteareth-2 Ceteareth-3 Ceteareth-4 Ceteareth-5 Ceteareth-6 Ceteareth-7 Ceteareth-8 Ceteareth-9 Ceteareth-10 Ceteareth-11 Ceteareth-12 Ceteareth-13 Ceteareth-15 Ceteareth-16 Ceteareth-17 Ceteareth-18 Ceteareth-20 (CAS # 68439-49-6) Ceteareth-22 Ceteareth-23 Ceteareth-25 Ceteareth-27 Ceteareth-28 Ceteareth-29 Ceteareth-30 Ceteareth-33 Ceteareth-34 Ceteareth-40 Ceteareth-50 Ceteareth-55 Ceteareth-60 Ceteareth-80 Ceteareth-100 References Ethers Cosmetics chemicals
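As a back-of-the-envelope illustration of the general formula, the average molecular weight of a ceteareth component can be estimated from m and n. The function below is a hypothetical sketch with rounded atomic masses, not authoritative data:

```python
# Approximate molecular weight (g/mol) from CH3(CH2)m(OCH2CH2)nOH.
def ceteareth_mw(m, n):
    CH3, CH2, OC2H4, OH = 15.035, 14.027, 44.053, 17.008  # group masses
    return CH3 + m * CH2 + n * OC2H4 + OH

print(round(ceteareth_mw(15, 0), 1))   # ~242.4: cetyl alcohol itself (n = 0)
print(round(ceteareth_mw(15, 20), 1))  # ~1123: a ceteareth-20 component from cetyl alcohol
```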
Ceteareth
[ "Chemistry" ]
332
[ "Organic compounds", "Functional groups", "Ethers" ]
5,617,574
https://en.wikipedia.org/wiki/Neurotrophic%20factors
Neurotrophic factors (NTFs) are a family of biomolecules – nearly all of which are peptides or small proteins – that support the growth, survival, and differentiation of both developing and mature neurons. Most NTFs exert their trophic effects on neurons by signaling through tyrosine kinases, usually a receptor tyrosine kinase. In the mature nervous system, they promote neuronal survival, induce synaptic plasticity, and modulate the formation of long-term memories. Neurotrophic factors also promote the initial growth and development of neurons in the central nervous system and peripheral nervous system, and they are capable of regrowing damaged neurons in test tubes and animal models. Some neurotrophic factors are also released by the target tissue in order to guide the growth of developing axons. Most neurotrophic factors belong to one of three families: (1) neurotrophins, (2) glial cell line-derived neurotrophic factor family ligands (GFLs), and (3) neuropoietic cytokines. Each family has its own distinct cell signaling mechanisms, although the cellular responses elicited often do overlap. Currently, neurotrophic factors are being intensely studied for use in bioartificial nerve conduits because they are necessary in vivo for directing axon growth and regeneration. In studies, neurotrophic factors are normally used in conjunction with other techniques such as biological and physical cues created by the addition of cells and specific topographies. The neurotrophic factors may or may not be immobilized to the scaffold structure, though immobilization is preferred because it allows for the creation of permanent, controllable gradients. In some cases, such as neural drug delivery systems, they are loosely immobilized such that they can be selectively released at specified times and in specified amounts. List of neurotrophic factors Although more information is being discovered about neurotrophic factors, their classification is based on different cellular mechanisms and they are grouped into three main families: the neurotrophins, the CNTF family, and the GDNF family. Neurotrophins Brain-derived neurotrophic factor Brain-derived neurotrophic factor (BDNF) is structurally similar to NGF, NT-3, and NT-4/5, and shares the TrkB receptor with NT-4. The brain-derived neurotrophic factor/TrkB system promotes thymocyte survival, as studied in the thymus of mice. Other experiments suggest BDNF is more important and necessary for neuronal survival than other factors; however, the compensatory mechanism involved is still not known. Specifically, BDNF promotes survival of dorsal root ganglion neurons. Even when bound to a truncated TrkB, BDNF still shows growth and developmental roles. Without BDNF (homozygous (-/-) knockouts), mice do not survive past three weeks. During development and afterwards, BDNF has important regulatory roles in the development of the visual cortex, in enhancing neurogenesis, and in improving learning and memory. Specifically, BDNF acts within the hippocampus. Studies have shown that corticosterone treatment and adrenalectomy reduce or upregulate hippocampal BDNF expression, respectively. Consistent between human and animal studies, BDNF levels are decreased in those with untreated major depression. However, the correlation between BDNF levels and depression is controversial. Nerve growth factor Nerve growth factor (NGF) uses the high-affinity receptor TrkA to promote myelination and the differentiation of neurons. Studies have shown that dysregulation of NGF causes hyperalgesia and pain.
NGF production is highly correlated with the extent of inflammation. Even though it is clear that exogenous administration of NGF helps decrease tissue inflammation, the molecular mechanisms are still unknown. Moreover, blood NGF levels are increased in times of stress, during immune disease, and with asthma or arthritis, amongst other conditions. Neurotrophin-3 Whereas neurotrophic factors within the neurotrophin family commonly have a protein tyrosine kinase receptor (Trk), neurotrophin-3 (NT-3) has a unique receptor, TrkC. In fact, the discovery of the different receptors helped scientists differentiate and classify NT-3. NT-3 does share similar properties with other members of this class, and is known to be important in neuronal survival. The NT-3 protein is found within the thymus, spleen, and intestinal epithelium, but its role in the function of each organ is still unknown. Neurotrophin-4 CNTF family The CNTF family of neurotrophic factors includes ciliary neurotrophic factor (CNTF), leukemia inhibitory factor (LIF), interleukin-6 (IL-6), prolactin, growth hormone, leptin, interferons (e.g., interferon-α), and oncostatin M. Ciliary neurotrophic factor Ciliary neurotrophic factor affects embryonic motor neurons, dorsal root ganglion sensory neurons, ciliary ganglion neurons, and hippocampal neurons. It is structurally related to leukemia inhibitory factor (LIF), interleukin 6 (IL-6), and oncostatin M (OSM). CNTF prevents degeneration of motor neurons in rats and mice, which increases survival time and motor function of the mice. These results suggest exogenous CNTF could be used as a therapeutic treatment for human degenerative motor neuron diseases. It also has unexpected leptin-like characteristics, as it causes weight loss. GDNF family The GDNF family of ligands includes glial cell line-derived neurotrophic factor (GDNF), artemin, neurturin, and persephin. Glial cell line-derived neurotrophic factor Glial cell line-derived neurotrophic factor (GDNF) was originally detected as a survival promoter derived from a glial cell line. Later studies determined GDNF uses a receptor tyrosine kinase (RET) and a high-affinity ligand-binding co-receptor, GFRα. GDNF has an especially strong affinity for dopaminergic (DA) neurons. Specifically, studies have shown GDNF plays a protective role against MPTP toxins for DA neurons. It has also been detected in motor neurons of embryonic rats and is suggested to aid development and to reduce the effects of axotomy. Artemin Neurturin Persephin Ephrins The ephrins are a family of neurotrophic factors that signal through eph receptors, a class of receptor tyrosine kinases; the family of ephrins includes ephrin A1, A2, A3, A4, A5, B1, B2, and B3. EGF and TGF families The EGF and TGF families of neurotrophic factors are composed of epidermal growth factor, the neuregulins, transforming growth factor alpha (TGFα), and transforming growth factor beta (TGFβ). They signal through receptor tyrosine kinases and serine/threonine protein kinases.
Other neurotrophic factors Several other biomolecules that have been identified as neurotrophic factors include: glia maturation factor, insulin, insulin-like growth factor 1 (IGF-1), vascular endothelial growth factor (VEGF), fibroblast growth factor (FGF), platelet-derived growth factor (PDGF), pituitary adenylate cyclase-activating peptide (PACAP), interleukin-1 (IL-1), interleukin-2 (IL-2), interleukin-3 (IL-3), interleukin-5 (IL-5), interleukin-8 (IL-8), macrophage colony-stimulating factor (M-CSF), granulocyte-macrophage colony-stimulating factor (GM-CSF), and neurotactin. References Neurochemistry
Neurotrophic factors
[ "Chemistry", "Biology" ]
1,776
[ "Biochemistry", "Neurotrophic factors", "Neurochemistry", "Signal transduction" ]
5,618,682
https://en.wikipedia.org/wiki/MAPK/ERK%20pathway
The MAPK/ERK pathway (also known as the Ras-Raf-MEK-ERK pathway) is a chain of proteins in the cell that communicates a signal from a receptor on the surface of the cell to the DNA in the nucleus of the cell. The signal starts when a signaling molecule binds to the receptor on the cell surface and ends when the DNA in the nucleus expresses a protein and produces some change in the cell, such as cell division. The pathway includes many proteins, such as mitogen-activated protein kinases (MAPKs), originally called extracellular signal-regulated kinases (ERKs), which communicate by adding phosphate groups to a neighboring protein (phosphorylating it), thereby acting as an "on" or "off" switch. When one of the proteins in the pathway is mutated, it can become stuck in the "on" or "off" position, a necessary step in the development of many cancers. In fact, components of the MAPK/ERK pathway were first discovered in cancer cells, and drugs that reverse the "on" or "off" switch are being investigated as cancer treatments. Background The signal that starts the MAPK/ERK pathway is the binding of extracellular mitogen to a cell surface receptor. This allows a Ras protein (a Small GTPase) to swap a GDP molecule for a GTP molecule, flipping the "on/off switch" of the pathway. The Ras protein can then activate MAP3K (e.g., Raf), which activates MAP2K, which activates MAPK. Finally, MAPK can activate a transcription factor, such as Myc. This process is described in more detail below. Ras activation Receptor-linked tyrosine kinases, such as the epidermal growth factor receptor (EGFR), are activated by extracellular ligands, such as the epidermal growth factor (EGF). Binding of EGF to the EGFR activates the tyrosine kinase activity of the cytoplasmic domain of the receptor. The EGFR becomes phosphorylated on tyrosine residues. Docking proteins such as GRB2 contain an SH2 domain that binds to the phosphotyrosine residues of the activated receptor. GRB2 binds to the guanine nucleotide exchange factor SOS by way of the two SH3 domains of GRB2. When the GRB2-SOS complex docks to phosphorylated EGFR, SOS becomes activated. Activated SOS then promotes the removal of GDP from a member of the Ras subfamily (most notably H-Ras or K-Ras). The Ras protein can then bind GTP and become active. Apart from EGFR, other cell surface receptors that can activate this pathway via GRB2 include Trk A/B, Fibroblast growth factor receptor (FGFR) and PDGFR. Kinase cascade Activated Ras then activates the protein kinase activity of a RAF kinase. The RAF kinase phosphorylates and activates a MAPK/ERK Kinase (MEK1 or MEK2). The MEK phosphorylates and activates a mitogen-activated protein kinase (MAPK). RAF and MAPK/ERK are both serine/threonine-specific protein kinases. MEK is a serine/tyrosine/threonine kinase. In a technical sense, RAF, MEK, and MAPK are all mitogen-activated kinases, as is MNK (see below). MAPKs were originally called "extracellular signal-regulated kinases" (ERKs) and "microtubule associated protein kinases" (MAPKs). One of the first proteins known to be phosphorylated by ERK was a microtubule-associated protein (MAP). As discussed below, many additional targets for phosphorylation by MAPK were later found, and the protein was renamed "mitogen-activated protein kinase" (MAPK). The series of kinases from RAF to MEK to MAPK is an example of a protein kinase cascade. Such series of kinases provide opportunities for feedback regulation and signal amplification. 
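To make the amplification idea concrete, here is a deliberately simplified toy model of a three-tier cascade. It is a sketch, not the measured Raf/MEK/ERK kinetics; all rate constants are arbitrary illustrative choices.

```python
# Toy three-tier kinase cascade (Raf -> MEK -> ERK): each active kinase
# activates the next tier, and each tier is deactivated at a fixed rate.
import numpy as np
from scipy.integrate import odeint

def cascade(y, t, signal):
    raf_a, mek_a, erk_a = y  # active fraction of each kinase (0..1)
    k_act, k_deact = 1.0, 0.25
    d_raf = k_act * signal * (1 - raf_a) - k_deact * raf_a
    d_mek = k_act * raf_a * (1 - mek_a) - k_deact * mek_a
    d_erk = k_act * mek_a * (1 - erk_a) - k_deact * erk_a
    return [d_raf, d_mek, d_erk]

t = np.linspace(0, 50, 500)
for signal in (0.05, 0.2, 1.0):  # increasing mitogen input
    raf, mek, erk = odeint(cascade, [0.0, 0.0, 0.0], t, args=(signal,)).T
    print(f"input {signal:>4}: steady-state ERK activity ~ {erk[-1]:.2f}")
```

In this toy model even the weakest input drives substantial ERK activity, while a 20-fold increase in input changes the output comparatively little, illustrating how a cascade can both amplify weak signals and saturate.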
Regulation of translation and transcription Three of the many proteins phosphorylated by MAPK are discussed below. One effect of MAPK activation is to alter the translation of mRNA to proteins. MAPK phosphorylates 40S ribosomal protein S6 kinase (RSK). This activates RSK, which, in turn, phosphorylates ribosomal protein S6. Mitogen-activated protein kinases that phosphorylate ribosomal protein S6 were the first to be isolated. MAPK regulates the activities of several transcription factors. MAPK can phosphorylate c-Myc. MAPK phosphorylates and activates MNK, which, in turn, phosphorylates CREB. MAPK also regulates the transcription of the c-Fos gene. By altering the levels and activities of transcription factors, MAPK leads to altered transcription of genes that are important for the cell cycle. Genes at the 22q11, 1q42, and 19p13 loci, by affecting the ERK pathway, are associated with schizophrenia, schizoaffective disorder, bipolar disorder, and migraines. Regulation of cell cycle entry and proliferation Role of mitogen signaling in cell cycle progression The ERK pathway plays an important role in integrating external signals from the presence of mitogens such as epidermal growth factor (EGF) into signaling events promoting cell growth and proliferation in many mammalian cell types. In a simplified model, the presence of mitogens and growth factors triggers the activation of canonical receptor tyrosine kinases such as EGFR, leading to their dimerization and subsequent activation of the small GTPase Ras. This then leads to a series of phosphorylation events downstream in the MAPK cascade (Raf-MEK-ERK), ultimately resulting in the phosphorylation and activation of ERK. The phosphorylation of ERK results in an activation of its kinase activity and leads to phosphorylation of its many downstream targets involved in regulation of cell proliferation. In most cells, some form of sustained ERK activity is required for cells to activate genes that induce cell cycle entry and suppress negative regulators of the cell cycle. Two such important targets include Cyclin D complexes with Cdk4 and Cdk6 (Cdk4/6), which are both phosphorylated by ERK. The transition from G1 to S phase is coordinated by the activity of Cyclin D-Cdk4/6, which increases during late G1 phase as cells prepare to enter S-phase in response to mitogens. Cdk4/6 activation contributes to hyper-phosphorylation and the subsequent destabilization of retinoblastoma protein (Rb). Hypo-phosphorylated Rb is normally bound to transcription factor E2F in early G1 and inhibits its transcriptional activity, preventing expression of S-phase entry genes including Cyclin E, Cyclin A2 and Emi1. ERK1/2 activation downstream of mitogen-induced Ras signaling is necessary and sufficient to remove this cell cycle block and allow cells to progress to S-phase in most mammalian cells.
Using single-cell measurements, Yao et al. showed that the Rb–E2F pathway functions as a bistable switch to convert graded serum inputs into all-or-none E2F responses. Growth and mitogen signals transmitted through the ERK pathway are incorporated into multiple positive feedback loops to generate a bistable switch at the level of E2F activation. This occurs due to three main interactions during late G1 phase. The first is a result of mitogen stimulation through ERK leading to the expression of the transcription factor Myc, which is a direct activator of E2F. The second pathway is a result of ERK activation leading to the accumulation of active complexes of Cyclin D and Cdk4/6, which destabilize Rb via phosphorylation and further serve to activate E2F and promote expression of its targets. Finally, these interactions are all reinforced by an additional positive feedback loop of E2F on itself, as its own expression leads to production of the active complex of Cyclin E and CDK2, which further serves to lock in a cell's decision to enter S-phase. As a result, when serum concentration is increased in a gradual manner, most mammalian cells respond in a switch-like manner when entering S-phase. This mitogen-stimulated, bistable E2F switch exhibits hysteresis, as cells are inhibited from returning to G1 even after mitogen withdrawal following E2F activation. Dynamic signal processing by the ERK pathway The EGFR-ERK/MAPK (epidermal growth factor receptor extracellular-regulated kinase/mitogen-activated protein kinase) pathway stimulated by EGF is critical for cellular proliferation, but the temporal separation between signal and response obscured the signal-response relationship in previous research. In 2013, Albeck et al. provided key experimental evidence to fill this gap of knowledge. They measured signal strength and dynamics with steady-state EGF stimulation, in which the signaling and output can be easily related. They further mapped the signal-response relationship across the pathway's full dynamic range. Using high-content immunofluorescence (HCIF) detection of phosphorylated ERK (pERK) and live-cell FRET biosensors, they monitored downstream output of the ERK pathway in both live cells and fixed cells. To further link the quantitative characteristics of ERK signaling to proliferation rates, they established a series of steady-state conditions by applying EGF at a range of concentrations. Single-cell imaging experiments have shown ERK to be activated in stochastic bursts in the presence of EGF. Furthermore, the pathway has been shown to encode the strength of signaling inputs through frequency-modulated pulses of its activity. Using live-cell FRET biosensors, cells induced with different concentrations of EGF elicit activity bursts of different frequency, where higher levels of EGF resulted in more frequent bursts of ERK activity. To figure out how S-phase entry can be affected by sporadic pulses of ERK activity at low EGF concentrations, they used MCF-10A cells co-expressing EKAR-EV and RFP-geminin, identified pulses of ERK activity by scoring, and then aligned these ERK activity profiles with the time of geminin induction. They found that longer periods of ERK activity stimulate S-phase entry, as suggested by increased pulse length.
To understand the dynamics of the EGFR-ERK pathway, specifically how its frequency and amplitude are modulated, they applied the EGFR inhibitor gefitinib or the highly selective MAPK/ERK kinase (MEK) inhibitor PD0325901 (PD). The two inhibitors actually yield somewhat different results: gefitinib, at intermediate concentration, induces pulsatile behavior and also a bimodal shift, which is not observed with PD. They further combined EGF and PD and concluded that the frequency of ERK activity is modulated by quantitative variation in EGF, while the amplitude is modulated by changes in MEK activity. Lastly they turned to Fra-1, one of the downstream effectors of the ERK pathway, as it is technically challenging to estimate ERK activity directly. To understand how the integrated ERK pathway output (which should be independent of either frequency or amplitude) affects the proliferation rate, they used the combination of a wide range of EGF and PD concentrations and found an inverted-"L"-shaped curvilinear relationship, which suggests that at low levels of ERK pathway output, small changes in signal intensity correspond to large changes in proliferative rate, while large changes in signal intensity near the high end of the dynamic range have little impact on proliferation. The fluctuation of ERK signaling highlights potential issues with current therapeutic approaches, providing a new perspective on drug targeting of the ERK pathway in cancer. Integration of mitogen and stress signals in proliferation Recent live-cell imaging experiments in MCF10A and MCF7 cells have shown that a combination of mitogen signaling through ERK and stress signals through activation of p53 in mother cells before mitosis contributes to the likelihood of whether newly formed daughter cells will immediately re-enter the cell cycle or enter quiescence (G0). Rather than daughter cells starting with no key signaling proteins after division, mitogen/ERK-induced Cyclin D1 mRNA and DNA damage-induced p53 protein, both long-lived factors in cells, can be stably inherited from mother cells after cell division. The levels of these regulators vary from cell to cell after mitosis, and the stoichiometry between them strongly influences cell cycle commitment through activation of Cdk2. Chemical perturbations using inhibitors of ERK signaling or inducers of p53 signaling in mother cells suggest that daughter cells with high levels of p53 protein and low levels of Cyclin D1 transcripts primarily enter G0, whereas cells with high Cyclin D1 and low levels of p53 are most likely to re-enter the cell cycle. These results illustrate a form of encoded molecular memory through the history of mitogen signaling (through ERK) and stress response (through p53). Clinical significance Uncontrolled growth is a necessary step for the development of all cancers. In many cancers (e.g. melanoma), a defect in the MAP/ERK pathway leads to that uncontrolled growth. Many compounds can inhibit steps in the MAP/ERK pathway, and therefore are potential drugs for treating cancer, such as Hodgkin disease. The first drug licensed to act on this pathway is sorafenib, a Raf kinase inhibitor. Other Raf inhibitors include SB590885, PLX4720, XL281, RAF265, encorafenib, dabrafenib, and vemurafenib.
Some MEK inhibitors include cobimetinib, CI-1040, PD0325901, binimetinib (MEK162), selumetinib, and trametinib (GSK1120212). It has been found that acupoint-moxibustion has a role in relieving alcohol-induced gastric mucosal injury in a mouse model, which may be closely associated with its effects in up-regulating activities of the epidermal growth factor/ERK signal transduction pathway. The RAF-ERK pathway is also involved in the pathophysiology of Noonan syndrome, a polymalformative disease. Protein microarray analysis can be used to detect subtle changes in protein activity in signaling pathways. The developmental syndromes caused by germline mutations in genes that alter the RAS components of the MAP/ERK signal transduction pathway are called RASopathies. See also Janus kinase Phosphatase Signal transducing adaptor protein G protein-coupled receptor References External links MAP Kinase Resource. Kyoto Encyclopedia of Genes and Genomes — MAPK pathway Signal transduction Cell signaling
MAPK/ERK pathway
[ "Chemistry", "Biology" ]
3,311
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
5,620,279
https://en.wikipedia.org/wiki/Hydrological%20transport%20model
A hydrological transport model is a mathematical model used to simulate the flow of rivers, streams, groundwater movement or drainage front displacement, and calculate water quality parameters. These models generally came into use in the 1960s and 1970s when demand for numerical forecasting of water quality and drainage was driven by environmental legislation, and at a similar time widespread access to significant computer power became available. Much of the original model development took place in the United States and United Kingdom, but today these models are refined and used worldwide. There are dozens of different transport models that can be generally grouped by pollutants addressed, complexity of pollutant sources, whether the model is steady state or dynamic, and time period modeled. Another important designation is whether the model is distributed (i.e. capable of predicting multiple points within a river) or lumped. In a basic model, for example, only one pollutant might be addressed from a simple point discharge into the receiving waters. In the most complex of models, various line source inputs from surface runoff might be added to multiple point sources, treating a variety of chemicals plus sediment in a dynamic environment including vertical river stratification and interactions of pollutants with in-stream biota. In addition watershed groundwater may also be included. The model is termed "physically based" if its parameters can be measured in the field. Often models have separate modules to address individual steps in the simulation process. The most common module is a subroutine for calculation of surface runoff, allowing variation in land use type, topography, soil type, vegetative cover, precipitation and land management practice (such as the application rate of a fertilizer). The concept of hydrological modeling can be extended to other environments such as the oceans, but most commonly (and in this article) the subject of a river watershed is generally implied. History In 1850, T. J. Mulvany was probably the first investigator to use mathematical modeling in a stream hydrology context, although there was no chemistry involved. By 1892 M.E. Imbeau had conceived an event model to relate runoff to peak rainfall, again still with no chemistry. Robert E. Horton's seminal work on surface runoff, along with his coupling of quantitative treatment of erosion, laid the groundwork for modern chemical transport hydrology. Types Physically based models Physically based models (sometimes known as deterministic, comprehensive or process-based models) try to represent the physical processes observed in the real world. Typically, such models contain representations of surface runoff, subsurface flow, evapotranspiration, and channel flow, but they can be far more complicated. "Large scale simulation experiments were begun by the U.S. Army Corps of Engineers in 1953 for reservoir management on the main stem of the Missouri River". This, and other early work that dealt with the River Nile and the Columbia River (F.S. Brown, Water Resource Development – Columbia River Basin, in Report of Meeting of Columbia Basin Inter-Agency Committee, Portland, OR, Dec. 1958), is discussed, in a wider context, in a book published by the Harvard Water Resources Seminar, which contains the sentence just quoted. Another early model that integrated many submodels for basin chemical hydrology was the Stanford Watershed Model (SWM).
The SWMM (Storm Water Management Model), the HSPF (Hydrological Simulation Program – FORTRAN) and other modern American derivatives are successors to this early work. In Europe a favoured comprehensive model is the Système Hydrologique Européen (SHE) (Vijay P. Singh, Computer Models of Watershed Hydrology, Water Resource Publications, pp. 563-594, 1995), which has been succeeded by MIKE SHE and SHETRAN. MIKE SHE is a watershed-scale, physically based, spatially distributed model for water flow and sediment transport. Flow and transport processes are represented by either finite difference representations of partial differential equations or by derived empirical equations. The following principal submodels are involved:
Evapotranspiration: Penman-Monteith formalism
Erosion: Detachment equations for raindrop and overland flow
Overland and Channel Flow: Saint-Venant equations of continuity and momentum
Overland Flow Sediment Transport: 2D total sediment load conservation equation
Unsaturated Flow: Richards equation
Saturated Flow: Darcy's law and the mass conservation of 2D laminar flow
Channel Sediment Transport: 1D mass conservation equation
This model can analyze effects of land use and climate changes upon in-stream water quality, with consideration of groundwater interactions. Worldwide a number of basin models have been developed, among them RORB (Australia), Xinanjiang (China), Tank model (Japan), ARNO (Italy), TOPMODEL (Europe), UBC (Canada), HBV (Scandinavia) and MOHID Land (Portugal). However, not all of these models have a chemistry component. Generally speaking, SWM, SHE and TOPMODEL have the most comprehensive stream chemistry treatment and have evolved to accommodate the latest data sources including remote sensing and geographic information system data. In the United States, the Corps of Engineers, Engineer Research and Development Center, in conjunction with researchers at a number of universities, has developed the Gridded Surface/Subsurface Hydrologic Analysis (GSSHA) model. GSSHA is widely used in the U.S. for research and analysis by U.S. Army Corps of Engineers districts and larger consulting companies to compute flow, water levels, distributed erosion, and sediment delivery in complex engineering designs. A distributed nutrient and contaminant fate and transport component is undergoing testing. GSSHA input/output processing and interface with GIS is facilitated by the Watershed Modeling System (WMS). Another model used in the United States and worldwide is Vflo, a physics-based distributed hydrologic model developed by Vieux & Associates, Inc. Vflo employs radar rainfall and GIS data to compute spatially distributed overland flow and channel flow. Evapotranspiration, inundation, infiltration, and snowmelt modeling capabilities are included. Applications include civil infrastructure operations and maintenance, stormwater prediction and emergency management, soil moisture monitoring, land use planning, water quality monitoring, and others. Stochastic models These models, based on data, are black-box systems using mathematical and statistical concepts to link a certain input (for instance rainfall) to the model output (for instance runoff). Commonly used techniques are regression, transfer functions, neural networks and system identification. These models are known as stochastic hydrology models.
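As a minimal illustration of a lumped, data-based model of the rainfall-runoff relationship, consider a single linear reservoir. The routine and its numbers below are illustrative placeholders, not a calibrated model of any real catchment:

```python
# Single linear reservoir: the simplest lumped rainfall-runoff model.
def linear_reservoir(rainfall_mm, k=0.2, storage0=0.0):
    """Outflow each step is a fixed fraction k of current storage."""
    storage, runoff = storage0, []
    for p in rainfall_mm:
        storage += p          # rainfall fills the reservoir
        q = k * storage       # release a fraction of storage as runoff
        storage -= q
        runoff.append(q)
    return runoff

event = [0, 5, 20, 10, 2, 0, 0, 0]  # hourly rainfall depths (mm)
print([round(q, 2) for q in linear_reservoir(event)])
# The runoff peak lags the rainfall peak and then recedes exponentially,
# the qualitative behavior a unit hydrograph is designed to capture.
```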
Data-based models have been used within hydrology to simulate the rainfall-runoff relationship, represent the impacts of antecedent moisture and perform real-time control on systems. Model components Surface runoff modelling A key component of a hydrological transport model is the surface runoff element, which allows assessment of sediment, fertilizer, pesticide and other chemical contaminants. Building on the work of Horton, the unit hydrograph theory was developed by Dooge in 1959. It required the presence of the National Environmental Policy Act and other kindred national legislation to provide the impetus to integrate water chemistry into hydrology model protocols. In the early 1970s the U.S. Environmental Protection Agency (EPA) began sponsoring a series of water quality models in response to the Clean Water Act. An example of these efforts was developed at the Southeast Water Laboratory, one of the first attempts to calibrate a surface runoff model with field data for a variety of chemical contaminants. The attention given to surface runoff contaminant models has not matched the emphasis on pure hydrology models, in spite of their role in the generation of stream loading contaminant data. In the United States the EPA has had difficulty interpreting diverse proprietary contaminant models and has had to develop its own models more often than conventional resource agencies, which, being focused on flood forecasting, have converged on a more common set of basin models. Example applications Liden applied the HBV model to estimate the riverine transport of three different substances (nitrogen, phosphorus and suspended sediment) in four different countries: Sweden, Estonia, Bolivia and Zimbabwe. The relation between internal hydrological model variables and nutrient transport was assessed. A model for nitrogen sources was developed and analysed in comparison with a statistical method. A model for suspended sediment transport in tropical and semi-arid regions was developed and tested. It was shown that riverine total nitrogen could be well simulated in the Nordic climate and riverine suspended sediment load could be estimated fairly well in tropical and semi-arid climates. The HBV model for material transport generally estimated material transport loads well. The main conclusion of the study was that the HBV model can be used to predict material transport on the scale of the drainage basin during stationary conditions, but cannot be easily generalised to areas not specifically calibrated. In a different work, Castanedo et al. applied an evolutionary algorithm to automated watershed model calibration. The United States EPA developed the DSSAM Model to analyze water quality impacts from land use and wastewater management decisions in the Truckee River basin, an area which includes the cities of Reno and Sparks, Nevada, as well as the Lake Tahoe basin. The model satisfactorily predicted nutrient, sediment and dissolved oxygen parameters in the river. It is based on a pollutant loading metric called "Total Maximum Daily Load" (TMDL). The success of this model contributed to the EPA's commitment to the use of the underlying TMDL protocol in EPA's national policy for management of many river systems in the United States. The DSSAM Model is constructed to allow dynamic decay of most pollutants; for example, total nitrogen and phosphorus are allowed to be consumed by benthic algae in each time step, and the algal communities are given a separate population dynamic in each river reach (e.g.
based upon river temperature). Regarding stormwater runoff in Washoe County, the specific elements within a new xeriscape ordinance were analyzed for efficacy using the model. For the varied agricultural uses in the watershed, the model was run to understand the principal sources of impact, and management practices were developed to reduce in-river pollution. The model has specifically been used to analyze the survival of two endangered species found in the Truckee River and Pyramid Lake: the Cui-ui sucker fish (endangered 1967) and the Lahontan cutthroat trout (threatened 1970). See also Aquifer Differential equation HBV model Hydrometry Infiltration Runoff model (reservoir) Storm Water Management Model United States Army Corps of Engineers WAFLEX model SWAT model References External links HBV model applied to climate change in the Rhine River basin TOPMODEL characteristics and parameters Xinanjiang model and its application in northern China Evolutionary Computation Technique Applied to HSPF Model Calibration of a Spanish Watershed Computer-aided engineering software Environmental chemistry Environmental soil science Soil science Water pollution Hydrology models
Hydrological transport model
[ "Chemistry", "Biology", "Environmental_science" ]
2,231
[ "Hydrology", "Biological models", "Environmental chemistry", "Water pollution", "Hydrology models", "nan", "Environmental soil science", "Environmental modelling" ]
5,622,569
https://en.wikipedia.org/wiki/Hansen%20solubility%20parameter
Hansen solubility parameters were developed by Charles M. Hansen in his Ph.D. thesis in 1967 as a way of predicting if one material will dissolve in another and form a solution. They are based on the idea that like dissolves like, where one molecule is defined as being 'like' another if it bonds to itself in a similar way. Specifically, each molecule is given three Hansen parameters, each generally measured in MPa^0.5:
δD, the energy from dispersion forces between molecules
δP, the energy from dipolar intermolecular forces between molecules
δH, the energy from hydrogen bonds between molecules.
These three parameters can be treated as co-ordinates for a point in three dimensions also known as the Hansen space. The nearer two molecules are in this three-dimensional space, the more likely they are to dissolve into each other. To determine if the parameters of two molecules (usually a solvent and a polymer) are within range, a value called the interaction radius (R0) is given to the substance being dissolved. This value determines the radius of the sphere in Hansen space, and its center is the three Hansen parameters. To calculate the distance (Ra) between Hansen parameters in Hansen space the following formula is used:

(Ra)² = 4(δD1 − δD2)² + (δP1 − δP2)² + (δH1 − δH2)²

Combining this with the interaction radius gives the relative energy difference (RED) of the system: RED = Ra/R0.
If RED < 1, the molecules are alike and will dissolve
If RED = 1, the system will partially dissolve
If RED > 1, the system will not dissolve
Uses Historically Hansen solubility parameters (HSP) have been used in industries such as paints and coatings where understanding and controlling solvent–polymer interactions was vital. Over the years their use has been extended widely to applications such as: Environmental stress cracking of polymers Controlled dispersion of pigments, such as carbon black Understanding of solubility/dispersion properties of carbon nanotubes, Buckyballs, and quantum dots Adhesion to polymers Permeation of solvents and chemicals through plastics to understand issues such as glove safety, food packaging barrier properties and skin permeation Diffusion of solvents into polymers via understanding of surface concentration based on RED number Cytotoxicity via interaction with DNA Artificial noses (where response depends on polymer solubility of the test odor) Safer, cheaper, and faster solvent blends where an undesirable solvent can be rationally replaced by a mix of more desirable solvents whose combined HSP equals the HSP of the original solvent. Theoretical context HSP have been criticized for lacking the formal theoretical derivation of Hildebrand solubility parameters. All practical correlations of phase equilibrium involve certain assumptions that may or may not apply to a given system. In particular, all solubility parameter-based theories have a fundamental limitation that they apply only to associated solutions (i.e., they can only predict positive deviations from Raoult's law): they cannot account for negative deviations from Raoult's law that result from effects such as solvation (often important in water-soluble polymers) or the formation of electron donor-acceptor complexes. Like any simple predictive theory, HSP are best used for screening, with data used to validate the predictions. Hansen parameters have been used to estimate Flory-Huggins chi parameters, often with reasonable accuracy. The factor of 4 in front of the dispersion term in the calculation of Ra has been the subject of debate. There is some theoretical basis for the factor of four (see Ch 2 of Ref 1). A minimal numeric sketch of the Ra and RED calculation is given below.
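The sketch below transcribes the Ra and RED definitions above, including the debated factor of 4 on the dispersion term. The HSP triples and interaction radius are hypothetical placeholder values, not measured data for any real solvent or polymer:

```python
# Hansen distance Ra and relative energy difference (RED).
import math

def hansen_distance(hsp1, hsp2):
    """Ra between two (dD, dP, dH) triples, in MPa**0.5."""
    dD1, dP1, dH1 = hsp1
    dD2, dP2, dH2 = hsp2
    return math.sqrt(4 * (dD1 - dD2) ** 2
                     + (dP1 - dP2) ** 2
                     + (dH1 - dH2) ** 2)

def red(solvent_hsp, solute_hsp, R0):
    """RED = Ra / R0; RED < 1 suggests the solvent dissolves the solute."""
    return hansen_distance(solvent_hsp, solute_hsp) / R0

polymer = (18.0, 10.0, 7.0)  # hypothetical solute HSP
solvent = (16.0, 9.0, 6.5)   # hypothetical solvent HSP
print(round(red(solvent, polymer, R0=8.0), 2))  # ~0.52 -> predicted to dissolve
```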
However, there are clearly systems (e.g. Bottino et al., "Solubility parameters of poly(vinylidene fluoride)", J. Polym. Sci. Part B: Polymer Physics 26(4), 785-79, 1988) where the regions of solubility are far more eccentric than predicted by the standard Hansen theory. HSP effects can be over-ridden by size effects (small molecules such as methanol can give "anomalous results"). It has been shown that it is possible to calculate HSP via molecular dynamics techniques, though currently the polar and hydrogen bonding parameters cannot reliably be partitioned in a manner that is compatible with Hansen's values. Limitations The following are limitations according to Hansen:
The parameters will vary with temperature.
The parameters are an approximation. Bonding between molecules is more subtle than the three parameters suggest. Molecular shape is relevant, as are other types of bonding such as induced dipole, metallic and electrostatic interactions. The size of the molecules also plays a significant role in whether two molecules actually dissolve in a given period.
The parameters are hard to measure.
2008 work by Abbott and Hansen has helped address some of the above issues. Temperature variations can be calculated, the role of molar volume ("kinetics versus thermodynamics") is clarified, new chromatographic ways to measure HSP are available, large datasets for chemicals and polymers are available, 'Sphere' software for determining HSP values of polymers, inks, quantum dots etc. is available (or easy to implement in one's own software), and the new Stefanis-Panayiotou method for estimating HSP from Unifac groups is available in the literature and also automated in software. All these new capabilities are described in the e-book, software and datasets described in the external links, but can be implemented independently of any commercial package. Sometimes Hildebrand solubility parameters are used for similar purposes. Hildebrand parameters are not suitable for use outside their original area, which was non-polar, non-hydrogen-bonding solvents. The Hildebrand parameter for such non-polar solvents is usually close to the Hansen value. A typical example showing why Hildebrand parameters can be unhelpful is that two solvents, butanol and nitroethane, which have the same Hildebrand parameter, are each incapable of dissolving typical epoxy polymers. Yet a 50:50 mix gives good solvency for epoxies. This is easily explainable knowing the Hansen parameters of the two solvents and that the Hansen parameter for the 50:50 mix is close to the Hansen parameter of epoxies. See also Solvent (has a chart of Hansen solubility parameters for various solvents) Hildebrand solubility parameter MOSCED References External links Interactive web app for finding solvents with matching solubility parameters Physical chemistry Polymer chemistry 1967 in science
Hansen solubility parameter
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,305
[ "Applied and interdisciplinary physics", "Materials science", "Polymer chemistry", "nan", "Physical chemistry" ]
11,000,160
https://en.wikipedia.org/wiki/CA19-9
Carbohydrate antigen 19-9 (CA19-9), also known as sialyl-Lewis A, is a tetrasaccharide which is usually attached to O-glycans on the surface of cells. It is known to play a role in cell-to-cell recognition processes. It is also a tumor marker used primarily in the management of pancreatic cancer. Structure CA19-9 is the sialylated form of Lewis antigen A. It is a tetrasaccharide with the sequence Neu5Acα2-3Galβ1-3[Fucα1-4]GlcNAcβ. Clinical significance Tumor marker Guidelines from the American Society of Clinical Oncology discourage the use of CA19-9 as a screening test for cancer, particularly pancreatic cancer. The reason is that the test may be falsely normal (false negative) in many cases, or abnormally elevated (false positive) in people who have no cancer. The main use of CA19-9 is therefore to see whether a pancreatic tumor is secreting it; if that is the case, then the levels should fall when the tumor is treated, and they may rise again if the disease recurs. Therefore, it is useful as a surrogate marker for relapse. In people with pancreatic masses, CA19-9 can be useful in distinguishing between cancer and other diseases of the gland. Limitations CA19-9 can be elevated in many types of gastrointestinal cancer, such as colorectal cancer, esophageal cancer and hepatocellular carcinoma. Apart from cancer, elevated levels may occur in pancreatitis, cirrhosis, and diseases of the bile ducts. It can also be elevated in people with obstruction of the bile ducts. In people who lack Lewis antigen A (a blood type antigen on red blood cells), which is about 10% of the white population, CA19-9 is not produced by any cells, even in those with large tumors. This is because of a deficiency of a fucosyltransferase enzyme that is needed to produce Lewis antigen A. History CA19-9 was discovered in the serum of patients with colon cancer and pancreatic cancer in 1981. It was characterized shortly after, and it was found to be carried primarily by mucins. See also Sialyl-Lewis X Lewis antigen system References External links CA19-9 at Lab Tests Online CA19-9: analyte monograph - The Association for Clinical Biochemistry and Laboratory Medicine Essentials of Glycobiology 3rd Edition, Chapter 14: "Structures Common to Different Glycans" https://www.ncbi.nlm.nih.gov/books/NBK453042/#_Ch14_s2_ Amino sugars Tetrasaccharides Acetamides Tumor markers Pancreatic cancer
CA19-9
[ "Chemistry", "Biology" ]
605
[ "Amino sugars", "Carbohydrates", "Biomarkers", "Tumor markers", "Chemical pathology" ]
11,004,631
https://en.wikipedia.org/wiki/Variable%20nebula
Variable nebulae are reflection nebulae that change in brightness because of changes in their illuminating star. See also McNeil's Nebula NGC 1555 (Hind's Variable Nebula) NGC 2261 (Hubble's Variable Nebula) NGC 6729 (R Coronae Australis Nebula) References External links Astrobiscuit: Seeing The Speed Of Light, a fun and educational video about variable nebulae and the amateur community observing them Nebulae
Variable nebula
[ "Astronomy" ]
87
[ "Nebulae", "Astronomical objects" ]
11,007,302
https://en.wikipedia.org/wiki/Halo%20orbit
A halo orbit is a periodic, three-dimensional orbit associated with one of the L1, L2 or L3 Lagrange points in the three-body problem of orbital mechanics. Although a Lagrange point is just a point in empty space, its peculiar characteristic is that it can be orbited by a Lissajous orbit or by a halo orbit. These can be thought of as resulting from an interaction between the gravitational pull of the two planetary bodies and the Coriolis and centrifugal force on a spacecraft. Halo orbits exist in any three-body system, e.g., a Sun–Earth–orbiting satellite system or an Earth–Moon–orbiting satellite system. Continuous "families" of both northern and southern halo orbits exist at each Lagrange point. Because halo orbits tend to be unstable, station-keeping using thrusters may be required to keep a satellite on the orbit. Most satellites in halo orbit serve scientific purposes, for example space telescopes. Definition and history Robert W. Farquhar first used the name "halo" in 1966 for orbits around L2 which were made periodic using thrusters. Farquhar advocated using spacecraft in such an orbit beyond the Moon (Earth–Moon L2) as a communications relay station for an Apollo mission to the far side of the Moon. A spacecraft in such an orbit would be in continuous view of both the Earth and the far side of the Moon, whereas a Lissajous orbit would sometimes make the spacecraft go behind the Moon. In the end, no relay satellite was launched for Apollo, since all landings were on the near side of the Moon. In 1973 Farquhar and Ahmed Kamel found that when the in-plane amplitude of a Lissajous orbit was large enough there would be a corresponding out-of-plane amplitude that would have the same period, so the orbit ceased to be a Lissajous orbit and became approximately an ellipse. They used analytical expressions to represent these halo orbits; in 1984, Kathleen Howell showed that more precise trajectories could be computed numerically. Additionally, she found that for most values of the ratio between the masses of the two bodies (such as the Earth and the Moon) there was a range of stable orbits. The first mission to use a halo orbit was ISEE-3, a joint ESA and NASA spacecraft launched in 1978. It traveled to the Sun–Earth L1 point and remained there for several years. The next mission to use a halo orbit was the Solar and Heliospheric Observatory (SOHO), also a joint ESA/NASA mission to study the Sun, which arrived at Sun–Earth L1 in 1996. It used an orbit similar to ISEE-3's. Although several other missions since then have traveled to Lagrange points, they (e.g., the Gaia astrometric space observatory) typically have used the related non-periodic variations called Lissajous orbits rather than an actual halo orbit. Although halo orbits were well known in the RTBP (Restricted Three-Body Problem), it was difficult to obtain halo orbits for the real Earth–Moon system. Translunar halo orbits were first computed in 1998 by M.A. Andreu, who introduced a new model for the motion of a spacecraft in the Earth-Moon-Sun system, which was called the Quasi-Bicircular Problem (QBCP). In May 2018, Farquhar's original idea was finally realized when China placed the first communications relay satellite, Queqiao, into a halo orbit around the Earth–Moon L2 point. On 3 January 2019, the Chang'e 4 spacecraft landed in the Von Kármán crater on the far side of the Moon, using the Queqiao relay satellite to communicate with the Earth. The James Webb Space Telescope entered a halo orbit around the Sun–Earth L2 point on 24 January 2022.
Euclid entered a similar orbit around this point in August 2023. India's space agency ISRO launched Aditya-L1 to study the Sun from a halo orbit around the Sun–Earth L1 point. On 6 January 2024, the Aditya-L1 spacecraft, India's first solar mission, successfully entered its final orbit, with a period of approximately 180 days, around the first Sun–Earth Lagrangian point (L1), approximately 1.5 million kilometers from Earth. See also Interplanetary Transport Network Interplanetary spaceflight Lissajous orbit, another Lagrangian-point orbit which generalizes halo orbits. Near-rectilinear halo orbit :Category:Spacecraft using halo orbits Libration point orbit References External links SOHO – The Trip to the L1 Halo Orbit Low Energy Interplanetary Transfers Using Halo Orbit Hopping Method with STK/Astrogator Gaia's Lissajous Type Orbit – a Lissajous-type orbit, i.e., a near-circular ellipse or "halo" Three-body orbits Trojans (astronomy) Lagrangian mechanics
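The rotating-frame dynamics described above can be sketched numerically. Below is a minimal Python integration of the circular restricted three-body problem (CR3BP) in nondimensional units; the mass parameter and the initial state are illustrative assumptions only, since a genuinely periodic halo orbit requires differential correction of the initial conditions, which is beyond this sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.01215  # approximate Earth-Moon mass ratio parameter (assumed value)

def cr3bp(t, s, mu=MU):
    """CR3BP equations of motion in the rotating frame.
    State s = (x, y, z, vx, vy, vz) in nondimensional units, with the
    primaries fixed at (-mu, 0, 0) and (1 - mu, 0, 0)."""
    x, y, z, vx, vy, vz = s
    r1 = np.sqrt((x + mu) ** 2 + y ** 2 + z ** 2)        # to the primary
    r2 = np.sqrt((x - 1 + mu) ** 2 + y ** 2 + z ** 2)    # to the secondary
    ax = 2 * vy + x - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = -2 * vx + y - (1 - mu) * y / r1**3 - mu * y / r2**3
    az = -(1 - mu) * z / r1**3 - mu * z / r2**3
    return [vx, vy, vz, ax, ay, az]

# Rough, illustrative initial state in the L2 region (not a converged
# halo orbit).
s0 = [1.12, 0.0, 0.02, 0.0, 0.18, 0.0]
sol = solve_ivp(cr3bp, (0.0, 6.0), s0, rtol=1e-10, atol=1e-12)
print(sol.y[:3, -1])  # final position in the rotating frame
```

The 2·vy and −2·vx terms are the Coriolis acceleration, and the bare x and y terms the centrifugal contribution, mentioned in the opening paragraph; halo-orbit design tools wrap an integrator like this in a shooting or differential-correction loop.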
Halo orbit
[ "Physics", "Mathematics" ]
1,007
[ "Lagrangian mechanics", "Classical mechanics", "Dynamical systems" ]
11,007,779
https://en.wikipedia.org/wiki/PRMT4%20pathway
Protein arginine N-methyltransferase-4 (PRMT4/CARM1) methylation of arginine residues within proteins plays a critical role in transcriptional regulation. PRMT4 binds to the classes of transcriptional activators known as p160 and CBP/p300. The modified forms of these proteins are involved in stimulation of gene expression via steroid hormone receptors. Significantly, PRMT4 methylates core histones H3 and H4, which are also targets of the histone acetylase activity of CBP/p300 coactivators. PRMT4 recruitment to chromatin by binding to coactivators increases histone methylation and enhances the accessibility of promoter regions for transcription. Methylation of the transcriptional coactivator CBP by PRMT4 inhibits binding to CREB and thereby partitions the limited cellular pool of CBP for steroid hormone receptor interaction. See also DNA methyltransferase Nucleosome Histone Histone-Modifying Enzymes Chromatin Diet and cancer References Gene expression
PRMT4 pathway
[ "Chemistry", "Biology" ]
238
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
2,238,741
https://en.wikipedia.org/wiki/Bioactive%20glass
Bioactive glasses are a group of surface reactive glass-ceramic biomaterials and include the original bioactive glass, Bioglass. The biocompatibility and bioactivity of these glasses have led them to be used as implant devices in the human body to repair and replace diseased or damaged bones. Most bioactive glasses are silicate-based glasses that are degradable in body fluids and can act as a vehicle for delivering ions beneficial for healing. Bioactive glass is differentiated from other synthetic bone grafting biomaterials (e.g., hydroxyapatite, biphasic calcium phosphate, calcium sulfate), in that it is the only one with anti-infective and angiogenic properties. History Discovery and development Larry Hench and colleagues at the University of Florida first developed these materials in 1969 and they have been further developed by his research team at the Imperial College London and other researchers worldwide. Hench began development by submitting a proposal to the United States Army Medical Research and Development Command in 1968, based upon his theory that the body rejects metallic or polymeric materials unless they can form a coating of hydroxyapatite, the mineral found in bone. Hench and his team received funding for one year, and began development on what would become the 45S5 composition. The name "Bioglass" was trademarked by the University of Florida as a name for the original 45S5 composition. It should therefore only be used in reference to the 45S5 composition and not as a general term for bioactive glasses. Through use of a Na2O-CaO-SiO2 phase diagram, Hench chose a composition of 45% SiO2, 24.5% Na2O, 24.5% CaO, and 6% P2O5 to allow for a large amount of CaO and some P2O5 in a SiO2-Na2O matrix. The glass was batched, melted, and cast into small rectangular implants to be inserted into the femoral bone of rats for six weeks, as developed by Dr. Ted Greenlee of the University of Florida. After six weeks, Dr. Greenlee reported "These ceramic implants will not come out of the bone. They are bonded in place. I can push on them, I can shove them, I can hit them and they do not move. The controls easily slide out." These findings were the basis of the first paper on 45S5 bioactive glass in 1971, which reported that in vitro experiments in a calcium- and phosphate-ion-deficient solution developed a hydroxyapatite layer similar to the hydroxyapatite later observed in vivo by Dr. Greenlee. Animal Testing Scientists in Amsterdam, the Netherlands, took cubes of bioactive glass and implanted them into the tibias of guinea pigs in 1986. After 8, 12, and 16 weeks of implantation, the guinea pigs were euthanized and their tibias were harvested. The implants and tibias were then subjected to a shear strength test to determine the mechanical properties of the implant to bone boundary, where it was found to have a shear strength of 5 N/mm2. Electron microscopy showed the ceramic implants had bone remnants firmly adhered to them. Further optical microscopy revealed bone cell and blood vessel growth within the area of the implant, which was evidence of biocompatibility between the bone and implant. Bioactive glass was the first material found to create a strong bond with living bone tissue. Structure Solid state NMR spectroscopy has been very useful in determining the structure of amorphous solids. Bioactive glasses have been studied by 29Si and 31P solid state MAS NMR spectroscopy. The chemical shift from MAS NMR is indicative of the type of chemical species present in the glass.
29Si MAS NMR spectroscopy showed that Bioglass 45S5 has a Q2-type structure with a small amount of Q3; i.e., silicate chains with a few crosslinks. The 31P MAS NMR revealed predominantly Q0 species; i.e., PO4^3−; subsequent MAS NMR spectroscopy measurements have shown that Si-O-P bonds are below detectable levels. Compositions There have been many variations on the original composition which was Food and Drug Administration (FDA) approved and termed Bioglass. This composition is known as Bioglass 45S5. The compositions include: 45S5: 45 wt% SiO2, 24.5 wt% CaO, 24.5 wt% Na2O and 6.0 wt% P2O5. Bioglass S53P4: 53 wt% SiO2, 23 wt% Na2O, 20 wt% CaO and 4 wt% P2O5. 58S: 58 wt% SiO2, 33 wt% CaO and 9 wt% P2O5. 70S30C: 70 wt% SiO2, 30 wt% CaO. 13-93: 53 wt% SiO2, 6 wt% Na2O, 12 wt% K2O, 5 wt% MgO, 20 wt% CaO, 4 wt% P2O5. Bioglass 45S5 The composition was originally selected because of being roughly eutectic. The 45S5 name signifies glass with 45 wt.% of SiO2 and a 5:1 molar ratio of calcium to phosphorus. Lower Ca/P ratios do not bond to bone. The key composition features of Bioglass are that it contains less than 60 mol% SiO2, high Na2O and CaO contents, and a high CaO/P2O5 ratio, which make Bioglass highly reactive to aqueous media and bioactive. High bioactivity is the main advantage of Bioglass, while its disadvantages include mechanical weakness and low fracture resistance due to its amorphous two-dimensional glass network. The bending strength of most Bioglass is in the range of 40–60 MPa, which is not enough for load-bearing applications. Its Young's modulus is 30–35 GPa, very close to that of cortical bone, which can be an advantage. Bioglass implants can be used in non-load-bearing applications, for buried implants loaded slightly or compressively. Bioglass can also be used as a bioactive component in composite materials or as powder, and can be used to create an artificial septum to treat perforations caused by cocaine abuse. It has no known side-effects. The first successful surgical use of Bioglass 45S5 was in replacement of ossicles in the middle ear, as a treatment of conductive hearing loss. The advantage of 45S5 is its lack of tendency to form fibrous tissue. Other uses are in cones for implantation into the jaw following a tooth extraction. Composite materials made of Bioglass 45S5 and the patient's own bone can be used for bone reconstruction. Bioglass is comparatively soft compared with other glasses. It can be machined, preferably with diamond tools, or ground to powder. Bioglass has to be stored in a dry environment, as it readily absorbs moisture and reacts with it. Bioglass 45S5 is manufactured by conventional glass-making technology, using platinum or platinum alloy crucibles to avoid contamination. Contaminants would interfere with the chemical reactivity in the organism. Annealing is a crucial step in forming bulk parts, due to the high thermal expansion of the material. Heat treatment of Bioglass reduces the volatile alkali metal oxide content and precipitates apatite crystals in the glass matrix. The resulting glass–ceramic material, named Ceravital, has higher mechanical strength and lower bioactivity. Bioglass S53P4 The formula of S53P4 was first developed in the early 1990s in Turku, Finland, at Åbo Akademi University and University of Turku. In 2011 it received a product claim for use in bone cavity filling in the treatment of chronic osteomyelitis. S53P4 is among the most studied bioactive glasses on the market with over 150 publications.
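The weight-percent compositions above convert to molar quantities with simple arithmetic, which is how the quoted 5:1 calcium-to-phosphorus molar ratio of 45S5 arises. A short Python sketch, using standard molar masses, illustrates the calculation:

```python
# Molar masses in g/mol (standard values)
M = {"SiO2": 60.08, "Na2O": 61.98, "CaO": 56.08, "P2O5": 141.94}

# Bioglass 45S5 composition in weight percent, from the list above
wt = {"SiO2": 45.0, "Na2O": 24.5, "CaO": 24.5, "P2O5": 6.0}

moles = {ox: w / M[ox] for ox, w in wt.items()}  # moles per 100 g of glass
ca = moles["CaO"]        # one Ca per CaO formula unit
p = 2 * moles["P2O5"]    # two P per P2O5 formula unit
print(f"Ca:P molar ratio = {ca / p:.2f}")  # about 5.2, i.e. roughly 5:1
```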
When S53P4 bioactive glass is placed into the bone cavity, it reacts with body fluids to activate the glass. During this activation period, the bioactive glass goes through a series of chemical reactions, creating the ideal conditions for the bone to rebuild through osteoconduction. Na, Si, Ca, and P ions are released. A silica gel layer forms on the bioactive glass surface. CaP crystallizes, forming a layer of hydroxyapatite on the surface of the bioactive glass. Once the hydroxyapatite layer is formed, the bioactive glass interacts with biological entities, i.e., blood proteins, growth factors and collagen. Following this interaction, the osteoconductive and osteostimulative processes help the new bone grow onto and between the bioactive glass structures. Bioactive glass bonds to bone, facilitating new bone formation. Osteostimulation begins by stimulating osteogenic cells to increase the remodeling rate of bone. The radio-dense quality of bioactive glass allows for post-operative evaluation. In the final transformative phase, the process of bone regeneration and remodeling continues. Over time the bone fully regenerates, restoring the patient's natural anatomy. Bone consolidation occurs. S53P4 bioactive glass continues to remodel into bone over a period of years. Bioactive glass S53P4 is currently the only bioactive glass on the market which has been proven to inhibit bacterial growth effectively. The bacterial growth inhibiting properties of S53P4 derive from two simultaneous chemical and physical processes, which occur once the bioactive glass reacts with body fluids. Sodium (Na) is released from the surface of the bioactive glass and induces an increase in pH (alkaline environment), which is not favorable for the bacteria, thus inhibiting their growth. The released Na, Ca, Si and P ions give rise to an increase in osmotic pressure due to an elevation in salt concentration, i.e., an environment where bacteria cannot grow.
A calcium phosphate-rich passivation layer gradually forms over the surface of the glass, preventing further leaching. It is used in microchips for tracking many kinds of animals, and recently in some human implants. The U.S. Food and Drug Administration (FDA) approved use of Bioglass 8625 in humans in 1994. Bioglass 13-93 Compared to Bioglass 45S5, silicate 13-93 bioactive glass has a higher SiO2 content and includes K2O and MgO. It is commercially available from Mo-Sci Corp. or can be directly prepared by melting a mixture of Na2CO3, K2CO3, MgCO3, CaCO3, SiO2 and NaH2PO4 · 2H2O in a platinum crucible at 1300 °C and quenching between stainless steel plates. The 13-93 glass has received approval for in vivo use in the US and Europe. It has more facile viscous flow behavior and a lower tendency to crystallize upon being pulled into fibers. 13-93 bioactive glass powder can be dispersed into a binder to create ink for robocasting or direct-ink 3D printing techniques. The mechanical properties of the resulting porous scaffolds have been studied in various studies. The printed 13-93 bioactive glass scaffold in the study by Liu et al. was dried in ambient air, fired to 600 °C under the O2 atmosphere to remove the processing additives, and sintered in air for 1 hour at 700 °C. In the pristine sample, the flexural strength (11 ± 3 MPa) and flexural modulus (13 ± 2 MPa) are comparable to the minimum value of those of trabecular bones, while the compressive strength (86 ± 9 MPa) and compressive modulus (13 ± 2 GPa) are close to the cortical bone values. However, the fracture toughness of the as-fabricated scaffold was 0.48 ± 0.04 MPa·m^1/2, indicating that it is more brittle than human cortical bone, whose fracture toughness is 2–12 MPa·m^1/2. After immersing the sample in a simulated body fluid (SBF) or subcutaneous implantation in the dorsum of rats, the compressive strength and compressive modulus decrease sharply during the initial two weeks but more gradually after two weeks. The decrease in the mechanical properties was attributed to the partial conversion of the glass filaments in the scaffolds into a layer mainly composed of a porous hydroxyapatite-like material. Another work by Kolan and co-workers used selective laser sintering instead of conventional heat treatment. After the optimization of the laser power, scan speed, and heating rate, the compressive strength of the sintered scaffolds varied from 41 MPa for a scaffold with ~50% porosity to 157 MPa for dense scaffolds. The in vitro study using SBF resulted in a decrease in the compressive strength, but the final value was similar to that of human trabecular bone. 13-93 porous glass scaffolds were synthesized using a polyurethane foam replication method in the report by Fu et al. The stress–strain relationship was examined using data obtained from compressive tests on eight samples with 85 ± 2% porosity. The resultant curve demonstrated a progressive breaking down of the scaffold structure and an average compressive strength of 11 ± 1 MPa, which was in the range of human trabecular bone and higher than competitive bioactive materials for bone repair such as hydroxyapatite scaffolds with the same extent of pores and polymer-ceramic composites prepared by the thermally induced phase separation (TIPS) method. Synthesis Bioactive glasses have been synthesized through methods such as conventional melting, quenching, the sol–gel process, flame synthesis, and microwave irradiation.
The synthesis of bioglass has been reviewed by various groups, with sol-gel synthesis being one of the most frequently used methods for producing bioglass composites, particularly for tissue engineering applications. Other methods of bioglass synthesis have been developed, such as flame and microwave synthesis, though they are less prevalent in research. Bioactive metallic glass Bioactive metallic glass is a subset of bioactive glass, wherein the bulk material is composed of a metal-glass substrate and is coated with bioactive glass in order to make the material bioactive. The reasoning behind the introduction of the metallic base is to create a less brittle, stronger material that will be permanently implanted within the body. Metallic glasses offer lower Young's moduli and higher elastic limits than bioactive glass, and as such, will allow for more deformation of the material before fracture occurs. This is highly desirable, as a permanent implant would need to avoid shattering within the patient's body. Common materials which compose the metallic bulk include Zr and Ti, whereas metals such as Al, Be, and Ni should not be used as bulk materials. Laser-cladding While metals are not necessarily inherently bioactive, bioactive glass coatings which are applied to metal substrates via laser-cladding introduce the bioactivity that the glass would express, but have the added benefits of having a metal base. Laser cladding is a method by which bioactive glass microparticles are directed in a stream at the bulk material and heated sufficiently to melt into a coating. Sol-gel processing Metals can also be affixed with bioactive glass using a sol-gel process, in which the bioactive glass is sintered onto metals at a controlled temperature that is high enough to perform the sintering, but low enough to avoid phase-shifts and other unwanted side effects. Experiments have been done sintering double-layered, silica-based bioactive glass onto stainless steel substrates at 600 °C for 5 hours. This method has proven to maintain a largely amorphous structure while containing key crystalline elements, and also achieves a level of bioactivity remarkably similar to that of bulk bioactive glass. Mechanism of activity The underlying mechanisms that enable bioactive glasses to act as materials for bone repair have been investigated since the first work of Hench et al. at the University of Florida. Early attention was paid to changes in the bioactive glass surface. Five inorganic reaction stages are commonly thought to occur when a bioactive glass is immersed in a physiological environment: Ion exchange in which modifier cations (mostly Na+) in the glass exchange with hydronium ions in the external solution. Hydrolysis in which Si-O-Si bridges are broken, forming Si-OH silanol groups, and the glass network is disrupted. Condensation of silanols in which the disrupted glass network changes its morphology to form a gel-like surface layer, depleted in sodium and calcium ions. Precipitation in which an amorphous calcium phosphate layer is deposited on the gel. Mineralization in which the calcium phosphate layer gradually transforms into crystalline hydroxyapatite, which mimics the mineral phase naturally contained within vertebrate bones. Later, it was discovered that the morphology of the gel surface layer was a key component in determining the bioactive response. This was supported by studies on bioactive glasses derived from sol-gel processing.
Such glasses could contain significantly higher concentrations of SiO2 than traditional melt-derived bioactive glasses and still maintain bioactivity (i.e., the ability to form a mineralized hydroxyapatite layer on the surface). The inherent porosity of the sol-gel-derived material was cited as a possible explanation for why bioactivity was retained, and often enhanced, with respect to the melt-derived glass. Subsequent advances in DNA microarray technology enabled an entirely new perspective on the mechanisms of bioactivity in bioactive glasses. Previously, it was known that a complex interplay existed between bioactive glasses and the molecular biology of the implant host, but the available tools did not provide a sufficient quantity of information to develop a holistic picture. Using DNA microarrays, researchers are now able to identify entire classes of genes that are regulated by the dissolution products of bioactive glasses, resulting in the so-called "genetic theory" of bioactive glasses. The first microarray studies on bioactive glasses demonstrated that genes associated with osteoblast growth and differentiation, maintenance of extracellular matrix, and promotion of cell-cell and cell-matrix adhesion were up-regulated by conditioned cell culture media containing the dissolution products of bioactive glass. Medical uses S53P4 bioactive glass was first used in a clinical setting as an alternative to bone or cartilage grafts in facial reconstruction surgery. The use of artificial materials as bone prostheses had the advantage of being much more versatile than traditional autotransplants, as well as having fewer postoperative side effects. There is tentative evidence that bioactive glass of the composition S53P4 may also be useful in long bone infections. Support from randomized controlled trials, however, is still not available as of 2015. See also Ceramic foam Nanofoam Metal foam Osseointegration Porous medium Synthesis of bioglass References Periodontology Biomaterials Glass compositions Glass-ceramics Glass chemistry American inventions Glass
Bioactive glass
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
4,387
[ "Biomaterials", "Glass engineering and science", "Glass chemistry", "Glass compositions", "Materials", "Matter", "Medical technology" ]
2,239,104
https://en.wikipedia.org/wiki/Islands%20of%20automation
Islands of automation was a popular term used largely during the 1980s to describe how rapidly developing automation systems were at first unable to communicate easily with each other. Industrial communication protocols, network technologies, and system integration helped to improve this situation; examples of such technologies include Modbus, Fieldbus, and Ethernet. The term is more recently used by automation specialists to describe a discrete and fully enclosed automated system applied in a largely manual environment. In today's interconnected world it is uncommon for automated systems to be fully stand-alone. Therefore, the old usage is defunct and the new usage is more appropriate for companies that wish to automate in a limited fashion. References Impact of automation System integration
Islands of automation
[ "Technology", "Engineering" ]
144
[ "Systems engineering", "Computer network stubs", "Impact of automation", "Automation", "Computing stubs", "System integration" ]
2,239,113
https://en.wikipedia.org/wiki/Deep%20inelastic%20scattering
In particle physics, deep inelastic scattering is the name given to a process used to probe the insides of hadrons (particularly the baryons, such as protons and neutrons), using electrons, muons and neutrinos. It was first attempted in the 1960s and 1970s and provided the first convincing evidence of the reality of quarks, which up until that point had been considered by many to be a purely mathematical phenomenon. It is an extension of Rutherford scattering to much higher energies of the scattering particle and thus to much finer resolution of the components of the nuclei. Henry Way Kendall, Jerome Isaac Friedman and Richard E. Taylor were joint recipients of the 1990 Nobel Prize in Physics "for their pioneering investigations concerning deep inelastic scattering of electrons on protons and bound neutrons, which have been of essential importance for the development of the quark model in particle physics." Description To explain each part of the terminology, "scattering" refers to the deflection of leptons (electron, muon, etc.) off of hadrons. Measuring the angles of deflection gives information about the nature of the process. "Inelastic" means that the target absorbs some kinetic energy. In fact, at the very high lepton energies used, the target is "shattered" and emits many new particles. These particles are hadrons and, to oversimplify greatly, the process is interpreted as a constituent quark of the target being "knocked out" of the target hadron, and due to quark confinement, the quarks are not actually observed but instead produce the observable particles by hadronization. "Deep" refers to the high energy of the lepton, which gives it a very short wavelength and hence the ability to probe distances that are small compared with the size of the target hadron, so it can probe "deep inside" the hadron. Also, note that in the perturbative approximation it is a high-energy virtual photon emitted from the lepton and absorbed by the target hadron which transfers energy to one of its constituent quarks. Povh and Rosina pointed out that the term "deep inelastic scattering against nucleons" was coined when the quark substructure of nucleons was unknown. They prefer the term "quasielastic lepton-quark scattering". History The Standard Model of physics, in particular the work of Murray Gell-Mann in the 1960s, had been successful in uniting many of the previously disparate concepts in particle physics into one, relatively straightforward, scheme. In essence, there were three types of particles: The leptons, which were low-mass particles such as electrons, neutrinos and their antiparticles. They have integer electric charge. The gauge bosons, which were particles that exchange forces. These ranged from the massless, easy-to-detect photon (the carrier of the electro-magnetic force) to the exotic (though still massless) gluons that carry the strong nuclear force. The quarks, which were massive particles that carried fractional electric charges. They are the "building blocks" of the hadrons. They are also the only particles to be affected by the strong interaction. Leptons had been detected since 1897, when J. J. Thomson had shown that cathode rays consist of electrons. Some bosons were being routinely detected, although the W+, W− and Z0 particles of the electroweak force were only categorically seen in the early 1980s, and gluons were only firmly pinned down at DESY in Hamburg at about the same time. Quarks, however, were still elusive.
Drawing on Rutherford's groundbreaking experiments in the early years of the 20th century, ideas for detecting quarks were formulated. Rutherford had proven that atoms had a small, massive, charged nucleus at their centre by firing alpha particles at atoms of gold. Most had gone through with little or no deviation, but a few were deflected through large angles or came right back. This suggested that atoms had internal structure and a lot of empty space. In order to probe the interiors of baryons, a small, penetrating and easily produced particle needed to be used. Electrons were ideal for the role, as they are abundant and easily accelerated to high energies due to their electric charge. In 1968, at the Stanford Linear Accelerator Center (SLAC), electrons were fired at protons and neutrons in atomic nuclei. Later experiments were conducted with muons and neutrinos, but the same principles apply. The collision absorbs some kinetic energy, and as such it is inelastic. This is a contrast to Rutherford scattering, which is elastic: no loss of kinetic energy. The electron emerges from the nucleus, and its trajectory and velocity can be detected. Analysis of the results led to the conclusion that hadrons do indeed have internal structure. The experiments were important because not only did they confirm the physical reality of quarks, but also proved again that the Standard Model was the correct avenue of research for particle physicists to pursue. See also Semi-inclusive deep inelastic scattering References Further reading Scattering Experimental particle physics 1960s in science
Deep inelastic scattering
[ "Physics", "Chemistry", "Materials_science" ]
1,082
[ "Nuclear physics", "Scattering", "Experimental physics", "Particle physics", "Condensed matter physics", "Experimental particle physics" ]
2,240,299
https://en.wikipedia.org/wiki/Freiman%27s%20theorem
In additive combinatorics, a discipline within mathematics, Freiman's theorem is a central result which indicates the approximate structure of sets whose sumset is small. It roughly states that if |A + A|/|A| is small, then A can be contained in a small generalized arithmetic progression. Statement If A is a finite subset of ℤ with |A + A| ≤ K|A|, then A is contained in a generalized arithmetic progression of dimension at most d(K) and size at most f(K)|A|, where d(K) and f(K) are constants depending only on K. Examples For a finite set A of integers, it is always true that |A + A| ≥ 2|A| − 1, with equality precisely when A is an arithmetic progression. More generally, suppose A is a subset of a finite proper generalized arithmetic progression P of dimension d such that |P| ≤ C|A| for some real C ≥ 1. Then |P + P| ≤ 2^d |P|, so that |A + A| ≤ |P + P| ≤ 2^d |P| ≤ C·2^d |A|. History of Freiman's theorem This result is due to Gregory Freiman (1964, 1966). Much interest in it, and applications, stemmed from a new proof by Imre Z. Ruzsa (1992, 1994). Mei-Chu Chang proved new polynomial estimates for the size of arithmetic progressions arising in the theorem in 2002. The current best bounds were provided by Tom Sanders. Tools used in the proof The proof presented here follows the proof in Yufei Zhao's lecture notes. Plünnecke–Ruzsa inequality Ruzsa covering lemma The Ruzsa covering lemma states the following: Let X and B be finite subsets of an abelian group with B nonempty, and let K be a positive real number. Then if |X + B| ≤ K|B|, there is a subset T of X with at most K elements such that X ⊆ T + B − B. This lemma provides a bound on how many copies of B − B one needs to cover X, hence the name. The proof is essentially a greedy algorithm: Proof: Let T be a maximal subset of X such that the sets t + B for t ∈ T are all disjoint. Then |T + B| = |T|·|B|, and also |T + B| ≤ |X + B| ≤ K|B|, so |T| ≤ K. Furthermore, for any x ∈ X, there is some t ∈ T such that x + B intersects t + B, as otherwise adding x to T contradicts the maximality of T. Thus x ∈ T + B − B, so X ⊆ T + B − B. Freiman homomorphisms and the Ruzsa modeling lemma Let s ≥ 2 be a positive integer, and Γ and Γ′ be abelian groups. Let A ⊆ Γ and B ⊆ Γ′. A map φ : A → B is a Freiman s-homomorphism if φ(a_1) + ⋯ + φ(a_s) = φ(a′_1) + ⋯ + φ(a′_s) whenever a_1 + ⋯ + a_s = a′_1 + ⋯ + a′_s for any a_1, …, a_s, a′_1, …, a′_s ∈ A. If in addition φ is a bijection and φ^−1 : B → A is a Freiman s-homomorphism, then φ is a Freiman s-isomorphism. If φ is a Freiman s-homomorphism, then φ is a Freiman t-homomorphism for any positive integer t such that 2 ≤ t ≤ s. Then the Ruzsa modeling lemma states the following: Let A be a finite set of integers, and let s ≥ 2 be a positive integer. Let N be a positive integer such that N ≥ |sA − sA|. Then there exists a subset A′ of A with cardinality at least |A|/s such that A′ is Freiman s-isomorphic to a subset of ℤ/Nℤ. The last statement means there exists some Freiman s-homomorphism between the two subsets. Proof sketch: Choose a prime q sufficiently large such that the modulo-q reduction map π_q : ℤ → ℤ/qℤ is a Freiman s-isomorphism from A to its image in ℤ/qℤ. Let ψ_q : ℤ/qℤ → ℤ be the lifting map that takes each member of ℤ/qℤ to its unique representative in {1, …, q}. For nonzero λ ∈ ℤ/qℤ, let ·λ : ℤ/qℤ → ℤ/qℤ be the multiplication-by-λ map, which is a Freiman s-isomorphism. Let B be the image (·λ ∘ π_q)(A). Choose a suitable subset B′ of B with cardinality at least |B|/s such that the restriction of ψ_q to B′ is a Freiman s-isomorphism onto its image, and let A′ ⊆ A be the preimage of B′ under ·λ ∘ π_q. Then the restriction of ψ_q ∘ (·λ) ∘ π_q to A′ is a Freiman s-isomorphism from A′ onto its image ψ_q(B′). Lastly, there exists some choice of nonzero λ such that the restriction of the modulo-N reduction ℤ → ℤ/Nℤ to ψ_q(B′) is a Freiman s-isomorphism onto its image. The result follows after composing this map with the earlier Freiman s-isomorphism. Bohr sets and Bogolyubov's lemma Though Freiman's theorem applies to sets of integers, the Ruzsa modeling lemma allows one to model sets of integers as subsets of finite cyclic groups.
So it is useful to first work in the setting of a finite field, and then generalize results to the integers. The following lemma was proved by Bogolyubov: Let A ⊆ F_2^n and let α = |A|/2^n. Then 2A − 2A contains a subspace of F_2^n of dimension at least n − α^−2. Generalizing this lemma to arbitrary cyclic groups requires an analogous notion to "subspace": that of the Bohr set. Let R be a subset of ℤ/Nℤ where N is a prime. The Bohr set of dimension |R| and width ε is Bohr(R, ε) = {x ∈ ℤ/Nℤ : for all r ∈ R, |rx/N| ≤ ε}, where |rx/N| is the distance from rx/N to the nearest integer. The following lemma generalizes Bogolyubov's lemma: Let A ⊆ ℤ/Nℤ and α = |A|/N. Then 2A − 2A contains a Bohr set of dimension at most α^−2 and width 1/4. Here the dimension of a Bohr set is analogous to the codimension of a subspace in F_2^n. The proof of the lemma involves Fourier-analytic methods. The following proposition relates Bohr sets back to generalized arithmetic progressions, eventually leading to the proof of Freiman's theorem: Let B be a Bohr set in ℤ/Nℤ of dimension d and width ε. Then B contains a proper generalized arithmetic progression of dimension at most d and size at least (ε/d)^d·N. The proof of this proposition uses Minkowski's theorem, a fundamental result in geometry of numbers. Proof By the Plünnecke–Ruzsa inequality, |8A − 8A| ≤ K^16·|A|. By Bertrand's postulate, there exists a prime N such that |8A − 8A| ≤ N ≤ 2K^16·|A|. By the Ruzsa modeling lemma, there exists a subset A′ of A of cardinality at least |A|/8 such that A′ is Freiman 8-isomorphic to a subset B ⊆ ℤ/Nℤ. By the generalization of Bogolyubov's lemma, 2B − 2B contains a Bohr set, and hence a proper generalized arithmetic progression, of dimension d at most (|B|/N)^−2 ≤ (16K^16)^2 = 256K^32 and size at least (1/(4d))^d·N. Because A′ and B are Freiman 8-isomorphic, 2A′ − 2A′ and 2B − 2B are Freiman 2-isomorphic. Then the image under the 2-isomorphism of the proper generalized arithmetic progression in 2B − 2B is a proper generalized arithmetic progression in 2A′ − 2A′ ⊆ 2A − 2A called P. But P + A ⊆ 3A − 2A, since P ⊆ 2A − 2A. Thus |P + A| ≤ |3A − 2A| ≤ K^5·|A| ≤ K^5·(4d)^d·|P|, so by the Ruzsa covering lemma A ⊆ T + P − P for some T ⊆ A of cardinality at most K^5·(4d)^d. Then T + P − P is contained in a generalized arithmetic progression of dimension |T| + d and size at most 2^|T|·|P − P| ≤ 2^|T|·2^d·N ≤ 2^|T|·2^d·2K^16·|A|; since |T| and d depend only on K, this completes the proof. Generalizations A result due to Ben Green and Imre Ruzsa generalized Freiman's theorem to arbitrary abelian groups. They used an analogous notion to generalized arithmetic progressions, which they called coset progressions. A coset progression of an abelian group G is a set P + H for a proper generalized arithmetic progression P and a subgroup H of G. The dimension of this coset progression is defined to be the dimension of P, and its size is defined to be the cardinality of the whole set. Green and Ruzsa showed the following: Let A be a finite set in an abelian group G such that |A + A| ≤ K|A|. Then A is contained in a coset progression of dimension at most d(K) and size at most f(K)|A|, where d(K) and f(K) are functions of K that are independent of G. Green and Ruzsa provided upper bounds of d(K) = CK^4·log(K + 2) and f(K) = e^(CK^4·log^2(K + 2)) for some absolute constant C. Terence Tao (2010) also generalized Freiman's theorem to solvable groups of bounded derived length. Extending Freiman's theorem to an arbitrary nonabelian group is still open. Results for sets with very small doubling constant are referred to as Kneser theorems. The polynomial Freiman–Ruzsa conjecture is a generalization published in a paper by Imre Ruzsa but credited by him to Katalin Marton. It states that if a subset A of a group G (a power of a cyclic group) has doubling constant K, i.e. |A + A| ≤ K|A|, then it is covered by a number of cosets, bounded by a polynomial in K, of some subgroup H with |H| ≤ |A|. In 2012 Tom Sanders gave an almost polynomial bound for the conjecture for abelian groups. In 2023 a solution over a field of characteristic 2 was posted as a preprint by Tim Gowers, Ben Green, Freddie Manners and Terry Tao.
This proof was completely formalized in the Lean 4 formal proof language, a collaborative project that marked an important milestone in terms of mathematicians successfully formalizing contemporary mathematics. See also Markov spectrum Plünnecke–Ruzsa inequality Kneser's theorem (combinatorics) References Further reading Sumsets Theorems in number theory
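To make the Examples section concrete, here is a small Python sketch computing sumsets and doubling constants; the specific sets chosen are illustrative only.

```python
import random

def sumset(A, B):
    """A + B = {a + b : a in A, b in B}."""
    return {a + b for a in A for b in B}

# An arithmetic progression attains the minimum sumset size:
# |A + A| = 2|A| - 1.
ap = set(range(0, 50, 3))  # {0, 3, ..., 48}, an AP with 17 elements
print(len(sumset(ap, ap)), 2 * len(ap) - 1)  # both print 33

# A generic ("random-like") set of the same size typically has a
# much larger sumset, near the maximum |A|(|A| + 1)/2.
random.seed(0)
rnd = set(random.sample(range(10**6), 17))
print(len(sumset(rnd, rnd)))  # usually close to 17 * 18 // 2 = 153
```

Freiman's theorem runs this observation in reverse: a doubling constant |A + A|/|A| close to the minimum forces A to occupy a large fraction of a low-dimensional generalized arithmetic progression.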
Freiman's theorem
[ "Mathematics" ]
1,635
[ "Mathematical theorems", "Combinatorics", "Theorems in number theory", "Sumsets", "Mathematical problems", "Number theory" ]
2,240,310
https://en.wikipedia.org/wiki/Fermionic%20field
In quantum field theory, a fermionic field is a quantum field whose quanta are fermions; that is, they obey Fermi–Dirac statistics. Fermionic fields obey canonical anticommutation relations rather than the canonical commutation relations of bosonic fields. The most prominent example of a fermionic field is the Dirac field, which describes fermions with spin-1/2: electrons, protons, quarks, etc. The Dirac field can be described as either a 4-component spinor or as a pair of 2-component Weyl spinors. Spin-1/2 Majorana fermions, such as the hypothetical neutralino, can be described as either a dependent 4-component Majorana spinor or a single 2-component Weyl spinor. It is not known whether the neutrino is a Majorana fermion or a Dirac fermion; observing neutrinoless double-beta decay experimentally would settle this question. Basic properties Free (non-interacting) fermionic fields obey canonical anticommutation relations; i.e., involve the anticommutators {a, b} = ab + ba, rather than the commutators [a, b] = ab − ba of bosonic or standard quantum mechanics. Those relations also hold for interacting fermionic fields in the interaction picture, where the fields evolve in time as if free and the effects of the interaction are encoded in the evolution of the states. It is these anticommutation relations that imply Fermi–Dirac statistics for the field quanta. They also result in the Pauli exclusion principle: two fermionic particles cannot occupy the same state at the same time. Dirac fields The prominent example of a spin-1/2 fermion field is the Dirac field (named after Paul Dirac), and denoted by ψ(x). The equation of motion for a free spin 1/2 particle is the Dirac equation, (iγ^μ ∂_μ − m)ψ(x) = 0, where the γ^μ are gamma matrices and m is the mass. The simplest possible solutions to this equation are plane wave solutions, u(p)e^(−ip·x) and v(p)e^(+ip·x). These plane wave solutions form a basis for the Fourier components of ψ(x), allowing for the general expansion of the wave function as follows: ψ(x) = ∫ d^3p/(2π)^3 · 1/√(2E_p) · Σ_s ( a_p^s u^s(p) e^(−ip·x) + b_p^s† v^s(p) e^(+ip·x) ). Here u and v are spinors labelled by their spin s and a spinor index. For the electron, a spin 1/2 particle, s = +1/2 or s = −1/2. The energy factor is the result of having a Lorentz invariant integration measure. In second quantization, ψ(x) is promoted to an operator, so the coefficients of its Fourier modes must be operators too. Hence, a_p^s and b_p^s† are operators. The properties of these operators can be discerned from the properties of the field. ψ(x) and ψ†(y) obey the equal-time anticommutation relations: {ψ_a(x), ψ_b†(y)} = δ^(3)(x − y) δ_ab, with all other anticommutators vanishing. We impose an anticommutator relation (as opposed to a commutation relation as we do for the bosonic field) in order to make the operators compatible with Fermi–Dirac statistics. By putting in the expansions for ψ(x) and ψ†(y), the anticommutation relations for the coefficients can be computed: {a_p^r, a_q^s†} = {b_p^r, b_q^s†} = (2π)^3 δ^(3)(p − q) δ^rs, with all other anticommutators vanishing. In a manner analogous to non-relativistic annihilation and creation operators and their commutators, these algebras lead to the physical interpretation that a_p^s† creates a fermion of momentum p and spin s, and b_q^r† creates an antifermion of momentum q and spin r. The general field ψ(x) is now seen to be a weighted (by the energy factor) summation over all possible spins and momenta for creating fermions and antifermions. Its conjugate field, ψ̄ = ψ†γ^0, is the opposite, a weighted summation over all possible spins and momenta for annihilating fermions and antifermions. With the field modes understood and the conjugate field defined, it is possible to construct Lorentz invariant quantities for fermionic fields. The simplest is the quantity ψ̄ψ. This makes the reason for the choice of ψ̄ = ψ†γ^0 clear.
This is because the general Lorentz transform on ψ is not unitary, so the quantity ψ†ψ would not be invariant under such transforms; the inclusion of γ^0 is to correct for this. The other possible non-zero Lorentz invariant quantity, up to an overall conjugation, constructible from the fermionic fields is ψ̄γ^μ∂_μψ. Since linear combinations of these quantities are also Lorentz invariant, this leads naturally to the Lagrangian density for the Dirac field, L = ψ̄(iγ^μ∂_μ − m)ψ, by the requirement that the Euler–Lagrange equation of the system recover the Dirac equation. Such an expression has its spinor indices suppressed. When reintroduced the full expression is L = ψ̄_a ( i(γ^μ)_ab ∂_μ − m δ_ab ) ψ_b. The Hamiltonian (energy) density can also be constructed by first defining the momentum canonically conjugate to ψ(x), called π(x): π = ∂L/∂(∂_t ψ) = iψ†. With that definition of π, the Hamiltonian density is: H = ψ̄( −i γ⃗ · ∇ + m )ψ, where ∇ is the standard gradient of the space-like coordinates, and γ⃗ is a vector of the space-like gamma matrices. It is surprising that the Hamiltonian density doesn't depend on the time derivative of ψ directly, but the expression is correct. Given the expression for ψ(x) we can construct the Feynman propagator for the fermion field, S_F(x − y) = ⟨0| T ψ(x) ψ̄(y) |0⟩: we define the time-ordered product for fermions with a minus sign due to their anticommuting nature, T ψ(x) ψ̄(y) = θ(x^0 − y^0) ψ(x) ψ̄(y) − θ(y^0 − x^0) ψ̄(y) ψ(x). Plugging our plane wave expansion for the fermion field into the above equation yields: S_F(x − y) = ∫ d^4p/(2π)^4 · e^(−ip·(x − y)) · i(γ^μ p_μ + m)/(p^2 − m^2 + iε), where we have employed the Feynman slash notation. This result makes sense since the factor (γ^μ p_μ + m)/(p^2 − m^2 + iε) is just the inverse of the operator acting on ψ in the Dirac equation. Note that the Feynman propagator for the Klein–Gordon field has this same property. Since all reasonable observables (such as energy, charge, particle number, etc.) are built out of an even number of fermion fields, the commutation relation vanishes between any two observables at spacetime points outside the light cone. As we know from elementary quantum mechanics two simultaneously commuting observables can be measured simultaneously. We have therefore correctly implemented Lorentz invariance for the Dirac field, and preserved causality. More complicated field theories involving interactions (such as Yukawa theory, or quantum electrodynamics) can be analyzed too, by various perturbative and non-perturbative methods. Dirac fields are an important ingredient of the Standard Model. See also Dirac equation Spin–statistics theorem Spinor Composite Field Auxiliary Field References Peskin, M and Schroeder, D. (1995). An Introduction to Quantum Field Theory, Westview Press. (See pages 35–63.) Srednicki, Mark (2007). Quantum Field Theory, Cambridge University Press. Weinberg, Steven (1995). The Quantum Theory of Fields, (3 volumes) Cambridge University Press. Quantum field theory Spinors
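Since the expressions above all rest on the gamma matrices satisfying the Clifford algebra {γ^μ, γ^ν} = 2η^μν, a short numerical check is sketched below in Python. The Dirac representation and the (+,−,−,−) metric signature used here are one standard choice (an assumption; other representations such as Weyl or Majorana work equally well).

```python
import numpy as np

# Pauli matrices
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Gamma matrices in the Dirac representation:
# gamma^0 = diag(I, -I), gamma^i = [[0, sigma_i], [-sigma_i, 0]]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, si], [-si, Z2]]) for si in s]

eta = np.diag([1, -1, -1, -1]).astype(complex)  # metric, signature (+,-,-,-)

# Verify the Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu} * 1
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford algebra verified")
```

Because the algebra, not the particular matrices, is what the Dirac equation uses, any set of 4x4 matrices passing this check yields the same physics.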
Fermionic field
[ "Physics" ]
1,380
[ "Quantum field theory", "Quantum mechanics" ]
2,240,363
https://en.wikipedia.org/wiki/Television%20antenna
A television antenna, also called a television aerial (in British English), is an antenna specifically designed for use with a television receiver (TV) to receive terrestrial over-the-air (OTA) broadcast television signals from a television station. Terrestrial television is broadcast on frequencies from about 47 to 250 MHz in the very high frequency (VHF) band, and 470 to 960 MHz in the ultra high frequency (UHF) band in different countries. Television antennas are manufactured in two different types: indoor and outdoor antennas. Indoor antennas are designed to be located on top of or next to the television set, but are ideally placed near a window in a room and as high up as possible for the best reception. The most common types of indoor antennas are the dipole ("rabbit ears"), which works best for VHF channels, and loop antennas, which work best for UHF. Outdoor antennas, on the other hand, are designed to be mounted on a mast on top of the owner's house, or in a loft or attic where the dry conditions and increased elevation are advantageous for reception and antenna longevity. Outdoor antennas are more expensive and difficult to install but are necessary for adequate reception in fringe areas far from television stations; the most common types of these are the Yagi, log periodic, and (for UHF) the multi-bay reflective array antenna. Description The purpose of the antenna is to intercept radio waves from the desired television stations and convert them to tiny radio frequency alternating currents which are applied to the television's tuner, which extracts the television signal. The antenna is connected to the television with a specialized cable designed to carry radio current, called transmission line. Earlier antennas used a flat cable called 300 ohm twin-lead. The standard today is 75 ohm coaxial cable, which is less susceptible to interference and plugs into an F connector or Belling-Lee connector (depending on region) on the back of the TV. To convert the signal from antennas that use a twin-lead line to the modern coaxial cable input, a small transformer called a balun is used in the line. In most countries, television broadcasting is allowed in the very high frequency (VHF) band from 47 to 68 MHz, called VHF low band or band I in Europe; 174 to 216 MHz, called VHF high band or band III in Europe, and in the ultra high frequency (UHF) band from 470 to 698 MHz, called band IV and V in Europe. The boundaries of each band vary somewhat in different countries. Radio waves in these bands travel by line-of-sight; they are blocked by hills and the visual horizon, limiting a television station's reception area to a range that depends on transmitter height and terrain. Analog vs. digital Under the previous analog television standard, used before 2006, the VHF and UHF bands required separate tuners in the television receiver, which had separate antenna inputs. The wavelength of a radio wave equals the speed of light (c), divided by the frequency. The above frequency bands cover a 15:1 wavelength ratio, or almost 4 octaves. It is difficult to design a single antenna to receive such a wide wavelength range, and there is an octave gap from 216 to 470 MHz between the VHF and UHF frequencies. So traditionally, separate antennas (outdoor antennas with separate sets of elements on a single support boom) have been used to receive the VHF and UHF channels. Starting in 2006, many countries in the world switched from broadcasting using an older analog television standard to newer digital television (DTV).
However, the same broadcast frequencies are generally used, so the antennas used for the older analog television will also receive the new DTV broadcasts. Sellers often claim to supply a special digital or high-definition television (HDTV) antenna, marketed as a replacement for an existing analog television antenna; at best this is misinformation to generate sales of unneeded equipment. At worst, it may leave the viewer with a UHF-only antenna in a local market (particularly in North America) where some digital stations remain on their original high VHF or low VHF frequencies. Reception issues Places unable to be reached by television broadcast transmitters are known as black spots in Australia. In East Germany, the areas that could not receive western TV signals were referred to as the Tal der Ahnungslosen, or Valley of the Clueless. Indoor Indoor antennas may be mounted on the television itself or stand on a table next to it, connected to the television by a short feed line. Due to space constraints, indoor antennas cannot be as large and elaborate as outdoor antennas, they are not mounted at as high an elevation, and the building walls block some of the radio waves; for these reasons, indoor antennas generally do not give as good reception as outdoor antennas. They are often perfectly adequate in urban and suburban areas, which are usually within the strong radiation footprint of local television stations. Still, in rural fringe reception areas, only an outdoor antenna may give adequate reception. A few of the simplest indoor antennas are described below, but a great variety of designs and types exist. Many have a dial on the antenna with a number of different settings to alter the antenna's reception pattern. This should be rotated with the set on while looking at the screen until the best picture is obtained. Rabbit ears The oldest and most widely used (at least in the United States) indoor antenna is the rabbit ears or bunny ears, which are often provided with new television sets. It is a simple half-wave dipole antenna used to receive the VHF television bands, consisting in the US of 54 to 88 MHz (band I) and 174 to 216 MHz (band III), with wavelengths of about 5.6 to 1.4 m. It is constructed of two telescoping rods attached to a base, which extend out to about 1.4 m (approximately one-quarter wavelength at 54 MHz) and can be collapsed when not in use. For best reception, the rods should be adjusted to be a little less than one-quarter wavelength at the frequency of the television channel being received. However, the dipole has a wide bandwidth, so often adequate reception is achieved without adjusting the length. The measured gain of rabbit ears is low, about −2 dBi, or −4 dB with respect to a half wave dipole. This means it is not as directional and sensitive to distant stations as a large rooftop antenna. Still, its wide-angle reception pattern may allow it to receive several stations located in different directions without requiring readjustment when the channel is changed. Dipole antennas are bi-directional; that is, they have two main lobes in opposite directions, 180° apart. Instead of being fixed in position like other antennas, the elements are mounted on ball-and-socket joints. They can be adjusted to various angles in a V shape, allowing them to be moved out of the way in crowded quarters. Another reason for the V shape is that when receiving channels at the top of the band with the rods fully extended, the antenna elements will typically resonate at their 3rd harmonic.
In this mode, the direction of maximum gain (the main lobe) is no longer perpendicular to the rods. Still, the radiation pattern will have lobes at an angle to the rods, making it advantageous to be able to adjust them to various angles. Whip antenna Some portable televisions use a whip antenna. This consists of a single telescoping rod attached to the television, which can be retracted when not in use. It functions as a quarter-wave monopole antenna. The other side of the feedline is connected to the ground plane on the TV's circuit board, which acts as ground. The whip antenna generally has an omnidirectional reception pattern, with maximum sensitivity in directions perpendicular to the antenna axis and gain similar to rabbit ears. Loop antenna The UHF channels are often received by a single turn loop antenna. Since a rabbit ears antenna only covers the VHF bands, it is often combined with a UHF loop mounted on the same base to cover all the TV channels. This of course also depends on country and region: for example in the UK and Ireland, terrestrial TV broadcasts are only on the UHF band, meaning that a loop antenna is necessary and the rabbit ears would only be useful for FM radio reception. Flat antenna A more recent development in indoor antennas is the flat antenna, which is lightweight, thin, and usually square-shaped, with the claim of having more omnidirectional reception. They are also marketed as being more in line with modern minimalistic home designs. Flat antennas may have a stand or could be hung on a wall or a window. Internally, the thin, flat square is a loop antenna with its circular metallic wiring embedded into conductive plastic. Outdoor When a higher-gain antenna is needed to achieve adequate reception in suburban or fringe reception areas, an outdoor directional antenna is usually used. Although most simple antennas have null directions where they have zero response, the directions of useful gain are very broad. In contrast, directional antennas can have an almost unidirectional radiation pattern, so the correct end of the antenna must be pointed at the TV station. As an antenna design provides higher gain (compared to a dipole), the main lobe of the radiation pattern becomes narrower. Outdoor antennas provide up to a 15 dB gain in signal strength and 15-20 dB greater rejection of ghost signals in analog TV. Combined with a signal increase of 14 dB due to height and 11 dB due to lack of attenuating building walls, an outdoor antenna can result in a signal strength increase of up to 40 dB at the TV receiver. Outdoor antenna designs are often based on the Yagi–Uda antenna or log-periodic dipole array (LPDA). These are composed of multiple half-wave dipole elements, consisting of metal rods approximately half the wavelength of the television signal in length, mounted in a line on a support boom. These act as resonators; the electric field of the incoming radio wave pushes the electrons in the rods back and forth, creating standing waves of oscillating voltage in the rods. The antenna can have a smaller or larger number of rod elements; in general, the more elements, the higher the gain and the more directional. Another design used mainly for UHF reception is the reflective array antenna, consisting of a vertical metal screen with multiple dipole elements mounted in front of it.
The television broadcast bands are too wide in frequency to be covered efficiently by a single antenna, so the two options are separate antennas for the VHF and UHF bands, or a combination (combo) VHF/UHF antenna. A VHF/UHF antenna combines two antennas, feeding the same feedline, mounted on the same support boom. Longer elements that pick up the VHF frequencies are located at the back of the boom and often function as a log-periodic antenna; shorter elements that receive the UHF stations are located at the front of the boom and often function as a Yagi antenna. Since directional antennas must be pointed at the transmitting antenna, this is a problem when the television stations to be received are located in different directions. In this case, two or more directional rooftop antennas, each pointed at a different transmitter, are often mounted on the same mast and connected to one receiver; for best performance, filter or matching circuits are used to keep each antenna from degrading the performance of the others connected to the same transmission line. An alternative is to use a single antenna mounted on a rotator, a remote servo system that rotates the antenna to a new direction when a dial next to the television is turned. Sometimes television transmitters are deliberately located such that receivers in a given region need only receive transmissions in a relatively narrow band of the full UHF television spectrum and from the same direction, allowing the use of a higher-gain grouped aerial. Installation Antennas are commonly placed on rooftops and sometimes in attics; placing an antenna indoors significantly attenuates the level of the available signal. Directional antennas must be pointed at the transmitter they are receiving, though in most cases great accuracy is not needed. In a given region, it is sometimes arranged that all television transmitters are located in roughly the same direction and use frequencies spaced closely enough that a single antenna suffices for all. A single transmitter location may transmit signals for several channels. CABD (communal antenna broadcast distribution) is a system installed inside a building to receive free-to-air TV/FM signals transmitted via radio frequencies and distribute them to the audience. Analog television signals are susceptible to ghosting in the image: multiple closely spaced images giving the impression of blurred and repeated edges in the picture. This is due to the signal being reflected from nearby objects (buildings, trees, mountains), so that several copies of the signal, of different strengths and subject to different delays, are picked up. The reflections differ from channel to channel, so careful positioning of the antenna can produce a compromise position which minimizes the ghosts on the various channels. Ghosting is also possible if multiple antennas connected to the same receiver pick up the same station, especially if the lengths of the cables connecting them to the splitter/merger differ or the antennas are too close together. Analog television is being replaced by digital, which is not subject to ghosting; the same reflected signal that causes ghosting in an analog signal would produce no viewable content at all in digital. However, when interference does occur in digital reception, the image quality degradation is far more severe. Rooftop and other outdoor antennas Aerials are attached to roofs in various ways, usually on a pole to elevate the antenna above the roof; this is generally sufficient in most areas.
In some places, however, such as a deep valley or near taller structures, the antenna may need to be placed significantly higher, using a guyed mast or tower. The wire connecting the antenna indoors is referred to as the downlead or drop, and the longer the downlead is, the greater the signal degradation in the wire; low-loss cable helps reduce this degradation. The higher the antenna is placed, the better it will perform, and an antenna of higher gain will be able to receive weaker signals from its preferred direction. Intervening buildings, topographical features (mountains), and dense forests will weaken the signal; in many cases the signal will be reflected such that a usable signal is still available. There are physical dangers inherent to high or complex antennas, such as the structure falling or being destroyed by weather. There are also varying local ordinances which restrict and limit such things as the height of a structure without obtaining permits. For example, in the United States, the Telecommunications Act of 1996 allows any homeowner to install "An antenna that is designed to receive local television broadcast signals", although "masts higher than above the roof-line may be subject to local permitting requirements." Indoor antennas As discussed previously, antennas may be placed indoors where signals are strong enough to overcome antenna shortcomings. The antenna is simply plugged into the television receiver and placed conveniently, often on top of the receiver ("set-top"). Sometimes the position needs to be experimented with to get the best picture. Indoor antennas can also benefit from RF amplification, commonly called a TV booster. Reception from indoor antennas can be problematic in weak-signal areas. Attic installation Sometimes it is desirable not to put an antenna on the roof; in these cases, antennas designed for outdoor use are often mounted in the attic or loft, although antennas designed for attic use are also available. Putting an antenna indoors significantly decreases its performance due to lower elevation above ground level and intervening walls; however, in strong-signal areas, reception may be satisfactory. One layer of asphalt shingles, roof felt, and a plywood roof deck is considered to attenuate the signal to about half. Multiple antennas, rotators It is sometimes desired to receive signals from transmitters which are not in the same direction. This can be achieved, for one station at a time, by using a rotator operated by an electric motor to turn the antenna as desired. Alternatively, two or more antennas, each pointing at a desired transmitter and coupled by appropriate circuitry, can be used. To prevent the antennas from interfering with each other, the vertical spacing between the booms must be at least half the wavelength of the lowest frequency to be received (distance = λ/2). The wavelength at 54 MHz (Channel 2) is about 5.55 m (from λ × f = c), so the antennas must be a minimum of about 2.8 m apart. It is also important that the cables connecting the antennas to the signal splitter/merger be precisely the same length to prevent phasing issues, which cause ghosting with analog reception: the antennas might both pick up the same station, and the signal from the one with the shorter cable will reach the receiver slightly sooner, supplying the receiver with two slightly offset pictures. There may be phasing issues even with the same length of down-lead cable. Band-pass filters or signal traps may help to reduce this problem.
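A minimal Python sketch of the stacking rule just described, computing the half-wavelength minimum boom separation from the lowest frequency to be received; the function name is illustrative:

```python
# Minimum vertical boom spacing for stacked antennas: at least half the
# wavelength of the lowest received frequency (distance = wavelength / 2).
C = 299_792_458  # speed of light, m/s

def min_vertical_spacing(lowest_freq_hz):
    wavelength = C / lowest_freq_hz
    return wavelength / 2

f = 54e6  # bottom of US Channel 2
print(f"Wavelength at 54 MHz: {C / f:.2f} m")                     # ~5.55 m
print(f"Minimum boom spacing: {min_vertical_spacing(f):.2f} m")   # ~2.78 m
```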
For side-by-side placement of multiple antennas, as is common in a space of limited height such as an attic, they should be separated at their closest point by at least one full wavelength of the lowest frequency to be received. When multiple antennas are used, often one serves a range of co-located stations and the other a single transmitter in a different direction. Safety TV antennas are good conductors of electricity and attract lightning, acting as a lightning rod, so a lightning arrester is usually used to protect against this. A large grounding rod connected to both the antenna and the mast or pole is required. Properly installed masts, especially tall ones, are guyed with galvanized cable; no insulators are needed. They are designed to withstand worst-case weather conditions in the area and are positioned so that they do not interfere with power lines if they fall. There is also inherent danger in working on the rooftop of a house, as is required to install or adjust a television antenna. See also Broadcast television systems Radio masts and towers, sometimes called Radio and TV antennas Satellite dish Satellite television Terrestrial television References External links Article on the basic theory of TV aerials and their use See Which TV Stations You Can Get on a Map 'Up on the roof' antenna page Antennas (radio) Radio electronics Radio frequency antenna types Radio frequency propagation Radio technology Antenna
Television antenna
[ "Physics", "Technology", "Engineering" ]
3,703
[ "Information and communications technology", "Radio electronics", "Physical phenomena", "Telecommunications engineering", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Radio technology", "Waves" ]
2,242,435
https://en.wikipedia.org/wiki/Beam%20crossing
A beam crossing in a particle collider occurs when two packets of particles, going in opposite directions, reach the same point in space. Most of the particles in each packet cross each other, but a few may collide, producing other particles that may be observed in a particle detector. In a linear collider there is only one location where beam crossings occur, while in a modern accelerator ring there are a few locations (LHC, for example, has four); it is at these points that detectors are placed. References Experimental particle physics Accelerator physics
Beam crossing
[ "Physics" ]
114
[ "Applied and interdisciplinary physics", "Experimental physics", "Particle physics", "Experimental particle physics", "Particle physics stubs", "Accelerator physics" ]
2,242,567
https://en.wikipedia.org/wiki/Galveston%20Seawall
The Galveston Seawall is a seawall in Galveston, Texas, that was built after the Galveston hurricane of 1900 for protection from future hurricanes. Construction began in September 1902, and the initial segment was completed on July 29, 1904. From 1904 to 1963, the seawall was extended in stages to several times its original length. Description Although the Seawall performed as intended, it created an unintended and insurmountable consequence: passive erosion resulting in the gradual disappearance of the once-wide beach, and of the resort business with it. "Within twenty years, the city had lost one hundred yards of sand. People who once watched auto racing on a wide beach were left with a narrow strip of sand at low tide and a gloomy vista of waves on rocks when the tide was high." Houston soon overtook Galveston as the major city in the region. In the aftermath of Hurricane Alicia in 1983, the Corps of Engineers estimated that $100 million in damage was avoided because of the seawall. On September 13, 2008, Hurricane Ike's large waves over-topped the seawall. As a result, a commission was established by the Texas governor to investigate preparing for and mitigating future disasters. A proposal was put forth to build an "Ike Dike", a massive levee system that would protect Galveston Bay and the important industrial facilities that line the coast and the Houston Ship Channel from a future, potentially more destructive storm. The proposal gained widespread support from a variety of business interests but never passed the conceptual stage. Since 2009, several similar proposals for a more practical layered network of smaller local levees and natural protections have been put forward by the SSPEED Center at Rice University and the University of Houston. These proposals include a surge gate at the mouth of the Houston Ship Channel connecting adjacent high ground near the Fred Hartman Bridge, hard protections for the west shore of Galveston Bay and around the densely developed east end of Galveston Island, and the proposed lower coastal Lone Star Coastal National Recreation Area. Texas F.M. 3005 is known as Seawall Boulevard where it runs along the seawall. The sidewalk adjacent to Seawall Boulevard on top of the seawall is claimed to be the longest continuous sidewalk in the world. The seawall is long. It is approximately high and thick at its base. The seawall was listed in the National Register of Historic Places in 1977 and designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2001. Many miles of the seawall are painted with huge murals, painted by children, that depict underwater life.
See also 1900 Storm Memorial, installed along the Seawall The Dolphins (sculpture), installed along the Seawall National Register of Historic Places listings in Galveston County, Texas References Further reading (Diagrams of the movable concrete mixer plant used for construction of the seawall) (Diagram and description of the geometry of the seawall to dissipate wave energy) External links One-hundred-year-old photos of the Galveston seawall Buildings and structures in Galveston, Texas Buildings and structures on the National Register of Historic Places in Texas Dikes in the United States Galveston Hurricane of 1900 Historic Civil Engineering Landmarks National Register of Historic Places in Galveston County, Texas Seawalls Tourist attractions in Galveston, Texas 1904 establishments in Texas
Galveston Seawall
[ "Engineering" ]
677
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
2,242,975
https://en.wikipedia.org/wiki/Wavelet%20packet%20decomposition
Optimal subband tree structuring (SB-TS), also called wavelet packet decomposition (WPD; sometimes known simply as wavelet packets or subband tree), is a wavelet transform in which the discrete-time (sampled) signal is passed through more filters than in the discrete wavelet transform (DWT). Introduction In the DWT, each level is calculated by passing only the previous wavelet approximation coefficients (cAj) through discrete-time low- and high-pass quadrature mirror filters. However, in the WPD, both the detail coefficients (cDj in the 1-D case; cHj, cVj and cDj in the 2-D case) and the approximation coefficients are decomposed, creating the full binary tree. For n levels of decomposition, the WPD produces 2^n different sets of coefficients (or nodes), as opposed to (n + 1) sets for the DWT. However, due to the downsampling process the overall number of coefficients is still the same and there is no redundancy. From the point of view of compression, the standard wavelet transform may not produce the best result, since it is limited to wavelet bases that increase by a power of two towards the low frequencies. It could be that another combination of bases produces a more desirable representation for a particular signal. There are several algorithms for subband tree structuring that find a set of optimal bases providing the most desirable representation of the data relative to a particular cost function (entropy, energy compaction, etc.). There have been relevant studies in the signal processing and communications fields addressing the selection of subband trees (orthogonal bases) of various kinds, e.g. regular, dyadic and irregular, with respect to performance metrics of interest, including energy compaction (entropy), subband correlations and others. Discrete wavelet transform theory (continuous in the time variable) offers an approximation for transforming discrete (sampled) signals. In contrast, discrete-time subband transform theory enables a perfect representation of already sampled signals. Applications Wavelet packets have been successfully applied in preclinical diagnosis. In battery health monitoring, wavelet packet decomposition proves advantageous for capturing intricate patterns and variations in the electrochemical signals, which can be indicative of the battery's health and degradation over time. By breaking down the complex battery signal into its constituent frequency components, wavelet packet decomposition allows for a more detailed analysis of the underlying characteristics associated with different stages of battery aging. Wavelet packet decomposition is also employed as a preprocessing step to decompose vibration signals acquired from a wind turbine gearbox into multiple frequency bands, capturing both high- and low-frequency components. This decomposition allows for the extraction of essential features related to fault signatures at different scales, enabling a more comprehensive analysis of the gearbox's health status. It helps to improve the accuracy and efficiency of fault detection and classification, especially in the complex and critical domain of wind turbine gearbox systems. In the context of rainfall forecasting, wavelet packet decomposition proves valuable for capturing the complex and multi-scale patterns in precipitation data. It can decompose an original monthly rainfall time series into sub-series corresponding to different frequency bands. This decomposition is instrumental in unveiling hidden patterns and trends within the data, which can be crucial for improving forecasting accuracy.
Moisture detection in timber is crucial for assessing its structural integrity and preventing potential issues such as decay and damage. Wavelet packet decomposition offers a multi-resolution analysis of the timber's moisture content: it allows a detailed examination of the signal at different frequency bands, providing a more comprehensive understanding of the moisture distribution within the material. Researchers also employ wavelet packet decomposition to analyze the seismic response of structures, enabling a finer resolution in both the time and frequency domains. This detailed analysis allows for the identification of subtle changes in the structural response that may signify damage. By decomposing the seismic response into its constituent frequency components, researchers gain insight into the time-varying characteristics of the structural behavior, which is crucial for identifying dynamic changes in the structure's response over time that may indicate the presence and extent of damage. In the context of forecasting oil futures prices, the multiresolution nature of wavelet packet decomposition enables a forecasting model to capture both high- and low-frequency components of the time series, improving its ability to capture the complex patterns and fluctuations inherent in financial data. A simple code sketch of the decomposition is given below. References External links An implementation of wavelet packet decomposition can be found in the MATLAB wavelet toolbox. An implementation for R can be found in the wavethresh package. An illustration and implementation of wavelet packets, along with its code in C++, is also available. JWave: an implementation in Java for 1-D and 2-D wavelet packets using Haar, Daubechies, Coiflet, and Legendre wavelets. Wavelets Signal processing
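The sketch referred to above builds a full wavelet packet tree using the PyWavelets (pywt) Python library — an assumed implementation choice, since the article's external links list MATLAB, R, Java and C++ implementations instead; the test signal is illustrative:

```python
# Full wavelet packet tree of a two-tone test signal, using PyWavelets.
import numpy as np
import pywt

t = np.linspace(0, 1, 1024, endpoint=False)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)

level = 3
wp = pywt.WaveletPacket(data=x, wavelet="db4", mode="symmetric",
                        maxlevel=level)

# Unlike the DWT (n + 1 coefficient sets for n levels), the full WPD
# tree has 2**n leaf nodes at level n.
leaves = wp.get_level(level, order="freq")
assert len(leaves) == 2 ** level

for node in leaves:
    energy = float(np.sum(node.data ** 2))
    print(f"node {node.path}: {len(node.data)} coeffs, energy {energy:.1f}")
```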
Wavelet packet decomposition
[ "Technology", "Engineering" ]
1,004
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
610,760
https://en.wikipedia.org/wiki/Bioacoustics
Bioacoustics is a cross-disciplinary science that combines biology and acoustics. Usually it refers to the investigation of sound production, dispersion and reception in animals (including humans). This involves the neurophysiological and anatomical basis of sound production and detection, and the relation of acoustic signals to the medium they disperse through. The findings provide clues about the evolution of acoustic mechanisms, and from that, the evolution of the animals that employ them. In underwater acoustics and fisheries acoustics the term is also used to mean the effect of plants and animals on sound propagated underwater, usually in reference to the use of sonar technology for biomass estimation. The study of substrate-borne vibrations used by animals is considered by some a distinct field called biotremology. History Humans have long used animal sounds to recognise and find animals. Bioacoustics as a scientific discipline was established by the Slovene biologist Ivan Regen, who began to study insect sounds systematically. In 1925 he used a special stridulatory device to play in a duet with an insect. Later, he put a male cricket behind a microphone and female crickets in front of a loudspeaker; the females moved not towards the male but towards the loudspeaker. Regen's most important contribution to the field, apart from the realization that insects also detect airborne sounds, was the discovery of the tympanal organ's function. The relatively crude electro-mechanical devices available at the time (such as phonographs) allowed only crude appraisal of signal properties. More accurate measurements were made possible in the second half of the 20th century by advances in electronics and the use of devices such as oscilloscopes and digital recorders. The most recent advances in bioacoustics concern the relationships among animals and their acoustic environment and the impact of anthropogenic noise. Bioacoustic techniques have recently been proposed as a non-destructive method for estimating the biodiversity of an area. Importance In the terrestrial environment, animals often use light for sensing distance, since light propagates well through air. Underwater, sunlight reaches only to depths of tens of meters. Sound, however, propagates readily through water and across considerable distances. Many marine animals can see well but rely on hearing for communication and for sensing distance and location. The relative importance of audition versus vision in an animal can be gauged by comparing its auditory and optic nerves. Since the 1950s and 1960s, studies of dolphin echolocation behavior using high-frequency click sounds have revealed that many different marine mammal species make sounds, which can be used to detect and identify species under water. Much research in bioacoustics has been funded by naval research organizations, as biological sound sources can interfere with military uses underwater. Methods Listening is still one of the main methods used in bioacoustical research. Little is known about the neurophysiological processes that play a role in the production, detection and interpretation of sounds in animals, so animal behaviour and the signals themselves are used for gaining insight into these processes. Bioacoustics has also helped to pave the way for new emerging methods such as ecoacoustics (or acoustic ecology), an interdisciplinary field of research that studies the sounds produced by ecosystems, including biological, geophysical and anthropogenic sources.
It examines how these sounds interact with the environment, providing insights into biodiversity, habitat health and ecological processes. By analysing soundscapes, ecoacoustics helps monitor environmental changes, assess conservation efforts and detect human impacts on natural systems. Acoustic signals An experienced observer can use animal sounds to recognize a "singing" animal species, along with its location and condition in nature. Investigation of animal sounds also includes signal recording with electronic recording equipment. Due to the wide range of signal properties and the media they propagate through, specialized equipment may be required instead of the usual microphone, such as a hydrophone (for underwater sounds), detectors of ultrasound (very high-frequency sounds) or infrasound (very low-frequency sounds), or a laser vibrometer (for substrate-borne vibrational signals). Computers are used for storing and analysing recorded sounds, and specialized sound-editing software is used for describing and sorting signals according to their intensity, frequency, duration and other parameters. Animal sound collections, managed by museums of natural history and other institutions, are an important tool for the systematic investigation of signals. Many effective automated methods involving signal processing, data mining, machine learning and artificial intelligence techniques have been developed to detect and classify bioacoustic signals. Sound production, detection, and use in animals Scientists in the field of bioacoustics are interested in the anatomy and neurophysiology of organs involved in sound production and detection, including their shape, muscle action, and the activity of the neuronal networks involved. Of special interest is the coding of signals with action potentials in the latter. But since the methods used for neurophysiological research are still fairly complex and understanding of the relevant processes is incomplete, simpler methods are also used. Especially useful is the observation of behavioural responses to acoustic signals. One such response is phonotaxis – directional movement towards the signal source. By observing responses to well-defined signals in a controlled environment, researchers can gain insight into signal function, the sensitivity of the hearing apparatus, noise-filtering capability, and more. Biomass estimation Biomass estimation is a method of detecting and quantifying fish and other marine organisms using sonar technology (a simple numerical sketch is given at the end of this article). As the sound pulse travels through water, it encounters objects that are of different density than the surrounding medium, such as fish, and these reflect sound back toward the sound source. The echoes provide information on fish size, location, and abundance. The basic functions of scientific echo sounder hardware are to transmit the sound and to receive, filter, amplify, record, and analyze the echoes. While there are many manufacturers of commercially available "fish-finders," quantitative analysis requires that measurements be made with calibrated echo sounder equipment having high signal-to-noise ratios. Animal sounds Sounds used by animals that fall within the scope of bioacoustics include a wide range of frequencies and media, and are often not "sound" in the narrow sense of the word (i.e. compression waves that propagate through air and are detectable by the human ear). Katydids (bush crickets), for example, communicate by sounds with frequencies higher than 100 kHz, far into the ultrasound range. Lower, but still in ultrasound, are the sounds used by bats for echolocation.
A segmented marine worm, Leocratides kimuraorum, produces one of the loudest popping sounds in the ocean, at 157 dB and frequencies of 1–100 kHz, similar to the snapping shrimps. At the other end of the frequency spectrum are low-frequency vibrations, often detected not by hearing organs but by other, less specialized sense organs. Examples include the ground vibrations produced by elephants, whose principal frequency component is around 15 Hz, and the low- to medium-frequency substrate-borne vibrations used by most insect orders. Many animal sounds, however, do fall within the frequency range detectable by the human ear, between 20 and 20,000 Hz. Mechanisms for sound production and detection are just as diverse as the signals themselves. Plant sounds In a series of scientific journal articles published between 2013 and 2016, Monica Gagliano of the University of Western Australia extended the science to include plant bioacoustics. See also Acoustic ecology Acoustical oceanography Animal communication Animal language Anthropophony Biomusic Biophony Diffusion (acoustics) Field recording Frog hearing and communication List of animal sounds List of Bioacoustics Software Music therapy Natural sounds Soundscape ecology Underwater acoustics Vocal learning Whale sound Zoomusicology Phonology References Further reading Ewing A.W. (1989): Arthropod bioacoustics: Neurobiology and behaviour. Edinburgh: Edinburgh University Press. Fletcher N. (2007): Animal Bioacoustics. In: Rossing T.D. (ed.): Springer Handbook of Acoustics, Springer. External links ASA Animal Bioacoustics Technical Committee BioAcoustica: Wildlife Sounds Database The British Library Sound Archive has 150,000 recordings of over 10,000 species. International Bioacoustics Council links to many bioacoustics resources. Borror Laboratory of Bioacoustics at The Ohio State University has a large archive of animal sound recordings. Listen to Nature 400 examples of animal songs and calls Wildlife Sound Recording Society Bioacoustic Research Program at the Cornell Lab of Ornithology distributes a number of different free bioacoustics synthesis & analysis programs. Macaulay Library at the Cornell Lab of Ornithology is the world's largest collection of animal sounds and associated video. Xeno-canto A collection of bird vocalizations from around the world. Acoustics Zoosemiotics Soundscape ecology Sound Noise Hearing
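The numerical sketch referred to in the biomass estimation section applies the standard sonar relation range = (sound speed × two-way delay) / 2. It is a minimal Python illustration; the nominal seawater sound speed is an assumed textbook value, not a figure from this article, and real surveys use calibrated echo sounders.

```python
# Echo ranging as used in sonar biomass estimation -- illustrative only.
SOUND_SPEED_SEAWATER = 1500.0  # m/s, nominal textbook value

def echo_range(two_way_delay_s, sound_speed=SOUND_SPEED_SEAWATER):
    """Distance to a target from the round-trip travel time of a ping."""
    return sound_speed * two_way_delay_s / 2

# A fish school returning an echo 0.16 s after the transmit pulse:
print(f"Target range: {echo_range(0.16):.0f} m")  # 120 m
```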
Bioacoustics
[ "Physics", "Biology" ]
1,811
[ "Behavior", "Ethology", "Zoosemiotics", "Classical mechanics", "Acoustics", "Ecological techniques", "Soundscape ecology" ]
610,773
https://en.wikipedia.org/wiki/Per-unit%20system
In the power systems analysis field of electrical engineering, a per-unit system is the expression of system quantities as fractions of a defined base unit quantity. Calculations are simplified because quantities expressed as per-unit do not change when they are referred from one side of a transformer to the other. This can be a pronounced advantage in power system analysis, where large numbers of transformers may be encountered. Moreover, similar types of apparatus will have impedances lying within a narrow numerical range when expressed as a per-unit fraction of the equipment rating, even if the unit size varies widely. Conversion of per-unit quantities to volts, ohms, or amperes requires knowledge of the base that the per-unit quantities were referenced to. The per-unit system is used in power flow, short circuit evaluation, motor starting studies, etc. The main idea of a per-unit system is to absorb large differences in absolute values into base relationships; representations of elements in the system with per-unit values thus become more uniform. A per-unit system provides units for power, voltage, current, impedance, and admittance. With the exception of impedance and admittance, any two units are independent and can be selected as base values; power and voltage are typically chosen. All quantities are specified as multiples of selected base values. For example, the base power might be the rated power of a transformer, or perhaps an arbitrarily selected power which makes power quantities in the system more convenient. The base voltage might be the nominal voltage of a bus. Different types of quantities are labeled with the same symbol (pu); it should be clear from context whether the quantity is a voltage, current, or other unit of measurement. Purpose There are several reasons for using a per-unit system: Similar apparatus (generators, transformers, lines) will have similar per-unit impedances and losses expressed on their own rating, regardless of their absolute size. Because of this, per-unit data can be checked rapidly for gross errors; a per-unit value out of the normal range is worth investigating for potential errors. Manufacturers usually specify the impedance of apparatus in per-unit values. Use of the constant √3 is reduced in three-phase calculations. Per-unit quantities are the same on either side of a transformer, independent of voltage level. By normalizing quantities to a common base, both hand and automatic calculations are simplified. It improves the numerical stability of automatic calculation methods. Per-unit data representation yields important information about relative magnitudes. The per-unit system was developed to make manual analysis of power systems easier. Although power-system analysis is now done by computer, results are often expressed as per-unit values on a convenient system-wide base. Base quantities Generally, base values of power and voltage are chosen. The base power may be the rating of a single piece of apparatus such as a motor or generator. If a system is being studied, the base power is usually chosen as a convenient round number such as 10 MVA or 100 MVA. The base voltage is chosen as the nominal rated voltage of the system. All other base quantities are derived from these two base quantities. Once the base power and the base voltage are chosen, the base current and the base impedance are determined by the natural laws of electrical circuits. The base value should only be a magnitude, while the per-unit value is a phasor.
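A minimal Python sketch of this base-quantity bookkeeping, assuming the standard single-phase relations I_base = S_base/V_base and Z_base = V_base²/S_base (stated later in the article) and using the 10 kVA, 240/100 V transformer example worked later in the article; the function names are illustrative:

```python
# Single-phase per-unit base quantities and transformer invariance.
def base_quantities(s_base_va, v_base_v):
    i_base = s_base_va / v_base_v          # I_base = S_base / V_base
    z_base = v_base_v ** 2 / s_base_va     # Z_base = V_base^2 / S_base
    return i_base, z_base

# Transformer rated 10 kVA, 240/100 V, with 1 ohm on the secondary side:
s_base = 10_000.0
z_actual_secondary = 1.0

_, z_base_2 = base_quantities(s_base, 100.0)   # secondary-side base
_, z_base_1 = base_quantities(s_base, 240.0)   # primary-side base

z_pu_secondary = z_actual_secondary / z_base_2
z_referred_primary = z_actual_secondary * (240.0 / 100.0) ** 2
z_pu_primary = z_referred_primary / z_base_1

# The per-unit impedance is the same on both sides of the transformer:
print(z_pu_secondary, z_pu_primary)  # 1.0 1.0
```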
The phase angles of complex power, voltage, current, impedance, etc., are not affected by the conversion to per-unit values. The purpose of using a per-unit system is to simplify conversion between different transformers, so it is appropriate to illustrate the steps for finding per-unit values for voltage and impedance. First, let the base power (S) of each end of a transformer be the same. Once every S is set to the same base, the base voltage and base impedance for every transformer can easily be obtained. Then, the real values of impedances and voltages can be substituted into the per-unit definitions to obtain the per-unit values. If the per-unit values are known, the real values can be obtained by multiplying by the base values. By convention, the following two rules are adopted for base quantities: The base power value is the same for the entire power system of concern. The ratio of the voltage bases on either side of a transformer is selected to be the same as the ratio of the transformer voltage ratings. With these two rules, a per-unit impedance remains unchanged when referred from one side of a transformer to the other. This allows the ideal transformer to be eliminated from a transformer model. Relationship between units The relationship between units in a per-unit system depends on whether the system is single-phase or three-phase. Single-phase Assuming that the independent base values are power S_base and voltage V_base, the remaining bases follow from them. Alternatively, the base value for power may be given in terms of reactive or apparent power, in which case it is Q_base or S_base respectively. The rest of the units can be derived from power and voltage using the equations S = IV and V = IZ (Ohm's law), impedance being represented by Z: I_base = S_base / V_base and Z_base = V_base / I_base = (V_base)² / S_base. Three-phase Power and voltage are specified in the same way as in single-phase systems. However, due to differences in what these terms usually represent in three-phase systems, the relationships for the derived units are different. Specifically, power is given as total (not per-phase) power, and voltage is line-to-line voltage. In three-phase systems the equations S = √3·IV and V = IZ also hold; the apparent power now equals S_base = √3·V_base·I_base, so I_base = S_base / (√3·V_base) and Z_base = (V_base)² / S_base. Example of per-unit As an example of how per-unit is used, consider a three-phase power transmission system that deals with powers of the order of 500 MW and uses a nominal voltage of 138 kV for transmission. We arbitrarily select a base power of 500 MVA and use the nominal voltage of 138 kV as the base voltage. We then have I_base = 500 MVA / (√3 × 138 kV) ≈ 2.09 kA and Z_base = (138 kV)² / 500 MVA ≈ 38.1 Ω. If, for example, the actual voltage at one of the buses is measured to be 136 kV, its per-unit value is 136/138 ≈ 0.986 pu. Per-unit system formulas The following tabulation of per-unit system formulas is adapted from Beeman's Industrial Power Systems Handbook. In transformers It can be shown that voltages, currents, and impedances in a per-unit system have the same values whether they are referred to the primary or the secondary of a transformer. For instance, for voltage, we can show that the per-unit voltages of the two sides of the transformer, side 1 and side 2, are the same. Here, the per-unit voltages of the two sides are E1pu and E2pu respectively, and since E1 = (N1/N2)·E2 while Vbase1 = (N1/N2)·Vbase2, we have E1pu = E1/Vbase1 = E2/Vbase2 = E2pu (source: Alexandra von Meier, Power System Lectures, UC Berkeley). E1 and E2 are the voltages of sides 1 and 2 in volts, N1 is the number of turns in the coil on side 1, N2 is the number of turns in the coil on side 2, and Vbase1 and Vbase2 are the base voltages on sides 1 and 2. For current, we can show in the same way that the per-unit currents of the two sides are equal.
(source: Alexandra von Meier, Power System Lectures, UC Berkeley) Here I1,pu and I2,pu are the per-unit currents of sides 1 and 2 respectively. The base currents Ibase1 and Ibase2 are related in the opposite way to Vbase1 and Vbase2, in that Ibase1/Ibase2 = Vbase2/Vbase1. The reason for this relation is power conservation: Sbase1 = Sbase2. The full-load copper loss of a transformer in per-unit form is equal to the per-unit value of its resistance, so it may be more useful to express the resistance in per-unit form, as it then also represents the full-load copper loss. As stated above, there are two degrees of freedom within the per-unit system that allow the engineer to specify any per-unit system: the choice of the base voltage (V) and the base power (S). By convention, a single base power (S) is chosen for both sides of the transformer, and its value is equal to the rated power of the transformer. By convention, there are actually two different base voltages, Vbase1 and Vbase2, which are equal to the rated voltages of either side of the transformer. By choosing the base quantities in this manner, the transformer can be effectively removed from the circuit, as described above. For example: take a transformer rated at 10 kVA and 240/100 V, whose secondary side has an impedance equal to 1∠0° Ω. The base impedance on the secondary side is equal to (100 V)² / 10 kVA = 1 Ω. This means that the per-unit impedance on the secondary side is 1∠0° Ω / 1 Ω = 1∠0° pu. When this impedance is referred to the other side, it becomes 1∠0° Ω × (240/100)² = 5.76∠0° Ω. The base impedance for the primary side is calculated the same way as for the secondary: (240 V)² / 10 kVA = 5.76 Ω. This means that the per-unit impedance is 5.76∠0° Ω / 5.76 Ω = 1∠0° pu, which is the same as when calculated from the other side of the transformer, as would be expected. Another useful tool for analyzing transformers is the base-change formula, which allows the engineer to go from a base impedance with one set of base voltage and base power to another base impedance for a different set of base voltage and base power. This becomes especially useful in real-life applications where a transformer with a secondary side voltage of 1.2 kV might be connected to the primary side of another transformer whose rated voltage is 1 kV. The formula is Zpu,new = Zpu,old × (Vbase,old / Vbase,new)² × (Sbase,new / Sbase,old), as sketched below. References Electrical engineering Electric power Power engineering
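A short Python sketch of the base-change formula given above; the standard textbook form is assumed here, and the impedance value and new base power are illustrative numbers, not from the article:

```python
# Per-unit base change:
#   Z_pu_new = Z_pu_old * (V_base_old / V_base_new)**2 * (S_base_new / S_base_old)
def rebase_impedance(z_pu_old, v_old, v_new, s_old, s_new):
    return z_pu_old * (v_old / v_new) ** 2 * (s_new / s_old)

# A 1.2 kV secondary feeding a transformer rated at 1 kV, as in the
# example above: an impedance of 0.05 pu on a 10 kVA base, moved to
# a 100 kVA base (illustrative values).
print(rebase_impedance(0.05, 1200.0, 1000.0, 10e3, 100e3))  # 0.72
```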
Per-unit system
[ "Physics", "Engineering" ]
1,980
[ "Physical quantities", "Energy engineering", "Power (physics)", "Electric power", "Power engineering", "Electrical engineering" ]
611,074
https://en.wikipedia.org/wiki/Point%20mutation
A point mutation is a genetic mutation where a single nucleotide base is changed, inserted or deleted from a DNA or RNA sequence of an organism's genome. Point mutations have a variety of effects on the downstream protein product—consequences that are moderately predictable based upon the specifics of the mutation. These consequences can range from no effect (e.g. synonymous mutations) to deleterious effects (e.g. frameshift mutations) with regard to protein production, composition, and function. Causes Point mutations usually take place during DNA replication. DNA replication occurs when one double-stranded DNA molecule creates two single strands of DNA, each of which is a template for the creation of the complementary strand. A single point mutation can change the whole DNA sequence; changing one purine or pyrimidine may change the amino acid that the nucleotides code for. Point mutations may arise from spontaneous mutations that occur during DNA replication. The rate of mutation may be increased by mutagens. Mutagens can be physical, such as radiation from UV rays, X-rays or extreme heat, or chemical (molecules that misplace base pairs or disrupt the helical shape of DNA). Mutagens associated with cancers are often studied to learn about cancer and its prevention. Point mutations can occur in several ways. First, ultraviolet (UV) light and higher-frequency light have ionizing capability, which can affect DNA. Second, reactive oxygen molecules with free radicals, a byproduct of cellular metabolism, can be very harmful to DNA; these reactants can lead to both single-stranded and double-stranded DNA breaks. Third, bonds in DNA eventually degrade, which makes maintaining the integrity of DNA more difficult. There can also be replication errors that lead to substitution, insertion, or deletion mutations. Categorization Transition/transversion categorization In 1959 Ernst Freese coined the terms "transitions" and "transversions" to categorize different types of point mutations. Transitions are the replacement of a purine base with another purine, or of a pyrimidine with another pyrimidine. Transversions are the replacement of a purine with a pyrimidine, or vice versa (a short classification sketch is given at the end of this article). There is a systematic difference in mutation rates for transitions (alpha) and transversions (beta); transition mutations are about ten times more common than transversions. Functional categorization Nonsense mutations include stop-gain and start-loss. Stop-gain is a mutation that results in a premature termination codon (a stop was gained), which signals the end of translation. This interruption causes the protein to be abnormally shortened. The number of amino acids lost determines the impact on the protein's functionality and whether it will function at all. Stop-loss is a mutation in the original termination codon (a stop was lost), resulting in abnormal extension of a protein's carboxyl terminus. Start-gain creates an AUG start codon upstream of the original start site. If the new AUG is near the original start site, in-frame within the processed transcript and downstream of a ribosomal binding site, it can be used to initiate translation. The likely effect is additional amino acids added to the amino terminus of the original protein. Frame-shift mutations are also possible in start-gain mutations, but typically do not affect translation of the original protein.
Start-loss is a point mutation in a transcript's AUG start codon, resulting in the reduction or elimination of protein production. Missense mutations code for a different amino acid. A missense mutation changes a codon so that a different protein is created, a non-synonymous change. Conservative mutations result in an amino acid change; however, the properties of the amino acid remain the same (e.g., hydrophobic, hydrophilic, etc.). At times, a change to one amino acid in the protein is not detrimental to the organism as a whole. Most proteins can withstand one or two point mutations before their function changes. Non-conservative mutations result in an amino acid change with different properties than the wild type. The protein may lose its function, which can result in a disease in the organism. For example, sickle-cell disease is caused by a single point mutation (a missense mutation) in the beta-hemoglobin gene that converts a GAG codon into GUG, which encodes the amino acid valine rather than glutamic acid. The protein may also exhibit a "gain of function" or become activated; this is the case with the mutation changing a valine to glutamic acid in the BRAF gene, which leads to activation of the RAF protein and causes unlimited proliferative signalling in cancer cells. These are both examples of non-conservative (missense) mutations. Silent mutations code for the same amino acid (a "synonymous substitution") and do not affect the functioning of the protein. A single nucleotide can change, but the new codon specifies the same amino acid, resulting in an unmutated protein. This type of change is called a synonymous change, since the old and new codons code for the same amino acid. This is possible because 64 codons specify only 20 amino acids. Different codons can lead to differential protein expression levels, however. Single base pair insertions and deletions Sometimes the term point mutation is used to describe insertions or deletions of a single base pair (which have more of an adverse effect on the synthesized protein because the nucleotides are still read in triplets, but in different frames: a mutation called a frameshift mutation). General consequences Point mutations that occur in non-coding sequences are most often without consequences, although there are exceptions. If the mutated base pair is in the promoter sequence of a gene, then the expression of the gene may change. Also, if the mutation occurs in the splicing site of an intron, it may interfere with correct splicing of the transcribed pre-mRNA. By altering just one amino acid, the entire peptide may change, thereby changing the entire protein; the new protein is called a protein variant. If the original protein functions in cellular reproduction, then this single point mutation can change the entire process of cellular reproduction for the organism. Point germline mutations can lead to beneficial as well as harmful traits or diseases, producing adaptations based on the environment where the organism lives. An advantageous mutation can create an advantage for that organism and lead to the trait's being passed down from generation to generation, improving and benefiting the entire population. The scientific theory of evolution is greatly dependent on point mutations in cells. The theory explains the diversity and history of living organisms on Earth.
In relation to point mutations, it states that beneficial mutations allow the organism to thrive and reproduce, thereby passing its positively affected mutated genes on to the next generation. On the other hand, harmful mutations cause the organism to die or make it less likely to reproduce, in a phenomenon known as natural selection. Different short-term and long-term effects can arise from mutations. Short-term effects include the halting of the cell cycle at numerous points. For example, a codon coding for the amino acid glycine may be changed to a stop codon, causing the proteins that should have been produced to be deformed and unable to complete their intended tasks. Because mutations can affect the DNA and thus the chromatin, they can prevent mitosis from occurring due to the lack of a complete chromosome. Problems can also arise during the processes of transcription and replication of DNA. These all prevent the cell from reproducing and thus lead to the death of the cell. Long-term effects include permanent changes to a chromosome, which can lead to a mutation; these mutations can be either beneficial or detrimental. Cancer is an example of how they can be detrimental. Other effects of point mutations, or single nucleotide polymorphisms in DNA, depend on the location of the mutation within the gene. For example, if the mutation occurs in the region of the gene responsible for coding, the amino acid sequence of the encoded protein may be altered, causing a change in the function, localization, or stability of the protein or protein complex. Many methods have been proposed to predict the effects of missense mutations on proteins. Machine learning algorithms train their models to distinguish known disease-associated mutations from neutral ones, whereas other methods do not explicitly train their models; almost all methods, however, exploit evolutionary conservation, assuming that changes at conserved positions tend to be more deleterious. While the majority of methods provide a binary classification of the effects of mutations into damaging and benign, a new level of annotation is needed to explain why and how these mutations damage proteins. Moreover, if the mutation occurs in the region of the gene where transcriptional machinery binds to the DNA, the mutation can affect the binding of transcription factors, because the short nucleotide sequences recognized by the transcription factors will be altered. Mutations in this region can affect the efficiency of gene transcription, which in turn can alter levels of mRNA and, thus, protein levels in general. Point mutations can have several effects on the behavior and reproduction of a protein depending on where the mutation occurs in the amino acid sequence of the protein. If the mutation occurs in the region of the gene that is responsible for coding for the protein, the amino acid may be altered. This slight change in the sequence of amino acids can cause a change in the function of the protein, in its activation (how it binds with a given enzyme), in where the protein will be located within the cell, or in the amount of free energy stored within the protein. If the mutation occurs in the region of the gene where transcriptional machinery binds to the DNA, the mutation can affect the way in which transcription factors bind. The transcription machinery binds through recognition of short nucleotide sequences.
A mutation in this region may alter these sequences and thus change the way the transcription factors bind. Mutations in this region can affect the efficiency of gene transcription, which controls both mRNA levels and overall protein levels. Specific diseases caused by point mutations Cancer Point mutations in multiple tumor suppressor proteins cause cancer. For instance, point mutations in Adenomatous Polyposis Coli promote tumorigenesis. A novel assay, Fast parallel proteolysis (FASTpp), might help swift screening of specific stability defects in individual cancer patients. Neurofibromatosis Neurofibromatosis is caused by point mutations in the Neurofibromin 1 or Neurofibromin 2 gene. Sickle-cell anemia Sickle-cell anemia is caused by a point mutation in the β-globin chain of hemoglobin, causing the hydrophilic amino acid glutamic acid to be replaced with the hydrophobic amino acid valine at the sixth position. The β-globin gene is found on the short arm of chromosome 11. The association of two wild-type α-globin subunits with two mutant β-globin subunits forms hemoglobin S (HbS). Under low-oxygen conditions (being at high altitude, for example), the absence of a polar amino acid at position six of the β-globin chain promotes the non-covalent polymerisation (aggregation) of hemoglobin, which distorts red blood cells into a sickle shape and decreases their elasticity. Hemoglobin is a protein found in red blood cells and is responsible for the transportation of oxygen through the body. There are two subunits that make up the hemoglobin protein: beta-globins and alpha-globins. Beta-hemoglobin is created from the genetic information on the HBB ("hemoglobin, beta") gene, found on chromosome 11p15.5. A single point mutation in this polypeptide chain, which is 147 amino acids long, results in the disease known as sickle-cell anemia. Sickle-cell anemia is an autosomal recessive disorder that affects 1 in 500 African Americans and is one of the most common blood disorders in the United States. The single replacement of the sixth amino acid in the beta-globin, glutamic acid, with valine results in deformed red blood cells. These sickle-shaped cells cannot carry nearly as much oxygen as normal red blood cells, and they get caught more easily in the capillaries, cutting off blood supply to vital organs. The single nucleotide change in the beta-globin means that even the smallest of exertions on the part of the carrier results in severe pain and even heart attack. Tay–Sachs disease The cause of Tay–Sachs disease is a genetic defect that is passed from parent to child. This genetic defect is located in the HEXA gene, which is found on chromosome 15. The HEXA gene makes part of an enzyme called beta-hexosaminidase A, which plays a critical role in the nervous system. This enzyme helps break down a fatty substance called GM2 ganglioside in nerve cells. Mutations in the HEXA gene disrupt the activity of beta-hexosaminidase A, preventing the breakdown of the fatty substances. As a result, the fatty substances accumulate to deadly levels in the brain and spinal cord. The buildup of GM2 ganglioside causes progressive damage to the nerve cells; this is the cause of the signs and symptoms of Tay–Sachs disease. Repeat-induced point mutation In molecular biology, repeat-induced point mutation, or RIP, is a process by which DNA accumulates G:C to A:T transition mutations.
Genomic evidence indicates that RIP occurs or has occurred in a variety of fungi, while experimental evidence indicates that RIP is active in Neurospora crassa, Podospora anserina, Magnaporthe grisea, Leptosphaeria maculans, Gibberella zeae and Nectria haematococca. In Neurospora crassa, sequences mutated by RIP are often methylated de novo. RIP occurs during the sexual stage in haploid nuclei after fertilization but prior to meiotic DNA replication. In Neurospora crassa, repeat sequences of at least 400 base pairs in length are vulnerable to RIP. Repeats with as little as 80% nucleotide identity may also be subject to RIP. Though the exact mechanisms of repeat recognition and mutagenesis are poorly understood, RIP results in repeated sequences undergoing multiple transition mutations. The RIP mutations do not seem to be limited to repeated sequences. Indeed, in the phytopathogenic fungus L. maculans, for example, RIP mutations are found in single-copy regions adjacent to the repeated elements. These regions are either non-coding regions or genes encoding small secreted proteins, including avirulence genes. The degree of RIP within these single-copy regions was proportional to their proximity to repetitive elements. Rep and Kistler have speculated that the presence of highly repetitive regions containing transposons may promote mutation of resident effector genes. The presence of effector genes within such regions is thus suggested to promote their adaptation and diversification when exposed to strong selection pressure. As RIP mutation is traditionally observed to be restricted to repetitive regions rather than single-copy regions, Fudal et al. suggested that leakage of RIP mutation might occur within a relatively short distance of a RIP-affected repeat. Indeed, this has been reported in N. crassa, whereby leakage of RIP was detected in single-copy sequences at least 930 bp from the boundary of neighbouring duplicated sequences. Elucidating the mechanism by which repeated sequences are detected, leading to RIP, may help explain how the flanking sequences are also affected. Mechanism RIP causes G:C to A:T transition mutations within repeats; however, the mechanism that detects the repeated sequences is unknown. RID is the only known protein essential for RIP. It is a DNA methyltransferase-like protein that, when mutated or knocked out, results in loss of RIP. Deletion of the rid homolog in Aspergillus nidulans, dmtA, results in loss of fertility, while deletion of the rid homolog in Ascobolus immersus, masc1, results in fertility defects and loss of methylation induced premeiotically (MIP). Consequences RIP is believed to have evolved as a defense mechanism against transposable elements, which resemble parasites in invading and multiplying within the genome. RIP creates multiple missense and nonsense mutations in the coding sequence. This hypermutation of G-C to A-T in repetitive sequences eliminates functional gene products of the sequence (if there were any to begin with). In addition, many of the C-bearing nucleotides become methylated, thus decreasing transcription. Use in molecular biology Because RIP is so efficient at detecting and mutating repeats, fungal biologists often use it as a tool for mutagenesis. A second copy of a single-copy gene is first transformed into the genome. The fungus must then mate and go through its sexual cycle to activate the RIP machinery.
Many different mutations within the duplicated gene can be obtained from even a single fertilization event, including inactivated alleles (usually due to nonsense mutations) as well as alleles containing missense mutations. History The cellular reproduction process of meiosis was discovered by Oscar Hertwig in 1876. Mitosis was discovered several years later, in 1882, by Walther Flemming. Hertwig studied sea urchins and noticed that each egg contained one nucleus prior to fertilization and two nuclei after. This discovery proved that one spermatozoon could fertilize an egg, and therefore demonstrated the process of meiosis. Hermann Fol continued Hertwig's research by testing the effects of injecting several spermatozoa into an egg, and found that the process did not work with more than one spermatozoon. Flemming began his research on cell division in 1868. The study of cells was an increasingly popular topic in this time period. By 1873, Schneider had already begun to describe the steps of cell division, and Flemming furthered this description in 1874 and 1875, explaining the steps in more detail. He also argued against Schneider's finding that the nucleus separated into rod-like structures, suggesting that the nucleus actually separated into threads that in turn separated. Flemming concluded that cells replicate through cell division, more specifically mitosis. Matthew Meselson and Franklin Stahl are credited with demonstrating the semi-conservative mechanism of DNA replication. Watson and Crick acknowledged that the structure of DNA indicated that there is some form of replicating process. However, little research was done on this aspect of DNA until after Watson and Crick. All possible methods of DNA replication were considered, but none was confirmed until Meselson and Stahl. They introduced a heavy isotope into DNA and traced its distribution; through this experiment, Meselson and Stahl were able to show that DNA replicates semi-conservatively. See also Missense mRNA PAM matrix References External links Modification of genetic information Mutation Molecular biology
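The classification sketch referred to in the categorization section above is a minimal, illustrative Python rendering of the transition/transversion rule and the sickle-cell codon change; the two-entry codon table is truncated to just the codons needed for the example:

```python
# Classify a single-nucleotide substitution as a transition or transversion.
PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def classify_substitution(ref, alt):
    """Transition: purine<->purine or pyrimidine<->pyrimidine.
    Transversion: purine<->pyrimidine."""
    if {ref, alt} <= PURINES or {ref, alt} <= PYRIMIDINES:
        return "transition"
    return "transversion"

print(classify_substitution("G", "A"))  # transition
print(classify_substitution("A", "T"))  # transversion

# Sickle-cell missense mutation: GAG -> GTG on the DNA sense strand
# (GAG -> GUG in mRNA) changes glutamic acid to valine in beta-globin.
CODONS = {"GAG": "Glu", "GTG": "Val"}
print(CODONS["GAG"], "->", CODONS["GTG"])  # Glu -> Val
```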
Point mutation
[ "Chemistry", "Biology" ]
4,016
[ "Biochemistry", "Modification of genetic information", "Molecular genetics", "Molecular biology" ]
612,057
https://en.wikipedia.org/wiki/Potential%20well
A potential well is the region surrounding a local minimum of potential energy. Energy captured in a potential well is unable to convert to another type of energy (kinetic energy in the case of a gravitational potential well) because it is trapped in the local minimum. Therefore, a body may not proceed to the global minimum of potential energy, as it would naturally tend to do due to entropy. Overview Energy may be released from a potential well if sufficient energy is added to the system such that the local maximum is surmounted. In quantum physics, potential energy may escape a potential well without added energy due to the probabilistic characteristics of quantum particles; in these cases a particle may be imagined to tunnel through the walls of a potential well. The graph of a 2D potential energy function is a potential energy surface that can be imagined as the Earth's surface in a landscape of hills and valleys. Then a potential well would be a valley surrounded on all sides by higher terrain, which thus could be filled with water (e.g., be a lake) without any water flowing away toward another, lower minimum (e.g., sea level). In the case of gravity, the region around a mass is a gravitational potential well, unless the density of the mass is so low that tidal forces from other masses are greater than the gravity of the body itself. A potential hill is the opposite of a potential well, and is the region surrounding a local maximum. Quantum confinement Quantum confinement can be observed once the diameter of a material is of the same magnitude as the de Broglie wavelength of the electron wave function. When materials are this small, their electronic and optical properties deviate substantially from those of bulk materials. A particle behaves as if it were free when the confining dimension is large compared to the wavelength of the particle. During this state, the bandgap remains at its original energy due to a continuous energy spectrum. However, as the confining dimension decreases and reaches a certain limit, typically at the nanoscale, the energy spectrum becomes discrete. As a result, the bandgap becomes size-dependent. As the size of the particles decreases, the electrons and electron holes come closer, and the energy required to activate them increases, which ultimately results in a blueshift in light emission. Specifically, the effect describes the phenomenon resulting from electrons and electron holes being squeezed into a dimension that approaches a critical quantum measurement, called the exciton Bohr radius. In current applications, a quantum dot such as a small sphere confines in three dimensions, a quantum wire confines in two dimensions, and a quantum well confines only in one dimension. These are also known as zero-, one- and two-dimensional potential wells, respectively. In these cases they refer to the number of dimensions in which a confined particle can act as a free carrier. See external links, below, for application examples in biotechnology and solar cell technology. Quantum mechanics view The electronic and optical properties of materials are affected by size and shape. Well-established technical achievements, including quantum dots, were derived from size manipulation and investigation of their theoretical corroboration of the quantum confinement effect. The major part of the theory is that the behaviour of the exciton resembles that of an atom as its surrounding space shrinks. A rather good approximation of an exciton's behaviour is the 3-D model of a particle in a box. The solution of this problem provides a direct mathematical connection between energy states and the dimension of space: decreasing the volume or the dimensions of the available space increases the energy of the states. The diagram shows the change in electron energy level and bandgap between a nanomaterial and its bulk state. For a particle of mass m in a box with side lengths L_x, L_y, L_z, the relationship between energy level and dimension spacing is

$$E_{n_x n_y n_z} = \frac{\hbar^2 \pi^2}{2m}\left(\frac{n_x^2}{L_x^2} + \frac{n_y^2}{L_y^2} + \frac{n_z^2}{L_z^2}\right)$$
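As a numerical illustration (a minimal sketch using the free-electron mass; a real exciton would use an effective mass), the ground-state confinement energy of a cubic box grows rapidly as the box shrinks:

```python
import math

HBAR = 1.054571817e-34    # reduced Planck constant, J*s
M_E  = 9.1093837015e-31   # free-electron mass, kg (stand-in for an effective mass)
EV   = 1.602176634e-19    # J per eV

def ground_state_energy_ev(L):
    """E(1,1,1) for a particle in a cubic box of side L (metres)."""
    return 3 * (HBAR * math.pi) ** 2 / (2 * M_E * L ** 2) / EV

for L_nm in (20, 10, 5, 2):
    E = ground_state_energy_ev(L_nm * 1e-9)
    print(f"L = {L_nm:2d} nm -> E = {E:.4f} eV")
# The 1/L^2 scaling means halving the box side quadruples the confinement
# energy, consistent with the blueshift observed in smaller quantum dots.
```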
Research results provide an alternative explanation of the shift of properties at the nanoscale. In the bulk phase, the surfaces appear to control some of the macroscopically observed properties. However, in nanoparticles, surface molecules do not obey the expected configuration in space. As a result, surface tension changes tremendously. Classical mechanics view The Young–Laplace equation can give a background on the scale of the forces applied to the surface molecules; for a spherical particle it reduces to

$$\Delta P = \frac{2\gamma}{r}$$

where γ is the surface tension and r is the particle radius. Under the assumption of spherical shape, solving the Young–Laplace equation for radii at the nanometre scale yields pressures ΔP on the order of gigapascals. The smaller the radius, the greater the pressure. The increase in pressure at the nanoscale results in strong forces toward the interior of the particle. Consequently, the molecular structure of the particle appears to be different from that of the bulk material, especially at the surface. These abnormalities at the surface are responsible for changes in interatomic interactions and in the bandgap. See also Quantum well Finite potential well Quantum dot References External links Semiconductor Fundamental Band Theory of Solid Quantum dots synthesis Biological application Quantum mechanical potentials Classical mechanics
Potential well
[ "Physics" ]
994
[ "Quantum mechanical potentials", "Quantum mechanics", "Mechanics", "Classical mechanics" ]
612,341
https://en.wikipedia.org/wiki/Haploidisation
Haploidisation is the process of halving the chromosomal content of a cell, producing a haploid cell. Within the normal reproductive cycle, haploidisation is one of the major functional consequences of meiosis, the other being a process of chromosomal crossover that mingles the genetic content of the parental chromosomes. Usually, haploidisation creates a monoploid cell from a diploid progenitor, or it can involve halving of a polyploid cell, for example to make a diploid potato plant from a tetraploid lineage of potato plants. If haploidisation is not followed by fertilisation, the result is a haploid lineage of cells. For example, experimental haploidisation may be used to recover a strain of haploid Dictyostelium from a diploid strain. It sometimes occurs naturally in plants when meiotically reduced cells (usually egg cells) develop by parthenogenesis. Haploidisation was one of the procedures used by Japanese researchers to produce Kaguya, a mouse which had same-sex parents; two haploids were then combined to make the diploid mouse. Haploidisation commitment is a checkpoint in meiosis which follows the successful completion of premeiotic DNA replication and recombination commitment. See also Polyploidy Ploidy References Genetics
Haploidisation
[ "Biology" ]
284
[ "Genetics" ]
612,874
https://en.wikipedia.org/wiki/Acylation
In chemistry, acylation is a broad class of chemical reactions in which an acyl group (RC(=O)–) is added to a substrate. The compound providing the acyl group is called the acylating agent. Substrates to be acylated and the corresponding products include the following: alcohols give esters; amines give amides; arenes or alkenes give ketones. A particularly common type of acylation is acetylation, the addition of the acetyl group. Closely related to acylation is formylation, which employs sources of "HCO+" in place of "RCO+". Examples Because they form a strong electrophile when treated with Lewis acids, acyl halides are commonly used as acylating agents. For example, Friedel–Crafts acylation uses acetyl chloride (CH3COCl) as the agent and aluminum chloride (AlCl3) as a catalyst to add an acetyl group to benzene: C6H6 + CH3COCl → C6H5COCH3 + HCl. This reaction is an example of electrophilic aromatic substitution. Acyl halides and acid anhydrides of carboxylic acids are also common acylating agents. In some cases, active esters exhibit comparable reactivity. All react with amines to form amides and with alcohols to form esters by nucleophilic acyl substitution. Acylation can be used to prevent rearrangement reactions that would normally occur in alkylation. To do this, an acylation reaction is performed, then the carbonyl is removed by Clemmensen reduction or a similar process. Acylation in biology Protein acylation is the post-translational modification of proteins via the attachment of functional groups through acyl linkages. Protein acylation has been observed as a mechanism controlling biological signaling. One prominent type is fatty acylation, the addition of fatty acids to particular amino acids (e.g. myristoylation, palmitoylation or palmitoleoylation). Different types of fatty acids engage in global protein acylation. Palmitoleoylation is an acylation type in which the monounsaturated fatty acid palmitoleic acid is covalently attached to serine or threonine residues of proteins. Palmitoleoylation appears to play a significant role in the trafficking, targeting, and function of Wnt proteins. See also Hydroacylation Acetyl Ketene References Organic reactions
Acylation
[ "Chemistry" ]
494
[ "Organic reactions" ]
612,995
https://en.wikipedia.org/wiki/Synroc
Synroc, a portmanteau of "synthetic rock", is a means of safely storing radioactive waste. It was pioneered in 1978 by a team led by Professor Ted Ringwood at the Australian National University, with further research undertaken in collaboration with ANSTO at research laboratories in Lucas Heights. Manufacture Synroc is composed of three titanate minerals – hollandite, zirconolite and perovskite – plus rutile and a small amount of metal alloy. These are combined into a slurry to which is added a portion of high-level liquid nuclear waste. The mixture is dried and calcined to produce a powder. The powder is then consolidated in a process known as hot isostatic pressing (HIP), in which it is compressed within a bellows-like stainless steel container at elevated temperatures. The result is a cylinder of hard, dense, black synthetic rock. Comparisons If stored in a liquid form, nuclear waste can enter the environment and the waterways and cause widespread damage. As a solid, these risks are greatly minimised. Unlike borosilicate glass, which is amorphous, Synroc is a ceramic that incorporates the radioactive waste into its crystal structure. Naturally occurring rocks can store radioactive materials for long periods. The aim of Synroc is to imitate this by converting liquid waste into a crystalline structure that stores the radioactive material. Synroc-based glass composite materials (GCM) combine the process and chemical flexibility of glass with the superior chemical durability of ceramics and can achieve higher waste loadings. Different types of Synroc waste forms (ratios of component minerals, specific HIP pressures and temperatures, etc.) can be developed for the immobilisation of different types of waste. Only zirconolite and perovskite can accommodate actinides. The exact proportions of the main phases vary depending on the HLW composition. For example, Synroc-C is designed to contain about 20% by weight of calcined HLW, and it consists of approximately (% by weight): 30 – hollandite; 30 – zirconolite; 20 – perovskite; and 20 – Ti-oxides and other phases. Immobilising weapons-grade plutonium or transuranium wastes instead of bulk HLW may essentially change the Synroc phase composition to a primarily zirconolite-based or pyrochlore-based ceramic. The starting precursor for Synroc-C fabrication contains ~57% by weight TiO2 and 2% by weight metallic Ti. The metallic titanium provides reducing conditions during ceramic synthesis and helps decrease volatilisation of radioactive cesium. Synroc is not a disposal method; Synroc still has to be stored. Even though the waste is held in a solid lattice and prevented from spreading, it is still radioactive and can have a negative effect on its surroundings. Synroc is a superior method of nuclear waste storage because it minimises leaching. Production use In 1997 Synroc was tested with real HLW using technology developed jointly by ANSTO and the US DoE's Argonne National Laboratory. In January 2010, the United States Department of Energy selected hot isostatic pressing (HIP) for processing waste at the Idaho National Laboratory. In April 2008, the Battelle Energy Alliance signed a contract with ANSTO to demonstrate the benefits of Synroc in processing waste managed by Battelle as part of its contract to manage the Idaho National Laboratory. Synroc was chosen in April 2005 for a multimillion-dollar "demonstration" contract to eliminate plutonium-contaminated waste at British Nuclear Fuels' Sellafield plant, on the northwest coast of England.
References External links Synroc Wasteform (from World Nuclear Association) Canberra Observer report on 2005 contract ANSTO The Synroc Website Radioactive waste Synthetic materials
Synroc
[ "Chemistry", "Technology" ]
799
[ "Synthetic materials", "Environmental impact of nuclear power", "Hazardous waste", "Radioactivity", "Chemical synthesis", "Radioactive waste" ]
613,362
https://en.wikipedia.org/wiki/Solar%20storm
A solar storm is a disturbance on the Sun, which can emanate outward across the heliosphere, affecting the entire Solar System, including Earth and its magnetosphere, and is the cause of space weather in the short term, with long-term patterns comprising space climate. Types Solar storms include: Solar flare, a large explosion in the Sun's atmosphere caused by tangling, crossing or reorganizing of magnetic field lines Coronal mass ejection (CME), a massive burst of plasma from the Sun, sometimes associated with solar flares Geomagnetic storm, the interaction of the Sun's outburst with Earth's magnetic field Solar particle event (SPE), a burst of protons or other solar energetic particles (SEP) See also List of solar storms Aurora, a luminous phenomenon induced by ionization and excitation of constituents of a planet's upper atmosphere Heliophysics, the scientific study of the Sun and the region of space affected by the Sun Magnetic cloud, a transient disturbance in the solar wind Solar cycle, an 11-year cycle of sunspot activity Solar prominence, a plasma and magnetic structure in the Sun's corona Solar wind, the stream of particles and plasma emanating from the Sun Active region, where most solar flares and coronal mass ejections originate References Storm Space weather Geomagnetic storms Space hazards
Solar storm
[ "Physics" ]
279
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
8,849,222
https://en.wikipedia.org/wiki/International%20Journal%20of%20Quantum%20Chemistry
The International Journal of Quantum Chemistry is a peer-reviewed scientific journal publishing original, primary research and review articles on all aspects of quantum chemistry, including an expanded scope focusing on aspects of materials science, biochemistry, biophysics, quantum physics, quantum information theory, etc. According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.444. It was established in 1967 by Per-Olov Löwdin. In 2011, the journal moved to an in-house editorial office model, in which a permanent team of full-time, professional editors is responsible for article scrutiny and editorial content. References External links Chemistry journals Academic journals established in 1967 Hybrid open access journals Wiley (publisher) academic journals English-language journals Computational chemistry
International Journal of Quantum Chemistry
[ "Chemistry" ]
150
[ "Quantum chemistry stubs", "Quantum chemistry", "Theoretical chemistry stubs", "Theoretical chemistry", "Computational chemistry stubs", "Computational chemistry", "Physical chemistry stubs" ]
8,849,460
https://en.wikipedia.org/wiki/Split-ring%20resonator
A split-ring resonator (SRR) is an artificially produced structure common to metamaterials. Its purpose is to produce the desired magnetic susceptibility (magnetic response) in various types of metamaterials up to 200 terahertz. Background Split-ring resonators (SRRs) consist of a pair of concentric metallic rings, etched on a dielectric substrate, with slits etched on opposite sides. SRRs can produce the effect of being electrically smaller when responding to an oscillating electromagnetic field. These resonators have been used for the synthesis of left-handed and negative-refractive-index media, where the necessary value of the negative effective permeability is due to the presence of the SRRs. When an array of electrically small SRRs is excited by means of a time-varying magnetic field, the structure behaves as an effective medium with negative effective permeability in a narrow band above the SRR resonance. SRRs have also been coupled to planar transmission lines for the synthesis of metamaterial transmission lines. These media create the necessary strong magnetic coupling to an applied electromagnetic field not otherwise available in conventional materials. For example, an effect such as negative permeability is produced with a periodic array of split-ring resonators. A single-cell SRR has a pair of enclosed loops with splits in them at opposite ends. The loops are made of nonmagnetic metal like copper and have a small gap between them. The loops can be concentric or square, and gapped as needed. A magnetic flux penetrating the metal rings will induce rotating currents in the rings, which produce their own flux to enhance or oppose the incident field (depending on the SRR resonant properties). This field pattern is dipolar. The small gaps between the rings produce large capacitance values, which lowers the resonating frequency. Hence the dimensions of the structure are small compared to the resonant wavelength. This results in low radiative losses and very high quality factors. The split-ring resonator was a microstructure design featured in the 1999 paper by Pendry et al., "Magnetism from Conductors and Enhanced Nonlinear Phenomena". It proposed that the split-ring resonator design, built out of nonmagnetic material, could produce magnetic activity unseen in natural materials. In this simple microstructure design, it is shown that for an array of conducting cylinders, with an applied external field parallel to the cylinders, the effective permeability can be written as follows (this model is very limited, and the effective permeability cannot be less than zero or greater than one):

$$\mu_\text{eff} = 1 - \frac{\pi r^2/a^2}{1 + \frac{2\sigma}{\omega r \mu_0}i}$$

where σ is the resistance of the cylinder surface per unit area, a is the spacing of the cylinders, ω is the angular frequency, μ0 is the permeability of free space and r is the radius. Moreover, when gaps are introduced into a double-cylinder design, the gaps produce a capacitance. This inductor-capacitor microstructure introduces a resonance that amplifies the magnetic effect. The new form of the effective permeability resembles a familiar response known in plasmonic materials:

$$\mu_\text{eff} = 1 - \frac{\pi r^2/a^2}{1 + \frac{2\sigma}{\omega r \mu_0}i - \frac{3}{\pi^2 \mu_0 \omega^2 C r^3}}, \qquad C = \frac{\varepsilon_0}{d}$$

where d is the spacing of the concentric conducting sheets and C is their capacitance per unit area. The final design replaces the double concentric cylinders with a pair of flat concentric c-shaped sheets, placed on each side of a unit cell. The unit cells are stacked on top of each other with a spacing l. The final result for the effective permeability is

$$\mu_\text{eff} = 1 - \frac{\pi r^2/a^2}{1 + \frac{2 l \sigma_1}{\omega r \mu_0}i - \frac{3 l c_0^2}{\pi \omega^2 \ln(2c/d)\, r^3}}$$

where c is the thickness of the c-shaped sheet, c0 is the speed of light in vacuum, and σ1 is the resistance per unit length of the sheets measured around the circumference.
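Near resonance, these expressions reduce to the generic resonant form μ_eff(ω) = 1 − Fω²/(ω² − ω0² + iΓω) commonly quoted for SRR arrays. The following is a minimal numerical sketch of that generic form, with illustrative values of the filling fraction F, resonant frequency and damping (assumptions, not taken from any specific experiment):

```python
import numpy as np

F     = 0.3    # filling fraction pi*r^2/a^2 (illustrative)
f0    = 3e9    # resonant frequency in Hz (illustrative)
gamma = 2e8    # damping rate in 1/s (illustrative)

f = np.array([2.5e9, 2.9e9, 3.05e9, 3.2e9, 3.5e9, 4.0e9])  # probe frequencies
w, w0 = 2 * np.pi * f, 2 * np.pi * f0
mu_eff = 1 - F * w**2 / (w**2 - w0**2 + 1j * gamma * w)

for fi, mi in zip(f, mu_eff):
    print(f"f = {fi/1e9:.2f} GHz  Re(mu_eff) = {mi.real:+.2f}")
# Re(mu_eff) is enhanced just below f0 and dips below zero in a narrow band
# just above f0: the negative-permeability window exploited in left-handed
# metamaterials.
```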
Characteristics The split-ring resonator and the metamaterial itself are composite materials. Each SRR has an individually tailored response to the electromagnetic field. However, the periodic construction of many SRR cells is such that the electromagnetic wave interacts as if these were homogeneous materials. This is similar to how light actually interacts with everyday materials; materials such as glass or lenses are made of atoms, yet an averaging, macroscopic effect is produced. The SRR is designed to mimic the magnetic response of atoms, only on a much larger scale. Also, as part of a periodic composite structure, the SRR is designed to have a stronger magnetic coupling than is found in nature. The larger scale allows for more control over the magnetic response, while each unit remains smaller than the wavelength of the radiated electromagnetic wave. SRRs are much more active than ferromagnetic materials found in nature. The pronounced magnetic response in such lightweight materials demonstrates an advantage over heavier, naturally occurring materials. Each unit can be designed to have its own magnetic response. The response can be enhanced or lessened as desired. In addition, the overall effect reduces power requirements. SRR configuration There are a variety of split-ring resonators and periodic structures: rod-split-rings, nested split-rings, single split rings, deformed split-rings, spiral split-rings, and extended S-structures. The variations of split-ring resonators have achieved different results, including smaller and higher-frequency structures. The research involving some of these types is discussed throughout the article. To date (December 2009) the desired results in the visible spectrum have not been achieved. However, in 2005 it was noted that, physically, a nested circular split-ring resonator must have an inner radius of 30 to 40 nanometers for success in the mid-range of the visible spectrum. Microfabrication and nanofabrication techniques may utilize direct laser beam writing or electron beam lithography depending on the desired resolution. Various configurations Split-ring resonators (SRR) are one of the most common elements used to fabricate metamaterials. Split-ring resonators are non-magnetic materials, which were initially fabricated from circuit board material to create metamaterials. At first glance, a single SRR looks like an object with two square perimeters, with each perimeter having a small section removed. This results in square "C" shapes on fiberglass printed circuit board material. In this type of configuration it is actually two concentric bands of non-magnetic conductor material. There is one gap in each band, and the gaps are placed 180° relative to each other. The gap in each band gives it the distinctive "C" shape, rather than a totally circular or square shape. Then multiple cells of this double-band configuration are fabricated onto circuit board material by an etching technique and lined with copper wire strip arrays. After processing, the boards are cut and assembled into an interlocking unit. It is constructed into a periodic array with a large number of SRRs. There are now a number of different configurations that use the SRR nomenclature. Demonstrations A periodic array of SRRs was used for the first demonstration of a negative index of refraction.
For this demonstration, square-shaped SRRs, with the lined wire configurations, were fabricated into a periodic, arrayed cell structure. This is the substance of the metamaterial. Then a metamaterial prism was cut from this material. The prism experiment demonstrated a negative index of refraction for the first time in the year 2000; the paper about the demonstration was submitted to the journal Science on January 8, 2001, accepted on February 22, 2001, and published on April 6, 2001. Just before this prism experiment, Pendry et al. were able to demonstrate that a three-dimensional array of intersecting thin wires could be used to create negative values of ε. In a later demonstration, a periodic array of copper split-ring resonators could produce an effective negative μ. In 2000 Smith et al. were the first to successfully combine the two arrays and produce a so-called left-handed material, which has negative values of ε and μ for a band of frequencies in the GHz range. SRRs were first used to fabricate left-handed metamaterials for the microwave range, and several years later for the terahertz range. By 2007, experimental demonstration of this structure at microwave frequencies had been achieved by many groups. In addition, SRRs have been used for research in acoustic metamaterials. The arrayed SRRs and wires of the first left-handed metamaterial were melded into alternating layers. This concept and methodology were then applied to (dielectric) materials with optical resonances producing negative effective permittivity for certain frequency intervals, resulting in "photonic bandgap frequencies". Another analysis showed left-handed materials to be fabricated from inhomogeneous constituents, which nevertheless result in a macroscopically homogeneous material. SRRs have been used to focus a signal from a point source, increasing the transmission distance for near-field waves. Furthermore, another analysis showed that SRRs with a negative index of refraction are capable of a high-frequency magnetic response, which created an artificial magnetic device composed of non-magnetic materials (dielectric circuit board). The resonance phenomena that occur in this system are essential to achieving the desired effects. SRRs also exhibit resonant electric response in addition to their resonant magnetic response. The response, when combined with an array of identical wires, is averaged over the whole composite structure, which results in effective values, including the refractive index. The original logic behind SRRs specifically, and metamaterials generally, was to create a structure that imitates an arrayed atomic structure, only on a much larger scale. Several types of SRR In research based on metamaterials, and specifically negative refractive index, there are different types of split-ring resonators. Most of the examples mentioned below have a gap in each ring. In other words, with a double-ring structure, each ring has a gap. There is the 1-D Split-Ring Structure with two square rings, one inside the other. One set of cited "unit cell" dimensions would be an outer square of 2.62 mm and an inner square of 0.25 mm. 1-D structures such as this are easier to fabricate than a rigid 2-D structure. The Symmetrical-Ring Structure is another classic example. As the nomenclature describes, these are two rectangular, square-D-type configurations, exactly the same size, lying flat, side by side, in the unit cell. Also, these are not concentric.
One set of cited dimensions is 2 mm on the shorter side and 3.12 mm on the longer side. The gaps in each ring face each other in the unit cell. The Omega Structure, as the nomenclature describes, has an Ω-shaped ring structure. There are two of these, standing vertical, side by side, instead of lying flat, in the unit cell. In 2005 these were considered to be a new type of metamaterial. One set of cited dimensions is annular parameters of R = 1.4 mm and r = 1 mm, with a straight edge of 3.33 mm. Another new metamaterial in 2005 was a coupled S-shaped structure. There are two vertical S-shaped structures, side by side, in a unit cell. There is no gap as in the ring structure; however, there is a space between the top and middle parts of the S and a space between the middle and bottom parts of the S. Furthermore, it still has the properties of having an electric plasma frequency and a magnetic resonant frequency. Research On May 1, 2000, research was published about an experiment which involved conducting wires placed symmetrically within each cell of a periodic split-ring resonator array. This effectively achieved negative permeability and permittivity for electromagnetic waves in the microwave regime. The concept was, and still is, used to build interacting elements smaller than the wavelength of the applied electromagnetic radiation. In addition, the spacing between the resonators is much smaller than the wavelength of the applied radiation. Additionally, the splits in the ring allow the SRR unit to achieve resonance at wavelengths much larger than the diameter of the ring. The unit is designed to generate a large capacitance, lower the resonant frequency, and concentrate the electric field. Combining units creates a design as a periodic medium. Furthermore, the multiple-unit structure has strong magnetic coupling with low radiative losses. Research has also covered variations in magnetic resonances for different SRR configurations. Research has continued into terahertz radiation with SRRs. Other related work fashioned metamaterial configurations with fractals and non-SRR structures. These can be constructed with materials such as periodic metallic crosses, or ever-widening concentric ring structures known as Swiss rolls. Permeability at only the red wavelength of 780 nm has been analyzed, along with other related work. See also History of metamaterials Superlens Quantum metamaterials Metamaterial cloaking Photonic metamaterials Metamaterial antennas Nonlinear metamaterials Photonic crystal Seismic metamaterials Acoustic metamaterials Metamaterial absorber Plasmonic metamaterials Terahertz metamaterials Tunable metamaterials Transformation optics Theories of cloaking Academic journals Metamaterials (journal) Metamaterials books Metamaterials Handbook Metamaterials: Physics and Engineering Explorations References Further reading Shepard, K. W. et al. Split-ring resonator for the Argonne Superconducting Heavy Ion Booster. IEEE Transactions on Nuclear Science, Vol. NS-24, No. 3, June 1977. External links Video: John Pendry lecture: The science of invisibility April 2009, SlowTV Split Ring Resonator Calculator: Online tool to calculate the LC equivalent circuit and resonant frequency of SRR and CSRR topologies. Resonators Materials science Electromagnetic radiation Metamaterials Scattering, absorption and radiative transfer (optics) Optical materials
Split-ring resonator
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,858
[ "Physical phenomena", " absorption and radiative transfer (optics)", "Applied and interdisciplinary physics", "Metamaterials", "Electromagnetic radiation", "Materials science", "Materials", "Radiation", "Optical materials", "Scattering", "nan", "Matter" ]
8,850,128
https://en.wikipedia.org/wiki/2-Chloropropionic%20acid
2-Chloropropionic acid (2-chloropropanoic acid) is the chemical compound with the formula CH3CHClCO2H. This colorless liquid is the simplest chiral chlorocarboxylic acid, and it is noteworthy for being readily available as a single enantiomer. The conjugate base of 2-chloropropionic acid (CH3CHClCO2−), as well as its salts and esters, are known as 2-chloropropionates or 2-chloropropanoates. Preparation Racemic 2-chloropropionic acid is produced by chlorination of propionyl chloride followed by hydrolysis of the 2-chloropropionyl chloride. Enantiomerically pure (S)-2-chloropropionic acid can be prepared from L-alanine via diazotization in hydrochloric acid. Other α-amino acids undergo this reaction. Reactions Reduction of (S)-2-chloropropionic acid with lithium aluminium hydride affords (S)-2-chloropropanol, the simplest chiral chloro-alcohol. This alcohol undergoes cyclization upon treatment with potassium hydroxide, which causes dehydrohalogenation to give the epoxide, (R)-propylene oxide (methyloxirane). 2-Chloropropionyl chloride reacts with isobutylbenzene to give, after hydrolysis, ibuprofen. Safety In general, α-halocarboxylic acids and their esters are good alkylating agents and should be handled with care. 2-Chloropropionic acid is a neurotoxin. See also 2,2-Dichloropropionic acid References Carboxylic acids Organochlorides
2-Chloropropionic acid
[ "Chemistry" ]
400
[ "Carboxylic acids", "Functional groups" ]
8,850,372
https://en.wikipedia.org/wiki/Counterimmunoelectrophoresis
Counterimmunoelectrophoresis is a laboratory technique used to evaluate the binding of an antibody to its antigen. It is similar to immunodiffusion, but with the addition of an applied electrical field across the diffusion medium, usually an agar or polyacrylamide gel. The effect is rapid migration of the antibody and antigen out of their respective wells toward one another to form a line of precipitation, or precipitin line, indicating binding. See also Electrophoresis Immunoelectrophoresis References External links https://web.archive.org/web/20070613005107/http://www.lib.mcg.edu/edu/esimmuno/ch4/electro.htm Immunologic tests Blood tests
Counterimmunoelectrophoresis
[ "Chemistry", "Biology" ]
174
[ "Blood tests", "Chemical pathology", "Immunologic tests" ]
8,851,414
https://en.wikipedia.org/wiki/Low-temperature%20thermal%20desorption
For environmental remediation, low-temperature thermal desorption (LTTD), also known as low-temperature thermal volatilization, thermal stripping, and soil roasting, is an ex-situ remedial technology that uses heat to physically separate petroleum hydrocarbons from excavated soils. Thermal desorbers are designed to heat soils to temperatures sufficient to cause constituents to volatilize and desorb (physically separate) from the soil. Although they are not designed to decompose organic constituents, thermal desorbers can, depending upon the specific organics present and the temperature of the desorber system, cause some organic constituents to completely or partially decompose. The vaporized hydrocarbons are generally treated in a secondary treatment unit (e.g., an afterburner, catalytic oxidation chamber, condenser, or carbon adsorption unit) prior to discharge to the atmosphere. Afterburners and oxidizers destroy the organic constituents. Condensers and carbon adsorption units trap organic compounds for subsequent treatment or disposal. Some preprocessing and postprocessing of soil is necessary when using LTTD. Excavated soils are first screened to remove large (greater than 2 inches in diameter) objects. These may be sized (e.g., crushed or shredded) and then introduced back into the feed material. After leaving the desorber, soils are cooled, re-moistened to control dust, and stabilized (if necessary) to prepare them for disposal or reuse. Treated soil may be redeposited onsite, used as cover in landfills, or incorporated into asphalt. Application LTTD has proven very effective in reducing concentrations of petroleum products including gasoline, jet fuels, kerosene, diesel fuel, heating oils, and lubricating oils. LTTD is applicable to constituents that are volatile at temperatures up to 1,200 °F. Most desorbers operate at temperatures between 300 °F and 1,000 °F. Desorbers constructed of special alloys can operate at temperatures up to 1,200 °F. More volatile products (e.g. gasoline) can be desorbed at the lower end of the operating range, while semivolatile products (e.g. kerosene, diesel fuel) generally need temperatures over 700 °F, and relatively nonvolatile products (e.g. heating oil, lubricating oils) need even higher temperatures. Essentially all soil types are amenable to treatment by LTTD systems. However, different soils may require varying degrees and types of pretreatment. For example, coarse-grained soils (e.g. gravel and cobbles) may require crushing; fine-grained soils that are excessively cohesive (e.g. clay) may require shredding. State and local regulations specify that petroleum-contaminated soils must be pilot tested by processing some soil from the site through the LTTD system (a "test burn"). The results of preliminary testing of soil samples should identify the relevant constituent properties, and examination of the machine's performance records should indicate how effective the system will be in treating the soil. The proven effectiveness of a particular system for a specific site or waste does not ensure that it will be effective at all sites or that the treatment efficiencies achieved will be acceptable at other sites. If a test burn is conducted, it is important to ensure that the soil tested is representative of average conditions and that enough samples are analyzed before and after treatment to confidently determine whether LTTD will be effective. Operation of LTTD units requires various permits and demonstration of compliance with permit requirements.
Monitoring requirements for LTTD systems are by their nature different from the monitoring required at an underground storage tank (UST) site. Monitoring of LTTD system waste streams (e.g. concentrations of particulates, volatiles, and carbon monoxide in stack gas) is required by the agency or agencies issuing the permits for operation of the facility. The LTTD facility owner/operator is responsible for complying with limits specified by the permits and for other LTTD system operating parameters (e.g. desorber temperature, soil feed rate, afterburner temperature). The decision as to whether or not LTTD is a practical remedial alternative depends upon site-specific characteristics (e.g. the location and volume of contaminated soils, site layout). Practicability is also determined by regulatory, logistical, and economic considerations. The economics of LTTD as a remedial option are highly site-specific. Economic factors include: site usage (because excavation and onsite soil treatment at a retail site (e.g. gasoline station, convenience store) will most likely prevent the business from operating for an extended period); the cost of LTTD per unit volume of soil relative to other remedial options; and the location of the nearest applicable LTTD system (because transportation costs are a function of distance). Operation principles Thermal desorption systems fall into two general classes: stationary facilities and mobile units. Contaminated soils are excavated and transported to stationary facilities; mobile units can be operated directly onsite. Desorption units are available in a variety of process configurations including rotary desorbers, asphalt plant aggregate dryers, thermal screws, and conveyor furnaces. The plasticity of the soil is a measure of its ability to deform without shearing and is to some extent a function of water content. Plastic soils tend to stick to screens and other equipment and agglomerate into large clumps. Besides slowing down the feed rate, plastic soils are difficult to treat. Heating plastic soils requires higher temperatures because of the low surface-area-to-volume ratio and increased moisture content. Also, because plastic soils tend to be very fine-grained, organic compounds tend to be tightly sorbed. Thermal treatment of highly plastic soils requires pretreatment, such as shredding or blending with more friable soils or other amendments (e.g. gypsum). Material larger than 2 inches in diameter will need to be crushed or removed. Crushed material is recycled back into the feed to be processed. Coarser-grained soils tend to be free-flowing and do not agglomerate into clumps. They typically do not retain excessive moisture; therefore, contaminants are easily desorbed. Finer-grained soils tend to retain soil moisture and agglomerate into clumps. When dry, they may yield large amounts of particulates that may require recycling after being intercepted in the baghouse. The solids processing capacity of a thermal desorption system is inversely proportional to the moisture content of the feed material. The presence of moisture in the excavated soils to be treated in the LTTD unit will determine the residence time required and the heating requirements for effective removal of contaminants. In order for desorption of petroleum constituents to occur, most of the soil moisture must be evaporated in the desorber. This process can require significant additional thermal input to the desorber and excessive residence time for the soil in the desorber.
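The extra thermal input can be estimated with a simple heat balance. The sketch below is illustrative only: the specific heats, latent heat, and temperatures are typical textbook values (assumptions, not data from any particular desorber), and heat losses are ignored.

```python
# Approximate heat duty to bring one ton of moist soil to desorption
# temperature, including evaporating its moisture (losses neglected).
CP_SOIL  = 0.2      # Btu/(lb.degF), typical dry soil (assumed)
CP_WATER = 1.0      # Btu/(lb.degF)
H_VAP    = 970.0    # Btu/lb, latent heat of water near 212 degF

def heat_duty_btu(soil_lb, moisture_frac, t_in=60.0, t_out=600.0):
    """Sensible heat for the dry soil plus sensible and latent heat
    for its moisture (water heated to 212 degF, then vaporized)."""
    dry = soil_lb * (1.0 - moisture_frac)
    wet = soil_lb * moisture_frac
    q_soil = dry * CP_SOIL * (t_out - t_in)
    q_water = wet * (CP_WATER * (212.0 - t_in) + H_VAP)
    return q_soil + q_water

for m in (0.05, 0.10, 0.20):
    print(f"moisture {m:4.0%}: {heat_duty_btu(2000, m):,.0f} Btu per ton")
# Going from 5% to 20% moisture roughly doubles the heat duty, which is
# why soils above about 20% moisture are dewatered before treatment.
```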
Moisture content also influences plasticity, which affects handling of the soil. Soils with excessive moisture content (greater than 20%) must be dewatered. Typical dewatering methods include air drying (if storage space is available to spread the soils), mixing with drier soils, or mechanical dewatering. The presence of metals in soil can have two implications: limitations on disposal of the solid wastes generated by desorption, and attention to air pollution control regulations that limit the amount of metals that may be released in stack emissions. At normal LTTD operating temperatures, heavy metals are not likely to be significantly separated from soils. High concentrations of petroleum products in soil can result in high soil heating values. Heat released from soils can result in overheating and damage to the desorber. Soils with heating values greater than 2,000 Btu/lb require blending with cleaner soils to dilute the high concentration of hydrocarbons. High hydrocarbon concentrations in the offgas may exceed the thermal capacity of the afterburner and potentially result in the release of untreated vapors into the atmosphere. Excessive constituent levels in soil could also potentially result in the generation of vapors in the desorber at concentrations exceeding the lower explosive limit (LEL). If the LEL is exceeded, there is a potential for explosion.
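The required blending ratio follows from a simple mass balance (a lever-rule sketch; the heating values below are hypothetical):

```python
def clean_soil_fraction(hot_btu_lb, clean_btu_lb, target_btu_lb=2000.0):
    """Mass fraction of clean soil needed so the blend's heating value
    does not exceed the target (linear mixing by mass assumed)."""
    if hot_btu_lb <= target_btu_lb:
        return 0.0
    return (hot_btu_lb - target_btu_lb) / (hot_btu_lb - clean_btu_lb)

# Hypothetical example: contaminated soil at 3,500 Btu/lb blended with
# clean soil at 200 Btu/lb to reach the 2,000 Btu/lb limit.
frac = clean_soil_fraction(3500.0, 200.0)
print(f"clean soil required: {frac:.0%} of the blended mass")  # about 45%
```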
System design The term "thermal desorber" describes the primary treatment operation that heats petroleum-contaminated materials and desorbs organic materials into a purge gas. Mechanical design features and process operating conditions vary considerably among the various types of LTTD systems. Desorption units are available in four configurations: rotary dryer, asphalt plant aggregate dryer, thermal screw, and conveyor furnace. Although all LTTD systems use heat to separate (desorb) organic contaminants from the soil matrix, each system has a different configuration with its own set of advantages and disadvantages. The decision to use one system over another depends on the nature of the contaminants as well as machine availability, system performance, and economic considerations. System performance may be evaluated on the basis of pilot tests (e.g., test burns) or examination of historical machine performance records. Pilot tests to develop treatment conditions are generally not necessary for petroleum-contaminated soils. Rotary dryer Rotary dryer systems use a cylindrical metal reactor (drum) that is inclined slightly from the horizontal. A burner located at one end provides heat to raise the temperature of the soil sufficiently to desorb organic contaminants. The flow of soil may be either cocurrent with or countercurrent to the direction of the purge gas flow. As the drum rotates, soil is conveyed through the drum. Lifters raise the soil, carrying it to near the top of the drum before allowing it to fall through the heated purge gas. Mixing in a rotary dryer enhances heat transfer by convection and allows soils to be rapidly heated. Rotary desorber units are manufactured for a wide range of treatment capacities; these units may be either stationary or mobile. The maximum soil temperature that can be obtained in a rotary dryer depends on the composition of the dryer shell. The soil discharge temperature of carbon steel drums is typically 300 °F to 600 °F. Alloy drums are available that can increase the soil discharge temperature to 1,200 °F. Most rotary dryers that are used to treat petroleum-contaminated soil are made of carbon steel. After the treated soil exits the rotary dryer, it enters a cooling conveyor where water is sprayed on the soil for cooling and dust control. Water addition may be conducted in either a screw conveyor or a pugmill. Besides the direction of purge gas flow relative to soil feed direction, there is one major difference in configuration between countercurrent and cocurrent rotary dryers. The purge gas from a countercurrent rotary dryer is typically only 350 °F to 500 °F and does not require cooling before entering the baghouse, where fine particles are trapped. A disadvantage is that these particles may not have been decontaminated, and they are typically recycled to the dryer. Countercurrent dryers have several advantages over cocurrent systems. They are more efficient in transferring heat from purge gas to contaminated soil, and the volume and temperature of exit gas are lower, allowing the gas to go directly to a baghouse without needing to be cooled. The cooler exit gas temperature and smaller volume eliminate the need for a cooling unit, which allows downstream processing equipment to be smaller. Countercurrent systems are effective on petroleum products with molecular weights lower than that of No. 2 fuel oil. In cocurrent systems, the purge gas is 50 °F to 100 °F hotter than the soil discharge temperature. The result is that the purge gas exit temperature may range from 400 °F to 1,000 °F and cannot go directly to the baghouse. Purge gas first enters an afterburner to decontaminate the fine particles, then goes into a cooling unit prior to introduction into the baghouse. Because of the higher temperature and volume of the purge gas, the baghouse and all other downstream processing equipment must be larger than in a countercurrent system. Cocurrent systems do have several advantages over countercurrent systems: the afterburner is located upstream of the baghouse, ensuring that fine particles are decontaminated; and because the heated purge gas is introduced at the same end of the drum as the feed soil, the soil is heated faster, resulting in a longer residence time. Higher temperatures and longer residence time mean that cocurrent systems can be used to treat soils contaminated with heavier petroleum products. Cocurrent systems are effective for light and heavy petroleum products including No. 6 fuel oil, crude oil, motor oil, and lubricating oil. Asphalt plant aggregate dryer Hot-mix asphalt plants use aggregate that has been processed in a dryer before it is mixed with liquid asphalt. The use of petroleum-contaminated soils as aggregate material is widespread. Aggregate dryers may be either stationary or mobile. Soil treatment capacities range from 25 to 150 tons per hour. The soil may be incorporated into the asphalt as a recycling process, or the treated soil may be used for other purposes. Asphalt rotary dryers are normally constructed of carbon steel and have a soil discharge temperature of 300 °F to 600 °F. Typically, asphalt plant aggregate dryers are identical to the countercurrent rotary desorbers described above and are effective on the same types of contaminants. The primary difference is that an afterburner is not required for incorporation of clean aggregate into the asphalt mix. In some areas, asphalt plants that use petroleum-contaminated soil for aggregate may be required to be equipped with an afterburner. Thermal screw A thermal screw desorber typically consists of a series of one to four augers.
The auger system conveys, mixes, and heats contaminated soils to volatilize moisture and organic contaminants into a purge gas stream. Augers can be arranged in series to increase the soil residence time, or they can be configured in parallel to increase throughput capacity. Most thermal screw systems circulate a hot heat-transfer oil through the hollow flights of the auger and return the hot oil through the shaft to the heat-transfer fluid heating system. The heated oil is also circulated through the jacketed trough in which each auger rotates. Thermal screws can also be steam-heated. Systems heated with oil can achieve soil temperatures of up to 500 °F, and steam-heated systems can heat soil to approximately 350 °F. Most of the gas generated during heating of the heat-transfer oil does not come into contact with the waste material and can be discharged directly to the atmosphere without emission controls. The remainder of the flue gas maintains the thermal screw purge gas exit temperature above 300 °F. This ensures that volatilized organics and moisture do not condense. In addition, the recycled flue gas has a low oxygen content (less than 2% by volume), which minimizes oxidation of the organics and reduces the explosion hazard. If pretreatment analytical data indicate a high organic content (greater than 4 percent), use of a thermal screw is recommended. After the treated soil exits the thermal screw, water is sprayed on the soil for cooling and dust control. Thermal screws are available with soil treatment capacities ranging from 3 to 15 tons per hour. Since thermal screws are indirectly heated, the volume of purge gas from the primary thermal treatment unit is less than one half of the volume from a directly heated system with an equivalent soil processing capacity. Therefore, offgas treatment systems consist of relatively small unit operations that are well suited to mobile applications. Indirect heating also allows thermal screws to process materials with high organic contents, since the recycled flue gas is inert, thereby reducing the explosion hazard. Conveyor furnace A conveyor furnace uses a flexible metal belt to convey soil through the primary heating chamber. A one-inch-deep layer of soil is spread evenly over the belt. As the belt moves through the system, soil agitators lift the belt and turn the soil to enhance heat transfer and volatilization of organics. The conveyor furnace can heat soils to temperatures from 300 °F to 800 °F. At the higher temperature range, the conveyor furnace is more effective in treating some heavier petroleum hydrocarbons than are oil- or steam-heated thermal screws, asphalt plant aggregate dryers, and carbon steel rotary dryers. After the treated soil exits the conveyor furnace, it is sprayed with water for cooling and dust control. As of February 1993, only one conveyor furnace system was in use for the remediation of petroleum-contaminated soil. This system is mobile and can treat 5 to 10 tons of soil per hour. Offgas treatment Offgas treatment systems for LTTD systems are designed to address three types of air pollutants: particulates, organic vapors, and carbon monoxide. Particulates are controlled with both wet (e.g., venturi scrubbers) and dry (e.g., cyclones, baghouses) unit operations. Rotary dryers and asphalt aggregate dryers most commonly use dry gas-cleaning unit operations. Cyclones are used to capture large particulates and reduce the particulate load to the baghouse.
Baghouses are used as the final particulate control device. Thermal screw systems typically use a venturi scrubber as the primary particulate control. The control of organic vapors is achieved by either destruction or collection. Afterburners are used downstream of rotary dryers and conveyor furnaces to destroy organic contaminants and oxidize carbon monoxide. Conventional afterburners are designed so that exit gas temperatures reach 1,400 °F to 1,600 °F. Organic destruction efficiency typically ranges from 95% to greater than 99%. Condensers and activated carbon may also be used to treat the offgas from thermal screw systems. Condensers may be either water-cooled or electrically cooled systems to decrease offgas temperatures to 100 °F to 140 °F. The efficiency of condensers for removing organic compounds ranges from 50% to greater than 95%. Noncondensible gases exiting the condenser are normally treated by a vapor-phase activated carbon treatment system. The efficiency of activated carbon adsorption systems for removing organic contaminants ranges from 50% to 99%. Condensate from the condenser is processed through a phase separator where the non-aqueous phase organic component is separated and disposed of or recycled. The remaining water is then processed through activated carbon and used to rehumidify treated soil. Treatment temperature is a key parameter affecting the degree of treatment of organic components. The required treatment temperature depends upon the specific types of petroleum contamination in the soil. The actual temperature achieved by an LTTD system is a function of the moisture content and heat capacity of the soil, soil particle size, and the heat transfer and mixing characteristics of the thermal desorber. Residence time is a key parameter affecting the degree to which decontamination is achievable. Residence time depends upon the design and operation of the system, characteristics of the contaminants and the soil, and the degree of treatment required. References Technology hazards Petroleum technology Oil spill remediation technologies
Low-temperature thermal desorption
[ "Chemistry", "Technology", "Engineering" ]
3,977
[ "Petroleum engineering", "Petroleum technology", "nan" ]
1,590,747
https://en.wikipedia.org/wiki/Franck%E2%80%93Condon%20principle
The Franck–Condon principle describes the intensities of vibronic transitions: simultaneous changes in the electronic and vibrational energy levels of a molecule due to the absorption or emission of a photon. It states that when a molecule undergoes an electronic transition, such as ionization, the nuclear configuration of the molecule experiences no significant change. Overview The Franck–Condon principle has a well-established semiclassical interpretation based on the original contributions of James Franck. Electronic transitions are relatively instantaneous compared with the time scale of nuclear motions; therefore, if the molecule is to move to a new vibrational level during the electronic transition, this new vibrational level must be instantaneously compatible with the nuclear positions and momenta of the vibrational level of the molecule in the originating electronic state. In the semiclassical picture of vibrations (oscillations) of a simple harmonic oscillator, the necessary conditions can occur at the turning points, where the momentum is zero. In the quantum mechanical picture, the vibrational levels and vibrational wavefunctions are those of quantum harmonic oscillators, or of more complex approximations to the potential energy of molecules, such as the Morse potential. Figure 1 illustrates the Franck–Condon principle for vibronic transitions in a molecule with Morse-like potential energy functions in both the ground and excited electronic states. In the low-temperature approximation, the molecule starts out in the v = 0 vibrational level of the ground electronic state and, upon absorbing a photon of the necessary energy, makes a transition to the excited electronic state. The electron configuration of the new state may result in a shift of the equilibrium position of the nuclei constituting the molecule. In the figure this shift in nuclear coordinates between the ground and the first excited state is labeled as q01. In the simplest case of a diatomic molecule, the nuclear coordinates axis refers to the internuclear separation. The vibronic transition is indicated by a vertical arrow due to the assumption of constant nuclear coordinates during the transition. The probability that the molecule can end up in any particular vibrational level is proportional to the square of the (vertical) overlap of the vibrational wavefunctions of the original and final state (see Quantum mechanical formulation section below). In the electronic excited state, molecules quickly relax to the lowest vibrational level of the lowest electronic excitation state (Kasha's rule), and from there can decay to the electronic ground state via photon emission. The Franck–Condon principle is applied equally to absorption and to fluorescence. The applicability of the Franck–Condon principle in both absorption and fluorescence, along with Kasha's rule, leads to an approximate mirror symmetry, shown in Figure 2. The vibrational structure of molecules in a cold, sparse gas is most clearly visible due to the absence of inhomogeneous broadening of the individual transitions. Vibronic transitions are drawn in Figure 2 as narrow, equally spaced Lorentzian line shapes. Equal spacing between vibrational levels is only the case for the parabolic potential of simple harmonic oscillators; in more realistic potentials, such as those shown in Figure 1, energy spacing decreases with increasing vibrational energy. Electronic transitions to and from the lowest vibrational states are often referred to as 0–0 (zero zero) transitions and have the same energy in both absorption and fluorescence.
Development of the principle In a report published in 1926 in Transactions of the Faraday Society, James Franck was concerned with the mechanisms of photon-induced chemical reactions. The presumed mechanism was the excitation of a molecule by a photon, followed by a collision with another molecule during the short period of excitation. The question was whether it was possible for a molecule to break into photoproducts in a single step, the absorption of a photon, and without a collision. In order for a molecule to break apart, it must acquire from the photon a vibrational energy exceeding the dissociation energy, that is, the energy to break a chemical bond. However, as was known at the time, molecules will only absorb energy corresponding to allowed quantum transitions, and there are no vibrational levels above the dissociation energy level of the potential well. High-energy photon absorption leads to a transition to a higher electronic state instead of dissociation. In examining how much vibrational energy a molecule could acquire when it is excited to a higher electronic level, and whether this vibrational energy could be enough to immediately break apart the molecule, he drew three diagrams representing the possible changes in binding energy between the lowest electronic state and higher electronic states. James Franck recognized that changes in vibrational levels could be a consequence of the instantaneous nature of excitation to higher electronic energy levels and a new equilibrium position for the nuclear interaction potential. Edward Condon extended this insight beyond photoreactions in a 1926 Physical Review article titled "A Theory of Intensity Distribution in Band Systems". Here he formulated the semiclassical principle in a manner quite similar to its modern form. The first joint reference to both Franck and Condon in regard to the new principle appears in the same 1926 issue of Physical Review, in an article on the band structure of carbon monoxide by Raymond Birge. Quantum mechanical formulation Consider an electrical dipole transition from the initial vibrational state (v) of the ground electronic level (ε), written |εv⟩, to some vibrational state (v′) of an excited electronic state (ε′), written |ε′v′⟩ (see bra–ket notation). The molecular dipole operator μ is determined by the charge (−e) and locations (rᵢ) of the electrons as well as the charges (+Zⱼe) and locations (Rⱼ) of the nuclei:

$$\boldsymbol{\mu} = \boldsymbol{\mu}_e + \boldsymbol{\mu}_N = -e\sum_i \boldsymbol{r}_i + e\sum_j Z_j \boldsymbol{R}_j$$

The probability amplitude P for the transition between these two states is given by

$$P = \langle \psi' | \boldsymbol{\mu} | \psi \rangle$$

where ψ and ψ′ are, respectively, the overall wavefunctions of the initial and final state. The overall wavefunctions are the product of the individual vibrational (depending on spatial coordinates of the nuclei) and electronic space and spin wavefunctions:

$$\psi = \psi_e \, \psi_v \, \psi_s$$

This separation of the electronic and vibrational wavefunctions is an expression of the Born–Oppenheimer approximation and is the fundamental assumption of the Franck–Condon principle. Combining these equations leads to an expression for the probability amplitude in terms of separate electronic space, spin and vibrational contributions:

$$P = \langle \psi_e' \psi_v' | \boldsymbol{\mu} | \psi_e \psi_v \rangle \, \langle \psi_s' | \psi_s \rangle$$

The spin-independent part of the initial integral is here approximated as a product of two integrals:

$$\langle \psi_e' \psi_v' | \boldsymbol{\mu} | \psi_e \psi_v \rangle \approx \langle \psi_v' | \psi_v \rangle \, \langle \psi_e' | \boldsymbol{\mu}_e | \psi_e \rangle + \langle \psi_e' | \psi_e \rangle \, \langle \psi_v' | \boldsymbol{\mu}_N | \psi_v \rangle$$

This factorization would be exact if the integral over the spatial coordinates of the electrons did not depend on the nuclear coordinates. However, in the Born–Oppenheimer approximation the electronic wavefunctions do depend (parametrically) on the nuclear coordinates, so that the integral (a so-called transition dipole surface) is a function of nuclear coordinates. Since the dependence is usually rather smooth, it is neglected (i.e., the assumption that the transition dipole surface is independent of nuclear coordinates, called the Condon approximation, is often allowed). The first integral after the plus sign is equal to zero because electronic wavefunctions of different states are orthogonal. Remaining is the product of three integrals:

$$P \approx \langle \psi_v' | \psi_v \rangle \, \langle \psi_e' | \boldsymbol{\mu}_e | \psi_e \rangle \, \langle \psi_s' | \psi_s \rangle$$

The first integral is the vibrational overlap integral, also called the Franck–Condon factor.
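For the special case of two harmonic potential surfaces with the same vibrational frequency but displaced minima, the Franck–Condon factors from v = 0 have the closed form |⟨0|v′⟩|² = e^(−S) S^(v′)/v′!, where S is the Huang–Rhys factor measuring the displacement. A minimal numerical sketch (the value of S is illustrative):

```python
import math

def franck_condon_factor(S, v_prime):
    """|<0|v'>|^2 for equal-frequency harmonic potentials whose minima
    are displaced; S is the dimensionless Huang-Rhys factor."""
    return math.exp(-S) * S**v_prime / math.factorial(v_prime)

S = 1.5   # illustrative displacement between ground- and excited-state minima
for v in range(6):
    print(f"0 -> v' = {v}: FC factor = {franck_condon_factor(S, v):.3f}")
print("sum =", round(sum(franck_condon_factor(S, v) for v in range(50)), 6))
# The factors form a Poisson distribution peaking near v' ~ S, so the
# strongest vibronic band is not the 0-0 line when the minima are displaced.
```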
The remaining two integrals contributing to the probability amplitude determine the electronic spatial and spin selection rules.

The Franck–Condon principle is a statement on allowed vibrational transitions between two different electronic states; other quantum mechanical selection rules may lower the probability of a transition or prohibit it altogether. Rotational selection rules have been neglected in the above derivation. Rotational contributions can be observed in the spectra of gases but are strongly suppressed in liquids and solids. It should be clear that the quantum mechanical formulation of the Franck–Condon principle is the result of a series of approximations, principally the electric dipole transition assumption and the Born–Oppenheimer approximation. Weaker magnetic dipole and electric quadrupole electronic transitions, along with the incomplete validity of the factorization of the total wavefunction into nuclear, electronic spatial, and spin wavefunctions, mean that the selection rules, including the Franck–Condon factor, are not strictly observed. For any given transition, the value of P is determined by all of the selection rules; however, spin selection is the largest contributor, followed by the electronic selection rules. The Franck–Condon factor only weakly modulates the intensity of transitions; that is, it contributes a factor on the order of 1 to the intensity of bands whose order of magnitude is determined by the other selection rules. As a rough guide, transitions that are both spin- and orbitally allowed have molar extinction coefficients on the order of 10^3 to 10^5 L mol^-1 cm^-1, spin-allowed but orbitally forbidden transitions on the order of 10^0 to 10^3, and spin-forbidden transitions on the order of 10^-5 to 10^0.

Franck–Condon metaphors in spectroscopy

The Franck–Condon principle, in its canonical form, applies only to changes in the vibrational levels of a molecule in the course of a change in electronic levels by either absorption or emission of a photon. The physical intuition of this principle is anchored by the idea that the nuclear coordinates of the atoms constituting the molecule do not have time to change during the very brief amount of time involved in an electronic transition. However, this physical intuition can be, and routinely is, extended to interactions between light-absorbing or emitting molecules (chromophores) and their environment. Franck–Condon metaphors are appropriate because molecules often interact strongly with surrounding molecules, particularly in liquids and solids, and these interactions modify the nuclear coordinates of the chromophore in ways closely analogous to the molecular vibrations considered by the Franck–Condon principle.

Franck–Condon principle for phonons

The closest Franck–Condon analogy is the interaction of phonons (quanta of lattice vibrations) with the electronic transitions of chromophores embedded as impurities in the lattice.
In this situation, transitions to higher electronic levels can take place when the energy of the photon corresponds to the purely electronic transition energy or to the purely electronic transition energy plus the energy of one or more lattice phonons. In the low-temperature approximation, emission is from the zero-phonon level of the excited state to the zero-phonon level of the ground state or to higher phonon levels of the ground state. Just as in the Franck–Condon principle, the probability of transitions involving phonons is determined by the overlap of the phonon wavefunctions at the initial and final energy levels. For the Franck–Condon principle applied to phonon transitions, the label of the horizontal axis of Figure 1 is replaced in Figure 6 with the configurational coordinate for a normal mode. The lattice-mode potential energy in Figure 6 is represented as that of a harmonic oscillator, and the spacing between phonon levels ($\hbar\Omega$) is determined by lattice parameters. Because the energy of single phonons is generally quite small, zero- or few-phonon transitions can only be observed at temperatures below about 40 kelvins. See Zero-phonon line and phonon sideband for further details and references.

Franck–Condon principle in solvation

Franck–Condon considerations can also be applied to the electronic transitions of chromophores dissolved in liquids. In this use of the Franck–Condon metaphor, the vibrational levels of the chromophores, as well as interactions of the chromophores with phonons in the liquid, continue to contribute to the structure of the absorption and emission spectra, but these effects are considered separately and independently.

Consider chromophores surrounded by solvent molecules. These surrounding molecules may interact with the chromophores, particularly if the solvent molecules are polar. This association between solvent and solute is referred to as solvation and is a stabilizing interaction; that is, the solvent molecules can move and rotate until the energy of the interaction is minimized. The interaction itself involves electrostatic and van der Waals forces and can also include hydrogen bonds. Franck–Condon principles can be applied when the interactions between the chromophore and the surrounding solvent molecules are different in the ground and in the excited electronic state. This change in interaction can originate, for example, from different dipole moments in these two states. If the chromophore starts in its ground state, close to equilibrium with the surrounding solvent molecules, and then absorbs a photon that takes it to the excited state, its interaction with the solvent will be far from equilibrium in the excited state. This effect is analogous to the original Franck–Condon principle: the electronic transition is very fast compared with the motion of the nuclei, which here means the rearrangement of the solvent molecules. One still speaks of a vertical transition, but the horizontal coordinate is now a solvent–solute interaction space. This coordinate axis is often labeled as "solvation coordinate" and represents, somewhat abstractly, all of the relevant dimensions of motion of all of the interacting solvent molecules. In the original Franck–Condon principle, after the electronic transition, molecules which end up in higher vibrational states immediately begin to relax to the lowest vibrational state. In the case of solvation, the solvent molecules immediately try to rearrange themselves in order to minimize the interaction energy; a minimal numerical sketch of the resulting spectral relaxation follows.
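The progress of this rearrangement is commonly summarized by the normalized spectral response function C(t) = (ν(t) − ν(∞)) / (ν(0) − ν(∞)). The sketch below assumes a single-exponential C(t), which is a simplification (measured solvation responses are usually multi-exponential), and all numerical values are illustrative rather than taken from the text.

import math

def emission_peak(t, nu_0, nu_inf, tau_solv):
    """Time-dependent emission peak (here in cm^-1) for a chromophore
    whose solvent shell relaxes exponentially with time constant
    tau_solv; nu_0 is the peak immediately after excitation and
    nu_inf the fully solvent-relaxed peak."""
    C = math.exp(-t / tau_solv)          # spectral response function C(t)
    return nu_inf + (nu_0 - nu_inf) * C

# Illustrative numbers: 20 ps solvent relaxation time and a
# 1500 cm^-1 total dynamic Stokes shift.
for t in (0.0, 5.0, 20.0, 60.0):         # picoseconds
    print(t, round(emission_peak(t, nu_0=20000.0, nu_inf=18500.0, tau_solv=20.0)))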
The rate of solvent relaxation depends on the viscosity of the solvent. Assuming the solvent relaxation time is short compared with the lifetime of the electronic excited state, emission will be from the lowest solvent-energy state of the excited electronic state. For small-molecule solvents such as water or methanol at ambient temperature, the solvent relaxation time is on the order of some tens of picoseconds, whereas chromophore excited-state lifetimes range from a few picoseconds to a few nanoseconds. Immediately after the transition back to the ground electronic state, the solvent molecules must also rearrange themselves to accommodate the new electronic configuration of the chromophore. Figure 7 illustrates the Franck–Condon principle applied to solvation. When the solution is illuminated by light corresponding to the electronic transition energy, some of the chromophores will move to the excited state. Within this group of chromophores there will be a statistical distribution of solvent–chromophore interaction energies, represented in the figure by a Gaussian distribution function. The solvent–chromophore interaction is drawn as a parabolic potential in both electronic states. Since the electronic transition is essentially instantaneous on the time scale of solvent motion (vertical arrow), the collection of excited-state chromophores is immediately far from equilibrium. The rearrangement of the solvent molecules according to the new potential energy curve is represented by the curved arrows in Figure 7. Note that while the electronic transitions are quantized, the chromophore–solvent interaction energy is treated as a classical continuum due to the large number of molecules involved. Although emission is depicted as taking place from the minimum of the excited-state chromophore–solvent interaction potential, significant emission can take place before equilibrium is reached if the viscosity of the solvent is high or the lifetime of the excited state is short. The energy difference between absorbed and emitted photons depicted in Figure 7 is the solvation contribution to the Stokes shift.

See also

Born–Oppenheimer approximation
Molecular electronic transition
Ultraviolet-visible spectroscopy
Quantum harmonic oscillator
Morse potential
Vibronic coupling
Zero-phonon line and phonon sideband
Sudden approximation

References

Quantum chemistry Spectroscopy Molecular physics
Franck–Condon principle
[ "Physics", "Chemistry" ]
3,172
[ "Molecular physics", "Spectrum (physical sciences)", "Quantum chemistry", "Instrumental analysis", "Quantum mechanics", "Theoretical chemistry", " molecular", "nan", "Atomic", "Spectroscopy", " and optical physics" ]
1,590,842
https://en.wikipedia.org/wiki/Work%20hardening
Work hardening, also known as strain hardening, is the process by which a material's load-bearing capacity (strength) increases during plastic (permanent) deformation. This characteristic is what distinguishes ductile materials from brittle ones. Work hardening may be desirable, undesirable, or inconsequential, depending on the application. The strengthening occurs because of dislocation movement and dislocation generation within the crystal structure of the material. Many non-brittle metals with a reasonably high melting point, as well as several polymers, can be strengthened in this fashion. Alloys not amenable to heat treatment, including low-carbon steel, are often work-hardened. Some materials cannot be work-hardened at low temperatures, such as indium; however, others can be strengthened only via work hardening, such as pure copper and aluminum.

Undesirable work hardening

An example of undesirable work hardening occurs during machining, when early passes of a cutter inadvertently work-harden the workpiece surface, causing damage to the cutter during the later passes. Certain alloys are more prone to this than others; superalloys such as Inconel require machining strategies that take it into account. For metal objects designed to flex, such as springs, specialized alloys are usually employed in order to avoid work hardening (a result of plastic deformation) and metal fatigue, with specific heat treatments required to obtain the necessary characteristics.

Intentional work hardening

An example of desirable work hardening is that which occurs in metalworking processes that intentionally induce plastic deformation to exact a shape change. These processes are known as cold working or cold forming processes. They are characterized by shaping the workpiece at a temperature below its recrystallization temperature, usually at ambient temperature. Cold forming techniques are usually classified into four major groups: squeezing, bending, drawing, and shearing. Applications include the heading of bolts and cap screws and the finishing of cold-rolled steel. In cold forming, metal is formed at high speed and high pressure using tool steel or carbide dies. The cold working of the metal increases the hardness, yield strength, and tensile strength.

Theory

Before work hardening, the lattice of the material exhibits a regular, nearly defect-free pattern (almost no dislocations). The defect-free lattice can be created or restored at any time by annealing. As the material is work hardened it becomes increasingly saturated with new dislocations, and more dislocations are prevented from nucleating (a resistance to dislocation formation develops). This resistance to dislocation formation manifests itself as a resistance to plastic deformation; hence the observed strengthening. In metallic crystals, this irreversible deformation is usually carried out on a microscopic scale by defects called dislocations, which are created by fluctuations in local stress fields within the material, culminating in a lattice rearrangement as the dislocations propagate through the lattice. At normal temperatures the dislocations are not annihilated, as they would be by annealing. Instead, the dislocations accumulate, interact with one another, and serve as pinning points or obstacles that significantly impede their motion. This leads to an increase in the yield strength of the material and a subsequent decrease in ductility.
Such deformation increases the concentration of dislocations, which may subsequently form low-angle grain boundaries surrounding sub-grains. Cold working generally results in a higher yield strength, as a result of the increased number of dislocations and the Hall–Petch effect of the sub-grains, and a decrease in ductility. The effects of cold working may be reversed by annealing the material at high temperatures, where recovery and recrystallization reduce the dislocation density. A material's work hardenability can be predicted by analyzing a stress–strain curve, or studied in context by performing hardness tests before and after a process.

Elastic and plastic deformation

Work hardening is a consequence of plastic deformation, a permanent change in shape. This is distinct from elastic deformation, which is reversible. Most materials do not exhibit only one or the other, but rather a combination of the two. The following discussion mostly applies to metals, especially steels, which are well studied. Work hardening occurs most notably for ductile materials such as metals. Ductility is the ability of a material to undergo plastic deformation before fracture (for example, bending a steel rod until it finally breaks).

The tensile test is widely used to study deformation mechanisms. This is because under compression most materials experience trivial (lattice mismatch) and non-trivial (buckling) events before plastic deformation or fracture occurs; the intermediate processes that take place under uniaxial compression before the onset of plastic deformation make the compressive test fraught with difficulties.

A material generally deforms elastically under the influence of small forces; the material returns quickly to its original shape when the deforming force is removed. This phenomenon is called elastic deformation. This behavior in materials is described by Hooke's law. Materials behave elastically until the deforming force increases beyond the elastic limit, also known as the yield stress. At that point, the material is permanently deformed and fails to return to its original shape when the force is removed. This phenomenon is called plastic deformation. For example, if one stretches a coil spring up to a certain point, it will return to its original shape, but once it is stretched beyond the elastic limit, it will remain deformed and will not return to its original state. Elastic deformation stretches the bonds between atoms away from their equilibrium radius of separation, without applying enough energy to break the inter-atomic bonds. Plastic deformation, on the other hand, breaks inter-atomic bonds and therefore involves the rearrangement of atoms in a solid material.

Dislocations and lattice strain fields

In materials science parlance, dislocations are defined as line defects in a material's crystal structure. The bonds surrounding the dislocation are already elastically strained by the defect compared to the bonds between the constituents of the regular crystal lattice. Therefore, these bonds break at relatively lower stresses, leading to plastic deformation. The strained bonds around a dislocation are characterized by lattice strain fields. For example, there are compressively strained bonds directly next to an edge dislocation and bonds strained in tension beyond the end of an edge dislocation. These form compressive strain fields and tensile strain fields, respectively. Strain fields are analogous to electric fields in certain ways.
Specifically, the strain fields of dislocations obey similar laws of attraction and repulsion; in order to reduce overall strain, compressive strains are attracted to tensile strains, and vice versa. The visible (macroscopic) results of plastic deformation are the result of microscopic dislocation motion. For example, the stretching of a steel rod in a tensile tester is accommodated through dislocation motion on the atomic scale.

Increase of dislocations and work hardening

The increase in the number of dislocations is a quantification of work hardening. Plastic deformation occurs as a consequence of work being done on a material; energy is added to the material. In addition, the energy is almost always applied fast enough and in large enough magnitude to not only move existing dislocations, but also to produce a great number of new dislocations by jarring or working the material sufficiently. New dislocations are generated in proximity to a Frank–Read source.

Yield strength is increased in a cold-worked material. Using lattice strain fields, it can be shown that an environment filled with dislocations will hinder the movement of any one dislocation. Because dislocation motion is hindered, plastic deformation cannot occur at normal stresses. Upon application of stresses just beyond the yield strength of the non-cold-worked material, a cold-worked material will continue to deform using the only mechanism available: elastic deformation. The regular scheme of stretching or compressing of interatomic bonds (without dislocation motion) continues to occur, and the modulus of elasticity is unchanged. Eventually the stress is great enough to overcome the strain-field interactions, and plastic deformation resumes.

However, the ductility of a work-hardened material is decreased. Ductility is the extent to which a material can undergo plastic deformation, that is, how far a material can be plastically deformed before fracture. A cold-worked material is, in effect, a normal (brittle) material that has already been extended through part of its allowed plastic deformation. If dislocation motion and plastic deformation have been hindered enough by dislocation accumulation, and stretching of interatomic bonds and elastic deformation have reached their limit, a third mode of deformation occurs: fracture.

Quantification of work hardening

The shear strength, τ, of a material depends on the shear modulus, G, the magnitude of the Burgers vector, b, and the dislocation density, ρ:

$$\tau = \tau_0 + G \alpha b \sqrt{\rho}$$

where τ0 is the intrinsic strength of the material with low dislocation density and α is a correction factor specific to the material. As shown in Figure 1 and the equation above, work hardening has a square-root dependence on the number of dislocations. The material exhibits high strength if there are either high levels of dislocations (greater than 10^14 dislocations per m^2) or no dislocations. A moderate number of dislocations (between 10^7 and 10^9 dislocations per m^2) typically results in low strength.
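A minimal numerical sketch of this square-root (Taylor-type) relationship follows; the function name and the material constants are illustrative assumptions (loosely copper-like), not values from the text.

import math

def taylor_shear_strength(tau0, alpha, G, b, rho):
    """Taylor-type work-hardening law: tau = tau0 + alpha*G*b*sqrt(rho).
    tau0: intrinsic strength at low dislocation density (Pa)
    alpha: dimensionless correction factor (assumed ~0.1-0.5)
    G: shear modulus (Pa), b: Burgers vector (m), rho: dislocations/m^2."""
    return tau0 + alpha * G * b * math.sqrt(rho)

# Illustrative, copper-like constants (assumed):
G, b = 48e9, 0.256e-9
for rho in (1e10, 1e12, 1e14):
    tau = taylor_shear_strength(tau0=10e6, alpha=0.3, G=G, b=b, rho=rho)
    print(f"rho = {rho:.0e} /m^2 -> tau ~ {tau/1e6:.0f} MPa")

Raising the dislocation density by four orders of magnitude thus raises the flow stress by roughly a factor of one hundred's square root, i.e., tenfold, in the hardening term.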
Example

For an extreme example, in a tensile test a bar of steel is strained to just before the length at which it usually fractures. The load is released smoothly, and the material relieves some of its strain by decreasing in length. The decrease in length is called the elastic recovery, and the end result is a work-hardened steel bar. The fraction of length recovered (length recovered/original length) is equal to the yield stress divided by the modulus of elasticity. (Here we discuss true stress in order to account for the drastic decrease in diameter in this tensile test.) The length recovered after removing a load from a material just before it breaks is equal to the length recovered after removing a load just before it enters plastic deformation. The work-hardened steel bar has a large enough number of dislocations that the strain-field interaction prevents all plastic deformation. Subsequent deformation requires a stress that varies linearly with the strain observed; the slope of the graph of stress versus strain is the modulus of elasticity, as usual. The work-hardened steel bar fractures when the applied stress exceeds the usual fracture stress and the strain exceeds the usual fracture strain. This may be considered to be the elastic limit, and the yield stress is now equal to the fracture toughness, which is much higher than the yield stress of steel that has not been work-hardened. The amount of plastic deformation possible is zero, which is less than the amount of plastic deformation possible for a non-work-hardened material. Thus, the ductility of the cold-worked bar is reduced. Substantial and prolonged cavitation can also produce strain hardening.

Empirical relations

There are two common mathematical descriptions of the work hardening phenomenon. Hollomon's equation is a power-law relationship between the stress and the amount of plastic strain:

$$\sigma = K \varepsilon_p^{\,n}$$

where σ is the stress, K is the strength index or strength coefficient, εp is the plastic strain, and n is the strain hardening exponent. Ludwik's equation is similar but includes the yield stress σy:

$$\sigma = \sigma_y + K \varepsilon_p^{\,n}$$

If a material has been subjected to prior deformation (at low temperature), then the yield stress will be increased by a factor depending on the amount of prior plastic strain ε0:

$$\sigma = \sigma_y + K \left(\varepsilon_0 + \varepsilon_p\right)^{n}$$

The constant K is structure dependent and is influenced by processing, while n is a material property normally lying in the range 0.2–0.5. The strain hardening exponent can be described by:

$$n = \frac{d \ln \sigma}{d \ln \varepsilon_p}$$

This equation can be evaluated from the slope of a log(σ) versus log(εp) plot. Rearranging allows a determination of the rate of strain hardening at a given stress and strain:

$$\frac{d\sigma}{d\varepsilon_p} = n\,\frac{\sigma}{\varepsilon_p}$$
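As a worked illustration of extracting n from the slope of a log(σ)–log(εp) plot, the sketch below fits synthetic flow data generated from Hollomon's equation; the helper name and the chosen K and n are assumptions for demonstration only.

import numpy as np

def hollomon_fit(strain, stress):
    """Estimate the strain-hardening exponent n and strength
    coefficient K in sigma = K * eps_p**n from the slope and
    intercept of the log(sigma) vs log(eps_p) line."""
    slope, intercept = np.polyfit(np.log(strain), np.log(stress), 1)
    return slope, float(np.exp(intercept))     # n, K

# Synthetic plastic-flow data generated with n = 0.25, K = 600 MPa:
eps = np.linspace(0.01, 0.2, 20)
sigma = 600e6 * eps**0.25
n, K = hollomon_fit(eps, sigma)
print(f"n = {n:.3f}, K = {K/1e6:.0f} MPa")

With real tensile data the fit would of course be approximate, and the recovered n could be compared against the typical 0.2–0.5 range quoted above.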
Work hardening in specific materials

Steel

Steel is an important engineering material, used in many applications. Steel may be work hardened by deformation at low temperature, called cold working. Typically, an increase in cold work results in a decrease in the strain hardening exponent. Similarly, high-strength steels tend to exhibit a lower strain hardening exponent.

Copper

Copper was the first metal in common use for tools and containers, since it is one of the few metals available in non-oxidized form, not requiring the smelting of an ore. Copper is easily softened by heating and then cooling (it does not harden by quenching, e.g., quenching in cool water). In this annealed state it may then be hammered, stretched, and otherwise formed, progressing toward the desired final shape but becoming harder and less ductile as work progresses. If work continues beyond a certain hardness, the metal will tend to fracture when worked, and so it may be re-annealed periodically as shaping continues. Annealing is stopped when the workpiece is near its final desired shape, so that the final product will have the desired strength and hardness. The technique of repoussé exploits these properties of copper, enabling the construction of durable jewelry articles and sculptures (such as the Statue of Liberty).

Gold and other precious metals

Much gold jewelry is produced by casting, with little or no cold working, which, depending on the alloy grade, may leave the metal relatively soft and bendable. However, a jeweler may intentionally use work hardening to strengthen wearable objects that are exposed to stress, such as rings.

Aluminum

Items made from aluminum and its alloys must be carefully designed to minimize or evenly distribute flexure, which can lead to work hardening and, in turn, stress cracking, possibly causing catastrophic failure. For this reason modern aluminum aircraft have an imposed working lifetime (dependent upon the type of loads encountered), after which the aircraft must be retired.

References

Industrial processes Metallurgical processes Metalworking Strengthening mechanisms of materials
Work hardening
[ "Chemistry", "Materials_science", "Engineering" ]
2,913
[ "Strengthening mechanisms of materials", "Metallurgical processes", "Materials science", "Metallurgy" ]
1,590,904
https://en.wikipedia.org/wiki/Precipitation%20hardening
Precipitation hardening, also called age hardening or particle hardening, is a heat treatment technique used to increase the yield strength of malleable materials, including most structural alloys of aluminium, magnesium, nickel, and titanium, and some steels, stainless steels, and duplex stainless steels. In superalloys, it is known to cause the yield strength anomaly, providing excellent high-temperature strength. Precipitation hardening relies on changes in solid solubility with temperature to produce fine particles of an impurity phase, which impede the movement of dislocations, or defects in a crystal's lattice. Since dislocations are often the dominant carriers of plasticity, this serves to harden the material. The impurities play the same role as the particle substances in particle-reinforced composite materials. Just as the formation of ice in air can produce clouds, snow, or hail, depending upon the thermal history of a given portion of the atmosphere, precipitation in solids can produce many different sizes of particles, which have radically different properties. Unlike ordinary tempering, alloys must be kept at elevated temperature for hours to allow precipitation to take place. This time delay is called "aging". Solution treatment and aging is sometimes abbreviated "STA" in specifications and certificates for metals. Two different heat treatments involving precipitates can alter the strength of a material: solution heat treating and precipitation heat treating. Solution heat treating involves the formation of a single-phase solid solution via quenching; precipitation heat treating involves the addition of impurity particles to increase the material's strength.

Kinetics versus thermodynamics

This technique exploits the phenomenon of supersaturation and involves careful balancing of the driving force for precipitation against the thermal activation energy available for both desirable and undesirable processes. Nucleation occurs at a relatively high temperature (often just below the solubility limit) so that the kinetic barrier of surface energy can be more easily overcome and the maximum number of precipitate particles can form. These particles are then allowed to grow at lower temperature in a process called ageing. This is carried out under conditions of low solubility so that thermodynamics drive a greater total volume of precipitate formation. Diffusion's exponential dependence upon temperature makes precipitation strengthening, like all heat treatments, a fairly delicate process. Too little diffusion (under-ageing), and the particles will be too small to impede dislocations effectively; too much (over-ageing), and they will be too large and dispersed to interact with the majority of dislocations.

Alloy design

Precipitation strengthening is possible if the line of solid solubility slopes strongly toward the center of a phase diagram. While a large volume of precipitate particles is desirable, a small enough amount of the alloying element should be added so that it remains easily soluble at some reasonable annealing temperature. Although large precipitate volumes are often wanted, the particles should be kept small, so as to avoid the decrease in strength explained below. Elements used for precipitation strengthening in typical aluminium and titanium alloys make up about 10% of their composition. While binary alloys are more easily understood as an academic exercise, commercial alloys often use three components for precipitation strengthening, in compositions such as Al(Mg, Cu) and Ti(Al, V).
A large number of other constituents may be unintentional but benign, or may be added for other purposes such as grain refinement or corrosion resistance. An example is the addition of Sc and Zr to aluminum alloys to form FCC L12 structures that help refine grains and strengthen the material. In some cases, such as many aluminium alloys, an increase in strength is achieved at the expense of corrosion resistance. More recent work has focused on additive manufacturing, because its fast cooling gives access to a greater number of metastable phases, whereas traditional casting is more limited to equilibrium phases.

The addition of the large amounts of nickel and chromium needed for corrosion resistance in stainless steels means that traditional hardening and tempering methods are not effective. However, precipitates of chromium, copper, or other elements can strengthen the steel by amounts comparable to hardening and tempering. The strength can be tailored by adjusting the annealing process, with lower initial temperatures resulting in higher strengths. Lower initial temperatures increase the driving force for nucleation; more driving force means more nucleation sites, and more sites means more places at which dislocations will be disrupted while the finished part is in use.

Many alloy systems allow the ageing temperature to be adjusted. For instance, some aluminium alloys used to make rivets for aircraft construction are kept in dry ice from their initial heat treatment until they are installed in the structure. After this type of rivet is deformed into its final shape, ageing occurs at room temperature and increases its strength, locking the structure together. Higher ageing temperatures would risk over-ageing other parts of the structure and would require expensive post-assembly heat treatment, because a high ageing temperature promotes the precipitate to grow too readily.

Types of hardening

There are several ways by which a matrix can be hardened by precipitates, and these differ for deforming precipitates and non-deforming precipitates.

Deforming particles (weak precipitates):

Coherency hardening occurs when the interface between the particles and the matrix is coherent, which depends on parameters like particle size and the way the particles are introduced. Coherency means that the lattice of the precipitate and that of the matrix are continuous across the interface. Small particles precipitated from a supersaturated solid solution usually have coherent interfaces with the matrix. Coherency hardening originates from the atomic volume difference between the precipitate and the matrix, which results in a coherency strain. If the atomic volume of the precipitate is smaller, there will be tension, because the lattice atoms are located closer together than in their normal conditions; if the atomic volume of the precipitate is larger, there will be compression of the lattice atoms, as they are further apart than their normal positions. Regardless of whether the lattice is under compression or tension, the associated stress field interacts with dislocations, leading to decreased dislocation motion, by either repulsion or attraction of the dislocations, and hence to an increase in yield strength, similar to the size effect in solid solution strengthening. What differentiates this mechanism from solid solution strengthening is that the precipitate has a definite size, not that of a single atom, and therefore a stronger interaction with dislocations.
Modulus hardening results from the different shear moduli of the precipitate and the matrix, which leads to a change in the dislocation line-tension energy when the dislocation line cuts the precipitate. Also, the dislocation line can bend when entering the precipitate, increasing the affected length of the dislocation line. Again, the strengthening arises in a way similar to that of solid solution strengthening: there is a mismatch in the lattice that interacts with the dislocations, impeding their motion. Of course, the severity of the interaction differs from that of solid solution and coherency strengthening.

Chemical strengthening is associated with the surface energy of the newly introduced precipitate–matrix interface when the particle is sheared by dislocations. Because it takes energy to create the surface, some of the stress that would otherwise cause dislocation motion is accommodated by the additional surfaces. Like modulus hardening, the analysis of interfacial area can be complicated by dislocation line distortion.

Order strengthening occurs when the precipitate is an ordered structure, such that the bond energy before and after shearing is different. For example, in an ordered cubic crystal with composition AB, the bond energy of A-A and B-B bonds after shearing is higher than that of the A-B bonds before. The associated energy increase per unit area is the anti-phase boundary energy, and it accumulates gradually as the dislocation passes through the particle. However, a second dislocation can remove the anti-phase domain left by the first dislocation when it traverses the particle. The attraction of the particle and the repulsion of the first dislocation maintain a balanced distance between the two dislocations, which makes order strengthening more complicated. Except when the particles are very fine, this mechanism is generally not as effective as the others at strengthening. Another way to consider this mechanism is that when a dislocation shears a particle, the stacking sequence between the newly created surface and the matrix is broken and the bonding is not stable. To restore the sequence at this interface, another dislocation is needed to shift the stacking. The first and second dislocations together are often called a superdislocation. Because superdislocations are required to shear these particles, there is strengthening through the decreased dislocation motion.

Non-deforming particles (strong precipitates):

In non-deforming particles, where the spacing is small enough or the precipitate–matrix interface is disordered, the dislocation bows instead of shearing. The strengthening is related to the effective spacing between particles, considering finite particle size, but not to particle strength, because once the particle is strong enough for the dislocations to bow rather than cut, a further increase of the resistance to dislocation penetration does not affect the strengthening. The main mechanism is therefore Orowan strengthening, in which the strong particles do not allow dislocations to move past them. Bowing must therefore occur, and this bowing leaves dislocation loops around the particles, which decreases the space available for additional dislocations to bow between. If the dislocations cannot shear the particles and cannot move past them, dislocation motion is successfully impeded; the crossover between the shearing and bowing regimes is sketched numerically below.
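The competition between these two regimes can be sketched numerically: the shearing stress grows roughly as the square root of the particle radius, while the Orowan bowing stress falls roughly as 1/r at fixed volume fraction, so their minimum peaks at a critical radius. All constants below (the lumped cutting prefactor, the spacing estimate, and the copper-like G and b) are illustrative assumptions, not values from the text.

import numpy as np

def cutting_stress(r, A=5.0e12):
    """Particle-shearing (weak-precipitate) branch: grows as sqrt(r).
    A (Pa per sqrt(m)) lumps mismatch, modulus and volume-fraction
    terms into a single assumed prefactor."""
    return A * np.sqrt(r)

def bowing_stress(r, f=0.05, G=48e9, b=0.256e-9):
    """Orowan (strong-precipitate) branch: falls roughly as 1/r at
    fixed volume fraction f, since spacing L scales with r/sqrt(f)."""
    L = r * np.sqrt(2 * np.pi / (3 * f))   # a simple geometric spacing estimate
    return G * b / L

r = np.linspace(1e-9, 50e-9, 500)          # particle radius in metres
tau = np.minimum(cutting_stress(r), bowing_stress(r))
r_crit = r[np.argmax(tau)]
print(f"peak strengthening near r = {r_crit*1e9:.1f} nm")

With these assumed constants the crossover lands at a few nanometres, inside the 5–30 nm critical-radius window quoted later in the governing-equations discussion.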
Theory

The primary species of precipitation strengthening are second-phase particles. These particles impede the movement of dislocations throughout the lattice. Whether second-phase particles will precipitate from solution can be determined from the solvus line on the phase diagram for the particles. Physically, this strengthening effect can be attributed both to size and modulus effects and to interfacial or surface energy.

The presence of second-phase particles often causes lattice distortions. These lattice distortions result when the precipitate particles differ in size and crystallographic structure from the host atoms. Smaller precipitate particles in a host lattice lead to a tensile stress, whereas larger precipitate particles lead to a compressive stress. Dislocation defects also create a stress field: above the dislocation there is a compressive stress, and below it a tensile stress. Consequently, there is a negative interaction energy between a dislocation and a precipitate that each respectively cause a compressive and a tensile stress, or vice versa; in other words, the dislocation will be attracted to the precipitate. In addition, there is a positive interaction energy between a dislocation and a precipitate that have the same type of stress field, meaning that the dislocation will be repulsed by the precipitate.

Precipitate particles also serve by locally changing the stiffness of a material. Dislocations are repulsed by regions of higher stiffness. Conversely, if the precipitate causes the material to be locally more compliant, the dislocation will be attracted to that region.

In addition, there are three types of interphase boundaries (IPBs). The first type is a coherent or ordered IPB, in which the atoms match up one by one along the boundary. Due to the difference in lattice parameters of the two phases, a coherency strain energy is associated with this type of boundary. The second type is a fully disordered IPB, with no coherency strains, but the particle then tends to be non-deforming to dislocations. The last is a partially ordered IPB, in which coherency strains are partially relieved by the periodic introduction of dislocations along the boundary.

In coherent precipitates in a matrix, if the precipitate has a lattice parameter less than that of the matrix, then the atomic match across the IPB leads to an internal stress field that interacts with moving dislocations. There are two deformation paths. One is coherency hardening, for which the lattice mismatch is

$$\varepsilon = \frac{a_p - a_m}{a_m}$$

and the strengthening increment scales as

$$\Delta\tau \propto G\,\varepsilon^{3/2}\left(\frac{r f}{b}\right)^{1/2}$$

where G is the shear modulus, ε is the coherent lattice mismatch, r is the particle radius, f is the particle volume fraction (the concentration of precipitate), and b is the magnitude of the Burgers vector. The other is modulus hardening. The energy per unit length of a dislocation line in the matrix is approximately

$$U_m = \frac{G b^2}{2}$$

and when it cuts through the precipitate its energy per unit length is approximately $U_p = G_p b^2 / 2$, so the change in line-segment energy is

$$\Delta U = \frac{(G_p - G)\, b^2}{2}$$

The maximum dislocation length affected is the particle diameter; the line tension change takes place gradually over a distance on the order of the particle radius. The interaction force between the dislocation and the precipitate follows from the rate of change of this energy with position, and the flow stress from its maximum value.

Furthermore, a dislocation may cut through a precipitate particle and introduce more precipitate–matrix interface, which is chemical strengthening. When the dislocation enters the particle, the upper part of the particle shears by b with respect to the lower part, accompanying the dislocation entry; a similar process occurs when the dislocation exits the particle. The complete transit is accompanied by the creation of matrix–precipitate surface area of approximate magnitude $2\pi r b$, where r is the radius of the particle and b is the magnitude of the Burgers vector.
The resulting increase in surface energy is approximately

$$\Delta E \approx 2\pi r b\,\gamma_s$$

where $\gamma_s$ is the particle–matrix surface energy. The maximum force between the dislocation and the particle, and hence the corresponding flow stress, follow from this surface-energy cost. When a particle is sheared by a dislocation, a threshold shear stress is needed to deform the particle. When the precipitate size is small, this required shear stress rises with the precipitate size r; however, for a fixed particle volume fraction, the stress may decrease at larger values of r owing to an increase in particle spacing. The overall level of the curve is raised by increases in either inherent particle strength or particle volume fraction.

The dislocation can also bow around a precipitate particle through the so-called Orowan mechanism. Since the particle is non-deforming, the dislocation bows around the particles, and the stress required to effect the bypassing is inversely proportional to the effective interparticle spacing L − 2r, that is,

$$\tau = \frac{Gb}{L - 2r}$$

where L is the center-to-center particle spacing and r is the particle radius. Dislocation loops encircle the particles after the bypass operation, so a subsequent dislocation has to be extruded between the loops. Thus, the effective particle spacing for the second dislocation is reduced to L − 2r′ with r′ > r, and the bypassing stress for this dislocation, Gb/(L − 2r′), is greater than for the first one. However, as the particle radius increases, L increases so as to maintain the same volume fraction of precipitates, and the bypassing stress decreases; as a result, the material becomes weaker as the precipitate size increases. For a fixed particle volume fraction, the Orowan stress decreases with increasing r, as this is accompanied by an increase in particle spacing. On the other hand, increasing the volume fraction raises the level of the stress as a result of a finer particle spacing. The level of the Orowan stress is unaffected by particle strength; that is, once a particle is strong enough to resist cutting, any further increase in its resistance to dislocation penetration has no effect on the bypassing stress, which depends only on matrix properties and effective particle spacing.

If particles of A of a given volume fraction are dispersed in a matrix, particles are sheared at small radii and bypassed at large radii, and maximum strength is obtained at the critical radius where the cutting and bowing stresses are equal. If inherently harder particles of B of the same volume fraction are present, the level of the cutting curve is raised but that of the bowing curve is not; maximum hardening, greater than that for A particles, is then found at a smaller critical radius. Increasing the volume fraction of A raises the level of both curves and increases the maximum strength obtained. The latter is found at a critical radius which may be either less than or greater than that of the original composition, depending on the shape of the curves.

Governing equations

There are two main types of equations describing the two mechanisms of precipitation hardening, based on weak and strong precipitates. Weak precipitates can be sheared by dislocations, while strong precipitates cannot, and therefore the dislocation must bow. First, it is important to consider the difference between these two mechanisms in terms of the dislocation line tension they produce. The line tension balance for a dislocation bowing under a resolved shear stress τ is

$$R = \frac{\Gamma}{\tau b} \approx \frac{Gb}{2\tau}$$

where R is the radius of curvature of the dislocation at a given stress and Γ ≈ Gb²/2 is the line tension. Strong obstacles permit pronounced bowing, so R is small; decreasing obstacle strength increases R, which must be included in the calculation. For strong obstacles, L′ is also equal to the effective spacing between obstacles, L.
This leaves an equation for strong obstacles:

$$\tau = \frac{Gb}{L}$$

Considering weak particles, the critical bowing angle φc should be nearing 180°, because the dislocation line stays relatively straight through the obstacles. In that case the effective spacing exceeds L (Friedel statistics),

$$L' = \frac{L}{\left(\cos\tfrac{\phi_c}{2}\right)^{1/2}}$$

which states the weak particle equation:

$$\tau \approx \frac{Gb}{L}\left(\cos\frac{\phi_c}{2}\right)^{3/2}$$

Now, consider the mechanisms for each regime.

Dislocation cutting through particles: For most strengthening at the early stage, the strength increment increases with $\varepsilon^{3/2}(fr/b)^{1/2}$, where ε is a dimensionless mismatch parameter (for example, in coherency hardening, ε is the fractional difference between the precipitate and matrix lattice parameters), f is the volume fraction of precipitate, r is the precipitate radius, and b is the magnitude of the Burgers vector. According to this relationship, material strength increases with increasing mismatch, volume fraction, and particle size, so that dislocations cut through particles more easily at smaller radii. For the different types of hardening through cutting, the governing expressions take the same general form but with different mismatch parameters. For coherency hardening, $\Delta\tau \propto G\,\varepsilon_{coh}^{3/2}(rf/b)^{1/2}$ with $\varepsilon_{coh} = (a_p - a_m)/a_m$, where Δτ is the increase in shear stress, G is the shear modulus of the matrix, and $a_p$ and $a_m$ are the lattice parameters of the precipitate and the matrix. For modulus hardening, the mismatch parameter is built from the shear-modulus difference $\Delta G = G_p - G_m$ between precipitate and matrix. For chemical strengthening, the governing parameter is the particle–matrix interphase surface energy $\gamma_s$. For order strengthening, the governing parameter is the anti-phase boundary energy $\gamma_{apb}$; separate expressions apply at low f in early-stage precipitation, where the dislocations of a pair are widely separated, and at high f in early-stage precipitation, where they are not.

Dislocations bowing around particles: When the precipitate is strong enough to resist dislocation penetration, the dislocation bows and the maximum stress is given by the Orowan equation. Dislocation bowing, also called Orowan strengthening, is more likely to occur when the particle density in the material is lower:

$$\Delta\tau = \frac{Gb}{L - 2r}$$

where Δτ is the material strength increment, G is the shear modulus, b is the magnitude of the Burgers vector, L is the distance between pinning points, and r is the second-phase particle radius. This governing equation shows that, for dislocation bowing, the strength is inversely proportional to the second-phase particle radius r, because when the volume fraction of the precipitate is fixed, the spacing L between particles increases concurrently with r, so that Δτ decreases as r grows.

These governing equations show that the precipitation hardening mechanism depends on the size of the precipitate particles. At small r, cutting will dominate, while at large r, bowing will dominate. Looking at the plot of both equations, it is clear that there is a critical radius at which maximum strengthening occurs; this critical radius is typically 5–30 nm.

The Orowan strengthening model above neglects changes to the dislocation line shape due to bending. If bowing is accounted for, and the instability condition of the Frank–Read mechanism is assumed, the critical stress for dislocations bowing between pinning segments depends on a function of the angle β between the dislocation line and the Burgers vector, the effective particle separation L′, the Burgers vector b, and the particle radius r.

Other Considerations

Grain Size Control

Precipitates in a polycrystalline material can act as grain refiners if they are nucleated or located near grain boundaries, where they pin the grain boundaries as an alloy solidifies and prevent a coarse microstructure from developing. This is helpful, as finer microstructures often outperform coarser ones in mechanical properties at room temperature.
In recent times, nano-precipitates have been studied under creep conditions. Such precipitates can also pin grain boundaries at higher temperatures, essentially acting as "friction". Another useful effect is that very fine precipitates can impede grain-boundary sliding under diffusional creep conditions; if the precipitates are homogeneously dispersed in the matrix, these same precipitates within the grains can also interact with dislocations under dislocation creep conditions.

Secondary Precipitates

Different precipitates, depending on their elemental compositions, can form under certain aging conditions where they were not present before. Secondary precipitates arise from the removal of solutes from the matrix solid solution. Controlling this process can be exploited to tailor the microstructure and influence properties.

Computational discovery of new alloys

While significant effort has been made to develop new alloys, the experimental results take time and money to obtain. One possible alternative is performing simulations with density functional theory (DFT), which, in the context of precipitation hardening, can take advantage of the crystalline structure of precipitates and of the matrix and allows the exploration of many more alternatives than traditional experiments. One strategy for these simulations is to focus on the ordered structures that can be found in many metal alloys, like the long-period stacking ordered (LPSO) structures that have been observed in numerous systems. The LPSO structure is a long-period layered stacking configuration along one axis, with some layers enriched in the precipitated elements. This makes it possible to exploit the symmetry of the supercells, and it is well suited to currently available DFT methods. In this way, some researchers have developed strategies to screen for possible strengthening precipitates that allow decreasing the weight of some metal alloys. For example, Mg alloys have received progressive interest as replacements for aluminium and steel in the vehicle industry, because magnesium is one of the lightest structural metals. However, Mg alloys show low strength and ductility, which has limited their use. To overcome this, precipitation hardening, through the addition of rare-earth elements, has been used to improve alloy strength and ductility. Specifically, LPSO structures were found to be responsible for these improvements, yielding an Mg alloy that exhibited high yield strength: 610 MPa at 5% elongation at room temperature. Looking for cheaper alternatives to rare-earth (RE) elements, a ternary Mg–Xl–Xs system was simulated, where Xl and Xs correspond to atoms larger and smaller than Mg, respectively. This study confirmed more than 85 Mg–RE–Xs LPSO structures, demonstrating the ability of DFT to predict known ternary LPSO structures. The 11 non-RE Xl elements were then explored, and 4 of them were found to be thermodynamically stable; one of these, the Mg–Ca–Zn system, is predicted to form an LPSO structure. Following these DFT predictions, other investigators performed experiments with the Mg–Zn–Y–Mn–Ca system and found that, at 0.34 at.% Ca addition, the mechanical properties of the system were enhanced by the formation of LPSO structures, achieving "a good balance of the strength and ductility".
Examples of precipitation hardening materials

2000-series aluminium alloys (important examples: 2024 and 2019; also Y alloy and Hiduminium)
6000-series aluminium alloys (important example: 6061, for bicycle frames and aeronautical structures)
7000-series aluminium alloys (important examples: 7075 and 7475)
17-4 stainless steel (UNS S17400)
Maraging steel
Inconel 718
Alloy X-750
René 41
Waspaloy
Mulberry (uranium alloy)
NAK55
Low-carbon steel

See also

Alfred Wilm
Strength of materials
Strengthening mechanisms of materials
Metallurgy
Superalloy

References

Further reading

ASM Metals Handbook, Vol. 4: Heat Treating.

External links

Project aluMatter

Metal heat treatments Strengthening mechanisms of materials
Precipitation hardening
[ "Chemistry", "Materials_science", "Engineering" ]
5,088
[ "Strengthening mechanisms of materials", "Metallurgical processes", "Metal heat treatments", "Materials science" ]
1,591,064
https://en.wikipedia.org/wiki/Breathalyzer
A breathalyzer or breathalyser (a portmanteau of breath and analyzer/analyser), also called an alcohol meter, is a device for measuring breath alcohol content (BrAC). It is commonly used by law enforcement officers during traffic stops to screen drivers for alcohol intoxication. The name is a genericized trademark of the Breathalyzer brand of instruments developed by inventor Robert Frank Borkenstein in the 1950s.

Origins

Research into the possibilities of using breath to test for alcohol in a person's body dates as far back as 1874, when Francis E. Anstie made the observation that small amounts of alcohol were excreted in breath. In 1927, Emil Bogen produced a paper on breath analysis. He collected air in a football bladder and then tested this air for traces of alcohol, discovering that the alcohol content of 2 litres of expired air was a little greater than that of 1 cc of urine. Also in 1927, a Chicago chemist, William Duncan McNally, invented a breathalyzer in which the breath moving through chemicals in water would change color. One suggested use for his invention was for housewives to test whether their husbands had been drinking. In December 1927, in a case in Marlborough, England, Dr. Gorsky, a police surgeon, asked a suspect to inflate a football bladder with his breath. Since the 2 liters of the man's breath contained 1.5 mg of ethanol, Gorsky testified before the court that the defendant was "50% drunk". The use of drunkenness as the standard, as opposed to BAC, perhaps invalidated the analysis, as tolerance to alcohol varies; however, the story illustrates the general principles of breath analysis.

In 1931, the first practical roadside breath-testing device, the Drunkometer, was developed by Rolla Neil Harger of the Indiana University School of Medicine. The Drunkometer collected a motorist's breath sample directly into a balloon inside the machine. The breath sample was then pumped through an acidified potassium permanganate solution. If there was alcohol in the breath sample, the solution changed color; the greater the color change, the more alcohol was present in the breath. The Drunkometer was manufactured and sold by Stephenson Corporation of Red Bank, New Jersey.

In 1954, Robert Frank Borkenstein (1912–2002), then a captain with the Indiana State Police and later a professor at Indiana University Bloomington, invented his trademarked Breathalyzer, which used chemical oxidation and photometry to determine alcohol concentrations. The invention of the Breathalyzer provided law enforcement with a quick and portable test to determine an individual's intoxication level via breath analysis. Subsequent breath analyzers have converted primarily to infrared spectroscopy.

In 1967 in Britain, Bill Ducie and Tom Parry Jones developed and marketed the first electronic breathalyser. They established Lion Laboratories in Cardiff. Ducie was a chartered electrical engineer, and Tom Parry Jones was a lecturer at UWIST. The Road Safety Act 1967 introduced the first legally enforceable maximum blood alcohol level for drivers in the UK, above which it became an offence to be in charge of a motor vehicle, and introduced the roadside breathalyser, made available to police forces across the country. In 1979, Lion Laboratories' version of the breathalyser, known as the Alcolyser and incorporating crystal-filled tubes that changed colour above a certain level of alcohol in the breath, was approved for police use.
Lion Laboratories won the Queen's Award for Technological Achievement for the product in 1980, and it began to be marketed worldwide. The Alcolyser was superseded by the Lion Intoximeter 3000 in 1983, and later by the Lion Alcolmeter and Lion Intoxilyser. These later models used a fuel cell alcohol sensor rather than crystals, providing a more reliable curbside test and removing the need for blood or urine samples to be taken at a police station. In 1991, Lion Laboratories was sold to the American company MPD, Inc.

Accuracy

Breath analyzers do not directly measure blood alcohol concentration (BAC), which requires the analysis of a blood sample. Instead, they measure the amount of alcohol in one's breath, BrAC, generally reported in milligrams of alcohol per liter of breathed air. The relationship between BrAC and BAC is complex and is affected by many factors.

Calibration

Calibration is the process of checking and adjusting the internal settings of a breath analyzer by comparing its test results to a known alcohol standard and correcting for any deviation. Breath analyzer sensors drift over time and require periodic calibration to ensure accuracy. Many handheld breath analyzers sold to consumers use a silicon oxide sensor (also called a semiconductor sensor) to determine the alcohol concentration. These sensors are prone to contamination and interference from substances other than breath alcohol, and require recalibration or replacement every six months. Higher-end personal breath analyzers and professional-use breath alcohol testers use platinum fuel cell sensors. These too require recalibration, but at less frequent intervals than semiconductor devices, usually once a year.

There are two ways of calibrating a precision fuel cell breath analyzer: the wet-bath and the dry-gas methods. Each method requires specialized equipment and factory-trained technicians; it is not a procedure that can be conducted by untrained users or without the proper equipment. The dry-gas method utilizes a portable calibration standard, a precise mixture of ethanol and inert nitrogen available in a pressurized canister. Initial equipment costs are lower than for the alternative method, and fewer steps are required. The equipment is also portable, allowing calibrations to be done when and where required. The wet-bath method utilizes an ethanol/water standard at a precise, specialized alcohol concentration, contained and delivered in specialized breath simulator equipment. The wet-bath method has a higher initial cost and is not intended to be portable. The standard must be fresh and replaced regularly. In addition, the assumed water–air partition ratio for aqueous ethanol must be taken into account along with its associated uncertainty. Some semiconductor models are designed specifically to allow the sensor module to be replaced without the need to send the unit to a calibration lab.
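As an illustration of the idea behind a single-point span calibration against a dry-gas or wet-bath standard, the sketch below computes a correction factor from one reference reading. Real calibration procedures are device-specific and more involved; the function name and the numbers here are assumptions for demonstration.

def calibration_factor(reading, standard_brac):
    """Single-point span adjustment: the device reads a standard of
    known BrAC, and the correction factor is the ratio of the true
    value to the reading (an illustrative simplification)."""
    return standard_brac / reading

# Device reads 0.092 mg/L against a 0.100 mg/L certified standard:
k = calibration_factor(0.092, 0.100)
print(f"corrected future reading = raw * {k:.3f}")
print(f"example: raw 0.046 -> {0.046 * k:.3f} mg/L")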
However, since potassium dichromate is a strong oxidizer, numerous alcohol groups can be oxidized by it, producing false positives. In practice, though, this source of false positives is rare, as very few other oxidizable substances are found in exhaled air. Infrared-based breath analyzers project an infrared beam of radiation through the captured breath in the sample chamber and detect the absorbance of the compound as a function of the wavelength of the beam, producing an absorbance spectrum that can be used to identify the compound, as the absorbance is due to the harmonic vibration and stretching of specific bonds in the molecule at specific wavelengths (see infrared spectroscopy). The characteristic bond of alcohols in infrared is the O-H bond, which gives a strong absorbance at a short wavelength. The more light is absorbed by compounds containing the alcohol group, the less reaches the detector on the other side—and the higher the reading. Other groups, most notably aromatic rings and carboxylic acids, can give similar absorbance readings. Some natural and volatile interfering compounds do exist, however. For example, the National Highway Traffic Safety Administration has found that dieters and diabetics may have acetone levels hundreds or even thousands of times higher than those in others. Acetone is one of the many substances that can be falsely identified as ethyl alcohol by some breath machines. However, fuel cell-based systems are non-responsive to substances like acetone. Substances in the environment can also lead to false BAC readings. For example, methyl tert-butyl ether, a common gasoline additive, has been alleged anecdotally to cause false positives in persons exposed to it. Tests have shown this to be true for older machines; however, newer machines detect this interference and compensate for it. Any number of other products found in the environment or workplace can also cause erroneous BAC results. These include compounds found in lacquer, paint remover, celluloid, gasoline, and cleaning fluids, especially ethers, alcohols, and other volatile compounds. Pharmacokinetics Absorption of alcohol continues for anywhere from 20 minutes (on an empty stomach) to two-and-one-half hours (on a full stomach) after the last consumption, generally taking around 40-50 minutes. During the absorptive phase, the concentration of alcohol throughout the body changes unpredictably, as it is affected by gastrointestinal physiology such as irregular contraction patterns. After absorption, the concentrations in the body settle down and follow predictable patterns. During absorption, the BAC in arterial blood will generally be higher than in venous blood, but post-absorption, venous BAC will be higher than arterial BAC. This is especially clear with bolus dosing, chugging a single large drink. With additional doses of alcohol, the definitions of absorption and post-absorption are less clear. However, once absorption of the last drink has finished, the concentrations will follow standard post-absorption curves. It is also not always clear from a BAC graph when the absorption phase finishes - for example, the body can reach a sustained equilibrium BAC where absorption and elimination are proportional. Across all phases, BrAC correlates closely with arterial BAC. Arterial blood distributes oxygen throughout the body. Breath alcohol is a representation of the equilibrium of alcohol concentration as the blood gases (alcohol) pass from the arterial blood into the lungs to be expired in the breath. 
The ratio of ABAC:BrAC is 2294 ± 56 across all phases and 2251 ± 46 [2141-2307] in the post-absorption phase. For example, a breathalyzer measurement of 0.10 mg/L of breath alcohol corresponds to approximately 0.0001 × 2251 g/L, or 0.2251 g/L of arterial blood alcohol concentration (equivalent to 0.2251 permille or 0.02251% BAC). The ratio of venous blood alcohol content to breath alcohol content may vary significantly, from 1300:1 to 3100:1. Assuming a blood-alcohol concentration of 0.07%, for example, a person could have a partition ratio of 1500:1 and a breath test reading of 0.10 g/2100 mL, over the legal limit in some jurisdictions. However, low partition ratios are generally observed during the absorption phase. Post-absorption, the ratio is relatively fixed, 2382 ± 119 [2125–2765], although this ratio was measured in a laboratory environment and variation may be larger in real-world scenarios. Falsely high BrAC (and blood) readings have also been reported in patients with proteinuria and hematuria: kidney damage and failure alter alcohol metabolization, so the metabolization rate in such patients is abnormal relative to the percentage of alcohol in the breath, which can produce misleading results. Breathing pattern It is sometimes said that the exhaled air analyzed by the breathalyzer is "alveolar air", coming from the alveoli in close proximity to the blood in pulmonary circulation and containing ethanol in concentrations proportional to that blood, as approximated by Henry's law. However, the alcohol in the exhaled air comes essentially from the airways of the lung, and not from the alveoli. The alcohol acts similarly to water vapor, so it is instructive to study the humidity of lung air. During breathing, the inspired air picks up water and alcohol from the airways. Almost all uptake occurs in the upper airways; thus, the BrAC is most affected by the alcohol concentration in the bronchial circulation, which supplies blood to these airways. When the air reaches the alveoli, it is already near equilibrium - this is why inhaling dry air does not dry out the lungs significantly. With exhalation, water and alcohol are rapidly lost to the airways, primarily within the fifth to fifteenth generations of branching. Nonetheless, as may be evidenced by seeing one's breath in the cold, some water vapor does not get re-absorbed by the airways and is exhaled, and similarly some alcohol is exhaled during breathing. But the relationship of the alcohol concentration of this air to the concentration of alcohol in the blood is somewhat suspect and can be affected by many variables. As air is exhaled, the alcohol concentration of the exhaled air increases over time, rising significantly in the first few seconds and then slowing down, but not leveling out until the subject stops exhaling. This is not because there is a "dead space" of non-alcoholic air in the airways - the alcohol concentration is nearly identical in all regions of the lung. Rather, it is because, during exhalation, water and alcohol are being redeposited on the airways, primarily the trachea and generations 6 through 12 of the airways. As more fluid is deposited on the mucous surfaces, the remaining fluid travels further, resulting in more alcohol being recorded by the breathalyzer. 
The recorded alcohol concentrations never reach the alveolar alcohol concentration, even if the subject exhales as deeply as possible. According to Henry's law, alveolar air alcohol concentration would be pulmonary BAC divided by 1756, compared to the BrAC, which is arterial blood concentration divided by 2251. When the subject stops exhaling, the alcohol concentration levels off - this does not indicate that alveolar air has been obtained, as it will level off regardless of the point at which the subject stops exhaling. But it does mean that end-exhaled BrAC is readily obtained. This brings up the question of what is meant by reporting BrAC as a single number: is it the "deep-lung air", the highest possible reading obtainable by the subject's full exhalation? Or is it the zero concentration at the initial part of the curve? Hlastala suggests using the average BrAC during the exhalation, which corresponds to the BrAC measured at about the 5-second mark. The Supreme Court of California determined that the BrAC is defined as the alcohol concentration of the last part of the subject's expired breath. End-exhaled BrAC varies depending on several factors. Most alcohol breath testers require a minimum exhalation volume (normally between 1.1 and 1.5 L) or a minimum six-second exhalation time before the breath sample is accepted. This raises concerns for subjects with smaller lung volumes - they must exhale a greater fraction of their available lung volume compared with a larger subject. A mathematical model suggests that a subject with a 2 L lung capacity may produce an end-exhaled BrAC reading 35% higher than a subject with a 6 L lung capacity, for the same minimum 1.5 L exhalation and the same alveolar alcohol concentration. For exhalation to the maximum extent, such as under typical laboratory conditions, measured BrAC is unaffected by lung size. The subject's body temperature and breath temperature also influence results, with an increase in temperature corresponding to an increase in measured BrAC. Furthermore, the humidity and temperature of the ambient air can decrease results by as much as 10%. The result of these factors is that the breath test is more forgiving for some subjects than others. Nonetheless, the overall variance due to how much one breathes out is usually low, and some breathalyzers compensate for the volume of air. Jones tested several breathing patterns immediately before and during breathalyzer use and found the following changes (in order of effect):
Hyperventilation by rapid inspiration and expiration of room air for 20 seconds before forced expiration: 10% decrease
Moderate inspiration through the mouth and deep expiration: control
Deep expiration without an inspiration: statistically insignificant increase
Inspiration through the nose before a deep expiration: 1.3% increase
Deep inspiration followed by a slow (20-second) expiration: 2.0% increase
Mouth closed for 5 minutes (shallow breathing) before nose-inspiration and a forced expiration: 7.7% increase
Inspiration through the nose followed by breath-holding for 30 seconds before forced expiration: 12.6% increase
A normal inspiration with breath-holding for 30 seconds before a forced expiration: 15.7% increase
Overall, the results show an increase in measured BrAC with increased contact between the lungs and the measured air. Exercising immediately before the test, such as running up and down a flight of stairs, can also reduce measured BrAC by 13% or more, with the combined effect of exercise and hyperventilation reaching 20%. 
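To make the partition-ratio arithmetic above concrete, here is a minimal sketch (illustrative only, not any instrument's firmware) that converts a breath reading into an estimated blood alcohol concentration under different assumed blood:breath ratios; the ratios used are the figures quoted earlier in this section.

```python
def estimate_bac(brac_mg_per_l: float, partition_ratio: float) -> float:
    """Convert breath alcohol (mg of ethanol per litre of breath) to an
    estimated blood alcohol concentration in g per 100 mL (i.e., % BAC),
    assuming a fixed blood:breath partition ratio."""
    grams_per_litre_breath = brac_mg_per_l / 1000.0       # mg/L -> g/L
    grams_per_litre_blood = grams_per_litre_breath * partition_ratio
    return grams_per_litre_blood / 10.0                   # g/L -> g/100 mL

reading = 0.10  # mg/L of breath, as in the worked example above
# 2100:1 is the US statutory ratio; 2251:1 the measured post-absorption
# arterial figure; 1300:1 and 3100:1 the quoted venous extremes.
for ratio in (2100, 2251, 1300, 3100):
    print(f"ratio {ratio}:1 -> estimated BAC {estimate_bac(reading, ratio):.4f} %")
```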
Mouth alcohol One of the most common causes of falsely high breath analyzer readings is the existence of mouth alcohol. In analyzing a subject's breath sample, the breath analyzer's internal computer is making the assumption that the alcohol in the breath sample came from the lungs. However, alcohol may have come from the mouth, throat or stomach for a number of reasons. A very tiny amount of alcohol from the mouth, throat or stomach can have a significant impact on the breath-alcohol reading. Recent use of mouthwash or breath fresheners can also skew results upward, as they can contain fairly high levels of alcohol. Listerine mouthwash, for example, contains 26.9% alcohol, and can skew results for between 5 and 10 minutes. A scientist tested the effects of Binaca breath spray on an Intoxilyzer 5000. He performed 23 tests with subjects who sprayed their throats and obtained readings as high as 0.81—far beyond legal levels. The scientist also noted that the effects of the spray did not fall below detectable levels until after 18 minutes. Other than those, the most common source of mouth alcohol is from belching or burping. This causes the liquids and/or gases from the stomach—including any alcohol—to rise up into the soft tissue of the esophagus and oral cavity, where it will stay until it has dissipated. The American Medical Association concludes in its Manual for Chemical Tests for Intoxication (1959): "True reactions with alcohol in expired breath from sources other than the alveolar air (eructation, regurgitation, vomiting) will, of course, vitiate the breath alcohol results." Acid reflux, or gastroesophageal reflux disease, can greatly exacerbate the mouth-alcohol problem. The stomach is normally separated from the throat by a valve, but when this valve becomes incompetent or herniated, there is nothing to stop the liquid contents in the stomach from rising and permeating the esophagus and mouth. The contents—including any alcohol—are then later exhaled into the breathalyzer. One study of 10 individuals suffering from this condition did not find any actual increase in breath ethanol. Mouth alcohol can also be created in other ways. Dentures, some have theorized, will trap alcohol, although experiments have shown no difference if the normal 15 minute observation period is observed. Periodontal disease can also create pockets in the gums which will contain the alcohol for longer periods. Also known to produce false results due to residual alcohol in the mouth is passionate kissing with an intoxicated person. To help guard against mouth-alcohol contamination, certified breath-test operators and police officers are trained to observe a test subject carefully for at least 15–20 minutes before administering the breath test. Some instruments also feature built-in safeguards. The Intoxilyzer 5000 features a "slope" parameter. This parameter detects any decrease in alcohol concentration of 0.006 g per 210 L of breath in 0.6 second, a condition indicative of residual mouth alcohol, and will result in an "invalid sample" warning to the operator, notifying the operator of the presence of the residual mouth alcohol. Other instruments require that the individual be tested twice at least two minutes apart. Mouthwash or other mouth alcohol will have somewhat dissipated after two minutes and cause the second reading to disagree with the first, requiring a retest. Many preliminary breath testers, however, feature no such safeguards. 
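The "slope" safeguard described above can be sketched as a simple check over a sampled concentration curve: alcohol from the lungs rises monotonically during exhalation, so a rapid fall is a signature of dissipating mouth alcohol. The drop threshold below mirrors the figure quoted for the Intoxilyzer 5000 (0.006 g/210 L within 0.6 s), but the sampling scheme and function are hypothetical illustrations, not the instrument's actual firmware logic.

```python
def invalid_sample(concentrations, sample_interval_s=0.1,
                   drop_threshold=0.006, window_s=0.6):
    """Flag a breath sample as invalid if the measured alcohol concentration
    (in g/210 L) falls by more than `drop_threshold` within any window of
    `window_s` seconds -- a signature of residual mouth alcohol."""
    window = int(round(window_s / sample_interval_s))
    for i in range(len(concentrations) - window):
        if concentrations[i] - concentrations[i + window] > drop_threshold:
            return True  # fell too fast: likely mouth alcohol
    return False

# A rising curve (normal deep-lung sample) passes; a spiking-then-decaying
# curve (mouth alcohol) is flagged.
normal = [0.02, 0.04, 0.055, 0.065, 0.07, 0.073, 0.075, 0.076, 0.077, 0.078]
mouth = [0.09, 0.085, 0.08, 0.072, 0.065, 0.06, 0.056, 0.053, 0.051, 0.05]
print(invalid_sample(normal))  # False
print(invalid_sample(mouth))   # True
```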
Myths about accuracy There are a number of substances or techniques that can supposedly "fool" a breath analyzer (i.e., generate a lower blood alcohol content). A 2003 episode of the science television show MythBusters tested a number of methods that supposedly allow a person to fool a breath analyzer test. The methods tested included breath mints, onions, denture cream, mouthwash, pennies and batteries; all of these methods proved ineffective. The show noted that using these items to cover the smell of alcohol may fool a person, but, since they will not actually reduce a person's BrAC, there will be no effect on a breath analyzer test regardless of the quantity used; if anything, using mouthwash only raised the BrAC. Pennies supposedly produce a chemical reaction, while batteries supposedly create an electrical charge, yet neither of these methods affected the breath analyzer results. The MythBusters episode also pointed out another complication: it would be necessary to insert the item into one's mouth (for example, eat an onion, rinse with mouthwash, conceal a battery), take the breath test, and then possibly remove the item — all of which would have to be accomplished discreetly enough to avoid alerting the police officers administering the test (who would become very suspicious if they noticed that a person was inserting items into their mouth prior to taking a breath test). It would likely be very difficult, especially for someone in an intoxicated state, to accomplish such a feat. In addition, the show noted that breath tests are often verified with blood tests (which are more accurate) and that even if a person somehow managed to fool a breath test, a blood test would certainly confirm a person's guilt. Other substances that might reduce the BrAC reading include a bag of activated charcoal concealed in the mouth (to absorb alcohol vapor), an oxidizing gas (such as N2O, Cl2, O3, etc.) that would fool a fuel cell type detector, or an organic interferent to fool an infrared absorption detector. The infrared absorption detector is more vulnerable to interference than a laboratory instrument measuring a continuous absorption spectrum, since it only makes measurements at particular discrete wavelengths. However, because any interference can only cause higher absorption, not lower, the estimated blood alcohol content would be overestimated, not reduced. Additionally, Cl2 is toxic and corrosive. A 2007 episode of the Spike network's show Manswers showed some of the more common and not-so-common ways people attempt to beat the breath analyzer, none of which work. Test 1 was to suck on a copper-coated coin such as a penny. Test 2 was to hold a battery on the tongue. Test 3 was to chew gum. None of these tests showed a "pass" reading if the subject had consumed alcohol. Law enforcement In general, two types of breathalyzer are used. Small hand-held breathalyzers are not reliable enough to provide evidence in court but are reliable enough to justify an arrest. These devices may be used by officers in the field as a form of "field sobriety test", commonly called a "preliminary breath test" or "preliminary alcohol screening", or as evidential devices in point-of-arrest testing. Larger breathalyzer devices found in police stations can be used to produce court evidence. These desktop analyzers generally use infrared spectrophotometer technology, electrochemical fuel cell technology, or a combination of the two. 
All breath alcohol testers used by law enforcement in the United States of America must be approved by the Department of Transportation's National Highway Traffic Safety Administration. Breath alcohol laws The breath alcohol content reading may be used in prosecutions of the crime of driving under the influence of alcohol (sometimes referred to as driving or operating while intoxicated) in several ways. Historically, states in the US prohibited driving with a high level of BAC, and did not have any laws regarding BrAC. A BrAC test result was merely presented as indirect evidence of BAC. Where the defendant had refused to take a subsequent blood test, the only way the state could prove BAC was by presenting scientific evidence of how alcohol in the breath gets there from alcohol in the blood, along with evidence of how to convert from one to the other. DUI defense attorneys frequently contested the scientific reliability of such evidence. Before September 2011, South Dakota relied solely on blood tests to ensure accuracy. States responded in different ways to the inability to rely on breathalyzer evidence. Many states, such as California, modified their statutes so as to make a certain level of alcohol in the breath illegal per se. In other words, the BrAC level itself became the direct predicate evidence for conviction, with no need to estimate BAC. In per se jurisdictions such as the UK, it is automatically illegal to drive a vehicle with a sufficiently high breath alcohol concentration (BrAC). The breath analyzer reading of the operator will be offered as evidence of that crime, and challenges can only be offered on the basis of an inaccurate reading. In other states, such as California and New Jersey, the statute remains tied to BAC, but the BrAC results of certain machines have been judicially deemed presumptively accurate substitutes for blood testing when used as directed. While BrAC tests are not necessary to prove a defendant was under the influence, laws in these states create a rebuttable presumption: it is presumed that the driver was intoxicated given a high BrAC reading, but that presumption can be rebutted if a jury finds it unreliable or if other evidence establishes a reasonable doubt as to whether the person actually drove with a breath or blood alcohol level of 0.08% or greater. Another issue is that the BrAC is typically tested several hours after the time of driving. Some jurisdictions, such as the State of Washington, allow the use of breath analyzer test results without regard to how much time passed between operation of the vehicle and administration of the test, or so long as the test was administered within a certain number of hours of driving. Other jurisdictions use retrograde extrapolation to estimate the BAC or BrAC at the time of driving. One exception to criminal prosecution is the state of Wisconsin, where a first-time drunk driving offense is normally a civil ordinance violation. Breath levels There is no international consensus on the statutory ratio of blood to breath levels, ranging from 2000:1 (most of Europe) to 2100:1 (US) to 2300:1 (UK). In the US, the ratio of 2100:1 was determined based on studies done in 1930-1950, with a 1952 report of the National Safety Council establishing the 2100:1 figure. The NSC has acknowledged that more recent research shows the actual relationship is most probably higher than 2100:1 and closer to 2300:1, but opines that this difference is of minimal practical significance in law enforcement. 
The use of the lower 2100:1 factor errs on the side of conservatism and can only favor the driver. In earlier years, BrAC thresholds in the US varied considerably between states. States have since adopted a uniform 0.08% BrAC level, due to federal guidelines. The federal government encourages the adoption of such guidelines by tying traffic safety highway funds to compliance, as it did in establishing 21 as the legal drinking age across the 50 states. Police in Victoria, Australia, use breathalyzers that give a recognized 20% tolerance on readings. Noel Ashby, former Victoria Police Assistant Commissioner (Traffic & Transport), claims that this tolerance is to allow for different body types. Preliminary breath tests The preliminary breath test or preliminary alcohol screening test uses small hand-held breath analyzers (hand-held breathalyzers). (The terms "preliminary breath test" ("PBT") and "preliminary alcohol screening test" reference the same devices and functions.) They are generally based on electrochemical platinum fuel cell analysis. These units are similar to some evidentiary breathalyzers, but typically are not calibrated frequently enough for evidentiary purposes. The test device typically provides numerical blood alcohol content (BAC) readings, but its primary use is for screening. In some cases, the device even has "pass/fail" indicia. For example, in Canada, such devices, called "alcohol screening devices", are set so that from 0 to 49 mg% the display shows digits, from 50 to 99 mg% it shows the word "warn", and at 100 mg% and above it shows "fail". These preliminary breath tests are sometimes categorised as part of field sobriety testing, although they are not part of the series of performance tests generally associated with field sobriety tests (FSTs) or standard field sobriety tests (SFSTs). In Canada, a preliminary non-evidentiary screening device can be approved by Parliament as an approved screening device. In order to demand that a person produce a breathalyzer sample, an officer must have "reasonable suspicion" that the person drove with more than 80 mg alcohol per 100 mL of blood. The demand must be made within three hours of driving. Any driver who refuses can be charged under s.254 of the Criminal Code. With the legalization of cannabis, updates to the criminal code are proposed that will allow a breathalyzer test to be administered without suspicion of impairment. The US National Highway Traffic Safety Administration maintains a Conforming Products List of breath alcohol devices approved for preliminary screening use. In the United States, the main use of the preliminary breath test (PBT) is to establish probable cause for arrest. All states have implied consent laws, which means that by applying for a driver's license, drivers are agreeing to take an evidentiary chemical test (blood, breath, or urine) after being arrested for a DUI. But in US law, the arrest and subsequent test may be invalidated if it is found that the arrest lacked probable cause. The PBT establishes a baseline alcohol level that the police officer may use to justify the arrest. The result of the PBT is not generally admissible in court, except to establish probable cause, although some states, such as Idaho, permit data or "readings" from hand-held preliminary breath testers or preliminary alcohol screeners to be presented as evidence in court. 
In states such as Florida and Colorado, there are no penalties for refusing a PBT. Police are not obliged to advise the suspect that participation in an FST, PBT, or other pre-arrest procedure is voluntary. In contrast, formal evidentiary tests given under implied consent requirements are considered mandatory. Refusal to take a preliminary breath test in the State of Michigan subjects a non-commercial driver to a "civil infraction" fine, with no violation "points", but is not considered to be a refusal under the general "implied consent" law. In some states, the state may present evidence of refusal to take a field sobriety test in court, although this is of questionable probative value in a drunk driving prosecution. Different requirements apply in many states to drivers under DUI probation, in which case participation in a preliminary breath test may be a condition of probation, and for commercial drivers under "drug screening" requirements. Some US states, notably California, have statutes on the books penalizing preliminary breath test refusal for drivers under 21; however, the constitutionality of those statutes has not been tested. (As a practical matter, most criminal lawyers advise suspects who refuse a preliminary breath test or preliminary alcohol screening not to engage in discussion or "justifying" the refusal with the police.) Evidentiary breath tests In Canada, an evidentiary breath instrument can be designated as an approved instrument. The US National Highway Traffic Safety Administration maintains a Conforming Products List of breath alcohol devices approved for evidentiary use. Infrared instruments are also known as "evidentiary breath testers" and generally produce court-admissible results. Drinking after driving A common defense to an impaired driving charge (in appropriate circumstances) is that the consumption of alcohol occurred subsequent to driving. The typical circumstance where this comes up is when a driver consumes alcohol after a road accident, as an affirmative defense. This closely relates to absorptive stage intoxication (or bolus drinking), except that the consumption of alcohol also occurred after driving. This defense can be overcome by retrograde extrapolation (infra), but it complicates prosecution. While jurisdictions that recognise absorptive stage intoxication as a defense would also accept a defense of consumption after driving, some jurisdictions penalise post-driving drinking. While laws regarding absorption of alcohol consumed before (or while) driving are generally per se, most statutes directed to post-driving consumption allow affirmative defenses in appropriate circumstances. In Canada, it is illegal to be over the impaired driving limits within 3 hours of driving (given as 2 hours by the Canadian Department of Justice); however, the new law allows a "drinking after driving" defence in a situation where a driver had no reason to expect a demand by the police for breath testing. South Africa is more straightforward: a separate penalty applies to consumption after an accident, until the accident has been reported to the police and, if so required, the driver has been medically examined. Retrograde extrapolation The breath analyzer test is usually administered at a police station, commonly an hour or more after the arrest. Although this gives the BrAC at the time of the test, it does not by itself answer the question of what it was at the time of driving. 
The prosecution typically provides an estimated alcohol concentration at the time of driving utilizing retrograde extrapolation, presented by expert opinion. This involves projecting back in time to estimate the BrAC level at the time of driving, by applying the physiological properties of absorption and elimination rates in the human body. Extrapolation is calculated using five factors and a general elimination rate of 0.015/hour. Example:
Time of breath test: 10:00 pm
Result of breath test: 0.080
Time of driving: 9:00 pm (stopped by officer)
Time of last drink: 8:00 pm
Last food: 12:00 pm
Using these facts, an expert can say that the person's last drink was consumed on an empty stomach, which means absorption of the last drink (at 8:00 pm) was complete within one hour, by 9:00 pm. At the time of the stop, the driver was therefore fully absorbed. The test result of 0.080 was obtained at 10:00 pm, so the one hour of elimination that occurred since the stop is added back in, making 0.080 + 0.015 = 0.095 the approximate breath alcohol concentration at the time of the stop. Consumer use Public breathalyzers are becoming a method for consumers to test themselves at the source of alcohol consumption. These are used in pubs, bars, restaurants, charities, weddings and all types of licensed events. Because breathalyzer tests carry an increased risk of coronavirus transmission, their use was temporarily suspended in Sweden. Breathalyzer sensors Photovoltaic assay The photovoltaic assay, used only in the dated photoelectric intoximeter, is a form of breath testing rarely encountered today. The process works by using photocells to analyze the color change of a redox (oxidation-reduction) reaction. A breath sample is bubbled through an aqueous solution of sulfuric acid, potassium dichromate, and silver nitrate. The silver nitrate acts as a catalyst, allowing the alcohol to be oxidized at an appreciable rate. The requisite acidic conditions needed for the reaction are provided by the sulfuric acid. In solution, ethanol reacts with the potassium dichromate, reducing the dichromate ion to the chromium (III) ion. This reduction results in a change of the solution's color from red-orange to green. The reacted solution is compared to a vial of non-reacted solution by a photocell, which creates an electric current proportional to the degree of the color change; this current moves the needle that indicates BAC. Like other methods, breath testing devices using chemical analysis are prone to false readings. Compounds that have compositions similar to ethanol, for example, could also act as reducing agents, creating the necessary color change to indicate increased BAC. Infrared spectroscopy Infrared breathalyzers allow a high degree of specificity for ethanol. Typically, evidential breath alcohol instruments in police stations work on the principle of infrared spectroscopy. Fuel cell Fuel cell gas sensors are based on the oxidation of ethanol to acetaldehyde on an electrode. The current produced is proportional to the amount of alcohol present. These sensors are very stable, typically requiring calibration every 6 months, and are the type of sensor usually found in roadside breath testing devices. Semiconductor Semiconductor gas sensors are based on the increase in conductance of a tin oxide layer in the presence of a reducing gas such as vaporized ethanol. They are found in inexpensive breathalyzers, and their stability is not as good as that of fuel cell instruments. 
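Returning to the retrograde extrapolation example above, the sketch below projects a test result back to the time of driving using the standard elimination rate of 0.015 per hour. It assumes, as the example does, that absorption was already complete at the time of the stop; this is a simplified illustration, not a forensic tool.

```python
def retrograde_bac(test_result, hours_since_driving, elimination_rate=0.015):
    """Estimate alcohol concentration at the time of driving, assuming the
    subject was fully post-absorptive (alcohol only being eliminated, at a
    roughly constant rate, between driving and testing)."""
    return test_result + elimination_rate * hours_since_driving

# The worked example: tested at 0.080 at 10:00 pm, stopped at 9:00 pm.
print(f"{retrograde_bac(0.080, hours_since_driving=1.0):.3f}")  # 0.095
```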
See also Coronavirus breathalyzer References External links Alcohol law Vehicle safety technologies Brands that became generic Driving under the influence Law enforcement equipment Spectroscopy Harm reduction Drug testing
Breathalyzer
[ "Physics", "Chemistry" ]
7,825
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
1,591,163
https://en.wikipedia.org/wiki/Atmospheric%20wave
An atmospheric wave is a periodic disturbance in the fields of atmospheric variables (like surface pressure or geopotential height, temperature, or wind velocity) which may either propagate (traveling wave) or be stationary (standing wave). Atmospheric waves range in spatial and temporal scale from large-scale planetary waves (Rossby waves) to minute sound waves. Atmospheric waves with periods which are harmonics of 1 solar day (e.g. 24 hours, 12 hours, 8 hours... etc.) are known as atmospheric tides. Causes and effects The mechanism for the forcing of the wave, that is, the generation of the initial or prolonged disturbance in the atmospheric variables, can vary. Generally, waves are either excited by heating or by dynamic effects, for example the obstruction of the flow by mountain ranges like the Rocky Mountains in the U.S. or the Alps in Europe. Heating effects can be small-scale (like the generation of gravity waves by convection) or large-scale (the formation of Rossby waves by the temperature contrasts between continents and oceans in the Northern hemisphere winter). Atmospheric waves transport momentum, which is fed back into the background flow as the wave dissipates. This wave forcing of the flow is particularly important in the stratosphere, where the momentum deposition by planetary-scale Rossby waves gives rise to sudden stratospheric warmings and the deposition by gravity waves gives rise to the quasi-biennial oscillation. In the mathematical description of atmospheric waves, spherical harmonics are used. When considering a section of a wave along a latitude circle, this is equivalent to a sinusoidal shape. Spherical harmonics, representing individual Rossby-Haurwitz planetary wave modes, can have any orientation with respect to the axis of rotation of the planet. Remarkably, while the very existence of these planetary wave modes requires the rotation of the planet around its polar axis, the phase velocity of the individual wave modes does not depend on the relative orientation of the spherically harmonic wave mode with respect to the axis of the planet. This can be shown to be a consequence of the underlying (approximate) spherical symmetry of the planet, even though this symmetry is broken by the planet's rotation. Types of waves Because the propagation of the wave is fundamentally caused by an imbalance of the forces acting on the air (which is often thought of in terms of air parcels when considering wave motion), the types of waves and their propagation characteristics vary latitudinally, principally because the Coriolis effect on horizontal flow is maximal at the poles and zero at the equator. There are four different types of waves:
sound waves (usually eliminated from the atmospheric equations of motion due to their high frequency): these are longitudinal or compression waves, which propagate in the atmosphere through a series of compressions and expansions parallel to the direction of propagation
internal gravity waves (require stable stratification of the atmosphere)
inertio-gravity waves (also include a significant Coriolis effect, as opposed to "normal" gravity waves)
Rossby waves (can be seen in the troughs and ridges of 500 hPa geopotential caused by midlatitude cyclones and anticyclones)
At the equator, mixed Rossby-gravity and Kelvin waves can also be observed. See also Atmospheric thermodynamics References Further reading Holton, James R.: An Introduction to Dynamic Meteorology 2004 Wave Waves
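As a sketch of the spherical-harmonic description mentioned above (the notation here is assumed for illustration, not drawn from a particular text), a perturbation field such as geopotential height can be written as
$$\Phi'(\lambda,\phi,t) = \sum_{n}\sum_{m=-n}^{n} A_n^m \, Y_n^m(\lambda,\phi)\, e^{-i\sigma t},$$
where $\lambda$ is longitude, $\phi$ is latitude, $m$ is the zonal wavenumber, and $\sigma$ is the wave frequency. Along a fixed latitude circle $\phi = \phi_0$, a single mode with zonal wavenumber $m$ reduces to $\Phi' \propto \cos(m\lambda - \sigma t)$, which is the sinusoidal shape described above.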
Atmospheric wave
[ "Physics", "Chemistry" ]
696
[ "Physical phenomena", "Atmospheric dynamics", "Waves", "Motion (physics)", "Fluid dynamics" ]
1,591,812
https://en.wikipedia.org/wiki/Thapsigargin
Thapsigargin is a non-competitive inhibitor of the sarco/endoplasmic reticulum Ca2+ ATPase (SERCA). Structurally, thapsigargin is classified as a guaianolide, and is extracted from a plant, Thapsia garganica. It is a tumor promoter in mammalian cells. Thapsigargin raises cytosolic (intracellular) calcium concentration by blocking the ability of the cell to pump calcium into the sarcoplasmic and endoplasmic reticula. Store-depletion can secondarily activate plasma membrane calcium channels, allowing an influx of calcium into the cytosol. Depletion of ER calcium stores leads to ER stress and activation of the unfolded protein response. Non-resolved ER stress can cumulatively lead to cell death. Prolonged store depletion can protect against ferroptosis via remodeling of ER-synthesized phospholipids. Thapsigargin treatment and the resulting ER calcium depletion inhibit autophagy independent of the UPR. Thapsigargin is useful in experimentation examining the impacts of increasing cytosolic calcium concentrations and ER calcium depletion. A study from the University of Nottingham showed promising results for its use against Covid-19 and other coronaviruses. Biosynthesis The complete biosynthesis of thapsigargin has yet to be elucidated. A proposed biosynthesis starts with farnesyl pyrophosphate. The first step is controlled by the enzyme germacrene B synthase. In the second step, the C(8) position is easily activated for an allylic oxidation due to the position of the double bond. The next step is the addition of the acyloxy moiety by a P450 acetyltransferase, a well-known reaction in the synthesis of the diterpene taxol. In the third step, the lactone ring is formed by a cytochrome P450 enzyme using NADP+. With the butyloxy group on the C(8), the formation will only generate the 6,12-lactone ring. The fourth step is an epoxidation that initiates the last step of the base guaianolide formation. In the fifth step, a P450 enzyme closes the 5 + 7 guaianolide structure. The ring closing is important, because it will proceed via 1,10-epoxidation in order to retain the 4,5-double bond needed in thapsigargin. It is not known whether the secondary modifications to the guaianolide occur before or after the formation of thapsigargin, but this will need to be considered when elucidating the true biosynthesis. Several of these enzymes are P450s, so oxygen and NADPH are likely crucial to this biosynthesis, and other cofactors such as Mg2+ and Mn2+ may also be needed. Research Since inhibition of SERCA is a mechanism of action that has been used to target solid tumors, thapsigargin has attracted research interest. A prodrug of thapsigargin, mipsagargin, is currently undergoing clinical trials for the treatment of glioblastoma. The biological activity has also attracted research into the laboratory synthesis of thapsigargin. To date, three distinct syntheses have been reported: one by Steven V. Ley, one by Phil Baran, and one by P. Andrew Evans. Preclinical studies demonstrated that other effects of thapsigargin include suppression of nicotinic acetylcholine receptor activity in neurons of the guinea-pig ileum submucous plexus and rat superior cervical ganglion. Laboratory studies at the University of Nottingham, using in vitro cell cultures, indicate possible potential as a broad spectrum antiviral, with activity against the COVID-19 virus (SARS-CoV-2), a common cold virus, respiratory syncytial virus (RSV), and the influenza A virus. 
See also EBC-46 References Further reading Hydrolase inhibitors Sesquiterpene lactones Acetate esters Butyrate esters Azulenofurans Tertiary alcohols Cyclopentenes ATPase inhibitors Plant toxins
Thapsigargin
[ "Chemistry" ]
916
[ "Chemical ecology", "Plant toxins" ]
1,592,325
https://en.wikipedia.org/wiki/Glutamate%20decarboxylase
Glutamate decarboxylase or glutamic acid decarboxylase (GAD) is an enzyme that catalyzes the decarboxylation of glutamate to gamma-aminobutyric acid (GABA) and carbon dioxide (CO2). GAD uses pyridoxal-phosphate (PLP) as a cofactor. The reaction proceeds as follows:
L-glutamate → GABA + CO2
In mammals, GAD exists in two isoforms with molecular weights of 67 and 65 kDa (GAD67 and GAD65), which are encoded by two different genes on different chromosomes (GAD1 and GAD2 genes, chromosomes 2 and 10 in humans, respectively). GAD67 and GAD65 are expressed in the brain where GABA is used as a neurotransmitter, and they are also expressed in the insulin-producing β-cells of the pancreas, in varying ratios depending upon the species. Together, these two enzymes maintain the major physiological supply of GABA in mammals, though it may also be synthesized from putrescine in the enteric nervous system, brain, and elsewhere by the actions of diamine oxidase and aldehyde dehydrogenase 1a1. Several truncated transcripts and polypeptides of GAD67 are detectable in the developing brain; however, their function, if any, is unknown. Structure and mechanism Both isoforms of GAD are homodimeric structures, consisting of three primary domains: the PLP, C-terminal and N-terminal domains. The PLP-binding domain of this enzyme adopts a type I PLP-dependent transferase-like fold. The reaction proceeds via the canonical mechanism, involving Schiff base linkage between PLP and Lys405. PLP is held in place through base-stacking with an adjacent histidine residue, and GABA is positioned such that its carboxyl group forms a salt bridge with arginine and a hydrogen bond with glutamine. Dimerization is essential to maintaining function as the active site is found at this interface, and mutations interfering with optimal association between the two chains have been linked to pathology, such as schizophrenia. Interference with dimerization by GAD inhibitors such as 2-keto-4-pentenoic acid (KPA) and ethyl ketopentenoate (EKP) was also shown to lead to dramatic reductions in GABA production and the incidence of seizures. Catalytic activity is mediated by a short flexible loop at the dimer interface (residues 432–442 in GAD67, and 423–433 in GAD65). In GAD67 this loop remains tethered, covering the active site and providing a catalytic environment to sustain GABA production; its mobility in GAD65 promotes a side reaction that results in release of PLP, leading to autoinactivation. The conformation of this loop is intimately linked to the C-terminal domain, which also affects the rate of autoinactivation. Moreover, GABA-bound GAD65 is intrinsically more flexible and exists as an ensemble of states, thus providing more opportunities for autoantigenicity as seen in Type 1 diabetes. GAD derived from Escherichia coli shows additional structural intricacies, including a pH-dependent conformational change. This behavior is defined by the presence of a triple helical bundle formed by the N-termini of the hexameric protein in acidic environments. Regulation of GAD65 and GAD67 Despite an extensive sequence similarity between the two genes, GAD65 and GAD67 fulfill very different roles within the human body. Additionally, research suggests that GAD65 and GAD67 are regulated by distinctly different cellular mechanisms. GAD65 and GAD67 synthesize GABA at different locations in the cell, at different developmental times, and for functionally different purposes. GAD67 is spread evenly throughout the cell while GAD65 is localized to nerve terminals. 
GAD67 synthesizes GABA for neuron activity unrelated to neurotransmission, such as synaptogenesis and protection from neural injury. This function requires the widespread, ubiquitous presence of GABA. GAD65, however, synthesizes GABA for neurotransmission, and therefore is only necessary at nerve terminals and synapses. In order to aid in neurotransmission, GAD65 forms a complex with heat shock cognate 70 (HSC70), cysteine string protein (CSP) and the vesicular GABA transporter VGAT, which helps package GABA into vesicles for release during neurotransmission. GAD67 is transcribed during early development, while GAD65 is not transcribed until later in life. This developmental difference in GAD67 and GAD65 reflects the functional properties of each isoform; GAD67 is needed throughout development for normal cellular functioning, while GAD65 is not needed until slightly later in development when synaptic inhibition is more prevalent. GAD67 and GAD65 are also regulated differently post-translationally. Both GAD65 and GAD67 are regulated via phosphorylation of a dynamic catalytic loop, but the regulation of these isoforms differs: GAD65 is activated by phosphorylation while GAD67 is inhibited by phosphorylation. GAD67 is predominantly found activated (~92%), whereas GAD65 is predominantly found inactivated (~72%). GAD67 is phosphorylated at threonine 91 by protein kinase A (PKA), while GAD65 is phosphorylated, and therefore regulated, by protein kinase C (PKC). Both GAD67 and GAD65 are also regulated post-translationally by pyridoxal 5'-phosphate (PLP); GAD is activated when bound to PLP and inactive when not bound to PLP. The majority of GAD67 is bound to PLP at any given time, whereas GAD65 binds PLP when GABA is needed for neurotransmission. This reflects the functional properties of the two isoforms: GAD67 must be active at all times for normal cellular functioning, and is therefore constantly activated by PLP, while GAD65 must only be activated when GABA neurotransmission occurs, and is therefore regulated according to the synaptic environment. Studies with mice also show functional differences between GAD67 and GAD65. GAD67−/− mice are born with cleft palate and die within a day after birth, while GAD65−/− mice survive with a slightly increased tendency toward seizures. Additionally, GAD65+/− mice show symptoms similar to those of attention deficit hyperactivity disorder (ADHD) in humans. Role in the nervous system Both GAD67 and GAD65 are present in all types of synapses within the human nervous system. This includes dendrodendritic, axosomatic, and axodendritic synapses. Preliminary evidence suggests that GAD65 is dominant in the visual and neuroendocrine systems, which undergo more phasic changes. It is also believed that GAD67 is present at higher amounts in tonically active neurons. Role in pathology Autism Both GAD65 and GAD67 experience significant downregulation in cases of autism. In a comparison of autistic versus control brains, GAD65 and GAD67 experienced a downregulation average of 50% in parietal and cerebellar cortices of autistic brains. Cerebellar Purkinje cells also showed a 40% downregulation, suggesting that affected cerebellar nuclei may disrupt output to higher order motor and cognitive areas of the brain. Diabetes Both GAD67 and GAD65 are targets of autoantibodies in people who later develop type 1 diabetes mellitus or latent autoimmune diabetes. Injections with GAD65 in ways that induce immune tolerance have been shown to prevent type 1 diabetes in rodent models. 
In clinical trials, injections with GAD65 have been shown to preserve some insulin production for 30 months in humans with type 1 diabetes. A Cochrane systematic review also examined one study showing improvement of C-peptide levels in cases of latent autoimmune diabetes in adults, 5 years following treatment with GAD65. However, the studies available for inclusion in this review had considerable flaws in quality and design. Stiff person syndrome High titers of autoantibodies to glutamic acid decarboxylase (GAD) are well documented in association with stiff person syndrome (SPS). Glutamic acid decarboxylase is the rate-limiting enzyme in the synthesis of γ-aminobutyric acid (GABA), and impaired function of GABAergic neurons has been implicated in the pathogenesis of SPS. Autoantibodies to GAD might be the causative agent or a disease marker. Schizophrenia and bipolar disorder Substantial dysregulation of GAD mRNA expression, coupled with downregulation of reelin, is observed in schizophrenia and bipolar disorder. The most pronounced downregulation of GAD67 was found in the hippocampal stratum oriens layer in both disorders, and to varying degrees in other layers and structures of the hippocampus. GAD67 is a key enzyme involved in the synthesis of the inhibitory neurotransmitter GABA, and people with schizophrenia have been shown to express lower amounts of GAD67 in the dorsolateral prefrontal cortex compared to healthy controls. The mechanism underlying the decreased levels of GAD67 in people with schizophrenia remains unclear. Some have proposed that an immediate early gene, Zif268, which normally binds to the promoter region of GAD67 and increases transcription of GAD67, is lower in schizophrenic patients, thus contributing to decreased levels of GAD67. Since the dorsolateral prefrontal cortex (DLPFC) is involved in working memory, and GAD67 and Zif268 mRNA levels are lower in the DLPFC of schizophrenic patients, this molecular alteration may account, at least in part, for the working memory impairments associated with the disease. Parkinson disease The bilateral delivery of glutamic acid decarboxylase (GAD) by an adeno-associated viral vector into the subthalamic nucleus of patients between 30 and 75 years of age with advanced, progressive, levodopa-responsive Parkinson disease resulted in significant improvement over baseline during the course of a six-month study. Cerebellar disorders Intracerebellar administration of GAD autoantibodies to animals increases the excitability of motoneurons and impairs the production of nitric oxide (NO), a molecule involved in learning. Epitope recognition contributes to cerebellar involvement. Reduced GABA levels increase glutamate levels as a consequence of lower inhibition of subtypes of GABA receptors. Higher glutamate levels activate microglia, and activation of xc(−) increases extracellular glutamate release. Neuropathic pain Peripheral nerve injury of the sciatic nerve (a neuropathic pain model) induces a transient loss of GAD65 immunoreactive terminals in the spinal cord dorsal horn, suggesting a potential involvement of these alterations in the development and amelioration of pain behaviour. 
Other anti-GAD-associated neurologic disorders Antibodies directed against glutamic acid decarboxylase (GAD) are increasingly found in patients with other symptoms indicative of central nervous system (CNS) dysfunction, such as ataxia, progressive encephalomyelitis with rigidity and myoclonus (PERM), limbic encephalitis, and epilepsy. The pattern of anti-GAD antibodies in epilepsy differs from that in type 1 diabetes and stiff-person syndrome. Role of glutamate decarboxylase in other organisms Besides the synthesis of GABA, GAD has additional functions and structural variations that are organism-dependent. In Saccharomyces cerevisiae, GAD binds the Ca2+ regulatory protein calmodulin (CaM) and is also involved in responding to oxidative stress. Similarly, GAD in plants binds calmodulin as well. This interaction occurs at the 30-50-residue CaM-binding domain (CaMBD) in its C terminus and is necessary for proper regulation of GABA production. Unlike in vertebrates and invertebrates, the GABA produced by GAD is used in plants to signal abiotic stress by controlling levels of intracellular Ca2+ via CaM. Binding to CaM opens Ca2+ channels and leads to an increase in Ca2+ concentrations in the cytosol, allowing Ca2+ to act as a secondary messenger and activate downstream pathways. When GAD is not bound to CaM, the CaMBD acts as an autoinhibitory domain, thus deactivating GAD in the absence of stress. Interestingly, in two plant species, rice and apple, Ca2+/CaM-independent GAD isoforms have been discovered. The C-terminus of these isoforms contains substitutions at key residues necessary to interact with CaM in the CaMBD, preventing CaM from binding to GAD. Whereas the CaMBD of the rice isoform still functions as an autoinhibitory domain, the C-terminus of the apple isoform does not. Finally, plant GAD is a hexamer and has pH-dependent activity, with an optimal pH of 5.8 in multiple species but also significant activity at pH 7.3 in the presence of CaM. Control of glutamate decarboxylase may also offer a way to improve the post-harvest quality of citrus produce. In Citrus plants, research has shown that glutamate decarboxylase plays a key role in citrate metabolism. Increasing glutamate decarboxylase via direct exposure significantly increased citrate levels within the plants, significantly improved post-harvest quality maintenance, and decreased rot rates. Just like GAD in plants, GAD in E. coli has a hexamer structure and is more active under acidic pH; the pH optimum for E. coli GAD is 3.8-4.6. However, unlike in plants and yeast, GAD in E. coli does not require calmodulin binding to function. There are also two isoforms of GAD, namely GadA and GadB, encoded by separate genes in E. coli, although both isoforms are biochemically identical. The enzyme plays a major role in conferring acid resistance, allowing bacteria to temporarily survive in highly acidic environments (pH < 2.5) like the stomach. This is done by GAD decarboxylating glutamate to GABA, a reaction that consumes a proton (H+) and thereby raises the pH inside the bacteria. GABA can then be exported out of E. coli cells and contribute to increasing the pH of the nearby extracellular environment. References External links Genetics, Expression Profiling Support GABA Deficits in Schizophrenia - Schizophrenia Research Forum, 25 June 2007. EC 4.1.1 Molecular neuroscience Biology of bipolar disorder GABA Glutamate (neurotransmitter)
Glutamate decarboxylase
[ "Chemistry" ]
3,272
[ "Molecular neuroscience", "Molecular biology" ]
1,592,686
https://en.wikipedia.org/wiki/Glycogen%20phosphorylase
Glycogen phosphorylase is one of the phosphorylase enzymes (EC 2.4.1.1). Glycogen phosphorylase catalyzes the rate-limiting step in glycogenolysis in animals by releasing glucose-1-phosphate from the terminal alpha-1,4-glycosidic bond. Glycogen phosphorylase is also studied as a model protein regulated by both reversible phosphorylation and allosteric effects. Mechanism Glycogen phosphorylase breaks up glycogen into glucose subunits: (α-1,4 glycogen chain)n + Pi ⇌ (α-1,4 glycogen chain)n-1 + α-D-glucose-1-phosphate. Glycogen is left with one fewer glucose molecule, and the free glucose molecule is in the form of glucose-1-phosphate. In order to be used for metabolism, it must be converted to glucose-6-phosphate by the enzyme phosphoglucomutase. Although the reaction is reversible in vitro, within the cell the enzyme only works in the forward direction as shown above because the concentration of inorganic phosphate is much higher than that of glucose-1-phosphate. Glycogen phosphorylase can act only on linear chains of glycogen (α1-4 glycosidic linkage). Its work will immediately come to a halt four residues away from an α1-6 branch (such branches are exceedingly common in glycogen). In these situations, a debranching enzyme is necessary; it will straighten out the chain in that area. In addition, the enzyme transferase shifts a block of 3 glucosyl residues from the outer branch to the other end, and then an α1-6 glucosidase enzyme is required to break the remaining (single glucose) α1-6 residue that remains in the new linear chain. After all this is done, glycogen phosphorylase can continue. The enzyme is specific to α1-4 chains, as the molecule contains a 30-angstrom-long crevice with the same radius as the helix formed by the glycogen chain; this accommodates 4-5 glucosyl residues, but is too narrow for branches. This crevice connects the glycogen storage site to the active, catalytic site. Glycogen phosphorylase has a pyridoxal phosphate (PLP, derived from Vitamin B6) at each catalytic site. Pyridoxal phosphate links with basic residues (in this case Lys680) and covalently forms a Schiff base. Once the Schiff base linkage is formed, holding the PLP molecule in the active site, the phosphate group on the PLP readily donates a proton to an inorganic phosphate molecule, allowing the inorganic phosphate to in turn be deprotonated by the oxygen forming the α-1,4 glycosidic linkage. PLP is readily deprotonated because its negative charge is not only stabilized within the phosphate group, but also in the pyridine ring; thus the conjugate base resulting from the deprotonation of PLP is quite stable. The protonated oxygen now represents a good leaving group, and the glycogen chain is separated from the terminal glucose in an SN1 fashion, resulting in the formation of a glucose molecule with a secondary carbocation at the 1 position. Finally, the deprotonated inorganic phosphate acts as a nucleophile and bonds with the carbocation, resulting in the formation of glucose-1-phosphate and a glycogen chain shortened by one glucose molecule. There is also an alternative proposed mechanism involving a positively charged oxygen in a half-chair conformation. Structure The glycogen phosphorylase monomer is a large protein, composed of 842 amino acids with a mass of 97.434 kDa in muscle cells. While the enzyme can exist as an inactive monomer or tetramer, it is biologically active as a dimer of two identical subunits. 
In mammals, the major isozymes of glycogen phosphorylase are found in muscle, liver, and brain. The brain type is predominant in adult brain and embryonic tissues, whereas the liver and muscle types are predominant in adult liver and skeletal muscle, respectively. The glycogen phosphorylase dimer has many regions of biological significance, including catalytic sites, glycogen binding sites, allosteric sites, and a reversibly phosphorylated serine residue. First, the catalytic sites are relatively buried, 15 Å from the surface of the protein and from the subunit interface. This lack of easy access of the catalytic site to the surface is significant in that it makes the protein activity highly susceptible to regulation, as small allosteric effects can greatly increase the relative access of glycogen to the site. Perhaps the most important regulatory site is Ser14, the site of reversible phosphorylation very close to the subunit interface. The structural change associated with phosphorylation, and with the conversion of phosphorylase b to phosphorylase a, is the arrangement of the originally disordered residues 10 to 22 into α helices. This change increases phosphorylase activity up to 25% even in the absence of AMP, and enhances AMP activation further. The allosteric site of AMP binding on muscle isoforms of glycogen phosphorylase is close to the subunit interface, just like Ser14. Binding of AMP at this site, corresponding to a change from the T state of the enzyme to the R state, results in small changes in tertiary structure at the subunit interface leading to large changes in quaternary structure. AMP binding rotates the tower helices (residues 262-278) of the two subunits 50˚ relative to one another through greater organization and intersubunit interactions. This rotation of the tower helices leads to a rotation of the two subunits by 10˚ relative to one another, and more importantly disorders residues 282-286 (the 280s loop), which block access to the catalytic site in the T state but do not in the R state. The final, perhaps most curious, site on the glycogen phosphorylase protein is the so-called glycogen storage site. Residues 397-437 form this structure, which allows the protein to covalently bind to the glycogen chain a full 30 Å from the catalytic site. This site is most likely the site at which the enzyme binds to glycogen granules before initiating cleavage of terminal glucose molecules. In fact, 70% of dimeric phosphorylase in the cell exists as bound to glycogen granules rather than free floating. Clinical significance The inhibition of glycogen phosphorylase has been proposed as one method for treating type 2 diabetes. Since glucose production in the liver has been shown to increase in type 2 diabetes patients, inhibiting the release of glucose from the liver's glycogen supplies appears to be a valid approach. The cloning of the human liver glycogen phosphorylase (HLGP) revealed a new allosteric binding site near the subunit interface that is not present in the rabbit muscle glycogen phosphorylase (RMGP) normally used in studies. This site was not sensitive to the same inhibitors as those at the AMP allosteric site, and most success has been had synthesizing new inhibitors that mimic the structure of glucose, since glucose-6-phosphate is a known inhibitor of HLGP and stabilizes the less active T-state. These glucose derivatives have had some success in inhibiting HLGP, with predicted Ki values as low as 0.016 mM. 
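To put the quoted Ki figure in context, the sketch below evaluates the standard competitive-inhibition form of the Michaelis–Menten equation. The Ki of 0.016 mM is taken from the text above; the Km, Vmax, and concentrations are hypothetical placeholders, since the study's actual kinetic parameters are not given here.

```python
def rate_competitive(s, vmax, km, inhibitor, ki):
    """Michaelis-Menten rate in the presence of a competitive inhibitor:
    v = Vmax*[S] / (Km*(1 + [I]/Ki) + [S])."""
    return vmax * s / (km * (1.0 + inhibitor / ki) + s)

# Hypothetical parameters (concentrations in mM, rate in arbitrary units).
vmax, km, ki = 1.0, 1.0, 0.016
s = 1.0  # placeholder substrate concentration
v0 = rate_competitive(s, vmax, km, 0.0, ki)
for i_conc in (0.0, 0.016, 0.16):
    v = rate_competitive(s, vmax, km, i_conc, ki)
    print(f"[I] = {i_conc} mM -> v = {v:.3f} ({v / v0:.0%} of uninhibited)")
```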
Mutations in the muscle isoform of glycogen phosphorylase (PYGM) are associated with glycogen storage disease type V (GSD V, McArdle's disease). More than 65 mutations in the PYGM gene that lead to McArdle disease have been identified to date. Symptoms of McArdle disease include muscle weakness, myalgia, and lack of endurance, all stemming from low glucose levels in muscle tissue. Mutations in the liver isoform of glycogen phosphorylase (PYGL) are associated with Hers disease (glycogen storage disease type VI). Hers disease is often associated with mild symptoms normally limited to hypoglycemia, and is sometimes difficult to diagnose due to residual enzyme activity. The brain isoform of glycogen phosphorylase (PYGB) has been proposed as a biomarker for gastric cancer. Regulation Glycogen phosphorylase is regulated through allosteric control and through phosphorylation. Phosphorylase a and phosphorylase b each exist in two forms: a T (tense) inactive state and an R (relaxed) state. Phosphorylase b is normally in the T state, inactive due to the physiological presence of ATP and glucose-6-phosphate, and phosphorylase a is normally in the R state (active). An isoenzyme of glycogen phosphorylase that is sensitive to glucose concentration exists in the liver, as the liver acts as a glucose exporter. In essence, liver phosphorylase is responsive to glucose, which causes a very responsive transition from the R to the T form, inactivating it; furthermore, liver phosphorylase is insensitive to AMP. Hormones such as epinephrine, insulin and glucagon regulate glycogen phosphorylase using second messenger amplification systems linked to G proteins. Glucagon acts through a G protein-coupled receptor (GPCR) coupled to Gs, which in turn activates adenylate cyclase to increase intracellular concentrations of cAMP. cAMP binds to and activates protein kinase A (PKA). PKA phosphorylates phosphorylase kinase, which in turn phosphorylates glycogen phosphorylase b at Ser14, converting it into the active glycogen phosphorylase a. In the liver, glucagon also activates another GPCR that triggers a different cascade, resulting in the activation of phospholipase C (PLC). PLC indirectly causes the release of calcium from the hepatocytes' endoplasmic reticulum into the cytosol. The released calcium binds to the calmodulin subunit and activates glycogen phosphorylase kinase. Glycogen phosphorylase kinase activates glycogen phosphorylase in the same manner mentioned previously. Glycogen phosphorylase b is not always inactive in muscle, as it can be activated allosterically by AMP. An increase in AMP concentration, which occurs during strenuous exercise, signals energy demand. AMP activates glycogen phosphorylase b by changing its conformation from the tense to the relaxed form. This relaxed form has enzymatic properties similar to those of the phosphorylated enzyme. An increase in ATP concentration opposes this activation by displacing AMP from the nucleotide binding site, indicating sufficient energy stores. Upon eating a meal, there is a release of insulin, signaling glucose availability in the blood. Insulin indirectly activates protein phosphatase 1 (PP1) and phosphodiesterase via a signal transduction cascade. PP1 dephosphorylates glycogen phosphorylase a, reforming the inactive glycogen phosphorylase b. The phosphodiesterase converts cAMP to AMP. Together, they decrease the concentration of cAMP and inhibit PKA.
As a result, PKA can no longer initiate the phosphorylation cascade that ends with formation of (active) glycogen phosphorylase a. Overall, insulin signaling decreases glycogenolysis to preserve glycogen stores in the cell and triggers glycogenesis. Historical significance Glycogen phosphorylase was the first allosteric enzyme to be discovered. It was isolated and its activity characterized in detail by Carl F. Cori, Gerhard Schmidt and Gerty T. Cori. Arda Green and Gerty Cori crystallized it for the first time in 1943 and showed that glycogen phosphorylase existed in either the a or b form depending on its phosphorylation state, as well as in the R or T state based on the presence of AMP. See also AMP deaminase deficiency (MADD) Glycogenolysis McArdle disease (GSD-V) Metabolic myopathies Purine nucleotide cycle § Pathology References Further reading External links GeneReviews/NCBI/NIH/UW entry on Glycogen Storage Disease Type VI - Hers disease Carbohydrate metabolism EC 2.4.1
Glycogen phosphorylase
[ "Chemistry" ]
2,803
[ "Carbohydrate metabolism", "Carbohydrate chemistry", "Metabolism" ]
1,592,806
https://en.wikipedia.org/wiki/Ion%20beam
An ion beam is a beam of ions, a type of charged particle beam. Ion beams have many uses in electronics manufacturing (principally ion implantation) and other industries. There are many ion beam sources, some derived from the mercury vapor thrusters developed by NASA in the 1960s. The most widely used ion beams are of singly-charged ions. Units Ion current density is typically measured in mA/cm2, and ion energy in electronvolts (eV). The use of eV is convenient for converting between voltage and energy, especially when dealing with singly charged ion beams. Broad-beam ion sources Most commercial applications use two popular types of ion source, gridded and gridless, which differ in current and power characteristics and the ability to control ion trajectories. In both cases electrons are needed to generate an ion beam. The most common types of electron emitter are hot filament and hollow cathode. Gridded ion source In a gridded ion source, a DC or RF discharge is used to generate ions, which are then accelerated and decelerated using grids and apertures. Here, the DC discharge current or the RF discharge power is used to control the beam current. The ion current density that can be accelerated using a gridded ion source is limited by the space charge effect, which is described by Child's law: $j_{max} = \frac{4\epsilon_0}{9}\sqrt{\frac{2q}{m}}\,\frac{V^{3/2}}{d^2}$, where $V$ is the voltage between the grids, $d$ is the distance between the grids, $m$ is the ion mass, $q$ is the ion charge, and $\epsilon_0$ is the vacuum permittivity. The grids are spaced as closely as possible to increase the current density. The ions used have a significant impact on the maximum ion beam current, since $j_{max}\propto 1/\sqrt{m}$. All else being equal, the maximum ion beam current with krypton is only 69% of the maximum ion current of an argon beam; with xenon the ratio drops to 55% (a numerical illustration of these ratios appears at the end of this entry). Gridless ion sources In a gridless ion source, ions are generated by a flow of electrons, without grids. The most common gridless ion source is the end-Hall ion source, with which the discharge current and the gas flow are used to control the beam current. Applications Material modification and analysis Ion beams can be used for material modification (e.g. by sputtering or ion beam etching) and for ion beam analysis. Ion beam etching, or sputtering, is a technique conceptually similar to sandblasting, but using individual atoms in an ion beam to ablate a target. Reactive ion etching is an important extension that uses chemical reactivity to enhance the physical sputtering effect. In a typical use in semiconductor manufacturing, a mask can selectively expose a layer of photoresist on a substrate made of a semiconductor material, such as a silicon dioxide or gallium arsenide wafer. The wafer is developed, and for a positive photoresist, the exposed portions are removed in a chemical process. The result is a pattern left on the surface areas of the wafer that had been masked from exposure. The wafer is then placed in a vacuum chamber, and exposed to the ion beam. The impact of the ions erodes the target, abrading away the areas not covered by the photoresist. Focused ion beam (FIB) instruments have numerous applications for characterization of thin-film devices. Using a focused, high-brightness ion beam in a scanned raster pattern, material is removed (sputtered) in precise rectilinear patterns, revealing a two-dimensional, or stratigraphic, profile of a solid material. The most common application is to verify the integrity of the gate oxide layer in a CMOS transistor. A single excavation site exposes a cross section for analysis using a scanning electron microscope.
Dual excavations on either side of a thin lamella bridge are utilized for preparing transmission electron microscope samples. Another common use of FIB instruments is for design verification and/or failure analysis of semiconductor devices. Design verification combines selective material removal with gas-assisted material deposition of conductive, dielectric, or insulating materials. Engineering prototype devices may be modified using the ion beam in combination with gas-assisted material deposition in order to rewire an integrated circuit's conductive pathways. The techniques are effectively used to verify the correlation between the CAD design and the actual functional prototype circuit, thereby avoiding the creation of a new mask for the purpose of testing design changes. Ion beams are also used for analytical purposes in materials science. For example, sputtering techniques can be used for surface analysis or depth profiling by performing secondary ion mass spectrometry. It is also possible to gain information from the spectroscopy of transmitted or backscattered primary ions; e.g., depth profiles can be obtained from Rutherford backscattering (RBS) spectra. In contrast to secondary ion mass spectrometry, scattering-based techniques like RBS are often less destructive to the sample. Biology In radiobiology, a broad or focused ion beam is used to study mechanisms of inter- and intracellular communication, signal transduction, and DNA damage and repair. Medicine Ion beams are also used in particle therapy, most often in the treatment of cancer. Space applications Ion beams produced by ion and plasma thrusters on board a spacecraft can be used to transmit a force to a nearby object (e.g. another spacecraft, an asteroid, etc.) that is irradiated by the beam. This propulsion technique, named Ion Beam Shepherd, has been shown to be effective in the areas of active space debris removal and asteroid deflection. High-energy ion beams High-energy ion beams produced by particle accelerators are used in atomic physics, nuclear physics and particle physics. As weapon Ion beams can theoretically be used to make a weapon, but this has not been demonstrated. Electron beam weapons were tested by the U.S. Navy in the early 20th century, but the hose instability effect prevents them from being accurate at a distance of over approximately 30 inches. See also Ion source Ion thruster Ion wind References External links Stopping parameters of ion beams in solids calculated by MELF-GOS model ISOLDE – Facility dedicated to the production of a large variety of radioactive ion beams located at CERN Plasma technology and applications Semiconductor device fabrication Semiconductor analysis Thin film deposition Ions Accelerator physics
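As a numerical check on the mass scaling noted in the gridded-ion-source section above, the following Python sketch evaluates the Child's-law current-density limit for argon, krypton, and xenon under identical grid voltage and spacing. The voltage and gap values are illustrative only; the mass ratios, not the absolute current densities, are the point:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
Q = 1.602176634e-19      # elementary charge, C (singly charged ions)
AMU = 1.66053906660e-27  # atomic mass unit, kg

def child_law_j(V, d, mass_amu):
    """Space-charge-limited current density (A/m^2) between grids."""
    m = mass_amu * AMU
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * Q / m) * V**1.5 / d**2

V = 1000.0  # V, illustrative grid voltage
d = 1.0e-3  # m, illustrative grid gap

j_ar = child_law_j(V, d, 39.95)   # argon
j_kr = child_law_j(V, d, 83.80)   # krypton
j_xe = child_law_j(V, d, 131.29)  # xenon

# j scales as 1/sqrt(m), reproducing the ~69% and ~55% ratios in the text.
print(f"j(Kr)/j(Ar) = {j_kr / j_ar:.2f}")  # ~0.69
print(f"j(Xe)/j(Ar) = {j_xe / j_ar:.2f}")  # ~0.55
```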
Ion beam
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,262
[ "Matter", "Applied and interdisciplinary physics", "Thin film deposition", "Plasma physics", "Plasma technology and applications", "Microtechnology", "Coatings", "Thin films", "Semiconductor device fabrication", "Experimental physics", "Planes (geometry)", "Accelerator physics", "Solid state...
1,593,924
https://en.wikipedia.org/wiki/Wess%E2%80%93Zumino%E2%80%93Witten%20model
In theoretical physics and mathematics, a Wess–Zumino–Witten (WZW) model, also called a Wess–Zumino–Novikov–Witten model, is a type of two-dimensional conformal field theory named after Julius Wess, Bruno Zumino, Sergei Novikov and Edward Witten. A WZW model is associated to a Lie group (or supergroup), and its symmetry algebra is the affine Lie algebra built from the corresponding Lie algebra (or Lie superalgebra). By extension, the name WZW model is sometimes used for any conformal field theory whose symmetry algebra is an affine Lie algebra. Action Definition For $\Sigma$ a Riemann surface, $G$ a Lie group, and $k$ a (generally complex) number, let us define the $G$-WZW model on $\Sigma$ at the level $k$. The model is a nonlinear sigma model whose action is a functional of a field $\gamma:\Sigma\to G$: $S_k(\gamma) = -\frac{k}{8\pi}\int_\Sigma d^2x\,\mathcal{K}\left(\gamma^{-1}\partial^\mu\gamma,\,\gamma^{-1}\partial_\mu\gamma\right) + 2\pi k\,S^{\mathrm{WZ}}(\gamma).$ Here, $\Sigma$ is equipped with a flat Euclidean metric, $\partial_\mu$ is the partial derivative, and $\mathcal{K}$ is the Killing form on the Lie algebra $\mathfrak{g}$ of $G$. The Wess–Zumino term of the action is $S^{\mathrm{WZ}}(\gamma) = -\frac{1}{48\pi^2}\int_{B^3} d^3y\,\epsilon^{ijk}\,\mathcal{K}\left(\gamma^{-1}\frac{\partial\gamma}{\partial y^i},\left[\gamma^{-1}\frac{\partial\gamma}{\partial y^j},\gamma^{-1}\frac{\partial\gamma}{\partial y^k}\right]\right).$ Here $\epsilon^{ijk}$ is the completely anti-symmetric tensor, and $[\cdot,\cdot]$ is the Lie bracket. The Wess–Zumino term is an integral over a three-dimensional manifold $B^3$ whose boundary is $\partial B^3 = \Sigma$. Topological properties of the Wess–Zumino term For the Wess–Zumino term to make sense, we need the field $\gamma$ to have an extension to $B^3$. This requires the homotopy group $\pi_2(G)$ to be trivial, which is the case in particular for any compact Lie group $G$. The extension of a given $\gamma:\Sigma\to G$ to $B^3$ is in general not unique. For the WZW model to be well-defined, $e^{iS_k(\gamma)}$ should not depend on the choice of the extension. The Wess–Zumino term is invariant under small deformations of $\gamma$, and only depends on its homotopy class. Possible homotopy classes are controlled by the homotopy group $\pi_3(G)$. For any compact, connected simple Lie group $G$, we have $\pi_3(G)\cong\mathbb{Z}$, and different extensions of $\gamma$ lead to values of $S^{\mathrm{WZ}}(\gamma)$ that differ by integers. Therefore, they lead to the same value of $e^{iS_k(\gamma)}$ provided the level obeys $k\in\mathbb{Z}$. Integer values of the level also play an important role in the representation theory of the model's symmetry algebra, which is an affine Lie algebra. If the level is a positive integer, the affine Lie algebra has unitary highest weight representations with highest weights that are dominant integral. Such representations decompose into finite-dimensional subrepresentations with respect to the subalgebras spanned by each simple root, the corresponding negative root and their commutator, which is a Cartan generator. In the case of the noncompact simple Lie group $SL(2,\mathbb{R})$, the homotopy group $\pi_3(SL(2,\mathbb{R}))$ is trivial, and the level is not constrained to be an integer. Geometrical interpretation of the Wess–Zumino term If $e_a$ are the basis vectors for the Lie algebra, then $f_{abc} = \mathcal{K}(e_a,[e_b,e_c])$ are the structure constants of the Lie algebra. The structure constants are completely anti-symmetric, and thus they define a 3-form on the group manifold of $G$. Thus, the integrand above is just the pullback of the harmonic 3-form to the ball $B^3$. Denoting the harmonic 3-form by $c$ and the pullback by $\gamma^*$, one then has $S^{\mathrm{WZ}}(\gamma)\propto\int_{B^3}\gamma^*c$. This form leads directly to a topological analysis of the WZ term. Geometrically, this term describes the torsion of the respective manifold. The presence of this torsion compels teleparallelism of the manifold, and thus trivialization of the torsionful curvature tensor; and hence arrest of the renormalization flow, an infrared fixed point of the renormalization group, a phenomenon termed geometrostasis. Symmetry algebra Generalised group symmetry The Wess–Zumino–Witten model is not only symmetric under global transformations by a group element in $G$, but also has a much richer symmetry. This symmetry is often called the $G(z)\times G(\bar z)$ symmetry.
Namely, given any holomorphic $G$-valued function $\Omega(z)$, and any other (completely independent of $\Omega(z)$) antiholomorphic $G$-valued function $\bar\Omega(\bar z)$, where we have identified $z = x+iy$ and $\bar z = x-iy$ in terms of the Euclidean space coordinates $x,y$, the following symmetry holds: $S_k(\gamma) = S_k\left(\Omega\,\gamma\,\bar\Omega^{-1}\right).$ One way to prove the existence of this symmetry is through repeated application of the Polyakov–Wiegmann identity regarding products of $G$-valued fields: $S_k(\alpha\beta) = S_k(\alpha) + S_k(\beta) + \frac{k}{4\pi}\int d^2x\,\mathcal{K}\left(\alpha^{-1}\partial_{\bar z}\alpha,\,(\partial_z\beta)\beta^{-1}\right)$ (up to convention-dependent signs and normalizations). The holomorphic and anti-holomorphic currents $J(z)\propto(\partial_z\gamma)\gamma^{-1}$ and $\bar J(\bar z)\propto\gamma^{-1}\partial_{\bar z}\gamma$ are the conserved currents associated with this symmetry. The singular behaviour of the products of these currents with other quantum fields determine how those fields transform under infinitesimal actions of the group. Affine Lie algebra Let $z$ be a local complex coordinate on $\Sigma$, $\{t^a\}$ an orthonormal basis (with respect to the Killing form) of the Lie algebra of $G$, and $J^a(z)$ the quantization of the corresponding component of the current $J(z)$. We have the following operator product expansion: $J^a(y)\,J^b(z) = \frac{k\,\delta^{ab}}{(y-z)^2} + \frac{i f^{ab}_c\,J^c(z)}{y-z} + O(1),$ where $f^{ab}_c$ are the coefficients such that $[t^a,t^b] = i f^{ab}_c\,t^c$. Equivalently, if $J^a(z)$ is expanded in modes $J^a(z) = \sum_{n\in\mathbb{Z}} J^a_n\,z^{-n-1}$, then the current algebra generated by $\{J^a_n\}$ is the affine Lie algebra associated to the Lie algebra of $G$, with a level that coincides with the level of the WZW model. If $\mathfrak{g}$ denotes the Lie algebra of $G$, the notation for the affine Lie algebra is $\hat{\mathfrak{g}}$. The commutation relations of the affine Lie algebra are $[J^a_m, J^b_n] = i f^{ab}_c\,J^c_{m+n} + mk\,\delta^{ab}\,\delta_{m+n,0}.$ This affine Lie algebra is the chiral symmetry algebra associated to the left-moving currents $J^a(z)$. A second copy of the same affine Lie algebra is associated to the right-moving currents $\bar J^a(\bar z)$. The generators of that second copy are antiholomorphic. The full symmetry algebra of the WZW model is the product of the two copies of the affine Lie algebra. Sugawara construction The Sugawara construction is an embedding of the Virasoro algebra into the universal enveloping algebra of the affine Lie algebra. The existence of the embedding shows that WZW models are conformal field theories. Moreover, it leads to Knizhnik–Zamolodchikov equations for correlation functions. The Sugawara construction is most concisely written at the level of the currents: $J^a(z)$ for the affine Lie algebra, and the energy-momentum tensor for the Virasoro algebra: $T(z) = \frac{1}{2(k+h^\vee)}\sum_a :\!J^aJ^a\!:(z),$ where the $:\ :$ denotes normal ordering, and $h^\vee$ is the dual Coxeter number. By using the OPE of the currents and a version of Wick's theorem one may deduce that the OPE of $T(z)$ with itself is given by $T(y)\,T(z) = \frac{c/2}{(y-z)^4} + \frac{2T(z)}{(y-z)^2} + \frac{\partial T(z)}{y-z} + O(1),$ which is equivalent to the Virasoro algebra's commutation relations. The central charge of the Virasoro algebra is given in terms of the level $k$ of the affine Lie algebra by $c = \frac{k\,\dim(\mathfrak{g})}{k+h^\vee}.$ At the level of the generators of the affine Lie algebra, the Sugawara construction reads $L_n = \frac{1}{2(k+h^\vee)}\sum_{m\in\mathbb{Z}}\sum_a :\!J^a_m J^a_{n-m}\!:\,,$ where the generators $L_n$ of the Virasoro algebra are the modes of the energy-momentum tensor, $T(z) = \sum_{n\in\mathbb{Z}} L_n z^{-n-2}$. Spectrum WZW models with compact, simply connected groups If the Lie group $G$ is compact and simply connected, then the WZW model is rational and diagonal: rational because the spectrum is built from a (level-dependent) finite set of irreducible representations of the affine Lie algebra called the integrable highest weight representations, and diagonal because a representation of the left-moving algebra is coupled with the same representation of the right-moving algebra. For example, the spectrum of the $SU(2)$ WZW model at level $k\in\mathbb{N}$ is $\mathcal{S}_k = \bigoplus_{j=0,\frac12,1,\ldots,\frac{k}{2}} \mathcal{R}_j\otimes\bar{\mathcal{R}}_j,$ where $\mathcal{R}_j$ is the affine highest weight representation of spin $j$: a representation generated by a state $|v\rangle$ such that $J^a_{n>0}\,|v\rangle = J^+_0\,|v\rangle = 0,$ where $J^+$ is the current that corresponds to a raising generator of the Lie algebra of $SU(2)$. WZW models with other types of groups If the group $G$ is compact but not simply connected, the WZW model is rational but not necessarily diagonal.
For example, the $SO(3)$ WZW model exists for even integer levels $k\in 2\mathbb{Z}$, and its spectrum is a non-diagonal combination of finitely many integrable highest weight representations. If the group $G$ is not compact, the WZW model is non-rational. Moreover, its spectrum may include non highest weight representations. For example, the spectrum of the $SL(2,\mathbb{R})$ WZW model is built from highest weight representations, plus their images under the spectral flow automorphisms of the affine Lie algebra. If $G$ is a supergroup, the spectrum may involve representations that do not factorize as tensor products of representations of the left- and right-moving symmetry algebras. This occurs for example in the case of $GL(1|1)$, and also in more complicated supergroups. Non-factorizable representations are responsible for the fact that the corresponding WZW models are logarithmic conformal field theories. Other theories based on affine Lie algebras The known conformal field theories based on affine Lie algebras are not limited to WZW models. For example, in the case of the affine Lie algebra of the $SU(2)$ WZW model, modular invariant torus partition functions obey an ADE classification, where the $SU(2)$ WZW model accounts for the A series only. The D series corresponds to the $SO(3)$ WZW model, and the E series does not correspond to any WZW model. Another example is the $H_3^+$ model. This model is based on the same symmetry algebra as the $SL(2,\mathbb{R})$ WZW model, to which it is related by Wick rotation. However, the $H_3^+$ model is not strictly speaking a WZW model, as $H_3^+ = SL(2,\mathbb{C})/SU(2)$ is not a group, but a coset. Fields and correlation functions Fields Given a simple representation $\rho$ of the Lie algebra of $G$, an affine primary field $\Phi^\rho(z)$ is a field that takes values in the representation space of $\rho$, such that $J^a(y)\,\Phi^\rho(z) = -\frac{\rho(t^a)\Phi^\rho(z)}{y-z} + O(1).$ An affine primary field is also a primary field for the Virasoro algebra that results from the Sugawara construction. The conformal dimension of the affine primary field is given in terms of the quadratic Casimir $C_2(\rho)$ of the representation $\rho$ (i.e. the eigenvalue of the quadratic Casimir element $\mathcal{K}_{ab}\,t^at^b$, where $\mathcal{K}_{ab}$ is the inverse of the matrix $\mathcal{K}(t^a,t^b)$ of the Killing form) by $\Delta_\rho = \frac{C_2(\rho)}{2(k+h^\vee)}.$ For example, in the $SU(2)$ WZW model, the conformal dimension of a primary field of spin $j$ is $\Delta_j = \frac{j(j+1)}{k+2}.$ By the state-field correspondence, affine primary fields correspond to affine primary states, which are the highest weight states of highest weight representations of the affine Lie algebra. Correlation functions If the group $G$ is compact, the spectrum of the WZW model is made of highest weight representations, and all correlation functions can be deduced from correlation functions of affine primary fields via Ward identities. If the Riemann surface $\Sigma$ is the Riemann sphere, correlation functions of affine primary fields obey Knizhnik–Zamolodchikov equations. On Riemann surfaces of higher genus, correlation functions obey Knizhnik–Zamolodchikov–Bernard equations, which involve derivatives not only of the fields' positions, but also of the surface's moduli. Gauged WZW models Given a Lie subgroup $H\subset G$, the $G/H$ gauged WZW model (or coset model) is a nonlinear sigma model whose target space is the quotient $G/H$ for the adjoint action of $H$ on $G$. This gauged WZW model is a conformal field theory, whose symmetry algebra is a quotient of the two affine Lie algebras of the $G$ and $H$ WZW models, and whose central charge is the difference of their central charges. Applications The WZW model whose Lie group is the universal cover of the group $SL(2,\mathbb{R})$ has been used by Juan Maldacena and Hirosi Ooguri to describe bosonic string theory on the three-dimensional anti-de Sitter space $AdS_3$.
Superstrings on $AdS_3\times S^3$ are described by the WZW model on the supergroup $PSU(1,1|2)$, or a deformation thereof if Ramond-Ramond flux is turned on. WZW models and their deformations have been proposed for describing the plateau transition in the integer quantum Hall effect. The $SL(2,\mathbb{R})/U(1)$ gauged WZW model has an interpretation in string theory as Witten's two-dimensional Euclidean black hole. The same model also describes certain two-dimensional statistical systems at criticality, such as the critical antiferromagnetic Potts model. References Conformal field theory Lie groups Exactly solvable models Mathematical physics
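As a numerical companion to the Sugawara formulas above, the following Python sketch evaluates the central charge $c = k\dim\mathfrak{g}/(k+h^\vee)$ and the $SU(2)$ conformal dimensions $\Delta_j = j(j+1)/(k+2)$. It assumes the standard values $\dim\mathfrak{su}(N) = N^2-1$ and $h^\vee = N$; the code is illustrative, not part of the model's literature:

```python
from fractions import Fraction

def su_n_central_charge(N, k):
    """Virasoro central charge of the SU(N) WZW model at level k."""
    dim_g = N * N - 1  # dimension of su(N)
    h_dual = N         # dual Coxeter number of su(N)
    return Fraction(k * dim_g, k + h_dual)

def su2_conformal_dimension(j_twice, k):
    """Conformal dimension of the spin-j SU(2) primary (j = j_twice/2)."""
    return Fraction(j_twice * (j_twice + 2), 4 * (k + 2))

k = 4
print(f"SU(2) at level {k}: c = {su_n_central_charge(2, k)}")  # 12/6 = 2
# Integrable spins run over j = 0, 1/2, ..., k/2.
for j_twice in range(0, k + 1):
    print(f"  j = {Fraction(j_twice, 2)}: Delta_j = "
          f"{su2_conformal_dimension(j_twice, k)}")
# Large-k check: c approaches dim su(2) = 3, one free boson per generator.
print(float(su_n_central_charge(2, 10**6)))
```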
Wess–Zumino–Witten model
[ "Physics", "Mathematics" ]
2,418
[ "Lie groups", "Mathematical structures", "Applied mathematics", "Theoretical physics", "Algebraic structures", "Mathematical physics" ]
13,537,626
https://en.wikipedia.org/wiki/Quantum%20biology
Quantum biology is the study of applications of quantum mechanics and theoretical chemistry to aspects of biology that cannot be accurately described by the classical laws of physics. An understanding of fundamental quantum interactions is important because they determine the properties of the next level of organization in biological systems. Many biological processes involve the conversion of energy into forms that are usable for chemical transformations, and are quantum mechanical in nature. Such processes involve chemical reactions, light absorption, formation of excited electronic states, transfer of excitation energy, and the transfer of electrons and protons (hydrogen ions) in chemical processes, such as photosynthesis, olfaction and cellular respiration. Moreover, quantum biology may use computations to model biological interactions in light of quantum mechanical effects. Quantum biology is concerned with the influence of non-trivial quantum phenomena, which can be explained by reducing the biological process to fundamental physics, although these effects are difficult to study and can be speculative. Currently, there exist four major life processes that have been identified as influenced by quantum effects: enzyme catalysis, sensory processes, energy transference, and information encoding. History Quantum biology is an emerging field, in the sense that most current research is theoretical and subject to questions that require further experimentation. Though the field has only recently received an influx of attention, it has been conceptualized by physicists throughout the 20th century. It has been suggested that quantum biology might play a critical role in the future of the medical world. Early pioneers of quantum physics saw applications of quantum mechanics in biological problems. Erwin Schrödinger's 1944 book What Is Life? discussed applications of quantum mechanics in biology. Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. He further suggested that mutations are introduced by "quantum leaps". Other pioneers Niels Bohr, Pascual Jordan, and Max Delbrück argued that the quantum idea of complementarity was fundamental to the life sciences. In 1963, Per-Olov Löwdin published proton tunneling as another mechanism for DNA mutation. In his paper, he stated that there is a new field of study called "quantum biology". In 1979, the Soviet and Ukrainian physicist Alexander Davydov published the first textbook on quantum biology entitled Biology and Quantum Mechanics. Enzyme catalysis Enzymes have been postulated to use quantum tunneling to transfer electrons in electron transport chains. It is possible that protein quaternary architectures may have adapted to enable sustained quantum entanglement and coherence, which are two of the limiting factors for quantum tunneling in biological entities. These architectures might account for a greater percentage of quantum energy transfer, which occurs through electron transport and proton tunneling (usually in the form of hydrogen ions, H+). Tunneling refers to the ability of a subatomic particle to travel through potential energy barriers. This ability is due, in part, to the principle of complementarity, which holds that certain substances have pairs of properties that cannot be measured separately without changing the outcome of measurement. 
Particles, such as electrons and protons, have wave-particle duality; they can pass through energy barriers due to their wave characteristics without violating the laws of physics. In order to quantify how quantum tunneling is used in many enzymatic activities, many biophysicists utilize the observation of hydrogen ions. When hydrogen ions are transferred, this is seen as a staple in an organelle's primary energy processing network; in other words, quantum effects are most usually at work in proton distribution sites at distances on the order of an angstrom (1 Å). In physics, a semiclassical (SC) approach is most useful in defining this process because of the transfer from quantum elements (e.g. particles) to macroscopic phenomena (e.g. biochemicals). Aside from hydrogen tunneling, studies also show that electron transfer between redox centers through quantum tunneling plays an important role in enzymatic activity of photosynthesis and cellular respiration (see also Mitochondria section below). Ferritin Ferritin is an iron storage protein that is found in plants and animals. It is usually formed from 24 subunits that self-assemble into a spherical shell that is approximately 2 nm thick, with an outer diameter that varies with iron loading up to about 16 nm. Up to ~4500 iron atoms can be stored inside the core of the shell in the Fe3+ oxidation state as water-insoluble compounds such as ferrihydrite and magnetite. Ferritin is able to store electrons for at least several hours, which reduce the Fe3+ to water soluble Fe2+. Electron tunneling as the mechanism by which electrons transit the 2 nm thick protein shell was proposed as early as 1988. Electron tunneling and other quantum mechanical properties of ferritin were observed in 1992, and electron tunneling at room temperature and ambient conditions was observed in 2005. Electron tunneling associated with ferritin is a quantum biological process, and ferritin is a quantum biological agent. Electron tunneling through ferritin between electrodes is independent of temperature, which indicates that it is substantially coherent and activation-less. The electron tunneling distance is a function of the size of the ferritin. Single electron tunneling events can occur over distances of up to 8 nm through the ferritin, and sequential electron tunneling can occur up to 12 nm through the ferritin. It has been proposed that the electron tunneling is magnon-assisted and associated with magnetite microdomains in the ferritin core. Early evidence of quantum mechanical properties exhibited by ferritin in vivo was reported in 2004, where increased magnetic ordering of ferritin structures in placental macrophages was observed using small angle neutron scattering (SANS). Quantum dot solids also show increased magnetic ordering in SANS testing, and can conduct electrons over long distances. Increased magnetic ordering of ferritin cores disposed in an ordered layer on a silicon substrate with SANS testing has also been observed. Ferritin structures like those in placental macrophages have been tested in solid state configurations and exhibit quantum dot solid-like properties of conducting electrons over distances of up to 80 microns through sequential tunneling and formation of Coulomb blockades. Electron transport through ferritin in placental macrophages may be associated with an anti-inflammatory function. 
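The strong mass dependence that separates electron from proton tunneling in the processes described above can be illustrated with the textbook transmission estimate through a rectangular barrier, $T \approx e^{-2\kappa L}$ with $\kappa = \sqrt{2m(V_0-E)}/\hbar$. The barrier height, energy, and width in this Python sketch are illustrative round numbers, not parameters of any particular enzyme or protein:

```python
import math

HBAR = 1.054571817e-34   # J*s
EV = 1.602176634e-19     # J per eV
M_E = 9.1093837015e-31   # electron mass, kg
M_P = 1.67262192369e-27  # proton mass, kg

def transmission(mass, barrier_eV, energy_eV, width_m):
    """Approximate WKB transmission through a rectangular barrier."""
    kappa = math.sqrt(2.0 * mass * (barrier_eV - energy_eV) * EV) / HBAR
    return math.exp(-2.0 * kappa * width_m)

V0, E = 1.0, 0.5  # eV, illustrative barrier height and particle energy
L = 1.0e-10       # m, barrier width of about 1 angstrom

print(f"electron: T ~ {transmission(M_E, V0, E, L):.3e}")  # ~0.5
print(f"proton:   T ~ {transmission(M_P, V0, E, L):.3e}")  # ~1e-14
# The proton's ~1836x larger mass makes kappa ~43x larger, so its
# transmission collapses over the same 1-angstrom barrier -- which is why
# proton tunneling only matters over very short, angstrom-scale distances.
```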
Conductive atomic force microscopy of substantia nigra pars compacta (SNc) tissue demonstrated evidence of electron tunneling between ferritin cores, in structures that correlate to layers of ferritin outside of neuromelanin organelles.  Evidence of ferritin layers in cell bodies of large dopamine neurons of the SNc and between those cell bodies in glial cells has also been found, and is hypothesized to be associated with neuron function. Overexpression of ferritin reduces the accumulation of reactive oxygen species (ROS), and may act as a catalyst by increasing the ability of electrons from antioxidants to neutralize ROS through electron tunneling. Ferritin has also been observed in ordered configurations in lysosomes associated with erythropoiesis, where it may be associated with red blood cell production. While direct evidence of tunneling associated with ferritin in vivo in live cells has not yet been obtained, it may be possible to do so using QDs tagged with anti-ferritin, which should emit photons if electrons stored in the ferritin core tunnel to the QD. Sensory processes Olfaction Olfaction, the sense of smell, can be broken down into two parts; the reception and detection of a chemical, and how that detection is sent to and processed by the brain. This process of detecting an odorant is still under question. One theory named the "shape theory of olfaction" suggests that certain olfactory receptors are triggered by certain shapes of chemicals and those receptors send a specific message to the brain. Another theory (based on quantum phenomena) suggests that the olfactory receptors detect the vibration of the molecules that reach them and the "smell" is due to different vibrational frequencies, this theory is aptly called the "vibration theory of olfaction." The vibration theory of olfaction, created in 1938 by Malcolm Dyson but reinvigorated by Luca Turin in 1996, proposes that the mechanism for the sense of smell is due to G-protein receptors that detect molecular vibrations due to inelastic electron tunneling, tunneling where the electron loses energy, across molecules. In this process a molecule would fill a binding site with a G-protein receptor. After the binding of the chemical to the receptor, the chemical would then act as a bridge allowing for the electron to be transferred through the protein. As the electron transfers across what would otherwise have been a barrier, it loses energy due to the vibration of the newly-bound molecule to the receptor. This results in the ability to smell the molecule. While the vibration theory has some experimental proof of concept, there have been multiple controversial results in experiments. In some experiments, animals are able to distinguish smells between molecules of different frequencies and same structure, while other experiments show that people are unaware of distinguishing smells due to distinct molecular frequencies. Vision Vision relies on quantized energy in order to convert light signals to an action potential in a process called phototransduction. In phototransduction, a photon interacts with a chromophore in a light receptor. The chromophore absorbs the photon and undergoes photoisomerization. This change in structure induces a change in the structure of the photo receptor and resulting signal transduction pathways lead to a visual signal. However, the photoisomerization reaction occurs at a rapid rate, in under 200 femtoseconds, with high yield. 
Models suggest the use of quantum effects in shaping the ground state and excited state potentials in order to achieve this efficiency. The sensor in the retina of the human eye is sensitive enough to detect a single photon. Single photon detection could lead to multiple different technologies. One area of development is in quantum communication and cryptography. The idea is to use a biometric system to measure the eye using only a small number of points across the retina with random flashes of photons that "read" the retina and identify the individual. This biometric system would only allow a certain individual with a specific retinal map to decode the message. This message can not be decoded by anyone else unless the eavesdropper were to guess the proper map or could read the retina of the intended recipient of the message. Energy transfer Photosynthesis Photosynthesis refers to the biological process that photosynthetic cells use to synthesize organic compounds from inorganic starting materials using sunlight. What has been primarily implicated as exhibiting non-trivial quantum behaviors is the light reaction stage of photosynthesis. In this stage, photons are absorbed by the membrane-bound photosystems. Photosystems contain two major domains, the light-harvesting complex (antennae) and the reaction center. These antennae vary among organisms. For example, bacteria use circular aggregates of chlorophyll pigments, while plants use membrane-embedded protein and chlorophyll complexes. Regardless, photons are first captured by the antennae and passed on to the reaction-center complex. Various pigment-protein complexes, such as the FMO complex in green sulfur bacteria, are responsible for transferring energy from antennae to reaction site. The photon-driven excitation of the reaction-center complex mediates the oxidation and the reduction of the primary electron acceptor, a component of the reaction-center complex. Much like the electron transport chain of the mitochondria, a linear series of oxidations and reductions drives proton (H+) pumping across the thylakoid membrane, the development of a proton motive force, and energetic coupling to the synthesis of ATP. Previous understandings of electron-excitation transference (EET) from light-harvesting antennae to the reaction center have relied on the Förster theory of incoherent EET, postulating weak electron coupling between chromophores and incoherent hopping from one to another. This theory has largely been disproven by FT electron spectroscopy experiments that show electron absorption and transfer with an efficiency of above 99%, which cannot be explained by classical mechanical models. Instead, as early as 1938, scientists theorized that quantum coherence was the mechanism for excitation-energy transfer. Indeed, the structure and nature of the photosystem places it in the quantum realm, with EET ranging from the femto- to nanosecond scale, covering sub-nanometer to nanometer distances. The effects of quantum coherence on EET in photosynthesis are best understood through state and process coherence. State coherence refers to the extent of individual superpositions of ground and excited states for quantum entities, such as excitons. Process coherence, on the other hand, refers to the degree of coupling between multiple quantum entities and their evolution as either dominated by unitary or dissipative parts, which compete with one another. 
Both of these types of coherence are implicated in photosynthetic EET, where an exciton is coherently delocalized over several chromophores. This delocalization allows the system to simultaneously explore several energy paths and use constructive and destructive interference to guide the path of the exciton's wave packet. It is presumed that natural selection has favored the most efficient path to the reaction center. Experimentally, the interaction between the different frequency wave packets, made possible by long-lived coherence, will produce quantum beats. While quantum photosynthesis is still an emerging field, there have been many experimental results that support the quantum-coherence understanding of photosynthetic EET. A 2007 study claimed the identification of electronic quantum coherence at −196 °C (77 K). Another theoretical study from 2010 provided evidence that quantum coherence lives as long as 300 femtoseconds at biologically relevant temperatures (4 °C or 277 K). In that same year, experiments conducted on photosynthetic cryptophyte algae using two-dimensional photon echo spectroscopy yielded further confirmation for long-term quantum coherence. These studies suggest that, through evolution, nature has developed a way of protecting quantum coherence to enhance the efficiency of photosynthesis. However, critical follow-up studies question the interpretation of these results. Single-molecule spectroscopy now shows the quantum characteristics of photosynthesis without the interference of static disorder, and some studies use this method to assign reported signatures of electronic quantum coherence to nuclear dynamics occurring in chromophores. A number of proposals emerged to explain the unexpectedly long coherence. According to one proposal, if each site within the complex feels its own environmental noise, the electron will not remain in any local minimum due to both quantum coherence and its thermal environment, but proceed to the reaction site via quantum walks. Another proposal is that the rate of quantum coherence and electron tunneling create an energy sink that moves the electron to the reaction site quickly. Other work suggested that geometric symmetries in the complex may favor efficient energy transfer to the reaction center, mirroring perfect state transfer in quantum networks. Furthermore, experiments with artificial dye molecules cast doubts on the interpretation that quantum effects last any longer than one hundred femtoseconds. In 2017, the first control experiment with the original FMO protein under ambient conditions confirmed that electronic quantum effects are washed out within 60 femtoseconds, while the overall exciton transfer takes a time on the order of a few picoseconds. In 2020, a review based on a wide collection of control experiments and theory concluded that the proposed quantum effects, such as long-lived electronic coherences in the FMO system, do not hold. Instead, research investigating transport dynamics suggests that interactions between electronic and vibrational modes of excitation in FMO complexes require a semi-classical, semi-quantum explanation for the transfer of exciton energy. In other words, while quantum coherence dominates in the short term, a classical description is most accurate to describe long-term behavior of the excitons. Another process in photosynthesis that has almost 100% efficiency is charge transfer, again suggesting that quantum mechanical phenomena are at play.
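The quantum beats mentioned above can be reproduced in the simplest possible setting: two coupled chromophore sites evolving under a 2×2 excitonic Hamiltonian. The site energies and coupling below are arbitrary illustrative values in units with ħ = 1, not FMO parameters:

```python
import numpy as np

# Two-site exciton Hamiltonian: site energies e1, e2 and coupling J.
e1, e2, J = 1.0, 1.2, 0.1
H = np.array([[e1, J],
              [J,  e2]])

eigvals, eigvecs = np.linalg.eigh(H)

psi0 = np.array([1.0, 0.0])  # excitation starts fully on site 1
c0 = eigvecs.T @ psi0        # amplitudes in the energy eigenbasis

for t in np.linspace(0.0, 30.0, 7):
    # Coherent evolution: eigenstate phases rotate at their eigenfrequencies,
    # and their interference makes the site populations oscillate ("beat").
    psi_t = eigvecs @ (np.exp(-1j * eigvals * t) * c0)
    print(f"t = {t:5.1f}   P(site 1) = {abs(psi_t[0])**2:.3f}")
# The beat frequency is the eigenvalue splitting sqrt((e1-e2)^2 + 4*J^2).
```

Real light-harvesting complexes add many sites plus coupling to a noisy vibrational environment, which damps these oscillations on the femtosecond-to-picosecond timescales debated in the studies cited above.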
In 1966, a study on the photosynthetic bacterium Chromatium found that at temperatures below 100 K, cytochrome oxidation is temperature-independent, slow (on the order of milliseconds), and very low in activation energy. The authors, Don DeVault and Britton Chase, postulated that these characteristics of electron transfer are indicative of quantum tunneling, whereby electrons penetrate a potential barrier despite possessing less energy than is classically necessary. Mitochondria Mitochondria have been demonstrated to utilize quantum tunneling in their function as the powerhouse of eukaryotic cells. Similar to the light reactions in the thylakoid, linearly-associated membrane-bound proteins comprising the electron transport chain (ETC) energetically link the reduction of O2 with the development of a proton motive gradient (H+) across the inner membrane of the mitochondria. This energy stored as a proton motive gradient is then coupled with the synthesis of ATP. It is significant that the mitochondrion conversion of biomass into chemical ATP achieves 60-70% thermodynamic efficiency, far superior to that of man-made engines. This high degree of efficiency is largely attributed to the quantum tunnelling of electrons in the ETC and of protons in the proton motive gradient. Indeed, electron tunneling has already been demonstrated in certain elements of the ETC including NADH:ubiquinone oxidoreductase(Complex I) and CoQH2-cytochrome c reductase (Complex III). In quantum mechanics, both electrons and protons are quantum entities that exhibit wave-particle duality, exhibiting both particle and wave-like properties depending on the method of experimental observation. Quantum tunneling is a direct consequence of this wave-like nature of quantum entities that permits the passing-through of a potential energy barrier that would otherwise restrict the entity. Moreover, it depends on the shape and size of a potential barrier relative to the incoming energy of a particle. Because the incoming particle is defined by its wave function, its tunneling probability is dependent upon the potential barrier's shape in an exponential way. For example, if the barrier is relatively wide, the incoming particle's probability to tunnel will decrease. The potential barrier, in some sense, can come in the form of an actual biomaterial barrier. The inner mitochondria membrane which houses the various components of the ETC is on the order of 7.5 nm thick. The inner membrane of a mitochondrion must be overcome to permit signals (in the form of electrons, protons, H+) to transfer from the site of emittance (internal to the mitochondria) and the site of acceptance (i.e. the electron transport chain proteins). In order to transfer particles, the membrane of the mitochondria must have the correct density of phospholipids to conduct a relevant charge distribution that attracts the particle in question. For instance, for a greater density of phospholipids, the membrane contributes to a greater conductance of protons. Molecular solitons in proteins Alexander Davydov developed the quantum theory of molecular solitons in order to explain the transport of energy in protein α-helices in general and the physiology of muscle contraction in particular. He showed that the molecular solitons are able to preserve their shape through nonlinear interaction of amide I excitons and phonon deformations inside the lattice of hydrogen-bonded peptide groups. 
In 1979, Davydov published his complete textbook on quantum biology entitled "Biology and Quantum Mechanics", featuring quantum dynamics of proteins, cell membranes, bioenergetics, muscle contraction, and electron transport in biomolecules. Information encoding Magnetoreception Magnetoreception is the ability of animals to navigate using the inclination of the magnetic field of the Earth. A possible explanation for magnetoreception is the entangled radical pair mechanism. The radical-pair mechanism is well-established in spin chemistry, and was speculated to apply to magnetoreception in 1978 by Schulten et al. The ratio between singlet and triplet pairs is changed by the interaction of entangled electron pairs with the magnetic field of the Earth. In 2000, cryptochrome was proposed as the "magnetic molecule" that could harbor magnetically sensitive radical-pairs. Cryptochrome, a flavoprotein found in the eyes of European robins and other animal species, is the only protein known to form photoinduced radical-pairs in animals. When it interacts with light particles, cryptochrome goes through a redox reaction, which yields radical pairs both during the photo-reduction and the oxidation. The function of cryptochrome is diverse across species; however, the photoinduction of radical-pairs occurs by exposure to blue light, which excites an electron in a chromophore. Magnetoreception is also possible in the dark, so the mechanism must rely more on the radical pairs generated during light-independent oxidation. Experiments in the lab support the basic theory that radical-pair electrons can be significantly influenced by very weak magnetic fields, i.e., merely the direction of weak magnetic fields can affect radical-pair reactivity and therefore can "catalyze" the formation of chemical products. Whether this mechanism applies to magnetoreception and/or quantum biology, that is, whether Earth's magnetic field "catalyzes" the formation of biochemical products by the aid of radical-pairs, is not fully clear. Radical-pairs need not be entangled, the key quantum feature of the radical-pair mechanism, to play a part in these processes. There are entangled and non-entangled radical-pairs, but disturbing only entangled radical-pairs is not possible with current technology. Researchers found evidence for the radical-pair mechanism of magnetoreception when European robins, cockroaches, and garden warblers could no longer navigate when exposed to a radio frequency that obstructs magnetic fields and radical-pair chemistry. Further evidence came from a comparison of Cryptochrome 4 (CRY4) from migrating and non-migrating birds. CRY4 from chicken and pigeon were found to be less sensitive to magnetic fields than those from the (migrating) European robin, suggesting evolutionary optimization of this protein as a sensor of magnetic fields. DNA mutation DNA acts as the instructions for making proteins throughout the body. It consists of 4 nucleotides: guanine, thymine, cytosine, and adenine. The order of these nucleotides gives the "recipe" for the different proteins. Whenever a cell reproduces, it must copy these strands of DNA. However, sometimes throughout the process of copying the strand of DNA a mutation, or an error in the DNA code, can occur. A theory for the reasoning behind DNA mutation is explained in the Löwdin DNA mutation model. In this model, a nucleotide may spontaneously change its form through a process of quantum tunneling.
Because of this, the changed nucleotide will lose its ability to pair with its original base pair and consequently change the structure and order of the DNA strand. Exposure to ultraviolet light and other types of radiation can cause DNA mutation and damage. The radiation also can modify the bonds along the DNA strand in the pyrimidines and cause them to bond with themselves, creating a dimer. In many prokaryotes and plants, these bonds are repaired by a DNA-repair-enzyme photolyase. As its prefix implies, photolyase is reliant on light in order to repair the strand. Photolyase works with its cofactor FADH, flavin adenine dinucleotide, while repairing the DNA. Photolyase is excited by visible light and transfers an electron to the cofactor FADH. FADH—now in the possession of an extra electron—transfers the electron to the dimer to break the bond and repair the DNA. The electron tunnels from the FADH to the dimer. Although the range of this tunneling is much larger than feasible in a vacuum, the tunneling in this scenario is said to be "superexchange-mediated tunneling," and is possible due to the protein's ability to boost the tunneling rates of the electron. Other Other quantum phenomena in biological systems include the conversion of chemical energy into motion and brownian motors in many cellular processes. Pseudoscience Alongside the multiple strands of scientific inquiry into quantum mechanics has come unconnected pseudoscientific interest; this caused scientists to approach quantum biology cautiously. Hypotheses such as orchestrated objective reduction which postulate a link between quantum mechanics and consciousness have drawn criticism from the scientific community with some claiming it to be pseudoscientific and "an excuse for quackery". References External links Philip Ball (2015). "Quantum Biology: An Introduction". The Royal Institution Quantum Biology and the Hidden Nature of Nature, World Science Festival 2012, video of podium discussion Quantum Biology: Current Status and Opportunities, September 17-18, 2012, University of Surrey, UK Biophysics
Quantum biology
[ "Physics", "Biology" ]
5,261
[ "Applied and interdisciplinary physics", "Quantum mechanics", "Biophysics", "nan", "Quantum biology" ]
13,540,243
https://en.wikipedia.org/wiki/Prolate%20trochoidal%20mass%20spectrometer
A prolate trochoidal mass spectrometer is a chemical analysis instrument in which the ions of different mass-to-charge ratio are separated by means of mutually perpendicular electric and magnetic fields so that the ions follow a prolate trochoidal path. These devices are sometimes called cycloidal mass spectrometers, although the path is not a cycloid (the prolate trochoid path has loops, the cycloid has cusps). Applications The instruments are used for the analysis of gases and in gas chromatography-mass spectrometry. The trochoidal configuration can also be used as the basis of an electron monochromator. References External links Mass spectrometry
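The prolate trochoidal path itself follows from the classical motion of an ion in perpendicular electric and magnetic fields: uniform E×B drift superposed on cyclotron rotation. The following Python sketch traces the analytic trajectory for a singly charged ion with illustrative field values; the initial transverse velocity is what produces the loops that distinguish a prolate trochoid from a cycloid's cusps:

```python
import math

Q = 1.602176634e-19   # C, charge of a singly charged ion
M = 40 * 1.66054e-27  # kg, roughly an argon ion
E = 100.0             # V/m, electric field along +y
B = 0.1               # T, magnetic field along +z

omega = Q * B / M     # cyclotron angular frequency
v_d = E / B           # E x B drift speed, along +x

# Starting at the origin with velocity (0, v0y): v0y = 0 gives a cycloid
# (cusps), while a sufficiently large v0y gives a prolate trochoid (loops).
v0y = 2.0 * v_d

for i in range(9):
    t = i * (2.0 * math.pi / omega) / 8.0  # one cyclotron period, 8 steps
    x = v_d * t - (v_d / omega) * math.sin(omega * t) \
        + (v0y / omega) * (1.0 - math.cos(omega * t))
    y = (v_d / omega) * (1.0 - math.cos(omega * t)) \
        + (v0y / omega) * math.sin(omega * t)
    print(f"t = {t:.3e} s   x = {x:+.4e} m   y = {y:+.4e} m")
```

After one full cyclotron period the ion returns to y = 0 displaced along x by 2πmE/(qB²), a displacement proportional to m/q — the sort of mass dependence such instruments exploit to separate ions.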
Prolate trochoidal mass spectrometer
[ "Physics", "Chemistry" ]
153
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
9,517,361
https://en.wikipedia.org/wiki/Fluidized%20bed%20reactor
A fluidized bed reactor (FBR) is a type of reactor device that can be used to carry out a variety of multiphase chemical reactions. In this type of reactor, a fluid (gas or liquid) is passed through a solid granular material (usually a catalyst) at high enough speeds to suspend the solid and cause it to behave as though it were a fluid. This process, known as fluidization, imparts many important advantages to an FBR. As a result, FBRs are used for many industrial applications. Basic principles The solid substrate material (the catalytic material upon which chemical species react) in the fluidized bed reactor is typically supported by a porous plate, known as a distributor. The fluid is then forced through the distributor up through the solid material. At lower fluid velocities, the solids remain in place as the fluid passes through the voids in the material. This is known as a packed bed reactor. As the fluid velocity is increased, the reactor will reach a stage where the force of the fluid on the solids is enough to balance the weight of the solid material. This stage is known as incipient fluidization and occurs at the minimum fluidization velocity (a numerical estimate using a standard correlation appears at the end of this entry). Once this minimum velocity is surpassed, the contents of the reactor bed begin to expand and swirl around, much like an agitated tank or a boiling pot of water. The reactor is now a fluidized bed. Depending on the operating conditions and the properties of the solid phase, various flow regimes can be observed in this reactor. History and current uses Fluidized bed reactors are a relatively new tool in the chemical engineering field. The first fluidized bed gas generator was developed by Fritz Winkler in Germany in the 1920s. One of the first United States fluidized bed reactors used in the petroleum industry was the Catalytic Cracking Unit, created in Baton Rouge, LA in 1942 by the Standard Oil Company of New Jersey (now ExxonMobil). This FBR and the many to follow were developed for the oil and petrochemical industries. Here catalysts were used to reduce petroleum to simpler compounds through a process known as cracking. The invention of this technology made it possible to significantly increase the production of various fuels in the United States. Today, fluidized bed reactors are still used to produce gasoline and other fuels, along with many other chemicals. Many industrially produced polymers are made using FBR technology, such as rubber, vinyl chloride, polyethylene, styrenes, and polypropylene. Various utilities also use FBRs for coal gasification, nuclear power plants, and water and waste treatment settings. Used in these applications, fluidized bed reactors allow for a cleaner, more efficient process than previous standard reactor technologies. Advantages The increase in fluidized bed reactor use in today's industrial world is largely due to the inherent advantages of the technology. Uniform particle mixing: Due to the intrinsic fluid-like behavior of the solid material, fluidized beds do not experience poor mixing as in packed beds. This complete mixing allows for a uniform product that can often be hard to achieve in other reactor designs. The elimination of radial and axial concentration gradients also allows for better fluid-solid contact, which is essential for reaction efficiency and quality. Uniform temperature gradients: Many chemical reactions require the addition or removal of heat. Local hot or cold spots within the reaction bed, often a problem in packed beds, are avoided in a fluidized situation such as an FBR.
In other reactor types, these local temperature differences, especially hotspots, can result in product degradation. Thus FBRs are well suited to exothermic reactions. Researchers have also learned that the bed-to-surface heat transfer coefficients for FBRs are high. Ability to operate reactor in continuous state: The fluidized bed nature of these reactors allows for the ability to continuously withdraw product and introduce new reactants into the reaction vessel. Operating at a continuous process state allows manufacturers to produce their various products more efficiently due to the removal of startup conditions in batch processes. Disadvantages As in any design, the fluidized bed reactor does have its draw-backs, which any reactor designer must take into consideration. Increased reactor vessel size: Because of the expansion of the bed materials in the reactor, a larger vessel is often required than that for a packed bed reactor. This larger vessel means that more must be spent on initial capital costs. Pumping requirements and pressure drop: The requirement for the fluid to suspend the solid material necessitates that a higher fluid velocity is attained in the reactor. In order to achieve this, more pumping power and thus higher energy costs are needed. In addition, the pressure drop associated with deep beds also requires additional pumping power. Particle entrainment: The high fluid velocities present in this style of reactor often result in fine particles becoming entrained in the fluid. These captured particles are then carried out of the reactor with the fluid, where they must be separated. This can be a very difficult and expensive problem to address depending on the design and function of the reactor. This may often continue to be a problem even with other entrainment reducing technologies. Lack of current understanding: Current understanding of the actual behavior of the materials in a fluidized bed is rather limited. It is very difficult to predict and calculate the complex mass and heat flows within the bed. Due to this lack of understanding, a pilot plant for new processes is required. Even with pilot plants, the scale-up can be very difficult and may not reflect what was experienced in the pilot trial. Erosion of internal components: The fluid-like behavior of the fine solid particles within the bed eventually results in the wear of the reactor vessel. This can require expensive maintenance and upkeep for the reaction vessel and pipes. Pressure loss scenarios: If fluidization pressure is suddenly lost, the surface area of the bed may be suddenly reduced. This can either be an inconvenience (e.g. making bed restart difficult), or may have more serious implications, such as runaway reactions (e.g. for exothermic reactions in which heat transfer is suddenly restricted). Current research and trends Due to the advantages of fluidized bed reactors, a large amount of research is devoted to this technology. Most current research aims to quantify and explain the behavior of the phase interactions in the bed. Specific research topics include particle size distributions, various transfer coefficients, phase interactions, velocity and pressure effects, and computer modeling. The aim of this research is to produce more accurate models of the inner movements and phenomena of the bed. This will enable chemical engineers to design better, more efficient reactors that may effectively deal with the current disadvantages of the technology and expand the range of FBR use. 
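The minimum fluidization velocity mentioned in the Basic principles section is commonly estimated from the particle Archimedes number via the Wen–Yu correlation, Re_mf = sqrt(33.7² + 0.0408·Ar) − 33.7. The Python sketch below applies it to illustrative sand-in-air values; the correlation and the property values are textbook approximations, not a substitute for design data or pilot measurements:

```python
import math

def minimum_fluidization_velocity(d_p, rho_p, rho_f, mu, g=9.81):
    """Estimate u_mf (m/s) from the Wen-Yu correlation.

    d_p: particle diameter (m), rho_p: particle density (kg/m^3),
    rho_f: fluid density (kg/m^3), mu: fluid viscosity (Pa*s).
    """
    # Archimedes number: buoyancy-gravity forces relative to viscous forces.
    Ar = rho_f * (rho_p - rho_f) * g * d_p**3 / mu**2
    # Wen-Yu correlation for the Reynolds number at incipient fluidization.
    Re_mf = math.sqrt(33.7**2 + 0.0408 * Ar) - 33.7
    return Re_mf * mu / (rho_f * d_p)

# Illustrative case: ~300 micron sand fluidized by ambient air.
u_mf = minimum_fluidization_velocity(d_p=300e-6, rho_p=2600.0,
                                     rho_f=1.2, mu=1.8e-5)
print(f"u_mf ~ {u_mf * 100:.1f} cm/s")  # on the order of several cm/s
```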
See also Chemical engineering Chemical looping combustion Chemical reactor Fluidized bed combustion Siemens process References Chemical reactors Industrial processes Fluidization
Fluidized bed reactor
[ "Chemistry", "Engineering" ]
1,384
[ "Chemical reactors", "Fluidization", "Chemical equipment", "Chemical reaction engineering" ]
9,519,121
https://en.wikipedia.org/wiki/Quadratically%20constrained%20quadratic%20program
In mathematical optimization, a quadratically constrained quadratic program (QCQP) is an optimization problem in which both the objective function and the constraints are quadratic functions. It has the form: minimize (1/2)xᵀP₀x + q₀ᵀx, subject to (1/2)xᵀPᵢx + qᵢᵀx + rᵢ ≤ 0 for i = 1, ..., m, and Ax = b, where P0, ..., Pm are n-by-n matrices and x ∈ Rn is the optimization variable. If P0, ..., Pm are all positive semidefinite, then the problem is convex. If these matrices are neither positive nor negative semidefinite, the problem is non-convex. If P1, ... ,Pm are all zero, then the constraints are in fact linear and the problem is a quadratic program. Hardness A convex QCQP problem can be efficiently solved using an interior point method (in a polynomial time), typically requiring around 30-60 iterations to converge. Solving the general non-convex case is an NP-hard problem. To see this, note that the two constraints x1(x1 − 1) ≤ 0 and x1(x1 − 1) ≥ 0 are equivalent to the constraint x1(x1 − 1) = 0, which is in turn equivalent to the constraint x1 ∈ {0, 1}. Hence, any 0–1 integer program (in which all variables have to be either 0 or 1) can be formulated as a quadratically constrained quadratic program. Since 0–1 integer programming is NP-hard in general, QCQP is also NP-hard. However, even for a nonconvex QCQP problem a local solution can generally be found with a nonconvex variant of the interior point method. In some cases (such as when solving nonlinear programming problems with a sequential QCQP approach) these local solutions are sufficiently good to be accepted. Relaxation There are two main relaxations of QCQP: using semidefinite programming (SDP), and using the reformulation-linearization technique (RLT). For some classes of QCQP problems (precisely, QCQPs with zero diagonal elements in the data matrices), second-order cone programming (SOCP) and linear programming (LP) relaxations providing the same objective value as the SDP relaxation are available. Nonconvex QCQPs with non-positive off-diagonal elements can be exactly solved by the SDP or SOCP relaxations, and there are polynomial-time-checkable sufficient conditions for SDP relaxations of general QCQPs to be exact. Moreover, it was shown that a class of random general QCQPs has exact semidefinite relaxations with high probability as long as the number of constraints grows no faster than a fixed polynomial in the number of variables. Semidefinite programming When P0, ..., Pm are all positive-definite matrices, the problem is convex and can be readily solved using interior point methods, as done with semidefinite programming. Example Max Cut is a problem in graph theory, which is NP-hard. Given a graph, the problem is to divide the vertices in two sets, so that as many edges as possible go from one set to the other. Max Cut can be formulated as a QCQP, and SDP relaxation of the dual provides good lower bounds. QCQP is used to finely tune machine settings in high-precision applications such as photolithography. Solvers and scripting (programming) languages References Further reading In statistics External links NEOS Optimization Guide: Quadratic Constrained Quadratic Programming Mathematical optimization
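As an illustration of the convex case discussed above, a small QCQP can be stated and solved with the CVXPY modeling library, which dispatches the problem to an interior point conic solver. This is a sketch under assumptions: the problem data are randomly generated, and the constant offset in the constraint is chosen so that the instance is almost surely feasible; adjust it if the solver reports infeasibility.

```python
import cvxpy as cp
import numpy as np

n = 3
rng = np.random.default_rng(0)

# Random positive semidefinite P0, P1 make the problem convex
A0 = rng.standard_normal((n, n)); P0 = A0 @ A0.T
A1 = rng.standard_normal((n, n)); P1 = A1 @ A1.T
q0 = rng.standard_normal(n)
q1 = rng.standard_normal(n)

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.quad_form(x, P0) + q0 @ x)
constraints = [0.5 * cp.quad_form(x, P1) + q1 @ x + 1.0 <= 0]

prob = cp.Problem(objective, constraints)
prob.solve()  # interior point method, polynomial time for the convex case
print(prob.status, prob.value, x.value)
```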
Quadratically constrained quadratic program
[ "Mathematics" ]
721
[ "Mathematical optimization", "Mathematical analysis" ]
9,519,674
https://en.wikipedia.org/wiki/Waves%20and%20shallow%20water
When waves travel into areas of shallow water, they begin to be affected by the ocean bottom. The free orbital motion of the water is disrupted, and water particles in orbital motion no longer return to their original position. As the water becomes shallower, the swell becomes higher and steeper, ultimately assuming the familiar sharp-crested wave shape. After the wave breaks, it becomes a wave of translation, and erosion of the ocean bottom intensifies. Cnoidal waves are exact periodic solutions to the Korteweg–de Vries equation in shallow water, that is, when the wavelength of the wave is much greater than the depth of the water. See also References External links Exploring the World Ocean The Oceans Water waves Water
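The transition described above can be made quantitative with the linear-wave dispersion relation (not stated in the article): the phase speed is c = √((g/k)·tanh(k·d)) with wavenumber k = 2π/λ, which tends to √(g·d) when the depth d is much smaller than the wavelength λ. A minimal numerical sketch:

```python
import math

def phase_speed(wavelength, depth, g=9.81):
    """Linear (Airy) wave phase speed c = sqrt((g/k) * tanh(k * d))."""
    k = 2.0 * math.pi / wavelength  # wavenumber
    return math.sqrt((g / k) * math.tanh(k * depth))

wavelength = 100.0  # metres
for depth in (500.0, 50.0, 5.0, 2.0):
    c = phase_speed(wavelength, depth)
    c_shallow = math.sqrt(9.81 * depth)  # shallow-water limit sqrt(g*d)
    print(f"depth {depth:6.1f} m: c = {c:5.2f} m/s, sqrt(g*d) = {c_shallow:5.2f} m/s")
```

As the depth shrinks relative to the wavelength, the full expression converges to the shallow-water value, reflecting the growing influence of the bottom on the wave.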
Waves and shallow water
[ "Physics", "Chemistry", "Environmental_science" ]
147
[ "Physical phenomena", "Hydrology", "Water waves", "Waves", "Water", "Fluid dynamics" ]
9,519,906
https://en.wikipedia.org/wiki/Valve%20RF%20amplifier
A valve RF amplifier (UK and Aus.) or tube amplifier (U.S.) is a device for amplifying the power of a radio frequency signal. Low to medium power valve amplifiers for frequencies below the microwaves were largely replaced by solid state amplifiers during the 1960s and 1970s, initially for receivers and low power stages of transmitters, transmitter output stages switching to transistors somewhat later. Specially constructed valves are still in use for very high power transmitters, although rarely in new designs. Valve characteristics Valves are high voltage / low current devices in comparison with transistors. Tetrode and pentode valves have very flat anode current vs. anode voltage characteristics, indicating high anode output impedance. Triodes show a stronger relationship between anode voltage and anode current. The high working voltage makes them well suited for radio transmitters, and valves remain in use today for very high power short wave radio transmitters, where solid state techniques would require many devices in parallel, and very high supply currents. High power solid state transmitters also require a complex combination of transformers and tuning networks, whereas a valve-based transmitter would use a single, relatively simple tuned network. Thus while solid state high power short wave transmitters are technically possible, economic considerations still favor valves above 3 MHz and 10,000 watts. Radio amateurs also use valve amplifiers in the 500–1500 watt range mainly for economic reasons. Audio vs. RF amplifiers Valve audio amplifiers typically amplify the entire audio range between 20 Hz and 20 kHz or higher. They use an iron core transformer to provide a suitable high impedance load to the valve(s) while driving a speaker, which is typically 8 Ohms. Audio amplifiers normally use a single valve in class A, or a pair in class B or AB. An RF power amplifier is tuned to a single frequency, as low as 18 kHz and as high as the UHF range of frequencies, for the purpose of radio transmission or industrial heating. They use a narrow tuned circuit to provide the valve with a suitably high load impedance and feed a load that is typically 50 or 75 Ohms. RF amplifiers normally operate class C or class AB. Although the frequency ranges for audio amplifiers and RF amplifiers overlap, the class of operation, method of output coupling and percent operational bandwidth will differ. Power valves are capable of high frequency response, up to at least 30 MHz. Indeed, many of the directly heated single-ended triode audio amplifiers use radio transmitting valves originally designed to operate as RF amplifiers in the high frequency range. Circuit advantages of valves High input impedance Tubes' input impedance is comparable to that of FETs, higher than in bipolar transistors, which is beneficial in certain signal amplification applications. Tolerant of high voltages Valves are high voltage devices, inherently suitable for higher voltage circuits than most semiconductors. Tubes can be built oversized to improve cooling Valves can be constructed on a scale large enough to dissipate great amounts of heat. Very high-power models are designed to accommodate water- or vapor-cooling. For that reason, valves remained the only viable technology for handling very high power, and especially high power + high voltage use, such as radio and TV transmitters, long into the age when transistors had displaced valves in almost all other applications.
However, today even for high power/voltage, tubes are increasingly becoming obsolete as new transistor technology improves tolerance of high voltages and capacity for high power. Lower investment cost Because of the simplicity of practical tube-based designs, using tubes for applications like amplifiers above the kilowatt power range can greatly lower manufacturing costs. Also, large, high value power valves (steel clad, not glass tubes) can to some extent be remanufactured to extend residual life. Electrically very robust Tubes can tolerate amazingly high overloads, which would destroy bipolar transistor systems in milliseconds (of particular significance in military and other "strategically important" systems). Indefinite shelf life Even 60 year-old tubes can be perfectly functional, and many types are available for purchase as "new-old-stock". Thus, despite known reliability issues (see next section, below), it is still perfectly possible to run most very old vacuum tube equipment. Comparative ease of replacement Being known to be subject to a number of common failure modes, most systems with tubes were designed with sockets so the tubes can be installed as plug-in devices; they are rarely, if ever, soldered into a circuit. A failed tube can simply be unplugged and replaced by a user, while the failure of a soldered-in semiconductor may constitute damage beyond economical repair for a whole product or sub-assembly. The only difficulty is determining which tube has failed. Disadvantages of valves Cost For most applications, tubes require both greater initial outlay and running expense per amplification stage, requiring more attentive budgeting of the number of stages for a given application compared to semiconductors. Short operational life In the most common applications, valves have a working life of just a few thousand hours, much shorter than solid state parts. This is due to various commonplace modes of failure: cathode depletion, open- or short-circuits (notably of the heater and grid structures), cathode 'poisoning', and breaking of the glass shell (the glass "tube" itself). Heater failure most often happens due to the mechanical stress of a cold start. Only in certain limited, always-on professional applications, such as specialized computing and undersea cables, have specially designed valves in carefully designed circuits and well cooled environments reached operational lives of tens or hundreds of thousands of hours. Heater supplies are required for the cathodes Besides the investment cost, the share of the power budget that goes into heating the cathode, without contributing to output, can range from a few percent of anode dissipation (in high power applications at full output) to broadly comparable to anode dissipation in small signal applications. Large circuit temperature swings in on/off cycles Massive stray heat from cathode heaters in common low power tubes means that adjoining circuits experience large changes in temperature. This requires heat-resistant components. In RF applications this also means that all frequency-determining components may have to heat to thermal equilibrium before frequency stability is reached. While in broadcast (medium wave) receivers and loosely tuned sets this was not a problem, in typical radio receivers and transmitters with free-running oscillators at higher frequencies this thermal stabilization required about one hour.
On the other hand, miniature ultra-low power direct-heated valves do not produce much heat in absolute terms, cause more modest temperature swings, and allow equipment that contains few of them to stabilize sooner. No "instant on" from a cold start Valve cathodes need to heat to a glow to start conducting. With indirectly heated cathodes this could take up to 20 seconds. Apart from temperature-related instability, this meant that valves would not work instantly when powered. This led to development of always-on preheating systems for vacuum tube appliances that shortened the wait and may have reduced valve failures from thermal shock, but at the price of a continuous power drain, and an increased fire hazard. On the other hand, very small, ultra low power direct-heated valves turn on in tenths of a second from a cold start. Dangerously high voltages Anodes of tubes may require dangerously high voltages to function as intended. In general, tubes themselves will not be troubled by high voltage, but high voltages will demand extra precautions in circuit layout and design, to avoid "flashover". Wrong impedance for convenient use High impedance output (high voltage/low current) is typically not suitable for directly driving many real world loads, notably various forms of electric motor Valves only have one polarity Compared to transistors, valves have the disadvantage of having a single polarity, whereas for most uses, transistors are available as pairs with complementary polarities (e.g., NPN/PNP), making possible many circuit configurations that cannot be realized with valves. Distortion The most efficient valve-based RF amplifiers operate class C. If used with no tuned circuit in the output, this would distort the input signal, producing harmonics. However, class C amplifiers normally use a high-Q output network which removes the harmonics, leaving an undistorted sine wave identical to the input waveform. Class C is suitable only for amplifying signals with a constant amplitude, such as FM, FSK, and some CW (Morse code) signals. Where the amplitude of the input signal to the amplifier varies, as with single-sideband modulation, amplitude modulation, video and complex digital signals, the amplifier must operate class A or AB, to preserve the envelope of the driving signal in an undistorted form. Such amplifiers are referred to as linear amplifiers. It is also common to modify the gain of an amplifier operating class C so as to produce amplitude modulation. If done in a linear manner, this modulated amplifier is capable of low distortion. The output signal can be viewed as a product of the input signal and the modulating signal. The development of FM broadcasting improved fidelity by using the greater bandwidth available in the VHF range, where atmospheric noise was absent. FM also has an inherent ability to reject noise, which is mostly amplitude modulated. Valve technology suffers high-frequency limitations due to cathode-anode transit time. However, tetrodes are successfully used into the UHF range and triodes into the low GHz range. Modern broadcast transmitters use both valve and solid state devices, with valves tending to be more used at the highest power levels. FM transmitters operate class C with very low distortion. Today's digital radio, which carries coded data over various phase modulations, and the increasing demand for spectrum have forced a dramatic change in the way radio is used, e.g. the cellular radio concept.
Today's cellular radio and digital broadcast standards are extremely demanding in terms of the spectral envelope and out of band emissions that are acceptable (for example, −70 dB or better just a few hundred kilohertz from the center frequency). Digital transmitters must therefore operate in the linear modes, with much attention given to achieving low distortion. Applications Historic transmitters and receivers (High voltage/high power) Valve stages were used to amplify the received radio frequency signals, the intermediate frequencies, the video signal and the audio signals at the various points in the receiver. Historically (pre WWII) "transmitting tubes" were among the most powerful tubes available, and were usually directly heated by thoriated filaments that glowed like light bulbs. Some tubes were built to be very rugged, capable of being driven so hard that the anode would itself glow cherry red, the anodes being machined from solid material (rather than fabricated from thin sheet) to be able to withstand this without distorting when heated. Notable tubes of this type are the 845 and 211. Later beam power tubes such as the 807 and (directly heated) 813 were also used in large numbers in (especially military) radio transmitters. Bandwidth of valve vs solid state amplifiers Today, radio transmitters are overwhelmingly solid state, even at microwave frequencies (cellular radio base stations). Depending on the application, a fair number of radio frequency amplifiers continue to have valve construction due to their simplicity, whereas it takes several output transistors with complex splitting and combining circuits to equal the output power of a single valve. Valve amplifier circuits are significantly different from broadband solid state circuits. Solid state devices have a very low output impedance which allows matching via a broadband transformer covering a large range of frequencies, for example 1.8 to 30 MHz. With either class C or AB operation, these must include low pass filters to remove harmonics. While the proper low pass filter must be switch-selected for the frequency range of interest, the result is considered to be a "no tune" design. Valve amplifiers have a tuned network that serves as both the low pass harmonic filter and impedance matching to the output load. In either case, both solid state and valve devices need such filtering networks before the RF signal is output to the load. Radio circuits Unlike audio amplifiers, in which the analog output signal is of the same form and frequency as the input signal, RF circuits may modulate low frequency information (audio, video, or data) onto a carrier (at a much higher frequency), and the circuitry comprises several distinct stages. For example, a radio transmitter may contain: an audio frequency (AF) stage (typically using conventional broadband small signal circuitry as described in Valve audio amplifier), one or more oscillator stages that generate the carrier wave, one or more mixer stages that modulate the carrier signal from the oscillator, and the amplifier stage itself operating at (typically) high frequency. The transmitter power amp itself is the only high power stage in a radio system, and operates at the carrier frequency. In AM, the modulation (frequency mixing) usually takes place in the final amplifier itself. Transmitter anode circuits The most common anode circuit is a tuned LC circuit where the anodes are connected at a voltage node. This circuit is often known as the anode tank circuit.
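The behaviour of the anode tank circuit just described can be sketched numerically. This is a generic parallel-LC calculation with illustrative component values, not a design taken from the article:

```python
import math

def anode_tank(L, C, q_loaded):
    """Resonant frequency and load behaviour of a parallel LC tank.

    At resonance a tank with loaded Q presents a parallel resistance
    R_p = Q * X, where X = sqrt(L/C) is the reactance of either element.
    """
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # resonant frequency
    x = math.sqrt(L / C)                           # reactance at resonance
    r_p = q_loaded * x                             # impedance presented to the anode
    bw = f0 / q_loaded                             # -3 dB bandwidth
    return f0, x, r_p, bw

# Illustrative short-wave tank: 2 uH and 180 pF with a loaded Q of 12
f0, x, r_p, bw = anode_tank(L=2e-6, C=180e-12, q_loaded=12)
print(f"f0 = {f0/1e6:.2f} MHz, X = {x:.0f} ohm, R_p = {r_p:.0f} ohm, BW = {bw/1e3:.0f} kHz")
```

The high impedance presented at resonance is what matches the valve's high-voltage, low-current output, while the tank's selectivity provides the harmonic filtering mentioned above.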
Active (or tuned grid) amplifier An example of a tetrode used in this way at VHF/UHF is the 4CX250B; an example of a twin tetrode is the QQV06/40A. Neutralization is a term used in TGTP (tuned grid tuned plate) amplifiers for the methods and circuits used for stabilization against unwanted oscillations at the operating frequency caused by the inadvertent introduction of some of the output signal back into the input circuits. This mainly occurs via the grid-to-plate capacitance, but can also come via other paths, making circuit layout important. To cancel the unwanted feedback signal, a portion of the output signal is deliberately introduced into the input circuit with the same amplitude but opposite phase. When using a tuned circuit in the input, the network must match the driving source to the input impedance of the grid. This impedance will be determined by the grid current in Class C or AB2 operation. In AB1 operation, the grid circuit should be designed to avoid excessive step-up voltage; although this might provide more stage gain, as in audio designs, it will increase instability and make neutralization more critical. In common with all three basic designs shown here, the anode of the valve is connected to a resonant LC circuit which has another inductive link that allows the RF signal to be passed to the output. The circuit shown has been largely replaced by a Pi network which allows simpler adjustment and adds low pass filtering. Operation The anode current is controlled by the electrical potential (voltage) of the first grid. A DC bias is applied to the valve to ensure that the part of the transfer equation which is most suitable to the required application is used. The input signal is able to perturb (change) the potential of the grid; this in turn will change the anode current (also known as the plate current). In the RF designs shown on this page, a tuned circuit is between the anode and the high voltage supply. This tuned circuit is brought to resonance, presenting an inductive load that is well matched to the valve, and thus results in an efficient power transfer. As the current flowing through the anode connection is controlled by the grid, the current flowing through the load is also controlled by the grid. One of the disadvantages of a tuned grid compared to other RF designs is that neutralization is required. Passive grid amplifier A passive grid circuit used at VHF/UHF frequencies might use the 4CX250B tetrode. An example of a twin tetrode would be the QQV06/40A. The tetrode has a screen grid between the anode and the first grid, which, being grounded for RF, acts as a shield reducing the effective capacitance between the first grid and the anode. The combination of the effects of the screen grid and the grid damping resistor often allow the use of this design without neutralization. The screen found in tetrodes and pentodes greatly increases the valve's gain by reducing the effect of anode voltage on anode current. The input signal is applied to the valve's first grid via a capacitor. The value of the grid resistor determines the gain of the amplifier stage. The higher the resistor the greater the gain, the lower the damping effect and the greater the risk of instability. With this type of stage good layout is less vital.
Advantages Stable, no neutralizing required normally Constant load on the exciting stage Disadvantages Low gain, more input power is required Less gain than tuned grid Less filtering than tuned grid (more broadband), hence the amplification of out of band spurious signals, such as harmonics, from an exciter is greater Grounded grid amplifier This design normally uses a triode, so valves such as the 4CX250B are not suitable for this circuit, unless the screen and control grids are joined, effectively converting the tetrode into a triode. This circuit design has been used at 1296 MHz using disk seal triode valves such as the 2C39A. The grid is grounded and the drive is applied to the cathode through a capacitor. The heater supply must be isolated from the cathode as, unlike the other designs, the cathode is not connected to RF ground. Some valves, such as the 811A, are designed for "zero bias" operation and the cathode can be at ground potential for DC. Valves that require a negative grid bias can be used by putting a positive DC voltage on the cathode. This can be achieved by putting a zener diode between the cathode and ground or using a separate bias supply. Advantages Stable, no neutralizing required normally Some of the power from the exciting stage appears in the output Disadvantages Relatively low gain, typically about 10 dB. The heater must be isolated from ground with chokes. Neutralization The valve interelectrode capacitance which exists between the input and output of the amplifier, together with other stray coupling, may allow enough energy to feed back into the input so as to cause self-oscillation in an amplifier stage. For the higher gain designs this effect must be counteracted. Various methods exist for introducing an out-of-phase signal from the output back to the input so that the effect is cancelled. Even when the feedback is not sufficient to cause oscillation it can produce other effects, such as difficult tuning. Therefore, neutralization can be helpful, even for an amplifier that does not oscillate. Many grounded grid amplifiers use no neutralization, but at 30 MHz adding it can smooth out the tuning. An important part of the neutralization of a tetrode or pentode is the design of the screen grid circuit. To provide the greatest shielding effect, the screen must be well-grounded at the frequency of operation. Many valves will have a "self-neutralizing" frequency somewhere in the VHF range. This results from a series resonance consisting of the screen capacitance and the inductance of the screen lead, thus providing a very low impedance path to ground. UHF Transit time effects are important at these frequencies, so feedback is not normally usable, and for performance critical applications alternative linearisation techniques have to be used, such as degeneration and feedforward. Tube noise and noise figure Noise figure is not usually an issue for power amplifier valves; however, in receivers using valves it can be important. While such uses are obsolete, this information is included for historical interest. Like any amplifying device, valves add noise to the signal to be amplified. Even with a hypothetical perfect amplifier, however, noise is unavoidably present due to thermal fluctuations in the signal source (usually assumed to be at room temperature, T = 295 K). Such fluctuations cause an electrical noise power of kB·T·B, where kB is the Boltzmann constant and B the bandwidth.
Correspondingly, the voltage noise of a resistance R into an open circuit is √(4kB·T·B·R) and the current noise into a short circuit is √(4kB·T·B/R). The noise figure is defined as the ratio of the noise power at the output of the amplifier relative to the noise power that would be present at the output if the amplifier were noiseless (due to amplification of thermal noise of the signal source). An equivalent definition is: noise figure is the factor by which insertion of the amplifier degrades the signal to noise ratio. It is often expressed in decibels (dB). An amplifier with a 0 dB noise figure would be perfect. The noise properties of tubes at audio frequencies can be modeled well by a perfect noiseless tube having a source of voltage noise in series with the grid. For the EF86 tube, for example, this voltage noise is specified (see e.g., the Valvo, Telefunken or Philips data sheets) as 2 microvolts integrated over a frequency range of approximately 25 Hz to 10 kHz. (This refers to the integrated noise, see below for the frequency dependence of the noise spectral density.) This equals the voltage noise of a 25 kΩ resistor. Thus, if the signal source has an impedance of 25 kΩ or more, the noise of the tube is actually smaller than the noise of the source. For a source of 25 kΩ, the noise generated by tube and source are the same, so the total noise power at the output of the amplifier is twice the noise power at the output of the perfect amplifier. The noise figure is then two, or 3 dB. For higher impedances, such as 250 kΩ, the EF86's voltage noise is lower than the source's own noise. It therefore adds 1/10 of the noise power caused by the source, and the noise figure is 0.4 dB. For a low-impedance source of 250 Ω, on the other hand, the noise voltage contribution of the tube is 10 times larger than the signal source, so that the noise power is one hundred times larger than that caused by the source. The noise figure in this case is 20 dB. To obtain a low noise figure the impedance of the source can be increased by a transformer. This is eventually limited by the input capacitance of the tube, which sets a limit on how high the signal impedance can be made if a certain bandwidth is desired. The noise voltage density of a given tube is a function of frequency. At frequencies above 10 kHz or so, it is basically constant ("white noise"). White noise is often expressed by an equivalent noise resistance, which is defined as the resistance which produces the same voltage noise as present at the tube input. For triodes, it is approximately (2-4)/gm, where gm is the transconductance. For pentodes, it is higher, about (5-7)/gm. Tubes with high gm thus tend to have lower noise at high frequencies. For example, it is 300 Ω for one half of the ECC88, 250 Ω for an E188CC (both have gm = 12.5 mA/V) and as low as 65 Ω for a triode-connected D3a (gm = 40 mA/V). In the audio frequency range (below 1–100 kHz), "1/f" noise becomes dominant, which rises like 1/f. (This is the reason for the relatively high noise resistance of the EF86 in the above example.) Thus, tubes with low noise at high frequency do not necessarily have low noise in the audio frequency range. For special low-noise audio tubes, the frequency at which 1/f noise takes over is reduced as far as possible, maybe to approximately a kilohertz. It can be reduced by choosing very pure materials for the cathode nickel, and running the tube at an optimized (generally low) anode current.
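The noise-figure arithmetic in the preceding paragraphs can be condensed into a few lines. This sketch implements the stated model (a noiseless tube plus an equivalent noise resistance in series with the grid) and reproduces the EF86 examples above:

```python
import math

def noise_figure_db(r_source, r_equiv):
    """Noise factor F = 1 + R_eq / R_s, returned in decibels.

    Both source and tube noise powers scale with 4*k*T*B*R, so the
    two resistances can be compared directly.
    """
    return 10.0 * math.log10(1.0 + r_equiv / r_source)

R_EQ_EF86 = 25e3  # ohms, the equivalent noise resistance implied by the 2 uV figure
for r_s in (25e3, 250e3, 250.0):
    print(f"R_source = {r_s:8.0f} ohm -> noise figure {noise_figure_db(r_s, R_EQ_EF86):5.1f} dB")
```

The three printed values, 3.0 dB, 0.4 dB and 20 dB, match the worked cases in the text.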
At radio frequencies, things are more complicated: (i) The input impedance of a tube has a real component that goes down like 1/f² (due to cathode lead inductance and transit time effects). This means the input impedance can no longer be increased arbitrarily in order to reduce the noise figure. (ii) This input resistance has its own thermal noise, just like any resistor. (The "temperature" of this resistor for noise purposes is closer to the cathode temperature than to room temperature.) Thus, the noise figure of tube amplifiers increases with frequency. At 200 MHz, a noise figure of 2.5 (or 4 dB) can be reached with the ECC2000 tube in an optimized "cascode" circuit with an optimized source impedance. At 800 MHz, tubes like the EC8010 have noise figures of about 10 dB or more. Planar triodes are better, but very early on, transistors reached noise figures substantially lower than tubes at UHF. Thus, the tuners of television sets were among the first parts of consumer electronics where transistors were used. Decline Semiconductor amplifiers have overwhelmingly displaced valve amplifiers for low- and medium-power applications at all frequencies. Valves continue to be used in some high-power, high-frequency amplifiers used for short wave broadcasting, VHF and UHF TV and (VHF) FM radio, also in existing "radar, countermeasures equipment, or communications equipment" using specially designed valves, such as the klystron, gyrotron, traveling-wave tube, and crossed-field amplifier; however, new designs for such products are now invariably semiconductor-based. Footnotes Works cited References Radio communication handbook (5th Ed), Radio Society of Great Britain, 1976, External links WebCite query result - AM band (medium wave, short wave) old valve type Radio The Audio Circuit - An almost complete list of manufacturers, DIY kits, materials and parts and 'how they work' sections on valve amplifiers Conversion calculator - distortion factor to distortion attenuation and THD Radio electronics Valve amplifiers
Valve RF amplifier
[ "Engineering" ]
5,372
[ "Radio electronics" ]
9,521,581
https://en.wikipedia.org/wiki/DLX%20gene%20family
Genes in the DLX family encode homeodomain transcription factors related to the Drosophila distal-less (Dll) gene. The family has been linked to a number of developmental features such as jaws and limbs. The family seems to be well conserved across species. As DLX/Dll are involved in limb development in most of the major phyla, including vertebrates, it has been suggested that Dll was involved in appendage growth in an early bilaterian ancestor. Six members of the family are found in humans and mice, numbered DLX1 to DLX6. They form two-gene clusters (bigene clusters) with each other. There are DLX1-DLX2, DLX3-DLX4, DLX5-DLX6 clusters in vertebrates, linked to Hox gene clusters HOXD, HOXB, and HOXA respectively. In higher fishes like the zebrafish, there are two additional DLX genes, dlx2b (dlx5) and dlx4a (dlx8). These additional genes are not linked with each other, or any other DLX gene. All six other genes remain in bigene clusters. DLX4, DLX7, DLX8 and DLX9 are the same gene in vertebrates. They are named differently because each time the same gene was found, the researchers thought they had discovered a new gene. Function DLX genes, like distal-less, are involved in limb development in most of the major phyla. DLX genes are involved in craniofacial morphogenesis and the tangential migration of interneurons from the subpallium to the pallium during vertebrate brain development. It has been suggested that DLX promotes the migration of interneurons by repressing a set of proteins that are normally expressed in terminally differentiated neurons and act to promote the outgrowth of dendrites and axons. Mice lacking DLX1 exhibit electrophysiological and histological evidence consistent with delayed-onset epilepsy. DLX2 has been associated with a number of areas including development of the zona limitans intrathalamica and the prethalamus. DLX4 (DLX7) is expressed in bone marrow. DLX5 and DLX6 genes are necessary for normal formation of the mandible in vertebrates. References Gene families Transcription factors
DLX gene family
[ "Chemistry", "Biology" ]
505
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
9,522,381
https://en.wikipedia.org/wiki/Fundamental%20diagram%20of%20traffic%20flow
The fundamental diagram of traffic flow is a diagram that gives a relation between road traffic flux (vehicles/hour) and the traffic density (vehicles/km). A macroscopic traffic model involving traffic flux, traffic density and velocity forms the basis of the fundamental diagram. It can be used to predict the capability of a road system, or its behaviour when applying inflow regulation or speed limits. Basic statements There is a connection between traffic density and vehicle velocity: The more vehicles are on a road, the slower their velocity will be. To prevent congestion and to keep traffic flow stable, the number of vehicles entering the control zone has to be smaller than or equal to the number of vehicles leaving the zone in the same time. At a critical traffic density and a corresponding critical velocity the state of flow will change from stable to unstable. If one of the vehicles brakes in an unstable flow regime the flow will collapse. The primary tool for graphically displaying information in the study of traffic flow is the fundamental diagram. Fundamental diagrams consist of three different graphs: flow-density, speed-flow, and speed-density. The graphs are two-dimensional. All the graphs are related by the equation "flow = speed * density"; this equation is the essential equation in traffic flow. The fundamental diagrams were derived by plotting field data points and fitting a best-fit curve to them. With the fundamental diagrams researchers can explore the relationship between speed, flow, and density of traffic. Speed-density The speed-density relationship is linear with a negative slope; therefore, as the density increases the speed of the roadway decreases. The line crosses the speed axis, y, at the free flow speed, and the line crosses the density axis, x, at the jam density. Here the speed approaches free flow speed as the density approaches zero. As the density increases, the speed of the vehicles on the roadway decreases. The speed reaches approximately zero when the density equals the jam density. Flow-density In the study of traffic flow theory, the flow-density diagram is used to determine the traffic state of a roadway. Currently, there are two types of flow density graphs: parabolic and triangular. Academia views the triangular flow-density curve as the more accurate representation of real world events. The triangular curve consists of two vectors. The first vector is the freeflow side of the curve. This vector is created by placing the freeflow velocity vector of a roadway at the origin of the flow-density graph. The second vector is the congested branch, which is created by placing the vector of the shock wave speed at zero flow and jam density. The congested branch has a negative slope, which implies that the higher the density on the congested branch the lower the flow; therefore, even though there are more cars on the road, the number of cars passing a single point is less than if there were fewer cars on the road. The intersection of freeflow and congested vectors is the apex of the curve and is considered the capacity of the roadway, which is the traffic condition at which the maximum number of vehicles can pass by a point in a given time period. The flow and density at which this point occurs are the optimum flow and optimum density, respectively. The flow density diagram is used to give the traffic condition of a roadway.
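The linear speed-density line and parabolic flow-density curve described above correspond to the classic Greenshields model; a minimal sketch with illustrative (not field) values:

```python
def greenshields(k, v_free=100.0, k_jam=120.0):
    """Greenshields model: v falls linearly with density k; flow q = k * v.

    k in veh/km, v_free in km/h, k_jam in veh/km.
    """
    v = v_free * (1.0 - k / k_jam)
    q = k * v  # the essential relation: flow = speed * density
    return v, q

# Capacity sits at half the jam density: q_max = v_free * k_jam / 4
v_opt, q_max = greenshields(120.0 / 2)
print(f"optimum density 60 veh/km -> speed {v_opt:.0f} km/h, capacity {q_max:.0f} veh/h")

for k in (0, 30, 60, 90, 120):
    v, q = greenshields(k)
    print(f"k = {k:3d} veh/km: v = {v:5.1f} km/h, q = {q:6.0f} veh/h")
```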
With the traffic conditions, time-space diagrams can be created to give travel time, delay, and queue lengths of a road segment. Speed-flow Speed-flow diagrams are used to determine the speed at which the optimum flow occurs. There are currently two shapes of the speed-flow curve. The speed-flow curve also consists of two branches, the free flow and congested branches. The diagram is not a function, allowing the flow variable to exist at two different speeds. The flow variable existing at two different speeds occurs when the speed is higher and the density is lower or when the speed is lower and the density is higher, which allows for the same flow rate. In the first speed-flow diagram, the free flow branch is a horizontal line, which shows that the roadway is at free flow speed until the optimum flow is reached. Once the optimum flow is reached, the diagram switches to the congested branch, which is a parabolic shape. The second speed-flow diagram is a parabola. The parabola suggests that the only time there is free flow speed is when the density approaches zero; it also suggests that as the flow increases the speed decreases. This parabolic graph also contains an optimum flow. The optimum flow also divides the free flow and congested branches on the parabolic graph. Macroscopic fundamental diagram A macroscopic fundamental diagram (MFD) is a type of traffic flow fundamental diagram that relates space-mean flow, density and speed of an entire network with n number of links as shown in Figure 1. The MFD thus represents the capacity, q(k), of the network in terms of vehicle density, with q_max being the maximum capacity of the network and k_jam being the jam density of the network. The maximum capacity or "sweet spot" of the network is the region at the peak of the MFD function. Flow The space-mean flow, q̄, across all the links of a given network can be expressed by: q̄ = (Σi di)/|B|, where di is the distance traveled by vehicle i and B is the area in the time-space diagram shown in Figure 2. Density The space-mean density, k̄, across all the links of a given network can be expressed by: k̄ = (Σi ti)/|A|, where ti is the time spent by vehicle i and A is the area in the time-space diagram shown in Figure 2. Speed The space-mean speed, v̄, across all the links of a given network can be expressed by: v̄ = q̄/k̄ = (Σi di)/(Σi ti), taken over the area B in the space-time diagram shown in Figure 2. Average travel time The MFD function can be expressed in terms of the number of vehicles n in the network such that: n = k̄·L, where L represents the total lane miles of the network. Let d be the average distance driven by a user in the network. The average travel time (τ) is: τ = d/v̄. Application of the Macroscopic Fundamental Diagram (MFD) In 2008, the traffic flow data of the city street network of Yokohama, Japan was collected using 500 fixed sensors and 140 mobile sensors. The study revealed that city sectors with an approximate area of 10 km2 are expected to have well-defined MFD functions. However, the observed MFD does not produce the full MFD function in the congested region of higher densities. Most beneficially though, the MFD function of a city network was shown to be independent of the traffic demand. Thus, through the continuous collection of traffic flow data the MFD for urban neighborhoods and cities can be obtained and used for analysis and traffic engineering purposes. These MFD functions can aid agencies in improving network accessibility and help to reduce congestion by monitoring the number of vehicles in the network.
In turn, using congestion pricing, perimeter control, and various other traffic control methods, agencies can maintain optimum network performance at the "sweet spot" peak capacity. Agencies can also use the MFD to estimate average trip times for public information and engineering purposes. Keyvan-Ekbatani et al. have exploited the notion of MFD to improve mobility in saturated traffic conditions via application of gating measures, based on an appropriate simple feedback control structure. They developed a simple (nonlinear and linearized) control design model, incorporating the operational MFD, which allows for the gating problem to be cast in a proper feedback control design setting. This allows for application and comparison of a variety of linear or nonlinear, feedback or predictive (e.g. Smith predictor, internal model control and other) control design methods from the control engineering arsenal; among them, a simple but efficient PI controller was developed and successfully tested in a fairly realistic microscopic simulation environment. See also Traffic flow Traffic wave Traffic congestion Three-detector problem and Newell's method References Road transport Transportation engineering
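The space-mean quantities used in the MFD section can be computed directly from observed vehicle trajectories. The sketch below assumes Edie's generalized definitions (flow as total distance traveled over the area of the time-space region, density as total time spent over that area); the trajectory data are invented for the example:

```python
def edie_measures(segments, area):
    """Space-mean flow, density and speed over a time-space region.

    segments: list of (distance_km, time_h) contributed by each vehicle
    area: size of the time-space region in km * h
    """
    total_dist = sum(d for d, _ in segments)
    total_time = sum(t for _, t in segments)
    q = total_dist / area          # veh/h
    k = total_time / area          # veh/km
    v = total_dist / total_time    # km/h, identical to q / k
    return q, k, v

# Three hypothetical vehicles in a 1 km by 0.1 h observation window
segments = [(0.9, 0.020), (0.7, 0.025), (0.5, 0.030)]
q, k, v = edie_measures(segments, area=1.0 * 0.1)
print(f"flow {q:.0f} veh/h, density {k:.1f} veh/km, speed {v:.1f} km/h")
```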
Fundamental diagram of traffic flow
[ "Engineering" ]
1,598
[ "Transportation engineering", "Civil engineering", "Industrial engineering" ]
9,522,674
https://en.wikipedia.org/wiki/Mendelian%20error
A Mendelian error in the genetic analysis of a species describes an allele in an individual which could not have been received from either of its biological parents by Mendelian inheritance. Inheritance is defined by a set of related individuals who have the same or similar phenotypes for a locus of a particular gene. A Mendelian error means that the inheritance structure implied by the pedigree is inconsistent with the observed genotypes: most simply, one parent of an individual is not actually the parent indicated, so the parental information is assumed to be incorrect. Possible explanations for Mendelian errors are genotyping errors, erroneous assignment of the individuals as relatives, or de novo mutations. A Mendelian error is established by demonstrating the existence of a trait which is inconsistent with every possible combination of genotypes compatible with the individual's pedigree. This method of determination requires pedigree checking, however, and establishing a contradiction between phenotype and pedigree is an NP-complete problem. Genetic inconsistencies which do not correspond to this definition are non-Mendelian errors. Statistical genetics analysis is used to detect these errors and to assess the possibility that the individual is linked to a specific disease caused by a single gene. Examples of such diseases in humans caused by single genes are Huntington's disease or Marfan syndrome. See also Gregor Mendel SNP genotyping Footnotes Mendelian error detection in complex pedigree using weighted constraint satisfaction techniques Genetics error NP-complete problems
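The core idea, that a child's genotype at a locus must be formable from one allele of each parent, can be illustrated with a toy consistency check for a single trio; the function and the genotype encoding are invented for the example:

```python
from itertools import product

def mendelian_consistent(child, mother, father):
    """True if the child genotype (a 2-tuple of alleles) can be built by
    drawing one allele from each parent at a single locus."""
    possible = {tuple(sorted(pair)) for pair in product(mother, father)}
    return tuple(sorted(child)) in possible

# Parents A/A and A/B can yield A/A or A/B, but never B/B
print(mendelian_consistent(("A", "B"), ("A", "A"), ("A", "B")))  # True
print(mendelian_consistent(("B", "B"), ("A", "A"), ("A", "B")))  # False: Mendelian error
```

Real error detection is harder than this trio check since, as noted above, establishing a phenotype-pedigree contradiction over a whole complex pedigree is NP-complete.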
Mendelian error
[ "Mathematics", "Biology" ]
305
[ "NP-complete problems", "Mathematical problems", "Genetics", "Computational problems" ]
9,523,459
https://en.wikipedia.org/wiki/Tank%20blanketing
Tank blanketing, also called gas sealing or tank padding, is the process of applying a gas to the empty space in a storage container. The term storage container here refers to any container that is used to store products, regardless of its size. Though tank blanketing is used for a variety of reasons, it typically involves using a buffer gas to protect products inside the storage container. A few of the benefits of blanketing include a longer product life in the container, reduced hazards, and longer equipment life cycles. Methods In 1970, Appalachian Controls Environmental (ACE) was the world's first company to introduce a tank blanketing valve. There are now many ready-made systems available for purchase from a variety of process equipment companies. It is also possible to assemble a custom system from a variety of equipment. Regardless of which method is used, the basic requirements are the same. There must be a way of allowing the blanketing gas into the system, and a way to vent the gas should the pressure get too high. Since ACE introduced its valve, many companies have engineered their own versions. Though many of the products available vary in features and applicability, the fundamental design is the same. When the pressure inside the container drops below a set point, a valve opens and allows the blanketing gas to enter. Once the pressure reaches the set point, the valve closes. As a safety feature, many systems include a pressure vent that opens when the pressure inside exceeds a maximum pressure set point. This helps to prevent the container from rupturing due to high pressure. Since most blanketing gas sources will provide gas at a much higher than desired pressure, a blanketing system will also use a pressure reducing valve to decrease the inlet pressure to the tank. Although it varies from application to application, blanketing systems usually operate at a slightly higher than atmospheric pressure (a few inches of water column above atmospheric). Higher pressures than this are generally not used as they often yield only marginal increases in results while wasting large amounts of expensive blanketing gas. Some systems also utilize inert gases to agitate the liquid contents of the container. This is desirable because when products such as citric acid are added to food oils, the tank contents will begin to settle over time, with the heavier contents sinking to the bottom. However, a system that utilizes nitrogen sparging (and then subsequently tank blanketing once the nitrogen reaches the vapor space) may have a negative impact on the products involved. Nitrogen sparging creates a significantly higher amount of surface contact between the gas and the product, which in turn creates a much larger opportunity for undesired oxidation to occur. It is possible for nitrogen that is as much as 99.9% free of oxygen to increase the amount of oxidation within the product due to the high amount of surface contact. Common practices The most common gas used in blanketing is nitrogen. Nitrogen is widely used due to its inert properties, as well as its availability and relatively low cost. Tank blanketing is used for a variety of products including cooking oils, volatile combustible products, and purified water. These applications also cover a wide variety of storage containers, ranging from as large as a tank containing millions of gallons of vegetable oil down to a quart-size container or smaller. Nitrogen is appropriate for use at any of these scales.
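The setpoint behaviour described under Methods, admitting gas below a low setpoint and venting above a high one, amounts to a simple deadband controller. A hypothetical sketch (the pressures and the controller structure are illustrative, not taken from any vendor's product):

```python
def blanketing_action(pressure, supply_setpoint=2.0, vent_setpoint=6.0):
    """Decide valve actions for a tank blanket held slightly above atmospheric.

    Pressures are gauge values in inches of water column (in. w.c.).
    """
    if pressure < supply_setpoint:
        return "open supply valve"   # admit blanketing gas
    if pressure > vent_setpoint:
        return "open pressure vent"  # relieve excess pressure
    return "hold"                    # within the deadband: both valves closed

for p in (1.2, 3.5, 7.1):
    print(f"{p:4.1f} in. w.c. -> {blanketing_action(p)}")
```

Keeping the two setpoints apart avoids the supply and vent valves chattering against each other, which would waste blanketing gas.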
The use of an inert blanketing gas for food products helps to keep oxygen levels low in and around the product. Low levels of oxygen surrounding the product help to reduce the amount of oxidation that may occur, and increase shelf life. In the case of cooking oils, lipid oxidation can cause the oil to change its color, flavor, or aroma. It also decreases the nutrient levels in the food and can even generate toxic substances. Tank blanketing strategies are also implemented to prepare the product for transit (railcar or truck) and for final packaging before sealing the product. When considering the application for combustible products, the greatest benefit is process safety. Since fuels require oxygen to combust, reduced oxygen content in the vapor space lowers the risk of unwanted combustion. Tank blanketing is also used to keep contaminants out of a storage space. This is accomplished by creating positive pressure inside the container. This positive pressure ensures that if a leak should occur, the gas will leak out rather than having the contaminants infiltrate the container. Some examples include its use on purified water to keep unwanted minerals out and its use on food products to keep contaminants out. To ensure their safety, gas-blanketing systems for food use are regulated by the U.S. Food and Drug Administration (FDA) and must adhere to strict maintenance schedules and follow all product-contact regulations with regard to purity, toxicity, and filter specifications. As with any use of inert gases, care must be taken to ensure that workers are not exposed to large quantities of nitrogen or other non-breathable substances, which can quickly result in asphyxiation and death. Use of them in commercial applications is subject to the regulation of OSHA in the USA and similar regulatory bodies elsewhere. See also Industrial gas Oxygen reduction system Inerting system References Author unavailable (2000), Fisher Controls becomes an "ACE" in tank blanketing [Electronic version]. Control Engineering Europe, July 2000, 12. Kanner, J., Rosenthal, I. (1992), An Assessment of Lipid Oxidation in Foods [Electronic version]. Pure Appl. Chem., Vol. 64, No. 12, 1959-1964. Retrieved February 15, 2007, from http://www.iupac.org/publications/pac/1992/pdf/6412x1959.pdf Amos, Kenna (1999). Leakless vapor-space valve controls unveiled. InTech, January 1999. Retrieved February 15, 2007, from http://findarticles.com/p/articles/mi_qa3739/is_199901/ai_n8840650 External sources Online Chemical Engineering Information Nitrogen properties, uses, and applications Control engineering Chemical processes
Tank blanketing
[ "Chemistry", "Engineering" ]
1,239
[ "Chemical process engineering", "Control engineering", "Chemical processes", "nan" ]
171,878
https://en.wikipedia.org/wiki/Tautochrone%20curve
A tautochrone curve or isochrone curve (from Greek prefixes tauto- meaning same or iso- equal, and chrono time) is the curve for which the time taken by an object sliding without friction in uniform gravity to its lowest point is independent of its starting point on the curve. The curve is a cycloid, and the time is equal to π times the square root of the radius (of the circle which generates the cycloid) over the acceleration of gravity. The tautochrone curve is related to the brachistochrone curve, which is also a cycloid. The tautochrone problem The tautochrone problem, the attempt to identify this curve, was solved by Christiaan Huygens in 1659. He proved geometrically in his Horologium Oscillatorium, originally published in 1673, that the curve is a cycloid. The cycloid is given by a point on a circle of radius r tracing a curve as the circle rolls along the x-axis, as: x = r(θ − sin θ), y = r(1 − cos θ). Huygens also proved that the time of descent is equal to the time a body takes to fall vertically the same distance as the diameter of the circle that generates the cycloid, multiplied by π/2. In modern terms, this means that the time of descent is T = π√(r/g), where r is the radius of the circle which generates the cycloid, and g is the gravity of Earth, or more accurately, the Earth's gravitational acceleration. This solution was later used to solve the problem of the brachistochrone curve. Johann Bernoulli solved the problem in a paper (Acta Eruditorum, 1697). The tautochrone problem was studied by Huygens more closely when it was realized that a pendulum, which follows a circular path, was not isochronous and thus his pendulum clock would keep different time depending on how far the pendulum swung. After determining the correct path, Christiaan Huygens attempted to create pendulum clocks that used a string to suspend the bob and curb cheeks near the top of the string to change the path to the tautochrone curve. These attempts proved unhelpful for a number of reasons. First, the bending of the string causes friction, changing the timing. Second, there were much more significant sources of timing errors that overwhelmed any theoretical improvement that traveling on the tautochrone curve provides. Finally, the "circular error" of a pendulum decreases as the length of the swing decreases, so better clock escapements could greatly reduce this source of inaccuracy. Later, the mathematicians Joseph Louis Lagrange and Leonhard Euler provided an analytical solution to the problem. Lagrangian solution For a simple harmonic oscillator released from rest, regardless of its initial displacement, the time it takes to reach the lowest potential energy point is always a quarter of its period, which is independent of its amplitude. Therefore, the Lagrangian of a simple harmonic oscillator is isochronous. In the tautochrone problem, if the particle's position is parametrized by the arclength s(t) from the lowest point, the kinetic energy is then proportional to (ds/dt)², and the potential energy is proportional to the height h(s). One way the curve in the tautochrone problem can be an isochrone is if the Lagrangian is mathematically equivalent to a simple harmonic oscillator; that is, the height of the curve must be proportional to the arclength squared: h(s) = c s², where c is the constant of proportionality. Compared to the simple harmonic oscillator's Lagrangian, the equivalent spring constant is k = 2mgc, and the time of descent is T = (2π/4)√(m/k) = (π/2)√(1/(2gc)). However, the physical meaning of the constant c is not clear until we determine the exact analytical equation of the curve.
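Before continuing with the derivation below, Huygens' claim can be checked numerically: the descent time from any release point on a cycloid of rolling radius r should equal π√(r/g). A simulation sketch (step count and release angles chosen arbitrarily):

```python
import math

def descent_time(phi0, r=1.0, g=9.81, n=200_000):
    """Time for a bead to slide from parameter phi0 to the bottom (phi = pi)
    of the cycloid x = r(phi - sin phi), y = r(1 - cos phi), with y measured
    downward from the cusp; integrated by the midpoint rule."""
    total = 0.0
    dphi = (math.pi - phi0) / n
    for i in range(n):
        phi = phi0 + (i + 0.5) * dphi          # midpoint avoids the v = 0 endpoint
        ds = 2.0 * r * math.sin(phi / 2.0)     # arc length per unit phi
        v = math.sqrt(2.0 * g * r * (math.cos(phi0) - math.cos(phi)))
        total += ds / v * dphi
    return total

expected = math.pi * math.sqrt(1.0 / 9.81)
for phi0 in (0.3, 1.0, 2.0):  # three different release points
    print(f"phi0 = {phi0:.1f}: t = {descent_time(phi0):.4f} s (expected {expected:.4f} s)")
```

All three release points give, to numerical accuracy, the same descent time, which is exactly the tautochrone property derived analytically in what follows.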
To solve for the analytical equation of the curve, note that the differential form of the above relation is dh = 2cs ds, which, squared and combined with ds² = dx² + dh² (and using s² = h/c), eliminates s and leaves a differential equation for dx and dh: dx = dh √((1 − 4ch)/(4ch)). This is the differential equation for a cycloid when the vertical coordinate h is counted from its vertex (the point with a horizontal tangent) instead of the cusp. To find the solution, integrate for x in terms of h: dx = √(1 − u²)/(2c) du, where u = √(4ch), and the height decreases as the particle moves forward. This integral is the area under a circle, which can be done with another substitution, u = sin(t/2), and yields: x = (t + sin t)/(8c), h = (1 − cos t)/(8c). This is the standard parameterization of a cycloid with r = 1/(8c), so the time of descent found above becomes T = (π/2)√(1/(2gc)) = π√(r/g). It's interesting to note that the arc length squared is equal to the height difference multiplied by the full arch length, 8r: s² = 8rh. "Virtual gravity" solution The simplest solution to the tautochrone problem is to note a direct relation between the angle of an incline and the gravity felt by a particle on the incline. A particle on a 90° vertical incline undergoes full gravitational acceleration g, while a particle on a horizontal plane undergoes zero gravitational acceleration. At intermediate angles, the acceleration due to "virtual gravity" felt by the particle is g sin θ. Note that θ is measured between the tangent to the curve and the horizontal, with angles above the horizontal being treated as positive angles. Thus, θ varies from −π/2 to π/2. The position of a mass measured along a tautochrone curve, s(t), must obey the following differential equation: d²s/dt² = −ω²s, which, along with the initial conditions s(0) = s₀ and s′(0) = 0, has solution: s(t) = s₀ cos ωt. It can be easily verified both that this solution solves the differential equation and that a particle will reach s = 0 at time π/(2ω) from any starting position s₀. The problem is now to construct a curve that will cause the mass to obey the above motion. Newton's second law shows that the force of gravity and the acceleration of the mass are related by: d²s/dt² = −g sin θ. The explicit appearance of the distance, s, is troublesome, but we can differentiate to obtain a more manageable form: d³s/dt³ = −ω² ds/dt, or −g cos θ (dθ/dt) = −ω² (ds/dt). This equation relates the change in the curve's angle to the change in the distance along the curve: ds/dθ = (g/ω²) cos θ. We now use trigonometry to relate the angle θ to the differential lengths dx, dy and ds: ds cos θ = dx, ds sin θ = dy. Replacing ds with dx/cos θ in the above equation lets us solve for x in terms of θ: dx/dθ = (g/ω²) cos²θ, so x = (g/(4ω²))(2θ + sin 2θ) + Cx. Likewise, we can also express ds in terms of dy/sin θ and solve for y in terms of θ: dy/dθ = (g/ω²) sin θ cos θ, so y = −(g/(4ω²)) cos 2θ + Cy. Substituting φ = 2θ and r = g/(4ω²), we see that these parametric equations for x and y are those of a point on a circle of radius r rolling along a horizontal line (a cycloid), with the circle center at the coordinates (Cx + rφ, Cy): x = Cx + r(φ + sin φ), y = Cy − r cos φ. Note that φ ranges from −π to π. It is typical to set Cx = 0 and Cy = r so that the lowest point on the curve coincides with the origin. Therefore: x = r(φ + sin φ), y = r(1 − cos φ). Solving for ω and remembering that T = π/(2ω) is the time required for descent, being a quarter of a whole cycle, we find the descent time in terms of the radius r: since r = g/(4ω²), ω = (1/2)√(g/r) and T = π√(r/g). (Based loosely on Proctor, pp. 135–139) Abel's solution Niels Henrik Abel attacked a generalized version of the tautochrone problem (Abel's mechanical problem), namely, given a function T(y₀) that specifies the total time of descent for a given starting height, find an equation of the curve that yields this result. The tautochrone problem is a special case of Abel's mechanical problem when T(y₀) is a constant. Abel's solution begins with the principle of conservation of energy – since the particle is frictionless, and thus loses no energy to heat, its kinetic energy at any point is exactly equal to the difference in gravitational potential energy from its starting point.
The kinetic energy is ½m(ds/dt)², and since the particle is constrained to move along a curve, its velocity is simply ds/dt, where s is the distance measured along the curve. Likewise, the gravitational potential energy gained in falling from an initial height y₀ to a height y is mg(y₀ − y), thus: ½m(ds/dt)² = mg(y₀ − y), so ds/dt = −√(2g(y₀ − y)) and dt = −(1/√(2g(y₀ − y))) (ds/dy) dy. In the last equation, we have anticipated writing the distance remaining along the curve as a function of height (s(y)), recognized that the distance remaining must decrease as time increases (thus the minus sign), and used the chain rule in the form ds = (ds/dy) dy. Now we integrate from y = y₀ to y = 0 to get the total time required for the particle to fall: T(y₀) = (1/√(2g)) ∫₀^y₀ (ds/dy)/√(y₀ − y) dy. This is called Abel's integral equation and allows us to compute the total time required for a particle to fall along a given curve (for which ds/dy would be easy to calculate). But Abel's mechanical problem requires the converse – given T(y₀), we wish to find f(y) = ds/dy, from which an equation for the curve would follow in a straightforward manner. To proceed, we note that the integral on the right is the convolution of ds/dy with 1/√y and thus take the Laplace transform of both sides with respect to variable y: L[T] = (1/√(2g)) L[1/√y] F(z), where F(z) = L[ds/dy]. Since L[1/√y] = √(π/z), we now have an expression for the Laplace transform of ds/dy in terms of the Laplace transform of T: F(z) = √(2g/π) z^(1/2) L[T]. This is as far as we can go without specifying T(y₀). Once T(y₀) is known, we can compute its Laplace transform, calculate the Laplace transform of ds/dy and then take the inverse transform (or try to) to find ds/dy. For the tautochrone problem, T(y₀) = T₀ is constant. Since the Laplace transform of 1 is 1/z, i.e., L[T] = T₀/z, we find the shape function f(y) = ds/dy: F(z) = √(2g/π) z^(1/2) T₀/z = √(2g/π) T₀ z^(−1/2). Making use again of the Laplace transform above, we invert the transform and conclude: ds/dy = (T₀√(2g)/π)(1/√y). It can be shown that the cycloid obeys this equation. It needs one step further to do the integral with respect to y to obtain the expression of the path shape. (Simmons, Section 54). See also Beltrami identity Brachistochrone curve Calculus of variations Catenary Cycloid Uniformly accelerated motion References Bibliography External links Mathworld Plane curves Mechanics
Tautochrone curve
[ "Physics", "Mathematics", "Engineering" ]
1,858
[ "Plane curves", "Euclidean plane geometry", "Mechanics", "Mechanical engineering", "Planes (geometry)" ]
171,879
https://en.wikipedia.org/wiki/Brachistochrone%20curve
In physics and mathematics, a brachistochrone curve (), or curve of fastest descent, is the one lying on the plane between a point A and a lower point B, where B is not directly below A, on which a bead slides frictionlessly under the influence of a uniform gravitational field to a given end point in the shortest time. The problem was posed by Johann Bernoulli in 1696. The brachistochrone curve is the same shape as the tautochrone curve; both are cycloids. However, the portion of the cycloid used for each of the two varies. More specifically, the brachistochrone can use up to a complete rotation of the cycloid (at the limit when A and B are at the same level), but always starts at a cusp. In contrast, the tautochrone problem can use only up to the first half rotation, and always ends at the horizontal. The problem can be solved using tools from the calculus of variations and optimal control. The curve is independent of both the mass of the test body and the local strength of gravity. Only a parameter is chosen so that the curve fits the starting point A and the ending point B. If the body is given an initial velocity at A, or if friction is taken into account, then the curve that minimizes time differs from the tautochrone curve. History Galileo's problem Earlier, in 1638, Galileo Galilei had tried to solve a similar problem for the path of the fastest descent from a point to a wall in his Two New Sciences. He draws the conclusion that the arc of a circle is faster than any number of its chords,From the preceding it is possible to infer that the quickest path of all [lationem omnium velocissimam], from one point to another, is not the shortest path, namely, a straight line, but the arc of a circle. ... Consequently the nearer the inscribed polygon approaches a circle the shorter the time required for descent from A to C. What has been proven for the quadrant holds true also for smaller arcs; the reasoning is the same. Just after Theorem 6 of Two New Sciences, Galileo warns of possible fallacies and the need for a "higher science". In this dialogue Galileo reviews his own work. Galileo studied the cycloid and gave it its name, but the connection between it and his problem had to wait for advances in mathematics. Galileo’s conjecture is that “The shortest time of all [for a movable body] will be that of its fall along the arc ADB [of a quarter circle] and similar properties are to be understood as holding for all lesser arcs taken upward from the lowest limit B.” In Fig.1, from the “Dialogue Concerning the Two Chief World Systems”, Galileo claims that the body sliding along the circular arc of a quarter circle, from A to B will reach B in less time than if it took any other path from A to B. Similarly, in Fig. 2, from any point D on the arc AB, he claims that the time along the lesser arc DB will be less than for any other path from D to B. In fact, the quickest path from A to B or from D to B, the brachistochrone, is a cycloidal arc, which is shown in Fig. 3 for the path from A to B, and Fig.4 for the path from D to B, superposed on the respective circular arc. Introduction of the problem Johann Bernoulli posed the problem of the brachistochrone to the readers of Acta Eruditorum in June, 1696. 
He stated the problem as follows: "Given two points A and B in a vertical plane, what is the curve traced out by a point acted on only by gravity, which starts at A and reaches B in the shortest time?" Johann and his brother Jakob Bernoulli derived the same solution, but Johann's derivation was incorrect, and he tried to pass off Jakob's solution as his own. Johann published the solution in the journal in May of the following year, and noted that the solution is the same curve as Huygens' tautochrone curve. After deriving the differential equation for the curve by the method given below, he went on to show that it does yield a cycloid. However, his proof is marred by his use of a single constant instead of the three constants, vm, 2g and D, below. Bernoulli allowed six months for the solutions but none were received during this period. At the request of Leibniz, the time was publicly extended for a year and a half. At 4 p.m. on 29 January 1697 when he arrived home from the Royal Mint, Isaac Newton found the challenge in a letter from Johann Bernoulli. Newton stayed up all night to solve it and mailed the solution anonymously by the next post. Upon reading the solution, Bernoulli immediately recognized its author, exclaiming that he "recognizes a lion from his claw mark". This story gives some idea of Newton's power, since Johann Bernoulli took two weeks to solve it (D. T. Whiteside, Newton the Mathematician, in Bechler, Contemporary Newtonian Research, p. 122). Newton also wrote, "I do not love to be dunned [pestered] and teased by foreigners about mathematical things...", and Newton had already solved Newton's minimal resistance problem, which is considered the first of the kind in calculus of variations. In the end, five mathematicians responded with solutions: Newton, Jakob Bernoulli, Gottfried Leibniz, Ehrenfried Walther von Tschirnhaus and Guillaume de l'Hôpital. Four of the solutions (excluding l'Hôpital's) were published in the same edition of the journal as Johann Bernoulli's. In his paper, Jakob Bernoulli gave a proof of the condition for least time similar to that below before showing that its solution is a cycloid. According to Newtonian scholar Tom Whiteside, in an attempt to outdo his brother, Jakob Bernoulli created a harder version of the brachistochrone problem. In solving it, he developed new methods that were refined by Leonhard Euler into what the latter called (in 1766) the calculus of variations. Joseph-Louis Lagrange did further work that resulted in modern infinitesimal calculus. Johann Bernoulli's solution Introduction In a letter to L'Hôpital (21/12/1696), Bernoulli stated that when considering the problem of the curve of quickest descent, after only 2 days he noticed a curious affinity or connection with another no less remarkable problem leading to an 'indirect method' of solution. Then shortly afterwards he discovered a 'direct method'. Direct method In a letter to Henri Basnage, held at the University of Basel Public Library, dated 30 March 1697, Johann Bernoulli stated that he had found two methods (always referred to as "direct" and "indirect") to show that the Brachistochrone was the "common cycloid", also called the "roulette". Following advice from Leibniz, he included only the indirect method in the Acta Eruditorum Lipsidae of May 1697. 
He wrote that this was partly because he believed it was sufficient to convince anyone who doubted the conclusion, partly because it also resolved two famous problems in optics that "the late Mr. Huygens" had raised in his treatise on light. In the same letter he criticised Newton for concealing his method. In addition to his indirect method he also published the five other replies to the problem that he received. Johann Bernoulli's direct method is historically important as a proof that the brachistochrone is the cycloid. The method is to determine the curvature of the curve at each point. All the other proofs, including Newton's (which was not revealed at the time) are based on finding the gradient at each point. In 1718, Bernoulli explained how he solved the brachistochrone problem by his direct method (The Early Period of the Calculus of Variations, by P. Freguglia and M. Giaquinta, pp. 53–57). He explained that he had not published it in 1697, for reasons that no longer applied in 1718. This paper was largely ignored until 1904 when the depth of the method was first appreciated by Constantin Carathéodory, who stated that it shows that the cycloid is the only possible curve of quickest descent. According to him, the other solutions simply implied that the time of descent is stationary for the cycloid, but not necessarily the minimum possible. Analytic solution A body is regarded as sliding along any small circular arc Ce between the radii KC and Ke, with centre K fixed. The first stage of the proof involves finding the particular circular arc, Mm, which the body traverses in the minimum time. The line KNC intersects AL at N, and line Kne intersects it at n, and they make a small angle CKe at K. Let NK = a, and define a variable point, C on KN extended. Of all the possible circular arcs Ce, it is required to find the arc Mm, which requires the minimum time to slide between the 2 radii, KM and Km. To find Mm Bernoulli argues as follows. Let MN = x. He defines m so that MD = mx, and n so that Mm = nx + na and notes that x is the only variable and that m is finite and n is infinitely small. The small time to travel along arc Mm is , which has to be a minimum ('un plus petit'). He does not explain that because Mm is so small the speed along it can be assumed to be the speed at M, which is as the square root of MD, the vertical distance of M below the horizontal line AL. It follows that, when differentiated this must give so that x = a. This condition defines the curve that the body slides along in the shortest time possible. For each point, M on the curve, the radius of curvature, MK is cut in 2 equal parts by its axis AL. This property, which Bernoulli says had been known for a long time, is unique to the cycloid. Finally, he considers the more general case where the speed is an arbitrary function X(x), so the time to be minimised is . The minimum condition then becomes which he writes as : and which gives MN (=x) as a function of NK (= a). From this the equation of the curve could be obtained from the integral calculus, though he does not demonstrate this. Synthetic solution He then proceeds with what he called his Synthetic Solution, which was a classical, geometrical proof, that there is only a single curve that a body can slide down in the minimum time, and that curve is the cycloid. "The reason for the synthetic demonstration, in the manner of the ancients, is to convince Mr. de la Hire. 
He has little time for our new analysis, describing it as false (He claims he has found 3 ways to prove that the curve is a cubic parabola)" – Letter from Johan Bernoulli to Pierre Varignon dated 27 Jul 1697. Assume AMmB is the part of the cycloid joining A to B, which the body slides down in the minimum time. Let ICcJ be part of a different curve joining A to B, which can be closer to AL than AMmB. If the arc Mm subtends the angle MKm at its centre of curvature, K, let the arc on IJ that subtends the same angle be Cc. The circular arc through C with centre K is Ce. Point D on AL is vertically above M. Join K to D and point H is where CG intersects KD, extended if necessary. Let and t be the times the body takes to fall along Mm and Ce respectively. , , Extend CG to point F where, and since , it follows that Since MN = NK, for the cycloid: , , and If Ce is closer to K than Mm then and In either case, , and it follows that If the arc, Cc subtended by the angle infinitesimal angle MKm on IJ is not circular, it must be greater than Ce, since Cec becomes a right-triangle in the limit as angle MKm approaches zero. Note, Bernoulli proves that CF > CG by a similar but different argument. From this he concludes that a body traverses the cycloid AMB in less time than any other curve ACB. Indirect method According to Fermat’s principle, the actual path between two points taken by a beam of light (which obeys Snell's law of refraction) is one that takes the least time. In 1697 Johann Bernoulli used this principle to derive the brachistochrone curve by considering the trajectory of a beam of light in a medium where the speed of light increases following a constant vertical acceleration (that of gravity g). By the conservation of energy, the instantaneous speed of a body v after falling a height y in a uniform gravitational field is given by: , The speed of motion of the body along an arbitrary curve does not depend on the horizontal displacement. Bernoulli noted that Snell's law of refraction gives a constant of the motion for a beam of light in a medium of variable density: , where vm is the constant and represents the angle of the trajectory with respect to the vertical. The equations above lead to two conclusions: At the onset, the angle must be zero when the particle speed is zero. Hence, the brachistochrone curve is tangent to the vertical at the origin. The speed reaches a maximum value when the trajectory becomes horizontal and the angle θ = 90°. Assuming for simplicity that the particle (or the beam) with coordinates (x,y) departs from the point (0,0) and reaches maximum speed after falling a vertical distance D: . Rearranging terms in the law of refraction and squaring gives: which can be solved for dx in terms of dy: . Substituting from the expressions for v and vm above gives: which is the differential equation of an inverted cycloid generated by a circle of diameter D=2r, whose parametric equation is: where φ is a real parameter, corresponding to the angle through which the rolling circle has rotated. For given φ, the circle's centre lies at . In the brachistochrone problem, the motion of the body is given by the time evolution of the parameter: where t is the time since the release of the body from the point (0,0). Jakob Bernoulli's solution Johann's brother Jakob showed how 2nd differentials can be used to obtain the condition for least time. A modernized version of the proof is as follows. 
If we make a negligible deviation from the path of least time, then, for the differential triangle formed by the displacement along the path and the horizontal and vertical displacements, . On differentiation with dy fixed we get, . And finally rearranging terms gives, where the last part is the displacement for given change in time for 2nd differentials. Now consider the changes along the two neighboring paths in the figure below for which the horizontal separation between paths along the central line is d2x (the same for both the upper and lower differential triangles). Along the old and new paths, the parts that differ are, For the path of least times these times are equal so for their difference we get, And the condition for least time is, which agrees with Johann's assumption based on the law of refraction. Newton's solution Introduction In June 1696, Johann Bernoulli had used the pages of the Acta Eruditorum Lipsidae to pose a challenge to the international mathematical community: to find the form of the curve joining two fixed points so that a mass will slide down along it, under the influence of gravity alone, in the minimum amount of time. The solution was originally to be submitted within six months. At the suggestion of Leibniz, Bernoulli extended the challenge until Easter 1697, by means of a printed text called "Programma", published in Groningen, in the Netherlands. The Programma is dated 1 January 1697, in the Gregorian Calendar. This was 22 December 1696 in the Julian Calendar, in use in Britain. According to Newton's niece, Catherine Conduitt, Newton learned of the challenge at 4 pm on 29 January and had solved it by 4 am the following morning. His solution, communicated to the Royal Society, is dated 30 January. This solution, later published anonymously in the Philosophical Transactions, is correct but does not indicate the method by which Newton arrived at his conclusion. Bernoulli, writing to Henri Basnage in March 1697, indicated that even though its author, "by an excess of modesty", had not revealed his name, yet even from the scant details supplied it could be recognised as Newton's work, "as the lion by its claw" (in Latin, ex ungue Leonem). D. T. Whiteside notes that the letter in French has ex ungue Leonem preceded by the French word comme. The much quoted version tanquam ex ungue Leonem is due to David Brewster's 1855 book on the life and works of Newton. Bernoulli's intention was, Whiteside argues, simply to indicate he could tell the anonymous solution was Newton's, just as it was possible to tell that an animal was a lion given its claw; it was not meant to suggest that Bernoulli considered Newton to be the lion among mathematicians, as it has since come to be interpreted. John Wallis, who was 80 years old at the time, had learned of the problem in September 1696 from Johann Bernoulli's youngest brother Hieronymus, and had spent three months attempting a solution before passing it in December to David Gregory, who also failed to solve it. After Newton had submitted his solution, Gregory asked him for the details and made notes from their conversation. These can be found in the University of Edinburgh Library, manuscript A , dated 7 March 1697. Either Gregory did not understand Newton's argument, or Newton's explanation was very brief. 
However, it is possible, with a high degree of confidence, to construct Newton's proof from Gregory's notes, by analogy with his method to determine the solid of minimum resistance (Principia, Book 2, Proposition 34, Scholium 2). A detailed description of his solution of this latter problem is included in the draft of a letter in 1694, also to David Gregory. In addition to the minimum time curve problem, there was a second problem that Newton also solved at the same time. Both solutions appeared anonymously in Philosophical Transactions of the Royal Society, for January 1697. The Brachistochrone problem Fig. 1 shows Gregory's diagram (except the additional line IF is absent from it, and Z, the start point, has been added). The curve ZVA is a cycloid and CHV is its generating circle. Since it appears that the body is moving upward from e to E, it must be assumed that a small body is released from Z and slides along the curve to A, without friction, under the action of gravity. Consider a small arc eE, which the body is ascending. Assume that it traverses the straight line eL to point L, horizontally displaced from E by a small distance, o, instead of the arc eE. Note, that eL is not the tangent at e, and that o is negative when L is between B and E. Draw the line through E parallel to CH, cutting eL at n. From a property of the cycloid, En is the normal to the tangent at E, and similarly the tangent at E is parallel to VH. Since the displacement EL is small, it differs little in direction from the tangent at E so that the angle EnL is close to a right-angle. In the limit as the arc eE approaches zero, eL becomes parallel to VH, provided o is small compared to eE making the triangles EnL and CHV similar. Also en approaches the length of chord eE, and the increase in length, , ignoring terms in and higher, which represent the error due to the approximation that eL and VH are parallel. The speed along eE or eL can be taken as that at E, proportional to , which is as CH, since . This appears to be all that Gregory's note contains. Let t be the additional time to reach L. Therefore, the increase in time to traverse a small arc displaced at one endpoint depends only on the displacement at the endpoint and is independent of the position of the arc. However, by Newton's method, this is just the condition required for the curve to be traversed in the minimum time possible. Therefore, he concludes that the minimum curve must be the cycloid. He argues as follows. Assuming now that Fig. 1 is the minimum curve not yet determined, with vertical axis CV, and the circle CHV removed, and Fig. 2 shows part of the curve between the infinitesimal arc eE and a further infinitesimal arc Ff a finite distance along the curve. The extra time, t, to traverse eL (rather than eE) is nL divided by the speed at E (proportional to ), ignoring terms in and higher: , At L the particle continues along a path LM, parallel to the original EF, to some arbitrary point M. As it has the same speed at L as at E, the time to traverse LM is the same as it would have been along the original curve EF. At M it returns to the original path at point f. By the same reasoning, the reduction in time, T, to reach f from M rather than from F is The difference (t – T) is the extra time it takes along the path compared to the original : plus terms in and higher (1) Because is the minimum curve, (t – T) must be greater than zero, whether o is positive or negative. 
It follows that the coefficient of o in (1) must be zero: (2) in the limit as eE and fF approach zero. Note since is the minimum curve it has to be assumed that the coefficient of is greater than zero. Clearly there has to be 2 equal and opposite displacements, or the body would not return to the endpoint, A, of the curve. If e is fixed, and if f is considered a variable point higher up the curve, then for all such points, f, is constant (equal to ). By keeping f fixed and making e variable it is clear that is also constant. But, since points, e and f are arbitrary, equation (2) can be true only if , everywhere, and this condition characterises the curve that is sought. This is the same technique he uses to find the form of the Solid of Least Resistance. For the cycloid, , so that , which was shown above to be constant, and the Brachistochrone is the cycloid. Newton gives no indication of how he discovered that the cycloid satisfied this last relation. It may have been by trial and error, or he may have recognised immediately that it implied the curve was the cycloid. See also Aristotle's wheel paradox Beltrami identity Calculus of variations Catenary Newton's minimal resistance problem Trochoid Uniformly accelerated motion References External links Brachistochrone ( at MathCurve, with excellent animated examples) The Brachistochrone, Whistler Alley Mathematics. Table IV from Bernoulli's article in Acta Eruditorum 1697 Brachistochrones'' by Michael Trott and Brachistochrone Problem by Okay Arik, Wolfram Demonstrations Project. The Brachistochrone problem at MacTutor Geodesics Revisited — Introduction to geodesics including two ways of derivation of the equation of geodesic with brachistochrone as a special case of a geodesic. Optimal control solution to the Brachistochrone problem in Python. The straight line, the catenary, the brachistochrone, the circle, and Fermat Unified approach to some geodesics. Plane curves Mechanics
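As a concrete illustration of these solutions (our own sketch, not from the article), the cycloid through two given points can be found numerically and its descent time compared with the straight chord, echoing Galileo's discussion above. The end-point coordinates and constants are assumed values; the parameterization x = r(φ − sin φ), y = r(1 − cos φ) (y measured downward, cusp at A) and the time law φ = t√(g/r) are those derived above.

```python
import numpy as np
from scipy.optimize import brentq

g = 9.81
xB, yB = 2.0, 1.0                      # assumed: B is 2 m across, 1 m below A

# (phi - sin phi)/(1 - cos phi) is monotone increasing on (0, 2*pi),
# so the cycloid through B corresponds to a unique root
f = lambda phi: (phi - np.sin(phi)) / (1.0 - np.cos(phi)) - xB / yB
phiB = brentq(f, 1e-9, 2 * np.pi - 1e-9)
r = yB / (1.0 - np.cos(phiB))          # rolling-circle radius

T_cycloid = phiB * np.sqrt(r / g)      # from phi = t * sqrt(g/r)
L = np.hypot(xB, yB)
T_chord = np.sqrt(2 * L**2 / (g * yB))  # uniform acceleration g*yB/L on the chord

print(f"cycloid:        r = {r:.4f} m, T = {T_cycloid:.4f} s")
print(f"straight chord:             T = {T_chord:.4f} s")
```

For any B below and to the side of A, the cycloid time comes out smaller than the chord time, as the theory requires.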
Brachistochrone curve
[ "Physics", "Mathematics", "Engineering" ]
5,083
[ "Plane curves", "Euclidean plane geometry", "Mechanics", "Mechanical engineering", "Planes (geometry)" ]
171,905
https://en.wikipedia.org/wiki/Langlands%20program
In mathematics, the Langlands program is a set of conjectures about connections between number theory and geometry. It was proposed by Robert Langlands (1967, 1970). It seeks to relate Galois groups in algebraic number theory to automorphic forms and representation theory of algebraic groups over local fields and adeles. It was described by Edward Frenkel as a "grand unified theory of mathematics." As an explanation to a non-specialist: the program provides constructs for a generalised and somewhat unified framework, to characterise the structures that underpin numbers and their abstractions, that is, the invariants on which they are based, through analytical methods. The Langlands program consists of theoretical abstractions, which challenge even specialist mathematicians. Basically, the fundamental lemma of the project links the generalized fundamental representation of a finite field with its group extension to the automorphic forms under which it is invariant. This is accomplished through abstraction to higher dimensional integration, by an equivalence to a certain analytical group as an absolute extension of its algebra. This allows an analytical functional construction of powerful invariance transformations for a number field to its own algebraic structure. The meaning of such a construction is nuanced, but its specific solutions and generalizations are far-reaching. A proof of existence for such theoretical objects implies an analytical method for constructing the categoric mapping of fundamental structures for virtually any number field. As an analogue to the possible exact distribution of primes, the Langlands program allows a potential general tool for the resolution of invariance at the level of generalized algebraic structures. This in turn permits a somewhat unified analysis of arithmetic objects through their automorphic functions. The Langlands view allows a general analysis of structuring number-abstractions. This description is at once a reduction and over-generalization of the program's proper theorems – although these mathematical concepts illustrate its basic ideas. Background The Langlands program is built on existing ideas: the philosophy of cusp forms formulated a few years earlier by Harish-Chandra and Gelfand (1963), the work and approach of Harish-Chandra on semisimple Lie groups, and in technical terms the trace formula of Selberg and others. What was new in Langlands' work, besides technical depth, was the proposed connection to number theory, together with its rich organisational structure hypothesised (so-called functoriality). Harish-Chandra's work exploited the principle that what can be done for one semisimple (or reductive) Lie group, can be done for all. Therefore, once the role of some low-dimensional Lie groups such as GL(2) in the theory of modular forms had been recognised, and with hindsight GL(1) in class field theory, the way was open to speculation about GL(n) for general n > 2. The cusp form idea came out of the cusps on modular curves but also had a meaning visible in spectral theory as "discrete spectrum", contrasted with the "continuous spectrum" from Eisenstein series. It becomes much more technical for bigger Lie groups, because the parabolic subgroups are more numerous. In all these approaches technical methods were available, often inductive in nature and based on Levi decompositions amongst other matters, but the field remained demanding. From the perspective of modular forms, examples such as Hilbert modular forms, Siegel modular forms, and theta-series had been developed. 
Objects The conjectures have evolved since Langlands first stated them. Langlands conjectures apply across many different groups over many different fields for which they can be stated, and each field offers several versions of the conjectures. Some versions are vague, or depend on objects such as Langlands groups, whose existence is unproven, or on the L-group that has several non-equivalent definitions. Objects for which Langlands conjectures can be stated: Representations of reductive groups over local fields (with different subcases corresponding to archimedean local fields, p-adic local fields, and completions of function fields) Automorphic forms on reductive groups over global fields (with subcases corresponding to number fields or function fields). Analogues for finite fields. More general fields, such as function fields over the complex numbers. Conjectures The conjectures can be stated variously in ways that are closely related but not obviously equivalent. Reciprocity The starting point of the program was Emil Artin's reciprocity law, which generalizes quadratic reciprocity. The Artin reciprocity law applies to a Galois extension of an algebraic number field whose Galois group is abelian; it assigns L-functions to the one-dimensional representations of this Galois group, and states that these L-functions are identical to certain Dirichlet L-series or more general series (that is, certain analogues of the Riemann zeta function) constructed from Hecke characters. The precise correspondence between these different kinds of L-functions constitutes Artin's reciprocity law. For non-abelian Galois groups and higher-dimensional representations of them, L-functions can be defined in a natural way: Artin L-functions. Langlands' insight was to find the proper generalization of Dirichlet L-functions, which would allow the formulation of Artin's statement in Langland's more general setting. Hecke had earlier related Dirichlet L-functions with automorphic forms (holomorphic functions on the upper half plane of the complex number plane that satisfy certain functional equations). Langlands then generalized these to automorphic cuspidal representations, which are certain infinite dimensional irreducible representations of the general linear group GL(n) over the adele ring of (the rational numbers). (This ring tracks all the completions of see p-adic numbers.) Langlands attached automorphic L-functions to these automorphic representations, and conjectured that every Artin L-function arising from a finite-dimensional representation of the Galois group of a number field is equal to one arising from an automorphic cuspidal representation. This is known as his reciprocity conjecture. Roughly speaking, this conjecture gives a correspondence between automorphic representations of a reductive group and homomorphisms from a Langlands group to an L-group. This offers numerous variations, in part because the definitions of Langlands group and L-group are not fixed. Over local fields this is expected to give a parameterization of L-packets of admissible irreducible representations of a reductive group over the local field. For example, over the real numbers, this correspondence is the Langlands classification of representations of real reductive groups. Over global fields, it should give a parameterization of automorphic forms. 
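None of these conjectures is remotely checkable by machine, but the historical seed they generalize, quadratic reciprocity, is. The following sketch (our own illustration; the helper name legendre is an assumption) verifies Gauss's law (p|q)(q|p) = (−1)^(((p−1)/2)((q−1)/2)) for odd primes, computing the Legendre symbol via Euler's criterion a^((p−1)/2) mod p.

```python
def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) is 1 mod p for residues, -1 otherwise
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

# small odd primes by trial division, to keep the sketch self-contained
odd_primes = [p for p in range(3, 60, 2)
              if all(p % d for d in range(3, int(p ** 0.5) + 1, 2))]

for i, p in enumerate(odd_primes):
    for q in odd_primes[i + 1:]:
        assert legendre(p, q) * legendre(q, p) == \
               (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
print(f"quadratic reciprocity checked for {len(odd_primes)} odd primes < 60")
```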
Functoriality The functoriality conjecture states that a suitable homomorphism of L-groups is expected to give a correspondence between automorphic forms (in the global case) or representations (in the local case). Roughly speaking, the Langlands reciprocity conjecture is the special case of the functoriality conjecture when one of the reductive groups is trivial. Generalized functoriality Langlands generalized the idea of functoriality: instead of using the general linear group GL(n), other connected reductive groups can be used. Furthermore, given such a group G, Langlands constructs the Langlands dual group LG, and then, for every automorphic cuspidal representation of G and every finite-dimensional representation of LG, he defines an L-function. One of his conjectures states that these L-functions satisfy a certain functional equation generalizing those of other known L-functions. He then goes on to formulate a very general "Functoriality Principle". Given two reductive groups and a (well behaved) morphism between their corresponding L-groups, this conjecture relates their automorphic representations in a way that is compatible with their L-functions. This functoriality conjecture implies all the other conjectures presented so far. It is of the nature of an induced representation construction—what in the more traditional theory of automorphic forms had been called a 'lifting', known in special cases, and so is covariant (whereas a restricted representation is contravariant). Attempts to specify a direct construction have only produced some conditional results. All these conjectures can be formulated for more general fields in place of : algebraic number fields (the original and most important case), local fields, and function fields (finite extensions of Fp(t) where p is a prime and Fp(t) is the field of rational functions over the finite field with p elements). Geometric conjectures The geometric Langlands program, suggested by Gérard Laumon following ideas of Vladimir Drinfeld, arises from a geometric reformulation of the usual Langlands program that attempts to relate more than just irreducible representations. In simple cases, it relates -adic representations of the étale fundamental group of an algebraic curve to objects of the derived category of -adic sheaves on the moduli stack of vector bundles over the curve. A 9-person collaborative project led by Dennis Gaitsgory announced a proof of the (categorical, unramified) geometric Langlands conjecture leveraging Hecke eigensheaf as part of the proof. Status The Langlands conjectures for GL(1, K) follow from (and are essentially equivalent to) class field theory. Langlands proved the Langlands conjectures for groups over the archimedean local fields (the real numbers) and (the complex numbers) by giving the Langlands classification of their irreducible representations. Lusztig's classification of the irreducible representations of groups of Lie type over finite fields can be considered an analogue of the Langlands conjectures for finite fields. Andrew Wiles' proof of modularity of semistable elliptic curves over rationals can be viewed as an instance of the Langlands reciprocity conjecture, since the main idea is to relate the Galois representations arising from elliptic curves to modular forms. Although Wiles' results have been substantially generalized, in many different directions, the full Langlands conjecture for remains unproved. 
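The modularity result mentioned above can at least be made concrete on the elliptic-curve side. The sketch below (our own illustration; the curve y² = x³ − x and all names are assumptions) brute-force counts points over F_p to obtain the numbers a_p = p + 1 − #E(F_p) which, by modularity, match the Fourier coefficients of a modular form; here we only verify Hasse's bound |a_p| ≤ 2√p rather than the modular-form side.

```python
import math

def count_points(a, b, p):
    # number of solutions of y^2 = x^3 + a*x + b over F_p, plus infinity
    sq_counts = {}
    for y in range(p):
        r = y * y % p
        sq_counts[r] = sq_counts.get(r, 0) + 1
    n = 1  # the point at infinity
    for x in range(p):
        n += sq_counts.get((x ** 3 + a * x + b) % p, 0)
    return n

a, b = -1, 0                        # E: y^2 = x^3 - x (good reduction away from 2)
for p in (5, 7, 11, 13, 17, 19, 23):
    ap = p + 1 - count_points(a, b, p)
    assert abs(ap) <= 2 * math.sqrt(p)   # Hasse bound
    print(f"p = {p:2d}  a_p = {ap:+d}")
```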
In 1998, Laurent Lafforgue proved Lafforgue's theorem verifying the Langlands conjectures for the general linear group GL(n, K) for function fields K. This work continued earlier investigations by Drinfeld, who proved the case GL(2, K) in the 1980s. In 2018, Vincent Lafforgue established the global Langlands correspondence (the direction from automorphic forms to Galois representations) for connected reductive groups over global function fields. Local Langlands conjectures Kutzko (1980) proved the local Langlands conjectures for the general linear group GL(2, K) over local fields. Laumon, Rapoport and Stuhler (1993) proved the local Langlands conjectures for the general linear group GL(n, K) for positive characteristic local fields K. Their proof uses a global argument. Harris and Taylor (2001) proved the local Langlands conjectures for the general linear group GL(n, K) for characteristic 0 local fields K. Henniart (2000) gave another proof. Both proofs use a global argument. Scholze (2013) gave another proof. Fundamental lemma In 2008, Ngô Bảo Châu proved the "fundamental lemma", which was originally conjectured by Langlands and Shelstad in 1983 and is required in the proof of some important conjectures in the Langlands program. Implications To a lay reader or even nonspecialist mathematician, abstractions within the Langlands program can be somewhat impenetrable. However, there are some strong and clear implications for proof or disproof of the fundamental Langlands conjectures. As the program posits a powerful connection between analytic number theory and generalizations of algebraic geometry, the idea of 'Functoriality' between abstract algebraic representations of number fields and their analytical prime constructions results in powerful functional tools allowing an exact quantification of prime distributions. This, in turn, yields the capacity for classification of diophantine equations and further abstractions of algebraic functions. Furthermore, if the reciprocity of such generalized algebras for the posited objects exists, and if their analytical functions can be shown to be well-defined, some very deep results in mathematics could be within reach of proof. Examples include: rational solutions of elliptic curves, topological construction of algebraic varieties, and the famous Riemann hypothesis. Such proofs would be expected to utilize abstract solutions in objects of generalized analytical series, each of which relates to the invariance within structures of number fields. Additionally, some connections between the Langlands program and M theory have been posited, as their dualities connect in nontrivial ways, providing potential exact solutions in superstring theory (as was similarly done in group theory through monstrous moonshine). Simply put, the Langlands project implies a deep and powerful framework of solutions, which touches the most fundamental areas of mathematics, through high-order generalizations in exact solutions of algebraic equations, with analytical functions, as embedded in geometric forms. It allows a unification of many distant mathematical fields into a formalism of powerful analytical methods. See also Jacquet–Langlands correspondence Erlangen program Notes References External links The work of Robert Langlands Zeta and L-functions Representation theory of Lie groups Automorphic forms Conjectures History of mathematics
Langlands program
[ "Mathematics" ]
2,745
[ "Unsolved problems in mathematics", "Langlands program", "Conjectures", "Mathematical problems", "Number theory" ]
172,199
https://en.wikipedia.org/wiki/Faltings%27s%20theorem
Faltings's theorem is a result in arithmetic geometry, according to which a curve of genus greater than 1 over the field of rational numbers has only finitely many rational points. This was conjectured in 1922 by Louis Mordell, and known as the Mordell conjecture until its 1983 proof by Gerd Faltings. The conjecture was later generalized by replacing by any number field. Background Let be a non-singular algebraic curve of genus over . Then the set of rational points on may be determined as follows: When , there are either no points or infinitely many. In such cases, may be handled as a conic section. When , if there are any points, then is an elliptic curve and its rational points form a finitely generated abelian group. (This is Mordell's Theorem, later generalized to the Mordell–Weil theorem.) Moreover, Mazur's torsion theorem restricts the structure of the torsion subgroup. When , according to Faltings's theorem, has only a finite number of rational points. Proofs Igor Shafarevich conjectured that there are only finitely many isomorphism classes of abelian varieties of fixed dimension and fixed polarization degree over a fixed number field with good reduction outside a fixed finite set of places. Aleksei Parshin showed that Shafarevich's finiteness conjecture would imply the Mordell conjecture, using what is now called Parshin's trick. Gerd Faltings proved Shafarevich's finiteness conjecture using a known reduction to a case of the Tate conjecture, together with tools from algebraic geometry, including the theory of Néron models. The main idea of Faltings's proof is the comparison of Faltings heights and naive heights via Siegel modular varieties. Later proofs Paul Vojta gave a proof based on Diophantine approximation. Enrico Bombieri found a more elementary variant of Vojta's proof. Brian Lawrence and Akshay Venkatesh gave a proof based on -adic Hodge theory, borrowing also some of the easier ingredients of Faltings's original proof. Consequences Faltings's 1983 paper had as consequences a number of statements which had previously been conjectured: The Mordell conjecture that a curve of genus greater than 1 over a number field has only finitely many rational points; The Isogeny theorem that abelian varieties with isomorphic Tate modules (as -modules with Galois action) are isogenous. A sample application of Faltings's theorem is to a weak form of Fermat's Last Theorem: for any fixed there are at most finitely many primitive integer solutions (pairwise coprime solutions) to , since for such the Fermat curve has genus greater than 1. Generalizations Because of the Mordell–Weil theorem, Faltings's theorem can be reformulated as a statement about the intersection of a curve with a finitely generated subgroup of an abelian variety . Generalizing by replacing by a semiabelian variety, by an arbitrary subvariety of , and by an arbitrary finite-rank subgroup of leads to the Mordell–Lang conjecture, which was proved in 1995 by McQuillan following work of Laurent, Raynaud, Hindry, Vojta, and Faltings. Another higher-dimensional generalization of Faltings's theorem is the Bombieri–Lang conjecture that if is a pseudo-canonical variety (i.e., a variety of general type) over a number field , then is not Zariski dense in . Even more general conjectures have been put forth by Paul Vojta. The Mordell conjecture for function fields was proved by Yuri Ivanovich Manin and by Hans Grauert. In 1990, Robert F. Coleman found and fixed a gap in Manin's proof. 
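The Fermat-curve application above can be made concrete with a two-line genus computation (our own sketch, not from the article): a smooth plane curve of degree n has genus (n − 1)(n − 2)/2, so Faltings's theorem applies to the Fermat curve exactly when n ≥ 4.

```python
def fermat_genus(n):
    # genus of the smooth plane curve x^n + y^n = z^n of degree n
    return (n - 1) * (n - 2) // 2

for n in range(2, 8):
    g = fermat_genus(n)
    verdict = "finitely many primitive solutions (Faltings)" if g > 1 \
              else "genus <= 1, not covered by the theorem"
    print(f"n = {n}: genus {g} -> {verdict}")
```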
Notes Citations References Diophantine geometry Theorems in number theory Theorems in algebraic geometry
Faltings's theorem
[ "Mathematics" ]
832
[ "Theorems in algebraic geometry", "Theorems in number theory", "Theorems in geometry", "Mathematical problems", "Mathematical theorems", "Number theory" ]
172,244
https://en.wikipedia.org/wiki/Simulated%20annealing
Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem. When the search space contains many local optima, SA can still find the global optimum. It is often used when the search space is discrete (for example the traveling salesman problem, the boolean satisfiability problem, protein structure prediction, and job-shop scheduling). For problems where finding an approximate global optimum is more important than finding a precise local optimum in a fixed amount of time, simulated annealing may be preferable to exact algorithms such as gradient descent or branch and bound. The name of the algorithm comes from annealing in metallurgy, a technique involving heating and controlled cooling of a material to alter its physical properties. Heating and cooling the material affects both the temperature and the thermodynamic free energy or Gibbs energy, attributes of the material on which its physical properties depend. Simulated annealing can be used for very hard computational optimization problems where exact algorithms fail; even though it usually achieves an approximate solution to the global minimum, it could be enough for many practical problems. The problems solved by SA are typically formulated as the minimization of an objective function of many variables, subject to several mathematical constraints. In practice, the constraint can be penalized as part of the objective function. Similar techniques have been independently introduced on several occasions, including Pincus (1970), Khachaturyan et al. (1979, 1981), Kirkpatrick, Gelatt and Vecchi (1983), and Černý (1985). In 1983, this approach was used by Kirkpatrick, Gelatt Jr. and Vecchi for a solution of the traveling salesman problem. They also proposed its current name, simulated annealing. This notion of slow cooling implemented in the simulated annealing algorithm is interpreted as a slow decrease in the probability of accepting worse solutions as the solution space is explored. Accepting worse solutions allows for a more extensive search for the global optimal solution. In general, simulated annealing algorithms work as follows. The temperature progressively decreases from an initial positive value to zero. At each time step, the algorithm randomly selects a solution close to the current one, measures its quality, and moves to it according to the temperature-dependent probabilities of selecting better or worse solutions, which during the search respectively remain at 1 (or positive) and decrease toward zero. The simulation can be performed either by a solution of kinetic equations for probability density functions, or by using a stochastic sampling method. The method is an adaptation of the Metropolis–Hastings algorithm, a Monte Carlo method to generate sample states of a thermodynamic system, published by N. Metropolis et al. in 1953. Overview The state s plays the role of the state of some physical system, and the function E(s) to be minimized is analogous to the internal energy of the system in that state. The goal is to bring the system, from an arbitrary initial state, to a state with the minimum possible energy. The basic iteration At each step, the simulated annealing heuristic considers some neighboring state s* of the current state s, and probabilistically decides between moving the system to state s* or staying in state s. These probabilities ultimately lead the system to move to states of lower energy. 
Typically this step is repeated until the system reaches a state that is good enough for the application, or until a given computation budget has been exhausted. The neighbors of a state Optimization of a solution involves evaluating the neighbors of a state of the problem, which are new states produced through conservatively altering a given state. For example, in the traveling salesman problem each state is typically defined as a permutation of the cities to be visited, and the neighbors of any state are the set of permutations produced by swapping any two of these cities. The well-defined way in which the states are altered to produce neighboring states is called a "move", and different moves give different sets of neighboring states. These moves usually result in minimal alterations of the last state, in an attempt to progressively improve the solution through iteratively improving its parts (such as the city connections in the traveling salesman problem). It is even better to reverse the order of an interval of cities. This is a smaller move since swapping two cities can be achieved by twice reversing an interval. Simple heuristics like hill climbing, which move by finding better neighbor after better neighbor and stop when they have reached a solution which has no neighbors that are better solutions, cannot guarantee to lead to any of the existing better solutions; their outcome may easily be just a local optimum, while the actual best solution would be a global optimum that could be different. Metaheuristics use the neighbors of a solution as a way to explore the solution space, and although they prefer better neighbors, they also accept worse neighbors in order to avoid getting stuck in local optima; they can find the global optimum if run for a long enough amount of time. Acceptance probabilities The probability of making the transition from the current state to a candidate new state is specified by an acceptance probability function , that depends on the energies and of the two states, and on a global time-varying parameter called the temperature. States with a smaller energy are better than those with a greater energy. The probability function must be positive even when is greater than . This feature prevents the method from becoming stuck at a local minimum that is worse than the global one. When tends to zero, the probability must tend to zero if and to a positive value otherwise. For sufficiently small values of , the system will then increasingly favor moves that go "downhill" (i.e., to lower energy values), and avoid those that go "uphill." With T = 0 the procedure reduces to the greedy algorithm, which makes only the downhill transitions. In the original description of simulated annealing, the probability was equal to 1 when —i.e., the procedure always moved downhill when it found a way to do so, irrespective of the temperature. Many descriptions and implementations of simulated annealing still take this condition as part of the method's definition. However, this condition is not essential for the method to work. The function is usually chosen so that the probability of accepting a move decreases when the difference increases—that is, small uphill moves are more likely than large ones. However, this requirement is not strictly necessary, provided that the above requirements are met. Given these properties, the temperature plays a crucial role in controlling the evolution of the state of the system with regard to its sensitivity to the variations of system energies. 
To be precise, for a large T, the evolution of s is sensitive to coarser energy variations, while it is sensitive to finer energy variations when T is small. The annealing schedule The name and inspiration of the algorithm demand an interesting feature related to the temperature variation to be embedded in the operational characteristics of the algorithm. This necessitates a gradual reduction of the temperature as the simulation proceeds. The algorithm starts initially with T set to a high value (or infinity), and then it is decreased at each step following some annealing schedule—which may be specified by the user but must end with T = 0 towards the end of the allotted time budget. In this way, the system is expected to wander initially towards a broad region of the search space containing good solutions, ignoring small features of the energy function; then drift towards low-energy regions that become narrower and narrower, and finally move downhill according to the steepest descent heuristic. For any given finite problem, the probability that the simulated annealing algorithm terminates with a global optimal solution approaches 1 as the annealing schedule is extended. This theoretical result, however, is not particularly helpful, since the time required to ensure a significant probability of success will usually exceed the time required for a complete search of the solution space. Pseudocode The following pseudocode presents the simulated annealing heuristic as described above. It starts from a state s0 and continues until a maximum of kmax steps have been taken. In the process, the call neighbour(s) should generate a randomly chosen neighbour of a given state s; the call random(0, 1) should pick and return a value in the range [0, 1], uniformly at random. The annealing schedule is defined by the call temperature(r), which should yield the temperature to use, given the fraction r of the time budget that has been expended so far.

Let s = s0
For k = 0 through kmax (exclusive):
    T ← temperature(1 − (k + 1)/kmax)
    Pick a random neighbour, snew ← neighbour(s)
    If P(E(s), E(snew), T) ≥ random(0, 1):
        s ← snew
Output: the final state s

Selecting the parameters In order to apply the simulated annealing method to a specific problem, one must specify the following parameters: the state space, the energy (goal) function E(), the candidate generator procedure neighbour(), the acceptance probability function P(), and the annealing schedule temperature() and initial temperature. These choices can have a significant impact on the method's effectiveness. Unfortunately, there are no choices of these parameters that will be good for all problems, and there is no general way to find the best choices for a given problem. The following sections give some general guidelines. Sufficiently near neighbour Simulated annealing may be modeled as a random walk on a search graph, whose vertices are all possible states, and whose edges are the candidate moves. An essential requirement for the neighbour() function is that it must provide a sufficiently short path on this graph from the initial state to any state which may be the global optimum – the diameter of the search graph must be small. In the traveling salesman example above, for instance, the search space for n = 20 cities has n! = 2,432,902,008,176,640,000 (2.4 quintillion) states; yet the number of neighbors of each vertex is n(n − 1)/2 = 190 edges (coming from n choose 2), and the diameter of the graph is n − 1. Transition probabilities To investigate the behavior of simulated annealing on a particular problem, it can be useful to consider the transition probabilities that result from the various design choices made in the implementation of the algorithm. 
For each edge of the search graph, the transition probability is defined as the probability that the simulated annealing algorithm will move to state when its current state is . This probability depends on the current temperature as specified by , on the order in which the candidate moves are generated by the function, and on the acceptance probability function . (Note that the transition probability is not simply , because the candidates are tested serially.) Acceptance probabilities The specification of , , and is partially redundant. In practice, it's common to use the same acceptance function for many problems and adjust the other two functions according to the specific problem. In the formulation of the method by Kirkpatrick et al., the acceptance probability function was defined as 1 if , and otherwise. This formula was superficially justified by analogy with the transitions of a physical system; it corresponds to the Metropolis–Hastings algorithm, in the case where T=1 and the proposal distribution of Metropolis–Hastings is symmetric. However, this acceptance probability is often used for simulated annealing even when the function, which is analogous to the proposal distribution in Metropolis–Hastings, is not symmetric, or not probabilistic at all. As a result, the transition probabilities of the simulated annealing algorithm do not correspond to the transitions of the analogous physical system, and the long-term distribution of states at a constant temperature need not bear any resemblance to the thermodynamic equilibrium distribution over states of that physical system, at any temperature. Nevertheless, most descriptions of simulated annealing assume the original acceptance function, which is probably hard-coded in many implementations of SA. In 1990, Moscato and Fontanari, and independently Dueck and Scheuer, proposed that a deterministic update (i.e. one that is not based on the probabilistic acceptance rule) could speed-up the optimization process without impacting on the final quality. Moscato and Fontanari conclude from observing the analogous of the "specific heat" curve of the "threshold updating" annealing originating from their study that "the stochasticity of the Metropolis updating in the simulated annealing algorithm does not play a major role in the search of near-optimal minima". Instead, they proposed that "the smoothening of the cost function landscape at high temperature and the gradual definition of the minima during the cooling process are the fundamental ingredients for the success of simulated annealing." The method subsequently popularized under the denomination of "threshold accepting" due to Dueck and Scheuer's denomination. In 2001, Franz, Hoffmann and Salamon showed that the deterministic update strategy is indeed the optimal one within the large class of algorithms that simulate a random walk on the cost/energy landscape. Efficient candidate generation When choosing the candidate generator , one must consider that after a few iterations of the simulated annealing algorithm, the current state is expected to have much lower energy than a random state. Therefore, as a general rule, one should skew the generator towards candidate moves where the energy of the destination state is likely to be similar to that of the current state. 
This heuristic (which is the main principle of the Metropolis–Hastings algorithm) tends to exclude very good candidate moves as well as very bad ones; however, the former are usually much less common than the latter, so the heuristic is generally quite effective. In the traveling salesman problem above, for example, swapping two consecutive cities in a low-energy tour is expected to have a modest effect on its energy (length); whereas swapping two arbitrary cities is far more likely to increase its length than to decrease it. Thus, the consecutive-swap neighbor generator is expected to perform better than the arbitrary-swap one, even though the latter could provide a somewhat shorter path to the optimum (with swaps, instead of ). A more precise statement of the heuristic is that one should try the first candidate states for which is large. For the "standard" acceptance function above, it means that is on the order of or less. Thus, in the traveling salesman example above, one could use a function that swaps two random cities, where the probability of choosing a city-pair vanishes as their distance increases beyond . Barrier avoidance When choosing the candidate generator one must also try to reduce the number of "deep" local minima—states (or sets of connected states) that have much lower energy than all its neighboring states. Such "closed catchment basins" of the energy function may trap the simulated annealing algorithm with high probability (roughly proportional to the number of states in the basin) and for a very long time (roughly exponential on the energy difference between the surrounding states and the bottom of the basin). As a rule, it is impossible to design a candidate generator that will satisfy this goal and also prioritize candidates with similar energy. On the other hand, one can often vastly improve the efficiency of simulated annealing by relatively simple changes to the generator. In the traveling salesman problem, for instance, it is not hard to exhibit two tours , , with nearly equal lengths, such that (1) is optimal, (2) every sequence of city-pair swaps that converts to goes through tours that are much longer than both, and (3) can be transformed into by flipping (reversing the order of) a set of consecutive cities. In this example, and lie in different "deep basins" if the generator performs only random pair-swaps; but they will be in the same basin if the generator performs random segment-flips. Cooling schedule The physical analogy that is used to justify simulated annealing assumes that the cooling rate is low enough for the probability distribution of the current state to be near thermodynamic equilibrium at all times. Unfortunately, the relaxation time—the time one must wait for the equilibrium to be restored after a change in temperature—strongly depends on the "topography" of the energy function and on the current temperature. In the simulated annealing algorithm, the relaxation time also depends on the candidate generator, in a very complicated way. Note that all these parameters are usually provided as black box functions to the simulated annealing algorithm. Therefore, the ideal cooling rate cannot be determined beforehand and should be empirically adjusted for each problem. Adaptive simulated annealing algorithms address this problem by connecting the cooling schedule to the search progress. 
Other adaptive approaches, such as thermodynamic simulated annealing, automatically adjust the temperature at each step based on the energy difference between the two states, according to the laws of thermodynamics. Restarts Sometimes it is better to move back to a solution that was significantly better rather than always moving from the current state. This process is called restarting of simulated annealing. To do this we set s and e to s_best and e_best and perhaps restart the annealing schedule. The decision to restart could be based on several criteria. Notable among these are restarting based on a fixed number of steps, based on whether the current energy is too high compared to the best energy obtained so far, restarting randomly, etc. Related methods Interacting Metropolis–Hastings algorithms (a.k.a. sequential Monte Carlo) combine simulated annealing moves with an acceptance-rejection of the best-fitted individuals equipped with an interacting recycling mechanism. Quantum annealing uses "quantum fluctuations" instead of thermal fluctuations to get through high but thin barriers in the target function. Stochastic tunneling attempts to overcome the increasing difficulty simulated annealing runs have in escaping from local minima as the temperature decreases, by 'tunneling' through barriers. Tabu search normally moves to neighbouring states of lower energy, but will take uphill moves when it finds itself stuck in a local minimum; it avoids cycles by keeping a "taboo list" of solutions already seen. Dual-phase evolution is a family of algorithms and processes (to which simulated annealing belongs) that mediate between local and global search by exploiting phase changes in the search space. Reactive search optimization focuses on combining machine learning with optimization, by adding an internal feedback loop to self-tune the free parameters of an algorithm to the characteristics of the problem, of the instance, and of the local situation around the current solution. Genetic algorithms maintain a pool of solutions rather than just one. New candidate solutions are generated not only by "mutation" (as in SA), but also by "recombination" of two solutions from the pool. Probabilistic criteria, similar to those used in SA, are used to select the candidates for mutation or combination, and for discarding excess solutions from the pool. Memetic algorithms search for solutions by employing a set of agents that both cooperate and compete in the process; sometimes the agents' strategies involve simulated annealing procedures for obtaining high-quality solutions before recombining them. Annealing has also been suggested as a mechanism for increasing the diversity of the search. Graduated optimization digressively "smooths" the target function while optimizing. Ant colony optimization (ACO) uses many ants (or agents) to traverse the solution space and find locally productive areas. The cross-entropy method (CE) generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration. Harmony search mimics musicians in improvisation, where each musician plays a note in search of the best harmony together. Stochastic optimization is an umbrella set of methods that includes simulated annealing and numerous other approaches.
Particle swarm optimization is an algorithm modeled on swarm intelligence that finds a solution to an optimization problem in a search space, or models and predicts social behavior in the presence of objectives. The runner-root algorithm (RRA) is a meta-heuristic optimization algorithm for solving unimodal and multimodal problems, inspired by the runners and roots of plants in nature. The intelligent water drops algorithm (IWD) mimics the behavior of natural water drops to solve optimization problems. Parallel tempering is a simulation of model copies at different temperatures (or Hamiltonians) to overcome the potential barriers. Multi-objective simulated annealing algorithms have been used in multi-objective optimization. External links Simulated Annealing: a JavaScript app that allows you to experiment with simulated annealing; source code included. "General Simulated Annealing Algorithm": an open-source MATLAB program for general simulated annealing exercises. Self-Guided Lesson on Simulated Annealing: a Wikiversity project. Google in superposition of using, not using quantum computer: Ars Technica discusses the possibility that the D-Wave computer being used by Google may, in fact, be an efficient simulated annealing co-processor. A Simulated Annealing-Based Multiobjective Optimization Algorithm: AMOSA. Metaheuristics Optimization algorithms and methods Monte Carlo methods
Simulated annealing
[ "Physics" ]
4,350
[ "Monte Carlo methods", "Computational physics" ]
172,291
https://en.wikipedia.org/wiki/Drag%20coefficient
In fluid dynamics, the drag coefficient (commonly denoted as c_d, c_x or c_w) is a dimensionless quantity that is used to quantify the drag or resistance of an object in a fluid environment, such as air or water. It is used in the drag equation, in which a lower drag coefficient indicates the object will have less aerodynamic or hydrodynamic drag. The drag coefficient is always associated with a particular surface area. The drag coefficient of any object comprises the effects of the two basic contributors to fluid dynamic drag: skin friction and form drag. The drag coefficient of a lifting airfoil or hydrofoil also includes the effects of lift-induced drag. The drag coefficient of a complete structure such as an aircraft also includes the effects of interference drag. Definition The drag coefficient is defined as c_d = 2 F_d / (ρ u² A), where: F_d is the drag force, which is by definition the force component in the direction of the flow velocity; ρ is the mass density of the fluid; u is the flow speed of the object relative to the fluid; A is the reference area. The reference area depends on what type of drag coefficient is being measured. For automobiles and many other objects, the reference area is the projected frontal area of the vehicle. This may not necessarily be the cross-sectional area of the vehicle, depending on where the cross-section is taken. For example, for a sphere A = πr² (note this is not the surface area, which is 4πr²). For airfoils, the reference area is the nominal wing area. Since this tends to be large compared to the frontal area, the resulting drag coefficients tend to be low, much lower than for a car with the same drag, frontal area, and speed. Airships and some bodies of revolution use the volumetric drag coefficient, in which the reference area is the square of the cube root of the airship volume (volume to the two-thirds power). Submerged streamlined bodies use the wetted surface area. Two objects having the same reference area moving at the same speed through a fluid will experience a drag force proportional to their respective drag coefficients. Coefficients for unstreamlined objects can be 1 or more; for streamlined objects, much less. As a caution, note that although the above is the conventional definition for the drag coefficient, there are other definitions that one may encounter in the literature. The reason for this is that the conventional definition makes the most sense when one is in the Newton regime, such as what happens at high Reynolds number, where it makes sense to scale the drag to the momentum flux into the frontal area of the object. But there are other flow regimes. In particular, at very low Reynolds number, it is more natural to write the drag force as being proportional to a drag coefficient multiplied by the speed of the object (rather than the square of the speed of the object). An example of such a regime is the study of the mobility of aerosol particulates, such as smoke particles. This leads to a different formal definition of the "drag coefficient", of course.
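As a numerical illustration of the two regimes just mentioned, the sketch below evaluates the conventional definition for a car-sized body and Stokes' law for a micron-scale particle; all input numbers are invented for the example:

```python
import math

def drag_coefficient(F_d, rho, u, A):
    # Conventional definition: c_d = 2 * F_d / (rho * u**2 * A).
    return 2.0 * F_d / (rho * u**2 * A)

def stokes_drag(mu, r, u):
    # Very-low-Reynolds-number regime: for a sphere, F_d = 6*pi*mu*r*u,
    # proportional to the speed rather than its square (Stokes' law).
    return 6.0 * math.pi * mu * r * u

# Hypothetical car: 360 N of drag, 2.2 m^2 frontal area, 30 m/s, sea-level air.
print(drag_coefficient(F_d=360.0, rho=1.225, u=30.0, A=2.2))   # ~0.30

# Hypothetical 1-micron smoke particle drifting at 1 mm/s in air.
print(stokes_drag(mu=1.8e-5, r=1e-6, u=1e-3))                  # ~3.4e-13 N
```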
Cauchy momentum equation In the dimensionless form of the Cauchy momentum equation, the skin drag coefficient or skin friction coefficient is referred to the transversal area (the area normal to the drag force), so the coefficient is locally defined as: c_f = τ / (½ ρ u²) = τ / q, where: τ is the local shear stress, which is by definition the stress component in the direction of the local flow velocity; q = ½ ρ u² is the local dynamic pressure of the fluid; ρ is the local mass density of the fluid; u is the local flow speed of the fluid. Background The drag equation F_d = ½ ρ u² c_d A is essentially a statement that the drag force on any object is proportional to the density of the fluid and proportional to the square of the relative flow speed between the object and the fluid. The factor of ½ comes from the dynamic pressure of the fluid, which is equal to the kinetic energy density. The value of c_d is not a constant but varies as a function of flow speed, flow direction, object position, object size, fluid density and fluid viscosity. Speed, kinematic viscosity and a characteristic length scale of the object are incorporated into a dimensionless quantity called the Reynolds number Re. c_d is thus a function of Re. In a compressible flow, the speed of sound is relevant, and c_d is also a function of Mach number Ma. For certain body shapes, the drag coefficient c_d only depends on the Reynolds number Re, Mach number Ma and the direction of the flow. For low Mach number, the drag coefficient is independent of Mach number. Also, the variation with Reynolds number within a practical range of interest is usually small, while for cars at highway speed and aircraft at cruising speed, the incoming flow direction is also more-or-less the same. Therefore, the drag coefficient c_d can often be treated as a constant. For a streamlined body to achieve a low drag coefficient, the boundary layer around the body must remain attached to the surface of the body for as long as possible, causing the wake to be narrow. A high form drag results in a broad wake. The boundary layer will transition from laminar to turbulent if the Reynolds number of the flow around the body is sufficiently great. Larger velocities, larger objects, and lower viscosities contribute to larger Reynolds numbers. For other objects, such as small particles, one can no longer consider that the drag coefficient is constant, but it certainly is a function of Reynolds number. At a low Reynolds number, the flow around the object does not transition to turbulent but remains laminar, even up to the point at which it separates from the surface of the object. At very low Reynolds numbers, without flow separation, the drag force F_d is proportional to u instead of u²; for a sphere this is known as Stokes' law. The Reynolds number will be low for small objects, low velocities, and high-viscosity fluids. A c_d equal to 1 would be obtained in a case where all of the fluid approaching the object is brought to rest, building up stagnation pressure over the whole front surface. The top figure shows a flat plate with the fluid coming from the right and stopping at the plate. The graph to the left of it shows equal pressure across the surface. In a real flat plate, the fluid must turn around the sides, and full stagnation pressure is found only at the center, dropping off toward the edges as in the lower figure and graph. Only considering the front side, the c_d of a real flat plate would be less than 1, except that there will be suction on the back side: a negative pressure (relative to ambient).
The overall c_d of a real square flat plate perpendicular to the flow is often given as 1.17. Flow patterns, and therefore c_d, for some shapes can change with the Reynolds number and the roughness of the surfaces. Drag coefficient examples General In general, c_d is not an absolute constant for a given body shape. It varies with the speed of airflow (or more generally with Reynolds number Re). A smooth sphere, for example, has a c_d that varies from high values for laminar flow to 0.47 for turbulent flow. Although the drag coefficient decreases with increasing Re, the drag force increases. Aircraft As noted above, aircraft use their wing area as the reference area when computing c_d, while automobiles (and many other objects) use projected frontal area; thus, coefficients are not directly comparable between these classes of vehicles. In the aerospace industry, the drag coefficient is sometimes expressed in drag counts, where 1 drag count = 0.0001 of a c_d. Automobile Blunt and streamlined body flows Concept The force between a fluid and a body, when there is relative motion, can only be transmitted by normal pressure and tangential friction stresses. So, for the whole body, the drag part of the force, which is in line with the approaching fluid motion, is composed of frictional drag (viscous drag) and pressure drag (form drag). The total drag and component drag forces can be related as follows: c_d = c_p + c_f = (2/(ρ u² A)) ∫_S (p − p_o)(n · i) dS + (2/(ρ u² A)) ∫_S τ (t · i) dS, where: A is the reference (planform) area of the body; S is the wetted surface of the body; c_p is the pressure drag coefficient; c_f is the friction drag coefficient; t is the unit vector in the direction of the shear stress acting on the body surface dS; n is the unit vector in the direction perpendicular to the body surface dS, pointing from the fluid to the solid; τ is the magnitude of the shear stress acting on the body surface dS; p_o is the pressure far away from the body (note that this constant does not affect the final result); p is the pressure at surface dS; i is the unit vector in the direction of the free-stream flow. Therefore, when the drag is dominated by a frictional component, the body is called a streamlined body; whereas in the case of dominant pressure drag, the body is called a blunt or bluff body. Thus, the shape of the body and the angle of attack determine the type of drag. For example, an airfoil is considered as a body with a small angle of attack by the fluid flowing across it. This means that it has attached boundary layers, which produce much less pressure drag. The wake produced is very small and drag is dominated by the friction component. Therefore, such a body (here an airfoil) is described as streamlined, whereas for bodies with fluid flow at high angles of attack, boundary layer separation takes place. This mainly occurs due to adverse pressure gradients at the top and rear parts of an airfoil. Due to this, wake formation takes place, which consequently leads to eddy formation and pressure loss due to pressure drag. In such situations, the airfoil is stalled and has higher pressure drag than friction drag. In this case, the body is described as a blunt body. A streamlined body looks like a fish (tuna), an Oropesa, etc., or an airfoil with a small angle of attack, whereas a blunt body looks like a brick, a cylinder or an airfoil with a high angle of attack. For a given frontal area and velocity, a streamlined body will have lower resistance than a blunt body. Cylinders and spheres are taken as blunt bodies because the drag is dominated by the pressure component in the wake region at high Reynolds number.
To reduce this drag, either the flow separation could be reduced or the surface area in contact with the fluid could be reduced (to reduce friction drag). This reduction is necessary in devices like cars, bicycles, etc. to avoid vibration and noise production. See also Automotive aerodynamics Automobile drag coefficient Ballistic coefficient Drag crisis Zero-lift drag coefficient Drag (physics) Aerospace engineering Dimensionless numbers of fluid mechanics
Drag coefficient
[ "Chemistry", "Engineering" ]
2,302
[ "Drag (physics)", "Aerospace engineering", "Fluid dynamics" ]
172,317
https://en.wikipedia.org/wiki/Ray%20transfer%20matrix%20analysis
Ray transfer matrix analysis (also known as ABCD matrix analysis) is a mathematical form for performing ray tracing calculations in sufficiently simple problems which can be solved considering only paraxial rays. Each optical element (surface, interface, mirror, or beam travel) is described by a ray transfer matrix which operates on a vector describing an incoming light ray to calculate the outgoing ray. Multiplication of the successive matrices thus yields a concise ray transfer matrix describing the entire optical system. The same mathematics is also used in accelerator physics to track particles through the magnet installations of a particle accelerator; see electron optics. This technique, as described below, is derived using the paraxial approximation, which requires that all ray directions (directions normal to the wavefronts) are at small angles θ relative to the optical axis of the system, such that the approximation sin θ ≈ θ remains valid. A small θ further implies that the transverse extent of the ray bundles (x and y) is small compared to the length of the optical system (thus "paraxial"). Since a decent imaging system where this is the case for all rays must still focus the paraxial rays correctly, this matrix method will properly describe the positions of focal planes and magnifications; however, aberrations still need to be evaluated using full ray-tracing techniques. Matrix definition The ray tracing technique is based on two reference planes, called the input and output planes, each perpendicular to the optical axis of the system. At any point along the optical train an optical axis is defined corresponding to a central ray; that central ray is propagated to define the optical axis further in the optical train, which need not be in the same physical direction (such as when bent by a prism or mirror). The transverse directions x and y (below we only consider the x direction) are then defined to be orthogonal to the optical axes applying. A light ray enters a component crossing its input plane at a distance x1 from the optical axis, traveling in a direction that makes an angle θ1 with the optical axis. After propagation to the output plane that ray is found at a distance x2 from the optical axis and at an angle θ2 with respect to it. n1 and n2 are the indices of refraction of the media in the input and output plane, respectively. The ABCD matrix representing a component or system relates the output ray to the input according to [x2; θ2] = [A B; C D] [x1; θ1], where the values of the four matrix elements are thus given by A = x2/x1 (evaluated with θ1 = 0), B = x2/θ1 (with x1 = 0), C = θ2/x1 (with θ1 = 0), and D = θ2/θ1 (with x1 = 0). This relates the ray vectors at the input and output planes by the ray transfer matrix (RTM) M, which represents the optical component or system present between the two reference planes. A thermodynamics argument based on blackbody radiation can be used to show that the determinant of an RTM is the ratio of the indices of refraction: det(M) = AD − BC = n1/n2. As a result, if the input and output planes are located within the same medium, or within two different media which happen to have identical indices of refraction, then the determinant of M is simply equal to 1. A different convention for the ray vectors can be employed. Instead of using θ, the second element of the ray vector is n sin θ, which is proportional not to the ray angle per se but to the transverse component of the wave vector. This alters the ABCD matrices given in the table below where refraction at an interface is involved.
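Before the worked examples that follow, here is a minimal numerical sketch of the formalism using NumPy; the 2f–lens–2f arrangement and all numbers are chosen purely for illustration:

```python
import numpy as np

def free_space(d):
    # Propagation over a distance d along the optical axis.
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    # Thin lens of focal length f.
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Object plane at 2f before a lens, image plane at 2f after it.
# Matrices compose right-to-left: the first element traversed sits rightmost.
f = 0.1  # meters
M = free_space(2 * f) @ thin_lens(f) @ free_space(2 * f)

ray_in = np.array([1e-3, 0.0])   # [x, theta]: 1 mm off axis, parallel to axis
ray_out = M @ ray_in

print(M)                 # B == 0 signals that the two planes are conjugate
print(ray_out)           # x flips sign: inverted image, magnification A = -1
print(np.linalg.det(M))  # 1.0, since input and output media are identical
```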
The use of transfer matrices in this manner parallels the matrices describing electronic two-port networks, particularly various so-called ABCD matrices which can similarly be multiplied to solve for cascaded systems. Some examples Free space example As one example, if there is free space between the two planes, the ray transfer matrix is given by: S = [1 d; 0 1], where d is the separation distance (measured along the optical axis) between the two reference planes. The ray transfer equation thus becomes: [x2; θ2] = S [x1; θ1], and this relates the parameters of the two rays as: x2 = x1 + d θ1 and θ2 = θ1. Thin lens example Another simple example is that of a thin lens. Its RTM is given by: L = [1 0; −1/f 1], where f is the focal length of the lens. To describe combinations of optical components, ray transfer matrices may be multiplied together to obtain an overall RTM for the compound optical system. For the example of free space of length d followed by a lens of focal length f: L S = [1 0; −1/f 1][1 d; 0 1] = [1 d; −1/f 1 − d/f]. Note that, since the multiplication of matrices is non-commutative, this is not the same RTM as that for a lens followed by free space: S L = [1 d; 0 1][1 0; −1/f 1] = [1 − d/f d; −1/f 1]. Thus the matrices must be ordered appropriately, with the last matrix premultiplying the second-to-last, and so on until the first matrix is premultiplied by the second. Other matrices can be constructed to represent interfaces with media of different refractive indices, reflection from mirrors, etc. Eigenvalues A ray transfer matrix can be regarded as a linear canonical transformation. According to the eigenvalues of the optical system, the system can be classified into several classes. Assume the ABCD matrix representing a system relates the output ray to the input according to [x2; θ2] = M [x1; θ1] = [A B; C D][x1; θ1]. We compute the eigenvalues of the matrix that satisfy the eigenequation M v = λ v by calculating the determinant det(M − λI) = λ² − (A + D)λ + 1 = 0 (taking AD − BC = 1). Let m = (A + D)/2; then the eigenvalues are λ± = m ± √(m² − 1). According to the values of λ and m, there are several possible cases. For example: A pair of real eigenvalues r and 1/r, where r ≠ ±1: this case represents a magnifier. λ = λ1 = λ2 = ±1: this case represents a unity matrix (or, with an additional coordinate reverter, its negative); this case occurs if, but not only if, the system is either a unity operator, a section of free space, or a lens. A pair of two unimodular, complex conjugate eigenvalues e^{iφ} and e^{−iφ}: this case is similar to a separable fractional Fourier transform. Matrices for simple optical components Relation between geometrical ray optics and wave optics The theory of linear canonical transformations implies the relation between the ray transfer matrix (geometrical optics) and wave optics. Common decomposition There exist infinitely many ways to decompose a ray transfer matrix M = [A B; C D] into a concatenation of multiple transfer matrices. For example, in the special case when B ≠ 0 (and AD − BC = 1), one valid decomposition is M = [1 0; (D − 1)/B 1][1 B; 0 1][1 0; (A − 1)/B 1]. Resonator stability RTM analysis is particularly useful when modeling the behavior of light in optical resonators, such as those used in lasers. At its simplest, an optical resonator consists of two identical facing mirrors of 100% reflectivity and radius of curvature R, separated by some distance d. For the purposes of ray tracing, this is equivalent to a series of identical thin lenses of focal length f = R/2, each separated from the next by length d. This construction is known as a lens equivalent duct or lens equivalent waveguide. The RTM of each section of the waveguide is, as above, M = L S = [1 d; −1/f 1 − d/f]. RTM analysis can now be used to determine the stability of the waveguide (and equivalently, the resonator). That is, it can be determined under what conditions light traveling down the waveguide will be periodically refocused and stay within the waveguide.
To do so, we can find all the "eigenrays" of the system: the input ray vector at each of the mentioned sections of the waveguide, times a real or complex factor λ, is equal to the output one. This gives: M [x1; θ1] = [x2; θ2] = λ [x1; θ1], which is an eigenvalue equation: (M − λI) [x1; θ1] = 0, where I is the 2×2 identity matrix. We proceed to calculate the eigenvalues of the transfer matrix: det(M − λI) = 0, leading to the characteristic equation λ² − tr(M) λ + det(M) = 0, where tr(M) = A + D = 2 − d/f is the trace of the RTM, and det(M) = AD − BC = 1 is the determinant of the RTM. After one common substitution we have: λ² − 2gλ + 1 = 0, where g = tr(M)/2 = 1 − d/(2f) is the stability parameter. The eigenvalues are the solutions of the characteristic equation. From the quadratic formula we find λ± = g ± √(g² − 1). Now, consider a ray after N passes through the system: [xN; θN] = λᴺ [x1; θ1]. If the waveguide is stable, no ray should stray arbitrarily far from the main axis, that is, λᴺ must not grow without limit. Suppose g² > 1. Then both eigenvalues are real. Since λ₊ λ₋ = 1, one of them has to be bigger than 1 (in absolute value), which implies that the ray which corresponds to this eigenvector would not converge. Therefore, in a stable waveguide, g² ≤ 1, and the eigenvalues can be represented by complex numbers: λ± = g ± i√(1 − g²) = cos φ ± i sin φ = e^{±iφ}, with the substitution g = cos φ. For g² < 1, let r₊ and r₋ be the eigenvectors with respect to the eigenvalues λ₊ and λ₋ respectively, which span all the vector space because they are orthogonal, the latter due to λ₊ ≠ λ₋. The input vector can therefore be written as [x1; θ1] = c₊ r₊ + c₋ r₋, for some constants c₊ and c₋. After N waveguide sectors, the output reads [xN; θN] = Mᴺ (c₊ r₊ + c₋ r₋) = e^{iNφ} c₊ r₊ + e^{−iNφ} c₋ r₋, which represents a periodic function. Gaussian beams The same matrices can also be used to calculate the evolution of Gaussian beams propagating through optical components described by the same transmission matrices. If we have a Gaussian beam of wavelength λ₀, radius of curvature R (positive for diverging, negative for converging), beam spot size w and refractive index n, it is possible to define a complex beam parameter q by: 1/q = 1/R − iλ₀/(π n w²). (R, w, and q are functions of position.) If the beam axis is in the z direction, with waist at z₀ and Rayleigh range z_R, this can be equivalently written as q = (z − z₀) + i z_R. This beam can be propagated through an optical system with a given ray transfer matrix by using the equation: [q2; 1] = k [A B; C D] [q1; 1], where k is a normalization constant chosen to keep the second component of the ray vector equal to 1. Using matrix multiplication, this equation expands as q2 = k (A q1 + B) and 1 = k (C q1 + D). Dividing the first equation by the second eliminates the normalization constant: q2 = (A q1 + B) / (C q1 + D). It is often convenient to express this last equation in reciprocal form: 1/q2 = (C + D/q1) / (A + B/q1). Example: Free space Consider a beam traveling a distance d through free space; the ray transfer matrix is [A B; C D] = [1 d; 0 1], and so q2 = (A q1 + B)/(C q1 + D) = q1 + d, consistent with the expression above for ordinary Gaussian beam propagation, i.e. q = (z − z₀) + i z_R. As the beam propagates, both the radius of curvature and the spot size change. Example: Thin lens Consider a beam traveling through a thin lens with focal length f. The ray transfer matrix is [A B; C D] = [1 0; −1/f 1], and so 1/q2 = 1/q1 − 1/f. Only the real part of 1/q is affected: the wavefront curvature 1/R is reduced by the power of the lens 1/f, while the lateral beam size w remains unchanged upon exiting the thin lens. Higher rank matrices Methods using transfer matrices of higher dimensionality, that is 3×3, 4×4, and 6×6, are also used in optical analysis. In particular, 4×4 propagation matrices are used in the design and analysis of prism sequences for pulse compression in femtosecond lasers. See also Transfer-matrix method (optics) Linear canonical transformation External links Thick lenses (Matrix methods) ABCD Matrices Tutorial: provides an example for a system matrix of an entire system. ABCD Calculator: an interactive calculator to help solve ABCD matrices. Geometrical optics Accelerator physics
Ray transfer matrix analysis
[ "Physics" ]
2,075
[ "Applied and interdisciplinary physics", "Accelerator physics", "Experimental physics" ]