| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
7,490,280 | https://en.wikipedia.org/wiki/Spent%20caustic | Spent caustic is a waste industrial caustic solution that has become exhausted and is no longer useful (or spent). Spent caustics are made of sodium hydroxide or potassium hydroxide, water, and contaminants. The contaminants have consumed the majority of the sodium (or potassium) hydroxide and thus the caustic liquor is spent. For example, in one common application, H2S (gas) is scrubbed by NaOH (aqueous) to form NaHS (aq) and H2O (l), thus consuming the caustic.
Types
Ethylene spent caustic comes from the caustic scrubbing of cracked gas from an ethylene cracker. This liquor is produced by a caustic scrubbing tower. Ethylene product gas is contaminated with H2S (g) and CO2 (g), and those contaminants are removed by absorption in the caustic scrubbing tower to produce NaHS (aq) and Na2CO3 (aq). The sodium hydroxide is consumed and the resulting wastewater (ethylene spent caustic) is contaminated with the sulfides and carbonates and a small fraction of organic compounds.
Refinery spent caustic comes from multiple sources: the Merox processing of gasoline; the Merox processing of kerosene/jet fuel; and the caustic scrubbing/Merox processing of LPG. In these streams sulfides and organic acids are removed from the product streams into the caustic phase. The sodium hydroxide is consumed and the resulting wastewaters (cresylic spent caustic from gasoline, naphthenic from kerosene/jet fuel, and sulfidic from LPG) are often mixed and called refinery spent caustic. This spent caustic is contaminated with sulfides, carbonates, and in many cases a high fraction of organic acids.
Treatment technologies
Spent caustics are malodorous wastewaters that are difficult to treat in conventional wastewater processes. Typically the material is disposed of by high dilution with biotreatment, deep well injection, incineration, wet air oxidation, Humid Peroxide Oxidation or other speciality processes. Most ethylene spent caustics are disposed of through wet air oxidation.
References
Suarez, F. "Pluses and Minuses of Caustic Treating", Hydrocarbon Processing, pp 117–123, Oct 1996.
Maugans, C.; Ellis, C. "Age Old Solution for Today's SO2 and NOx", Pollution Engineering, April 2004.
Carlos T.; Maugans, C. "Wet Air Oxidation of Refinery Spent Caustic: A Refinery Case Study", NPRA Conference, San Antonio, TX, September 2000. WAO for Refinery Spent Caustic
Kumfer, B.; Felch, C.; Maugans, C. "Wet Air Oxidation of Spent Caustic in Petroleum Refineries", NPRA, March 2010, San Antonio, TX. WAO for Spent Caustic in Petroleum Refineries
Water pollution
Waste | Spent caustic | [
"Physics",
"Chemistry",
"Environmental_science"
] | 641 | [
"Materials",
"Waste",
"Matter",
"Water pollution"
] |
7,493,205 | https://en.wikipedia.org/wiki/Prime%20triplet | In number theory, a prime triplet is a set of three prime numbers in which the smallest and largest of the three differ by 6. In particular, the sets must have the form (p, p + 2, p + 6) or (p, p + 4, p + 6). With the exceptions of (2, 3, 5) and (3, 5, 7), this is the closest possible grouping of three prime numbers, since one of every three sequential odd numbers is a multiple of three, and hence not prime (except for 3 itself).
Examples
The first prime triplets are
(5, 7, 11), (7, 11, 13), (11, 13, 17), (13, 17, 19), (17, 19, 23), (37, 41, 43), (41, 43, 47), (67, 71, 73), (97, 101, 103), (101, 103, 107), (103, 107, 109), (107, 109, 113), (191, 193, 197), (193, 197, 199), (223, 227, 229), (227, 229, 233), (277, 281, 283), (307, 311, 313), (311, 313, 317), (347, 349, 353), (457, 461, 463), (461, 463, 467), (613, 617, 619), (641, 643, 647), (821, 823, 827), (823, 827, 829), (853, 857, 859), (857, 859, 863), (877, 881, 883), (881, 883, 887)
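The list above can be reproduced directly from the definition. The short Python sketch below (illustrative only, not part of the original article) sieves the primes below a bound and reports every triplet of the form (p, p + 2, p + 6) or (p, p + 4, p + 6); note that both forms can never apply to the same p, since p, p + 2, p + 4 are all prime only for (3, 5, 7).

```python
def primes_below(limit):
    """Simple sieve of Eratosthenes returning all primes strictly below limit."""
    sieve = [True] * limit
    sieve[0:2] = [False, False]
    for n in range(2, int(limit ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n::n] = [False] * len(sieve[n * n::n])
    return [n for n, is_prime in enumerate(sieve) if is_prime]

def prime_triplets(limit):
    """Yield prime triplets (p, p+2, p+6) and (p, p+4, p+6) with p < limit."""
    prime_set = set(primes_below(limit + 6))
    for p in sorted(prime_set):
        if p >= limit:
            break
        if p + 6 in prime_set and (p + 2 in prime_set or p + 4 in prime_set):
            middle = p + 2 if p + 2 in prime_set else p + 4
            yield (p, middle, p + 6)

print(list(prime_triplets(120)))
# [(5, 7, 11), (7, 11, 13), (11, 13, 17), (13, 17, 19), (17, 19, 23),
#  (37, 41, 43), (41, 43, 47), (67, 71, 73), (97, 101, 103),
#  (101, 103, 107), (103, 107, 109), (107, 109, 113)]
```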
Subpairs of primes
A prime triplet contains a single pair of:
Twin primes: (p, p + 2) or (p + 4, p + 6);
Cousin primes: (p + 2, p + 6) or (p, p + 4); and
Sexy primes: (p, p + 6).
Higher-order versions
A prime can be a member of up to three prime triplets. For example, 103 is a member of (97, 101, 103), (101, 103, 107), and (103, 107, 109). When this happens, the five involved primes form a prime quintuplet.
A prime quadruplet (p, p + 2, p + 6, p + 8) contains two overlapping prime triplets, (p, p + 2, p + 6) and (p + 2, p + 6, p + 8).
Conjecture on prime triplets
Similarly to the twin prime conjecture, it is conjectured that there are infinitely many prime triplets. The first known gigantic prime triplet was found in 2008 by Norman Luhn and François Morain. The largest known proven prime triplet contains primes with 20008 digits.
The Skewes number for the triplet (p, p + 2, p + 6) is 87613571, and for the triplet (p, p + 4, p + 6) it is 337867.
References
External links
Classes of prime numbers
Unsolved problems in number theory | Prime triplet | [
"Mathematics"
] | 569 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Number theory",
"Unsolved problems in number theory"
] |
7,493,966 | https://en.wikipedia.org/wiki/Loss-of-pressure-control%20accident | A loss-of-pressure-control accident (LOPA) is a mode of failure for a nuclear reactor that involves the pressure of the confined coolant falling below specification. Most commercial types of nuclear reactor use a pressure vessel to maintain pressure in the reactor plant. This is necessary in a pressurized water reactor to prevent boiling in the core, which could lead to a nuclear meltdown. This is also necessary in other types of reactor plants to prevent moderators from having uncontrolled properties.
Pressure is controlled in a pressurized water reactor to ensure that the core itself does not reach its boiling point in which the water will turn into steam and rapidly decrease the heat being transferred from the fuel to the moderator. By a combination of heaters and spray valves, pressure is controlled in the pressurizer vessel which is connected to the reactor plant. Because the pressurizer vessel and the reactor plant are connected, the pressure of the steam space pressurizes the entire reactor plant to ensure the pressure is above that which would allow boiling in the reactor core. The pressurizer vessel itself may be maintained much hotter than the rest of the reactor plant to ensure pressure control, because in the liquid throughout the reactor plant, pressure applied at any point has an effect on the entire system, whereas the heat transfer is limited by ambient and other losses.
Causes of a loss of pressure control
Many failures in a reactor plant or its supporting auxiliaries could cause a loss of pressure control, including:
Inadvertent isolation of the pressurizing vessel from the reactor plant, via the closing of an isolation valve or mechanically clogged piping. Because of this possibility, no commercial nuclear power plant has any kind of valve in the connection between the pressurizer and the reactor coolant circuit. To avoid clogging anywhere in the primary circuit, the coolant is kept very clean, and the connecting pipe between the pressurizer and the reactor coolant circuit is short and of large diameter.
A rupture in the pressurizer vessel, which would also be a loss-of-coolant accident. In most reactor plant designs, however, this would not limit flow rate through the core and therefore would behave like a loss-of-pressure-control accident rather than a loss-of-coolant accident.
Failure of either the spray nozzles (failing open would inhibit raising pressure as the relatively cool spray collapses the pressurizer vessel bubble) or the heaters of the pressurizing system.
Thermal stratification of the liquid portion of the pressurizer. When the liquid portion of the pressurizer becomes stratified, the lower layers of water (furthest from the steam bubble) are subcooled, and as the steam bubble slowly condenses, pressurizer pressure will appear relatively constant but will actually be slowly falling. When the operator energizes pressurizer heaters to maintain or raise pressure, pressure will continue to drop until the subcooled water is heated up by the pressurizer heaters to the saturation temperature corresponding to the pressure of the steam (bubble) portion of the pressurizer. During this reheating period, pressure control will be lost, since pressure will still be dropping when it is desired to raise pressure.
Results of a loss of pressure control in a pressurized water reactor
When pressure control is lost in a reactor plant, depending on the level of heat being generated by the reactor plant, the heat being removed by the steam or other auxiliary systems, the initial pressure, and the normal operating temperature of the plant, it could take minutes or even hours for operators to see significant trends in core behaviour.
For whatever power level the reactor is currently operating at, a certain amount of enthalpy is present in the coolant. This enthalpy is proportional to temperature; therefore, the hotter the plant, the higher the pressure that must be maintained to prevent boiling. When pressure drops to the saturation point, dryout in the coolant channels will occur.
As the reactor heats the water flowing through coolant channels, subcooled nucleate boiling takes place, in which some of the water becomes small bubbles of steam on the cladding of the fuel rods. These are then stripped from the fuel cladding and into the coolant channel by the flow of water. Normally, these bubbles collapse in the channel, transferring enthalpy to the surrounding coolant. When the pressure is below the saturation pressure for the given temperature, the bubbles will not collapse. As more bubbles accumulate in the channel and combine, the steam space within the channel becomes larger and larger until steam blankets the fuel cell walls. Once the fuel cell walls are blanketed with steam, the rate of heat transfer lowers significantly. Heat is not transferred out of the fuel rods as fast as it is being generated, potentially causing a nuclear meltdown. Because of this potential, all nuclear power plants have reactor protection systems that automatically shut down the reactor if the pressure in the primary circuit falls below a safe level, or if the subcooling margin falls below a safe level. Once the reactor is shut down, the rate at which residual heat is generated in the fuel rods is similar to that of an electric kettle, and the fuel rods can be safely cooled just by being submerged in water at normal atmospheric pressure.
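The link between coolant temperature and the minimum pressure needed to prevent boiling can be illustrated with a rough saturation-pressure estimate. The sketch below is not part of the article; it uses an Antoine-type correlation for water whose constants are approximate, assumed values and should be checked against steam tables before any serious use.

```python
def water_saturation_pressure_bar(temp_c):
    """Rough Antoine-equation estimate of water's saturation pressure.

    Constants are an approximate high-temperature fit (pressure in mmHg,
    temperature in Celsius); treat the result as illustrative only.
    """
    A, B, C = 8.14019, 1810.94, 244.485          # assumed Antoine constants
    p_mmhg = 10 ** (A - B / (C + temp_c))
    return p_mmhg * 0.00133322                    # convert mmHg to bar

def subcooling_margin_c(coolant_temp_c, system_pressure_bar):
    """Degrees Celsius between the coolant temperature and the saturation
    temperature at the current system pressure (positive = no bulk boiling)."""
    # Invert the correlation numerically with a simple bisection search.
    lo, hi = 0.0, 370.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if water_saturation_pressure_bar(mid) < system_pressure_bar:
            lo = mid
        else:
            hi = mid
    saturation_temp_c = (lo + hi) / 2
    return saturation_temp_c - coolant_temp_c

# Example: coolant at roughly 315 C needs well over 100 bar to stay subcooled.
print(round(water_saturation_pressure_bar(315.0), 1), "bar saturation pressure")
print(round(subcooling_margin_c(315.0, 155.0), 1), "C subcooling margin at 155 bar")
```

With these assumed constants, a drop in system pressure shows up directly as a shrinking subcooling margin, which is the quantity the reactor protection systems described above monitor.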
References
Nuclear safety and security
Nuclear reactors
Civilian nuclear power accidents | Loss-of-pressure-control accident | [
"Technology"
] | 1,094 | [
"Environmental impact of nuclear power",
"Civilian nuclear power accidents"
] |
7,497,569 | https://en.wikipedia.org/wiki/Systems%20immunology | Systems immunology is a research field under systems biology that uses mathematical approaches and computational methods to examine the interactions within cellular and molecular networks of the immune system. The immune system has been thoroughly analyzed with regard to its components and function by using a "reductionist" approach, but its overall function cannot easily be predicted by studying the characteristics of its isolated components, because that function relies strongly on the interactions among these numerous constituents. The field focuses on in silico experiments rather than in vivo.
Recent studies in experimental and clinical immunology have led to development of mathematical models that discuss the dynamics of both the innate and adaptive immune system. Most of the mathematical models were used to examine processes in silico that can't be done in vivo. These processes include: the activation of T cells, cancer-immune interactions, migration and death of various immune cells (e.g. T cells, B cells and neutrophils) and how the immune system will respond to a certain vaccine or drug without carrying out a clinical trial.
Techniques of modelling in Immune cells
The techniques used for modelling in immunology take either a quantitative or a qualitative approach, each with advantages and disadvantages. Quantitative models predict certain kinetic parameters and the behavior of the system at a certain time point or concentration point. The disadvantage is that they can only be applied to a small number of reactions, and prior knowledge of some kinetic parameters is needed. On the other hand, qualitative models can take into account more reactions, but in return they provide fewer details about the kinetics of the system. What both approaches have in common is that they lose simplicity and become unwieldy when the number of components increases drastically.
Ordinary Differential Equation model
Ordinary differential equations (ODEs) are used to describe the dynamics of biological systems. ODEs are used on a microscopic, mesoscopic and macroscopic scale to examine continuous variables. The equations represent the time evolution of observed variables such as concentrations of protein, transcription factors or number of cell types. They are usually used for modelling immunological synapses, microbial recognition and cell migration. Over the last 10 years, these models have been used to study the sensitivity of TCR to agonist ligands and the roles of CD4 and CD8 co-receptors.
Kinetic rates of these equations are represented by binding and dissociation rates of the interacting species. These models are able to present the concentration and steady state of each interacting molecule in the network.
ODE models are defined by linear and non-linear equations; the non-linear ones are used more often and are typically simulated on a computer (in silico) rather than solved analytically. The limitation of this model is that, for every network, the kinetics of each molecule has to be known for the model to be applied.
The ODE approach was used to examine how antigens bind to the B cell receptor. This model was very complex, as it was represented by 1122 equations and six signalling proteins. The software tool used for the research was BioNetGen. The model's output was in agreement with the in vivo experiment.
The Epstein-Barr virus (EBV) was mathematically modeled with 12 equations to investigate three hypotheses that explain the higher occurrence of mononucleosis in younger people. After running numerical simulations, only the first two hypotheses were supported by the model.
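As a toy illustration of the ODE approach (not a model from the studies cited above), the sketch below integrates a minimal two-variable system in which a pathogen grows logistically and is cleared by effector T cells that proliferate in response to it; all parameter values are invented for demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def immune_ode(t, y, r=1.0, k=1e6, c=1e-5, a=0.5, h=1e4, d=0.1):
    """Toy pathogen (P) vs effector T cell (E) dynamics.

    dP/dt = r*P*(1 - P/k) - c*P*E      (logistic growth minus killing)
    dE/dt = a*P/(h + P)*E - d*E        (saturating proliferation minus death)
    All parameters are illustrative, not fitted to data.
    """
    P, E = y
    dP = r * P * (1 - P / k) - c * P * E
    dE = a * P / (h + P) * E - d * E
    return [dP, dE]

solution = solve_ivp(immune_ode, t_span=(0, 60), y0=[100.0, 10.0],
                     t_eval=np.linspace(0, 60, 301))
peak_pathogen = solution.y[0].max()
final_pathogen = solution.y[0][-1]
print(f"peak pathogen load: {peak_pathogen:.3g}, load at day 60: {final_pathogen:.3g}")
```

Real systems-immunology ODE models differ mainly in scale (hundreds or thousands of such equations) and in how their rate constants are measured or estimated, not in the basic structure shown here.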
Partial Differential Equation model
Partial differential equation (PDE) models are an extended version of the ODE model, describing the time evolution of each variable in both time and space. PDEs are used on a microscopic level for modeling continuous variables in the pathways for sensing and recognition of pathogens. They are also applied in physiological modeling to describe how proteins interact and where their movement is directed in an immunological synapse. These derivatives are partial because they are calculated with respect to time and also with respect to space. Sometimes a physiological variable, such as age in cell division, can be used instead of the spatial variables. Compared with ODE models, PDE models, which take into account the spatial distribution of cells, are computationally more demanding. Spatial dynamics are an important aspect of cell signalling, as they describe the motion of cells within a three-dimensional compartment. T cells move around in a three-dimensional lymph node, while TCRs are located on the surface of cell membranes and therefore move within a two-dimensional compartment.
The spatial distribution of proteins is important especially upon T cell stimulation, when an immunological synapse is made, therefore this model was used in a study where the T cell was activated by a weak agonist peptide.
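For intuition about how a PDE model couples reaction kinetics to spatial movement (again a generic sketch, not the synapse model from the study above), the following explicit finite-difference scheme evolves a single chemical species that diffuses along a one-dimensional domain while decaying; the grid, diffusion coefficient, and decay rate are arbitrary.

```python
import numpy as np

# Toy 1-D reaction-diffusion: du/dt = D * d2u/dx2 - k * u
# Explicit (forward Euler) finite differences with no-flux boundaries.
D, k = 1.0, 0.05            # illustrative diffusion and decay constants
nx, dx = 101, 0.1           # grid points and spacing
dt = 0.4 * dx**2 / D        # time step chosen for numerical stability
u = np.zeros(nx)
u[nx // 2] = 1.0            # initial pulse of signal in the middle of the domain

for step in range(2000):
    laplacian = np.zeros_like(u)
    laplacian[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    laplacian[0] = (u[1] - u[0]) / dx**2        # no-flux boundary approximations
    laplacian[-1] = (u[-2] - u[-1]) / dx**2
    u = u + dt * (D * laplacian - k * u)

spread = np.sqrt(np.average((np.arange(nx) - nx // 2) ** 2, weights=u))
print(f"total remaining signal: {u.sum():.4f}, spread (std in grid units): {spread:.2f}")
```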
Particle-based Stochastic model
Particle-based stochastic models are obtained based on the dynamics of an ODE model. What distinguishes this model from the others is that it treats the components of the model as discrete variables, not continuous ones as the previous models do. The particles are examined on a microscopic and mesoscopic level, in immune-specific transduction pathways and in immune cell-cancer interactions, respectively. The dynamics of the model are determined by a Markov process, which in this case expresses the probability of each possible state of the system over time in the form of differential equations. The equations are difficult to solve analytically, so simulations are performed on a computer as kinetic Monte Carlo schemes. The simulation is commonly carried out with the Gillespie algorithm, which uses reaction constants derived from chemical kinetic rate constants to decide which reaction occurs next and when. Stochastic simulations are more computationally demanding, and therefore the size and scope of the model are limited.
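A minimal version of the Gillespie algorithm can be written in a few lines. The sketch below simulates a single reversible switching reaction (inactive to active and back, loosely evoking the Ras example discussed below); the rate constants and copy numbers are arbitrary placeholders, not values from the cited work.

```python
import random

def gillespie_two_state(n_inactive=100, n_active=0, k_on=0.1, k_off=0.05, t_end=50.0):
    """Exact stochastic simulation of inactive <-> active switching.

    Propensities follow mass-action kinetics; rate constants are illustrative.
    Returns the trajectory as a list of (time, active copy number) pairs.
    """
    t, trajectory = 0.0, [(0.0, n_active)]
    while t < t_end:
        a_on = k_on * n_inactive          # propensity of activation
        a_off = k_off * n_active          # propensity of deactivation
        a_total = a_on + a_off
        if a_total == 0:
            break
        t += random.expovariate(a_total)  # exponentially distributed waiting time
        if random.random() * a_total < a_on:
            n_inactive, n_active = n_inactive - 1, n_active + 1
        else:
            n_inactive, n_active = n_inactive + 1, n_active - 1
        trajectory.append((t, n_active))
    return trajectory

random.seed(1)
final_time, final_active = gillespie_two_state()[-1]
print(f"active molecules at t = {final_time:.1f}: {final_active}")
```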
The stochastic simulation was used to show that the Ras protein, which is a crucial signalling molecule in T cells, can have an active and inactive form. It provided insight to a population of lymphocytes that upon stimulation had active and inactive subpopulations.
Co-receptors have an important role in the earliest stages of T cell activation and a stochastic simulation was used to explain the interactions as well as to model the migrating cells in a lymph node.
This model was used to examine T cell proliferation in the lymphoid system.
Agent-based models
Agent-based modeling (ABM) is a type of modelling where the components of the system being observed are treated as discrete agents, each representing an individual molecule or cell. These components, called agents, can interact with other agents and with the environment.
ABM has the potential to observe events on a multiscale level and is becoming more popular in other disciplines. It has been used to model the interactions between CD8+ T cells and beta cells in type 1 diabetes, and to model the rolling and activation of leukocytes.
Boolean model
Logic models are used to model the life cycles of cells, the immune synapse, pathogen recognition, and viral entry on a microscopic and mesoscopic level. Unlike ODE models, logic models do not require details about the kinetics and concentrations of the interacting species. Each biochemical species is represented as a node in the network and can have a finite number of discrete states, usually two, for example: ON/OFF, high/low, active/inactive. Logic models with only two states are usually considered Boolean models. When a molecule is in the OFF state, it means that the molecule is not present at a high enough level to make a change in the system, not that it has zero concentration; when it is in the ON state, it has reached a high enough amount to initiate a reaction. This method was first introduced by Kauffman. The limitation of this model is that it can only provide qualitative approximations of the system and it cannot perfectly model concurrent events.
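A Boolean model of this kind can be simulated with nothing more than logical update rules. The three-node toy network below (receptor, kinase, and transcription factor with negative feedback) is purely illustrative; the node names and rules are invented for demonstration and are not taken from the cited studies.

```python
# Toy Boolean network: each node is ON (True) or OFF (False) and is updated
# synchronously from the previous state of its regulators.
update_rules = {
    "receptor": lambda s: s["receptor"] and not s["tf"],   # negative feedback from TF
    "kinase":   lambda s: s["receptor"],                    # kinase follows the receptor
    "tf":       lambda s: s["kinase"],                      # TF follows the kinase
}

def step(state):
    """One synchronous update of every node."""
    return {node: rule(state) for node, rule in update_rules.items()}

state = {"receptor": True, "kinase": False, "tf": False}   # initial condition
trajectory = []
for t in range(8):
    trajectory.append(state)
    state = step(state)

for t, s in enumerate(trajectory):
    print(t, {node: int(value) for node, value in s.items()})
# The signal propagates receptor -> kinase -> TF, after which the negative
# feedback switches the receptor off and the network settles into the
# all-OFF fixed point.
```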
This method has been used to explore special pathways in the immune system such as affinity maturation and hypermutation in the humoral immune system and tolerance to pathologic rheumatoid factors. Simulation tools that support this model are DDlab, Cell-Devs and IMMSIM-C. IMMSIM-C is used more often than the others, as it doesn’t require knowledge in the computer programming field. The platform is available as a public web application and finds usage in undergraduate immunology courses at various universities (Princeton, Genoa, etc.).
For modelling with statecharts, only Rhapsody has been used so far in systems immunology. It can translate the statechart into executable Java and C++ codes.
This method was also used to build a model of influenza virus infection. Some of the results were not in accordance with earlier research papers: the Boolean network showed that the number of activated macrophages increased for both young and old mice, while other studies suggest a decrease.
SBML (Systems Biology Markup Language) was originally intended to cover only models based on ordinary differential equations, but it has since been extended so that Boolean models can also be represented. Almost all modeling tools are compatible with SBML. There are a few more software packages for modeling with Boolean models: BoolNet, GINsim and Cell Collective.
Computer tools
To model a system by using differential equations, the computer tool has to perform various tasks such as model construction, calibration, verification, analysis, simulation and visualization. There isn’t a single software tool that satisfies the mentioned criteria, so multiple tools need to be used.
GINsim
GINsim is a computer tool that generates and simulates genetic networks based on discrete variables. Based on the regulatory graphs and logical parameters, GINsim calculates the temporal evolution of the system which is returned as a State Transition Graph (STG) where the states are represented by nodes and transitions by arrows.
It was used to examine how T cells respond upon activation of the TCR and TLR5 pathways. These processes were observed both separately and in combination. First, the molecular maps and logic models for both the TCR and TLR5 pathways were built and then merged. Molecular maps were produced in CellDesigner based on data from the literature and various databases, such as KEGG and Reactome. The logical models were generated by GINsim, where each component has the value of either 0 or 1, or additional values when modified. Logical rules are then applied to each component, which are called logical nodes in this network. After merging, the final model consists of 128 nodes. The results of modelling were in accordance with the experimental ones, where it was demonstrated that TLR5 is a costimulatory receptor for CD4+ T cells.
Boolnet
BoolNet is an R package which contains tools for the reconstruction, analysis and visualization of Boolean networks.
Cell Collective
The Cell Collective is a scientific platform which enables scientists to build, analyse and simulate biological models without formulating mathematical equations or writing code. It has a built-in Knowledge Base component which extends the knowledge of individual entities (proteins, genes, cells, etc.) into dynamical models. The data are qualitative but take into account the dynamical relationships between the interacting species. The models are simulated in real time and everything is done on the web.
BioNetGen
BioNetGen (BNG) is an open-source software package that is used in rule-based modeling of complex systems such as gene regulation, cell signaling and metabolism. The software uses graphs to represent different molecules and their functional domains and rules to explain the interactions between them. In terms of immunology, it was used to model intracellular signalling pathways of the TLR-4 cascade.
DSAIRM
DSAIRM (Dynamical Systems Approach to Immune Response Modeling) is an R package designed for studying infection and immune response dynamics without prior knowledge of coding.
Other useful applications and learning environments are: Gepasi, Copasi, BioUML, Simbiology (MATLAB) and Bio-SPICE.
Conferences
The first conference in Synthetic and Systems Immunology was hosted in Ascona by CSF and ETH Zurich. It took place in the first days of May 2019 and brought together over fifty researchers from different scientific fields. Of all the presentations held, the best was judged to be that of Dr. Govinda Sharma, who invented a platform for screening TCR epitopes.
Cold Spring Harbor Laboratory (CSHL) in New York hosted a meeting in March 2019 whose focus was to exchange ideas between experimental, computational and mathematical biologists who study the immune system in depth. The topics for the meeting were: modelling and regulatory networks, the future of synthetic and systems biology, and immunoreceptors.
Further reading
A Plaidoyer for ‘Systems Immunology’
Systems and Synthetic Immunology
Systems Biology
Current Topics in Microbiology and Immunology
The FRiND model
The Multiscale Systems Immunology project
Modelling with BioNetGen
References
Branches of immunology
Bioinformatics
Systems biology
Computational fields of study | Systems immunology | [
"Technology",
"Engineering",
"Biology"
] | 2,634 | [
"Biological engineering",
"Computational fields of study",
"Branches of immunology",
"Bioinformatics",
"Computing and society",
"Systems biology"
] |
14,845,077 | https://en.wikipedia.org/wiki/High%20Speed%20Civil%20Transport | The High Speed Civil Transport (HSCT) was the focus of the NASA High-Speed Research (HSR) program, which intended to develop the technology needed to design and build a supersonic transport that would be environmentally acceptable and economically feasible. The aircraft was to be a future supersonic passenger aircraft, baselined to cruise at Mach 2.4, or more than twice the speed of sound. The project started in 1990 and ended in 1999.
It was meant to cross the Atlantic or the Pacific Ocean in half the time of a non-supersonic aircraft. It was also intended to be fuel efficient, carry 300 passengers, and allow customers to buy tickets at a price only slightly higher than those of subsonic aircraft. The goal was to provide sufficient technology for an industry-led product launch decision in 2002, and if a product was launched, a maiden flight within 20 years.
The program was based on the successes and failures of the British/French Concorde and the Russian Tupolev Tu-144, as well as a previous NASA Supersonic Transport (SST) program from the early 1970s (for the latter, see Lockheed L-2000 and Boeing 2707.) While the Concorde and Tu-144 programs both yielded production aircraft, neither was produced in sufficient numbers to pay for their development costs.
History
In 1989, NASA and industry partners began investigating the feasibility of radically higher-speed passenger aircraft. By 1990 the design converged to a Mach 2.4, 300-passenger capable aircraft, and the High-Speed Research program was started. The project was split into two phases which examined a variety of areas for development. The first phase "focused on the development of technology concepts for environmental compatibility". The second phase aimed to demonstrate the environmental technologies and other high-risk technologies for economic viability.
Phase 1 focused on several environmental concerns: NOx emissions, which can deplete the ozone layer; community noise; sonic boom noise; and high-altitude radiation. Tests relevant to each concern were carried out. A U-2 spy plane, redesignated the ER-2, was used to measure high-altitude emissions from a Concorde jet and to measure the radiation environment at high altitudes. New engine nozzle technologies were tested to reduce takeoff and landing noise. Sonic boom mitigation technologies were tested using an SR-71 Blackbird, but were considered to be economically unviable; instead, the HSCT would be limited to subsonic speeds over land.
Phase 2 demonstrated several key technologies' economic viability. Two F-16XLs were used to test supersonic laminar flow control and to validate advanced CFD design methods. Instead of using the droop nose like that on the Concorde, an "external vision" system would have replaced the cockpit windows entirely with computer-generated graphics made available to the pilots on cockpit displays. Finally, a variety of materials were designed and tested against the very high temperature of Mach 2.4 flight, with titanium and a unique variety of carbon fiber being leading candidates for different areas of the craft.
Though the project was largely successful, it was canceled in 1999 due to budget constraints and Boeing's withdrawal of interest (and funding) from the project.
References
Citations
Bibliography
External links
NASA aeronautical programs
Supersonic transports
Abandoned civil aircraft projects of the United States | High Speed Civil Transport | [
"Physics"
] | 673 | [
"Physical systems",
"Transport",
"Supersonic transports"
] |
14,846,432 | https://en.wikipedia.org/wiki/Scorpion%20toxin | Scorpion toxins are proteins found in the venom of scorpions. Their toxic effect may be mammal- or insect-specific and acts by binding with varying degrees of specificity to members of the voltage-gated ion channel superfamily; specifically, voltage-gated sodium channels, voltage-gated potassium channels, and transient receptor potential (TRP) channels. The result of this action is to activate or inhibit the action of these channels in the nervous and cardiac organ systems. For instance, the α-scorpion toxins MeuNaTxα-12 and MeuNaTxα-13 from Mesobuthus eupeus are neurotoxins that target voltage-gated Na+ channels (Navs), inhibiting fast inactivation. In vivo assays of the effects of MeuNaTxα-12 and MeuNaTxα-13 on mammalian and insect Navs show differential potency. These recombinant toxins exert their preferential affinity for mammalian and insect Na+ channels at the α-like toxins' binding site, site 3, slowing fast inactivation and thereby prolonging membrane depolarization[6]. The varying sensitivity of different Navs to MeuNaTxα-12 and MeuNaTxα-13 may depend on the substitution of a conserved valine residue for a phenylalanine residue at position 1630 of the LD4:S3-S4 subunit, or on various changes in residues in the LD4:S5-S6 subunit of the Navs. Ultimately, these actions can serve the purpose of warding off predators by causing pain (e.g., through the activation of sodium channels or TRP channels in sensory neurons) or of subduing prey (e.g., in the case of inhibition of cardiac ion channels).
The family includes related short- and long-chain scorpion toxins. It also contains a group of proteinase inhibitors from the plants Arabidopsis thaliana and Brassica spp.
The Brassica napus (oil seed rape) and Sinapis alba (white mustard) inhibitors inhibit the catalytic activity of bovine beta-trypsin and bovine alpha-chymotrypsin, which belong to MEROPS peptidase family S1.
This group of proteins is now used in the creation of insecticides, vaccines, and protein engineering scaffolds.
Structure
The complete covalent structure of several such toxins has been deduced: they comprise around 66 amino acid residues forming a three-stranded anti-parallel beta sheet over which lies an alpha helix of approximately three turns. Four disulfide bridges cross-link the structure of the long-chain toxins, whereas the short toxins contain only three. BmKAEP, an anti-epilepsy peptide isolated from the venom of the Manchurian scorpion, shows similarity to both scorpion neurotoxins and anti-insect toxins.
Function
The toxin's molecular function is to modulate ion channels. Na+ channel toxins can be divided into two groups (alpha and beta) based on their functional effects. Beta (β) toxins shift the voltage dependence of activation to more negative potentials, making the channel more likely to open at membrane potentials where activation would normally not occur. Alpha (α) toxins inhibit the fast inactivation mechanism, prolonging the Na+ current through the channel. The toxins are used in insecticides, vaccines, and protein engineering scaffolds. They are also used in cancer patients: fluorescent scorpion toxin injected into cancerous tissue reveals tumor boundaries. Scorpion toxin genes are also used to kill insect pests by creating a hypervirulent fungus in the insect through gene insertion.
Subfamilies
Neurotoxin
References
External links
Scorpion short toxins in PROSITE
Science News: Scorpion Toxin Tells an Evolutionary Tale
Protein toxins
Peripheral membrane proteins | Scorpion toxin | [
"Chemistry"
] | 823 | [
"Protein toxins",
"Toxins by chemical classification"
] |
14,848,938 | https://en.wikipedia.org/wiki/Drugs%20secreted%20in%20the%20kidney | This is a table of drugs that are secreted in the kidney.
Acidic medications are, because of pH partition, secreted to a higher extent when the urine is basic. In the same way, basic medications are secreted to a higher extent when the urine is acidic.
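The pH-partition effect can be made concrete with the Henderson-Hasselbalch relationship: the ionized fraction of a weak acid (which cannot be passively reabsorbed and is therefore more readily excreted) rises as urine pH rises. The short sketch below is illustrative only; the example pKa is a typical textbook figure for aspirin, not data from this article.

```python
def ionized_fraction(pka, ph, acidic=True):
    """Fraction of a weak acid (or base) in its ionized form at a given pH,
    from the Henderson-Hasselbalch equation."""
    if acidic:
        ratio = 10 ** (ph - pka)    # [A-]/[HA]
    else:
        ratio = 10 ** (pka - ph)    # [BH+]/[B]
    return ratio / (1 + ratio)

# A weak acid such as aspirin (pKa about 3.5, an assumed textbook value):
for urine_ph in (5.0, 6.5, 8.0):
    frac_ionized = ionized_fraction(3.5, urine_ph, acidic=True)
    print(f"urine pH {urine_ph}: {frac_ionized:.4%} ionized "
          f"({1 - frac_ionized:.4%} un-ionized and reabsorbable)")
```

The un-ionized, reabsorbable fraction falls by roughly a factor of a thousand across this pH range, which is why alkalinizing the urine increases the renal excretion of weak acids.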
References
Pharmacokinetics | Drugs secreted in the kidney | [
"Chemistry"
] | 62 | [
"Pharmacology",
"Pharmacokinetics"
] |
5,744,837 | https://en.wikipedia.org/wiki/Friction%20of%20distance | Friction of distance is a core principle of geography that states that movement incurs some form of cost, in the form of physical effort, energy, time, and/or the expenditure of other resources, and that these costs are proportional to the distance traveled. This cost is thus a resistance against movement, analogous (but not directly related) to the effect of friction against movement in classical mechanics. The subsequent preference for minimizing distance and its cost underlies a vast array of geographic patterns from economic agglomeration to wildlife migration, as well as many of the theories and techniques of spatial analysis, such as Tobler's first law of geography, network routing, and cost distance analysis. To a large degree, friction of distance is the primary reason why geography is relevant to many aspects of the world, although its importance (and perhaps the importance of geography) has been decreasing with the development of transportation and communication technologies.
History
It is not known who first coined the term "friction of distance," but the effect of distance-based costs on geographic activity and geographic patterns has been a core element of academic geography since its initial rise in the 19th Century. von Thünen's isolated state model of exurban land use (1826), possibly the earliest geographic theory, directly incorporated the cost of transportation of different agricultural products as one of the determinants for how far from a town each type of goods could be produced profitably. The industrial location theory of Alfred Weber (1909) and the central place theory of Walter Christaller (1933) were also basically optimizations of space to minimize travel costs.
By the 1920s, social scientists began to incorporate principles of physics (more precisely, some of its mathematical formalizations), such as gravity, specifically the inverse square law found in Newton's law of universal gravitation. Geographers quickly identified a number of situations in which the interaction between places, whether migration between cities or the distribution of residents willing to patronize a shop, exhibited this distance decay due to the advantages of minimizing distance traveled. Gravity models and other distance optimization models became widespread during the quantitative revolution of the 1950s and the subsequent rise of spatial analysis. Gerald Carrothers (1956) was one of the first to explicitly use the analogy of "friction" to conceptualize the effect of distance, suggesting that these distance optimizations needed to acknowledge that the effect varies according to localized factors. Ian McHarg, as published in Design with Nature (1969), was among those who developed the multifaceted nature of distance costs, although he did not initially employ mathematical or computational methods to optimize them.
In the era of geographic information systems, starting in the 1970s, many of the existing proximity models and new algorithms were automated as analysis tools, making them significantly easier to use by a wider set of professionals. These tools have tended to focus on problems that could be solved deterministically, such as buffers, cost distance analysis, interpolation and network routing. Other problems that apply the friction of distance are much more difficult (i.e., NP-hard), such as the traveling salesman problem and cluster analysis, and automated tools to solve them (usually using heuristic algorithms such as k-means clustering) are less widely available, or only recently available, in GIS software.
Distance Costs
As an illustration, picture a hiker standing on the side of an isolated wooded mountain, who wishes to travel to the other side of the mountain. There are essentially an infinite number of paths she could take to get there. Traveling directly over the mountain peak is "expensive," in that every ten meters spent climbing requires significant effort. Traveling ten meters cross country through the woods requires significantly more time and effort than traveling ten meters along a developed trail or through open meadow. Taking a level route along a road going around the mountain has a much lower cost (in both effort and time) for every ten meters, but the total cost accumulates over a much longer distance. In each case, the amount of time and/or effort required to travel ten meters is a measurement of the friction of distance. Determining the optimal route requires balancing these costs, and can be solved using the technique of cost distance analysis.
In another, very common example, a person wants to drive from his home to the nearest hospital. Of the many (but finite) possible routes through the road network, the one with the shortest distance passes through residential neighborhoods with low speed limits and frequent stops. An alternative route follows a bypass highway around the neighborhoods, having a significantly longer distance, with much higher speed limits and infrequent stops. Thus, this alternative has a much lower unit friction of distance (in this case, time), but it accumulates over a greater distance, requiring calculations to determine the optimal (taking the least total travel time), perhaps using the network analysis algorithms commonly found in web maps such as Google Maps.
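The hospital-route example is a shortest-path problem over edge costs (travel times) rather than raw distances. The following sketch runs Dijkstra's algorithm on a tiny invented road graph; the node names and minute values are placeholders, not data from any real network.

```python
import heapq

def dijkstra(graph, start):
    """Minimum accumulated travel cost from start to every reachable node."""
    best = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > best.get(node, float("inf")):
            continue                       # stale queue entry, already improved
        for neighbour, edge_cost in graph[node]:
            new_cost = cost + edge_cost
            if new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return best

# Edge weights are travel times in minutes (all values invented).
road_graph = {
    "home":          [("neighbourhood", 4), ("bypass_ramp", 3)],
    "neighbourhood": [("home", 4), ("hospital", 12)],   # short but slow streets
    "bypass_ramp":   [("home", 3), ("bypass_exit", 6)], # longer but fast highway
    "bypass_exit":   [("bypass_ramp", 6), ("hospital", 2)],
    "hospital":      [],
}
times = dijkstra(road_graph, "home")
print(f"fastest route to hospital: {times['hospital']} minutes")  # 11.0 via the bypass
```

The same algorithm underlies the network routing in web maps mentioned above; only the cost model (time, fuel, tolls) changes.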
The costs that are proportional to distance can take a number of forms, each of which may or may not be relevant in a given geographic situation:
Travel cost, the resources required to move through space. This is most commonly time, energy, or fuel consumption, but may also include more subjective costs such as nuisance.
Traffic cost, the impedance resulting from the aggregate volume of travelers exceeding the optimum capacity of the space (usually a linear network in this case).
Construction cost, the resources required to build the infrastructure that makes travel through the space possible, such as roads, pipes, and cables.
Environmental impacts, the negative effects on the natural or human environment caused by the infrastructure or the travel along it. For example, one would want to minimize the length of residential neighborhood or wetland destroyed to build a highway.
Some of these costs are easily quantifiable and measurable, such as transit time, fuel consumption, and construction costs, thus naturally lending themselves to optimization algorithms. That said, there may be a significant amount of uncertainty in predicting them due to variability over time (e.g., travel time through a road network depending on changing traffic volume) or variability in individual situations (e.g., how fast a person wishes to drive). Other costs are much more difficult to measure due to their qualitative or subjective nature, such as political protest or ecological impact; these typically require the creation of "pseudo-measures" in the form of indices or scales to operationalize.
All of these costs are fields in that they are spatially intensive (a "density" of cost per unit distance) and vary over space. The cost field (often called a cost surface) may be a continuous, smooth function or may have abrupt changes. This variability of cost occurs both in unconstrained (two- or three-dimensional) space, as well as in constrained networks, such as roads and cable telecommunications.
Applications
A large number of geographic theories, spatial analysis techniques, and GIS applications are directly based on the practical effects of friction of distance:
Tobler's first law of geography, formalized as spatial autocorrelation, states that nearby locations are more likely to be similar in many aspects than distant locations, typically as the result of a history of greater interactions between them.
Gravity models, distance decay and other models of spatial interaction are based on the tendency of the volume of interaction between two locations to decrease as the distance between them increases due to the friction of distance, often in a pattern that is analogous (mathematically, not physically) to the inverse-square laws of physics, such as those governing illuminance and gravity; a minimal computational sketch of this kind of model appears after this list.
Economic agglomeration is the tendency of institutions that frequently interact with each other to move close together in physical space, such as the concentration of business services (advertising, finance, etc.) in large cities to be near corporate headquarters.
Location theory includes a number of theories and techniques for determining the optimal location to site a particular activity, based on minimizing travel costs. Notable examples include the classical early 20th Century theories of Johann Heinrich von Thünen, Walter Christaller, and Alfred Weber, and GIS-era algorithms for Location-allocation.
Network analysis includes a number of problems and techniques for modeling travel constrained to a linear network or graph, such as roads, public utilities, or streams. Many of these are optimization problems to minimize travel cost, such as the ubiquitous Dijkstra's algorithm to find the minimal cost path between two locations.
Cost distance analysis, a series of algorithms for finding minimal-cost paths through an unconstrained space in which cost varies as a field.
Migration of humans and animals is often seen as the result of balancing the advantages of remaining stationary (due to the friction of distance) with "push/pull" factors that encourage one to leave one location or to move to another location.
Spatial diffusion is the gradual spread of culture, ideas, and institutions across space over time, in which the desirability of one place adopting the traits of a separate place overcome the friction of distance.
Time geography explores how human activity is affected by the constraints of movement, especially temporal costs.
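As referenced in the gravity-model item above, spatial-interaction models of this family typically take the form I_ij = k * P_i * P_j / d_ij^beta, where interaction volume falls off with distance. The sketch below evaluates this for a pair of invented places; the constant k, the exponent, the populations, and the distances are all illustrative assumptions.

```python
def gravity_interaction(pop_a, pop_b, distance_km, k=1.0, beta=2.0):
    """Predicted interaction volume between two places under a simple
    gravity model; k and beta are illustrative calibration parameters."""
    return k * pop_a * pop_b / distance_km ** beta

# Two invented city pairs with identical populations but different separations.
near = gravity_interaction(500_000, 200_000, distance_km=50)
far = gravity_interaction(500_000, 200_000, distance_km=200)
print(f"predicted interaction at 50 km is {near / far:.0f}x that at 200 km")  # 16x for beta = 2
```

In practice the exponent beta is fitted to observed flows (migration, commuting, trade) rather than fixed at 2, which is one way the "friction" of a particular kind of movement is measured empirically.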
Time-space Convergence
Historically, the friction of distance was very high for most types of movement, making long-distance movement and interaction relatively slow and rare (but not non-existent). The result was a strongly localized human geography, manifested in aspects as varied as language and economy. One of the most profound effects of the technological advances since 1800, including the railroad, the automobile, and the telephone, has been to drastically reduce the costs of moving people, goods, and information over long distances. This led to widespread diffusion and integration, ultimately resulting in many of the aspects of globalization. The geographic effect of this diminishing friction of distance is called time-space convergence or cost-space convergence.
Of these technologies, telecommunications, especially the Internet, has perhaps had the most profound effect. Although there are still distance-based costs of transmitting information, such as the laying of cable and the generation of electromagnetic signal energy (traditionally manifesting in ways such as long-distance telephone charges), these are now so small for any meaningful unit of information that they are no longer managed in a distance-based form, but are bundled into fixed (not based on distance) service costs. For example, some portion of the fee for mobile telephone service covers the higher costs of long-distance service, but the customer does not see it, and thus does not make communication decisions based on distance. The rise of free shipping has similar causes and effects on retail trade.
It has been argued that the virtual elimination of the friction of distance in many aspects of society has resulted in the "death of Geography," in which relative location is no longer relevant to many tasks in which it formerly played a crucial role. It is now possible to conduct many interactions over global distances almost as easily as over local distances, including retail trade, business-to-business services, and some types of remote work. Thus, these services could be theoretically provided from anywhere with equal cost. The COVID-19 pandemic has tested and accelerated many of these trends.
Conversely, others have seen a strengthening in the geographic effects of other aspects of life, or perhaps the increasing focus on them as traditional distance-based aspects have become less relevant. This includes the lifestyle amenities of a place, such as local natural landscapes or urban nightlife that must be experienced in person (thus requiring physical travel and thus entailing the friction of distance). Also, many people prefer in-person interactions that could technically be conducted remotely, such as business meetings, education, tourism, and shopping, which should make distance-based effects relevant for the foreseeable future. The contrasting trends of "frictional" and "frictionless" factors have necessitated a more nuanced analysis of geography than the traditional blanket statements of location always mattering, or the recent claims that location does not matter at all.
References
Human migration
International factor movements
Economic geography
Anthropology
Distance | Friction of distance | [
"Physics",
"Mathematics"
] | 2,434 | [
"Distance",
"Physical quantities",
"Quantity",
"Size",
"Space",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
5,745,135 | https://en.wikipedia.org/wiki/TGF%20beta%201 | Transforming growth factor beta 1 or TGF-β1 is a polypeptide member of the transforming growth factor beta superfamily of cytokines. It is a secreted protein that performs many cellular functions, including the control of cell growth, cell proliferation, cell differentiation, and apoptosis. In humans, TGF-β1 is encoded by the TGFB1 gene.
Function
TGF-β is a multifunctional set of peptides that controls proliferation, differentiation, and other functions in many cell types. TGF-β acts synergistically with transforming growth factor-alpha (TGF-α) in inducing transformation. It also acts as a negative autocrine growth factor. Dysregulation of TGF-β activation and signaling may result in apoptosis. Many cells synthesize TGF-β and almost all of them have specific receptors for this peptide. TGF-β1, TGF-β2, and TGF-β3 all function through the same receptor signaling systems.
TGF-β1 was first identified in human platelets as a protein with a molecular mass of 25 kilodaltons with a potential role in wound healing. It was later characterized as a large protein precursor (containing 390 amino acids) that was proteolytically processed to produce a mature peptide of 112 amino acids.
TGF-β1 plays an important role in controlling the immune system, and shows different activities on different types of cell, or cells at different developmental stages. Most immune cells (or leukocytes) secrete TGF-β1.
T cells
Some T cells (e.g. regulatory T cells) release TGF-β1 to inhibit the actions of other T cells. Specifically, TGF-β1 prevents interleukin (IL)-1- and interleukin-2-dependent proliferation in activated T cells, as well as the activation of quiescent helper T cells and cytotoxic T cells. Similarly, TGF-β1 can inhibit the secretion and activity of many other cytokines, including interferon-γ, tumor necrosis factor-alpha (TNF-α), and various interleukins. It can also decrease the expression levels of cytokine receptors, such as the IL-2 receptor, to down-regulate the activity of immune cells. However, TGF-β1 can also increase the expression of certain cytokines in T cells and promote their proliferation, particularly if the cells are immature.
B cells
TGF-β1 has similar effects on B cells that also vary according to the differentiation state of the cell. It inhibits proliferation, stimulates apoptosis of B cells, and controls the expression of antibody, transferrin and MHC class II proteins on immature and mature B cells.
Myeloid cells
The effects of TGF-β1 on macrophages and monocytes are predominantly suppressive; this cytokine can inhibit the proliferation of these cells and prevent their production of reactive oxygen (e.g. superoxide (O2−)) and nitrogen (e.g. nitric oxide (NO)) intermediates. However, as with other cell types, TGF-β1 can also have the opposite effect on cells of myeloid origin. For example, TGF-β1 acts as a chemoattractant, directing an immune response to certain pathogens; likewise, macrophages and monocytes respond to low levels of TGF-β1 in a chemotactic manner. Furthermore, the expression of monocytic cytokines (such as interleukin (IL)-1α, IL-1β, and TNF-α) and the phagocytic activity of macrophages can be increased by the action of TGF-β1.
TGF-β1 reduces the efficacy of the MHC II in astrocytes and dendritic cells, which in turn decreases the activation of appropriate helper T cell populations.
Interactions
TGF beta 1 has been shown to interact with:
Decorin,
EIF3I
LTBP1,
TGF beta receptor 1, and
YWHAE.
References
Further reading
External links
Proteins
TGFβ domain | TGF beta 1 | [
"Chemistry"
] | 873 | [
"Proteins",
"Biomolecules by chemical classification",
"Molecular biology"
] |
5,745,198 | https://en.wikipedia.org/wiki/CA-group | In mathematics, in the realm of group theory, a group is said to be a CA-group or centralizer abelian group if the centralizer of any nonidentity element is an abelian subgroup. Finite CA-groups are of historical importance as an early example of the type of classifications that would be used in the Feit–Thompson theorem and the classification of finite simple groups. Several important infinite groups are CA-groups, such as free groups, Tarski monsters, and some Burnside groups, and the locally finite CA-groups have been classified explicitly. CA-groups are also called commutative-transitive groups (or CT-groups for short) because commutativity is a transitive relation amongst the non-identity elements of a group if and only if the group is a CA-group.
History
Locally finite CA-groups were classified by several mathematicians from 1925 to 1998. First, finite CA-groups were shown to be simple or solvable. Then, by the Brauer–Suzuki–Wall theorem, finite CA-groups of even order were shown to be Frobenius groups, abelian groups, or two-dimensional projective special linear groups over a finite field of even order, PSL(2, 2^f) for f ≥ 2. Finally, finite CA-groups of odd order were shown to be Frobenius groups or abelian groups, and so in particular are never non-abelian simple.
CA-groups were important in the context of the classification of finite simple groups. Michio Suzuki showed that every finite, simple, non-abelian CA-group is of even order. This result was first extended to the Feit–Hall–Thompson theorem, showing that finite, simple, non-abelian CN-groups have even order, and then to the Feit–Thompson theorem, which states that every finite, simple, non-abelian group is of even order. A textbook exposition of the classification of finite CA-groups exists, and a more detailed description of the Frobenius groups appearing shows that a finite, solvable CA-group is a semidirect product of an abelian group and a fixed-point-free automorphism, and that conversely every such semidirect product is a finite, solvable CA-group. Wu also extended the classification of Suzuki et al. to locally finite groups.
Examples
Every abelian group is a CA-group, and a group with a non-trivial center is a CA-group if and only if it is abelian. The finite CA-groups are classified: the solvable ones are semidirect products of abelian groups by cyclic groups such that every non-trivial element acts fixed-point-freely, and they include groups such as the dihedral groups of order 4k + 2 and the alternating group on 4 points, of order 12; the nonsolvable ones are all simple and are the two-dimensional projective special linear groups PSL(2, 2^n) for n ≥ 2. Infinite CA-groups include free groups, PSL(2, R), and Burnside groups of large prime exponent. Some more recent results in the infinite case include a classification of locally finite CA-groups. Wu also observes that Tarski monsters are obvious examples of infinite simple CA-groups.
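As a concrete, brute-force illustration of the definition (not part of the cited classification), the following sketch checks whether the centralizer of every non-identity element of a small permutation group is abelian. It confirms that S3 (the dihedral group of order 6) and the alternating group A4 are CA-groups, while S4 is not.

```python
from itertools import permutations

def compose(p, q):
    """Composition p after q of permutations stored as tuples (i -> p[i])."""
    return tuple(p[q[i]] for i in range(len(p)))

def symmetric_group(n):
    return list(permutations(range(n)))

def alternating_group(n):
    def sign(p):
        # Parity of the number of inversions gives the sign of the permutation.
        inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
        return -1 if inversions % 2 else 1
    return [p for p in permutations(range(n)) if sign(p) == 1]

def is_ca_group(elements):
    """True if the centralizer of every non-identity element is abelian."""
    identity = tuple(range(len(elements[0])))
    for g in elements:
        if g == identity:
            continue
        centralizer = [x for x in elements if compose(x, g) == compose(g, x)]
        for a in centralizer:
            for b in centralizer:
                if compose(a, b) != compose(b, a):
                    return False
    return True

print(is_ca_group(symmetric_group(3)))    # True: S3, the dihedral group of order 6
print(is_ca_group(alternating_group(4)))  # True: A4, of order 12
print(is_ca_group(symmetric_group(4)))    # False: S4 is not a CA-group
```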
Works cited
Properties of groups | CA-group | [
"Mathematics"
] | 702 | [
"Mathematical structures",
"Algebraic structures",
"Properties of groups"
] |
5,745,210 | https://en.wikipedia.org/wiki/CN-group | In mathematics, in the area of algebra known as group theory, a more than fifty-year effort was made to answer a conjecture of Burnside: are all groups of odd order solvable? Progress was made by first showing that CA-groups, groups in which the centralizer of a non-identity element is abelian, of odd order are solvable. Further progress was made by showing that CN-groups, groups in which the centralizer of a non-identity element is nilpotent, of odd order are solvable. The complete solution was given by the Feit–Thompson theorem, but further work on CN-groups followed, giving more detailed information about the structure of these groups. For instance, a non-solvable CN-group G is such that its largest solvable normal subgroup O∞(G) is a 2-group, and the quotient is a group of even order.
Examples
Solvable CN groups include
Nilpotent groups
Frobenius groups whose Frobenius complement is nilpotent
3-step groups, such as the symmetric group S4
Non-solvable CN groups include:
The Suzuki simple groups
The groups PSL2(F2n) for n>1
The group PSL2(Fp) for p>3 a Fermat prime or Mersenne prime.
The group PSL2(F9)
The group PSL3(F4)
References
Finite groups
Group theory
Properties of groups | CN-group | [
"Mathematics"
] | 293 | [
"Mathematical structures",
"Finite groups",
"Properties of groups",
"Group theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
5,745,363 | https://en.wikipedia.org/wiki/Evolutionary%20pressure | Evolutionary pressure, selective pressure or selection pressure is exerted by factors that reduce or increase reproductive success in a portion of a population, driving natural selection. It is a quantitative description of the amount of change occurring in processes investigated by evolutionary biology, but the formal concept is often extended to other areas of research.
In population genetics, selective pressure is usually expressed as a selection coefficient.
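For a quantitative feel for what a selection coefficient means (a generic population-genetics sketch, not material from this article), the code below iterates the standard haploid selection recursion p' = p(1 + s) / (1 + s p), where s is the relative fitness advantage of the favoured type and p is its frequency.

```python
def allele_frequency_trajectory(p0, s, generations):
    """Frequency of the favoured allele under haploid selection with
    selection coefficient s (relative fitness 1 + s versus 1)."""
    p, trajectory = p0, [p0]
    for _ in range(generations):
        p = p * (1 + s) / (1 + s * p)   # new frequency after one generation
        trajectory.append(p)
    return trajectory

# A rare beneficial allele (1% starting frequency) with a 5% fitness advantage.
trajectory = allele_frequency_trajectory(p0=0.01, s=0.05, generations=300)
for generation in (0, 50, 100, 200, 300):
    print(f"generation {generation:3d}: frequency {trajectory[generation]:.3f}")
```

Even a modest selection coefficient of 0.05 carries the allele from rarity to near fixation within a few hundred generations, which is why the drug-resistance and pesticide-resistance examples below can unfold so quickly.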
Amino acids selective pressure
It has been shown that placing an amino acid biosynthesis gene such as HIS4 under amino acid selective pressure in yeast causes enhanced expression of adjacent genes, which is due to the transcriptional co-regulation of two adjacent genes in eukaryotes.
Antibiotic resistance
Drug resistance in bacteria is an example of an outcome of natural selection. When a drug is used on a species of bacteria, those that cannot resist die and do not produce offspring, while those that survive potentially pass on the resistance gene to the next generation (vertical gene transmission). The resistance gene can also be passed on to one bacterium by another of a different species (horizontal gene transmission). Because of this, the drug resistance increases over generations. For example, in hospitals, environments are created where pathogens such as C. difficile have developed a resistance to antibiotics. Antibiotic resistance is made worse by the misuse of antibiotics. Antibiotic resistance is encouraged when antibiotics are used to treat non-bacterial diseases, and when antibiotics are not used for the prescribed amount of time or in the prescribed dose. Antibiotic resistance may arise out of standing genetic variation in a population or de novo mutations in the population. Either pathway could lead to antibiotic resistance, which may be a form of evolutionary rescue.
Nosocomial infections
Clostridioides difficile, gram-positive bacteria species that inhabits the gut of mammals, exemplifies one type of bacteria that is a major cause of death by nosocomial infections.
When symbiotic gut flora populations are disrupted (e.g., by antibiotics), one becomes more vulnerable to pathogens. The rapid evolution of antibiotic resistance places an enormous selective pressure on the advantageous resistance alleles passed down to future generations. The Red Queen hypothesis shows that the evolutionary arms race between pathogenic bacteria and humans is a constant battle for evolutionary advantages in outcompeting each other. The evolutionary arms race between the rapidly evolving virulence factors of the bacteria and the treatment practices of modern medicine requires evolutionary biologists to understand the mechanisms of resistance in these pathogenic bacteria, especially considering the growing number of infected hospitalized patients. The evolved virulence factors pose a threat to patients in hospitals, who are immunocompromised from illness or antibiotic treatment. Virulence factors are the characteristics that the evolved bacteria have developed to increase pathogenicity. One of the virulence factors of C. difficile that largely constitutes its resistance is its toxins: enterotoxin TcdA and cytotoxin TcdB. The bacterium also produces spores that are difficult to inactivate and remove from the environment. This is especially true in hospitals, where an infected patient's room may contain spores for up to 20 weeks. Combating the threat of the rapid spread of CDIs is therefore dependent on hospital sanitation practices that remove spores from the environment. A study published in the American Journal of Gastroenterology found that, to control the spread of CDIs, glove use, hand hygiene, disposable thermometers, and disinfection of the environment are necessary practices in health facilities. The virulence of this pathogen is remarkable, and controlling CDI outbreaks may require a radical change in the sanitation approaches used in hospitals.
Natural selection in humans
The malaria parasite can exert a selective pressure on human populations. This pressure has led to natural selection for erythrocytes carrying the sickle cell hemoglobin gene mutation (Hb S)—causing sickle cell anaemia—in areas where malaria is a major health concern, because the condition grants some resistance to this infectious disease.
Resistance to herbicides and pesticides
Just as with the development of antibiotic resistance in bacteria, resistance to pesticides and herbicides has begun to appear with commonly used agricultural chemicals. For example:
In the US, studies have shown that fruit flies that infest orange groves were becoming resistant to malathion, a pesticide used to kill them.
In Hawaii and Japan, the diamondback moth developed a resistance to Bacillus thuringiensis, which is used in several commercial crops including Bt corn, about three years after it began to be used heavily.
In England, rats in certain areas have developed such a strong resistance to rat poison that they can consume up to five times as much of it as normal rats without dying.
DDT is no longer effective in controlling mosquitoes that transmit malaria in some places, a fact that contributed to a resurgence of the disease.
In the southern United States, the weed Amaranthus palmeri, which interferes with production of cotton, has developed widespread resistance to the herbicide glyphosate.
In the Baltic Sea, decreases in salinity have encouraged the emergence of a new species of brown seaweed, Fucus radicans.
Humans exerting evolutionary pressure
Human activity can lead to unintended changes in the environment. Such activity can have a negative effect on a given population, causing many individuals that are not adapted to the new pressure to die. The individuals that are better adapted to the new pressure survive and reproduce at a higher rate than those at a disadvantage. This occurs over many generations until the population as a whole is better adapted to the pressure. This is natural selection at work, but the pressure comes from man-made activity such as building roads or hunting, as seen in the examples of cliff swallows and elk below. However, not all human activity that causes an evolutionary pressure does so unintentionally, as demonstrated by dog domestication and the subsequent selective breeding that resulted in the various breeds known today.
Rattlesnakes
In more heavily (human) populated and trafficked areas, reports have been increasing of rattlesnakes that do not rattle. This phenomenon is commonly attributed to selective pressure by humans, who often kill the snakes when they are discovered. Non-rattling snakes are more likely to go unnoticed, and so survive to leave offspring that, like themselves, are less likely to rattle.
Cliff swallows
Populations of cliff swallows in Nebraska have displayed morphological changes in their wings after many years of living next to roads. Over more than 30 years of data collection, researchers noticed a decline in the wingspan of living swallow populations, along with a decrease in the number of cliff swallows killed by passing cars. The cliff swallows that were killed by passing cars had larger wingspans than the population as a whole. Confounding factors such as road usage, car size, and population size were shown not to account for these changes.
Elk
Evolutionary pressure imposed by humans is also seen in elk populations. These studies looked not at morphological differences but at behavioral differences. Faster and more mobile male elk were shown to be more likely to fall prey to hunters: hunting creates an environment in which more active animals are more likely to succumb to predation than less active ones. Female elk that survived past two years decreased their activity with each passing year, leaving shyer females that were more likely to survive. Female elk in a separate study also showed behavioral differences, with older females displaying the timid behavior that one would expect from this selection.
Dog domestication
Since the domestication of dogs, they have evolved alongside humans due to pressure from humans and the environment. This began with humans and wolves sharing the same areas, and the pressure to coexist eventually led to domestication. Evolutionary pressure from humans produced many different breeds that paralleled the needs of the time, whether the need was for protecting livestock or assisting in the hunt. Hunting and herding were among the first reasons for humans artificially selecting traits they deemed beneficial. This selective breeding did not stop there, but extended to humans selecting for certain traits deemed desirable in their domesticated dogs, such as size and color, even if those traits are not necessarily beneficial to the human in a tangible way. An unintended consequence of this selection is that domesticated dogs also tend to have heritable diseases depending on their breed.
See also
Notes
Evolutionary biology | Evolutionary pressure | [
"Biology"
] | 1,724 | [
"Evolutionary biology"
] |
5,745,790 | https://en.wikipedia.org/wiki/Conway%20puzzle | Conway's puzzle, or blocks-in-a-box, is a packing problem using rectangular blocks, named after its inventor, mathematician John Conway. It calls for packing thirteen 1 × 2 × 4 blocks, one 2 × 2 × 2 block, one 1 × 2 × 2 block, and three 1 × 1 × 3 blocks into a 5 × 5 × 5 box.
Solution
The solution of the Conway puzzle is straightforward once one realizes, based on parity considerations, that the three 1 × 1 × 3 blocks need to be placed so that precisely one of them appears in each 5 × 5 × 1 slice of the cube: each such slice contains an odd number of unit cells (25), whereas every other piece covers an even number of cells in any slice it intersects. This is analogous to the insight that facilitates the solution of the simpler Slothouber–Graatsma puzzle.
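The parity consideration can be checked mechanically. The Python sketch below is illustrative only (the piece list and helper function are ad hoc, not part of the puzzle's standard presentation); for each piece it lists the possible numbers of unit cells the piece can contribute to a single 5 × 5 × 1 slice that it intersects, confirming that only the 1 × 1 × 3 blocks can contribute an odd number.

```python
from itertools import permutations

# Piece dimensions used in Conway's puzzle.
pieces = {"1x2x4": (1, 2, 4), "2x2x2": (2, 2, 2),
          "1x2x2": (1, 2, 2), "1x1x3": (1, 1, 3)}

def slice_cell_counts(dims):
    """Possible numbers of unit cells a rectangular piece contributes to one
    5x5x1 slice (perpendicular to a fixed axis) that it intersects: for each
    axis-aligned orientation this is the product of the two cross dimensions."""
    return {o[1] * o[2] for o in set(permutations(dims))}

for name, dims in pieces.items():
    counts = slice_cell_counts(dims)
    parities = {"even" if c % 2 == 0 else "odd" for c in counts}
    print(name, sorted(counts), parities)

# Each 5x5x1 slice holds 25 cells (odd). All pieces except the 1x1x3 blocks
# contribute an even count, so the three 1x1x3 blocks must together contribute
# an odd number of cells to every one of the fifteen slices.
```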
See also
Soma cube
References
External links
The Conway puzzle in Stewart Coffin's "The Puzzling World of Polyhedral Dissections"
Packing problems
Tiling puzzles
Mechanical puzzle cubes
John Horton Conway | Conway puzzle | [
"Physics",
"Mathematics"
] | 192 | [
"Packing problems",
"Tessellation",
"Recreational mathematics",
"Tiling puzzles",
"Mathematical problems",
"Symmetry"
] |
5,746,828 | https://en.wikipedia.org/wiki/Follicle-stimulating%20hormone%20receptor | The follicle-stimulating hormone receptor or FSH receptor (FSHR) is a transmembrane receptor that interacts with the follicle-stimulating hormone (FSH) and represents a G protein-coupled receptor (GPCR). Its activation is necessary for the hormonal functioning of FSH. FSHRs are found in the ovary, testis, and uterus.
FSHR gene
The gene for the FSHR is found on chromosome 2 p21 in humans. The gene sequence of the FSHR consists of about 2,080 nucleotides.
Receptor structure
The FSHR consists of 695 amino acids and has a molecular mass of about 76 kDa. Like other GPCRs, the FSH-receptor possesses seven membrane-spanning domains or transmembrane helices.
The extracellular domain of the receptor contains 11 leucine-rich repeats and is glycosylated. It has two subdomains, a hormone-binding subdomain followed by a signal-specificity subdomain. The hormone-binding subdomain is responsible for the high-affinity hormone binding, and the signal-specificity subdomain, containing a sulfated tyrosine at position 335 (sTyr) in a hinge loop, is required for the hormone activity.
The transmembrane domain contains two highly conserved cysteine residues that build disulfide bonds to stabilize the receptor structure. A highly conserved Asp-Arg-Tyr triplet motif is present in GPCR family members in general and may be of importance in transmitting the signal. In the FSHR and its closely related glycoprotein hormone receptor family members (LHR and TSHR), this conserved triplet occurs as the variant Glu-Arg-Trp sequence.
The C-terminal domain is intracellular and brief, rich in serine and threonine residues for possible phosphorylation.
Ligand binding and signal transduction
Upon initial binding to the LRR region of FSHR, FSH reshapes its conformation to form a new pocket. FSHR then inserts its sulfotyrosine from the hinge loop into the pockets and activates the 7-helical transmembrane domain. This event leads to a transduction of the signal that activates the Gs protein that is bound to the receptor internally. With FSH attached, the receptor shifts conformation and, thus, mechanically activates the G protein, which detaches from the receptor and activates the cAMP system.
It is believed that a receptor molecule exists in a conformational equilibrium between active and inactive states. The binding of FSH to the receptor shifts the equilibrium between active and inactive receptors. FSH and FSH-agonists shift the equilibrium in favor of active states; FSH antagonists shift the equilibrium in favor of inactive states.
Phosphorylation by cAMP-dependent protein kinases
Cyclic AMP-dependent protein kinases (protein kinase A) are activated by the signal chain coming from the Gs protein (that was activated by the FSH-receptor) via adenylate cyclase and cyclic AMP (cAMP).
These protein kinases are present as tetramers with two regulatory units and two catalytic units. Upon binding of cAMP to the regulatory units, the catalytic units are released and initiate the phosphorylation of proteins, leading to the physiologic action. The cyclic AMP bound to the regulatory dimers is degraded by phosphodiesterase, releasing 5'-AMP. In the cell nucleus, phosphorylated proteins bind DNA through the cyclic AMP response element (CRE), which results in the activation of genes.
The signal is amplified by the involvement of cAMP and the resulting phosphorylation. The process is modified by prostaglandins. Other cellular regulators that participate are the intracellular calcium concentration (modified by phospholipase), nitric oxide, and other growth factors.
The FSH receptor can also activate the extracellular signal-regulated kinases (ERK). In a feedback mechanism, these activated kinases phosphorylate the receptor.
Action
In the ovary, the FSH receptor is necessary for follicular development and expressed on the granulosa cells.
In the male, the FSH receptor has been identified on the Sertoli cells that are critical for spermatogenesis.
The FSHR is expressed during the luteal phase in the secretory endometrium of the uterus.
The FSH receptor is selectively expressed on the surface of the blood vessels of a wide range of malignant tumors.
Receptor regulation
Upregulation
Upregulation refers to the increase in the number of receptor sites on the membrane. Estrogen upregulates FSH receptor sites. In turn, FSH stimulates granulosa cells to produce estrogens. This synergistic activity of estrogen and FSH allows for follicle growth and development in the ovary.
Desensitization
The FSHR becomes desensitized when exposed to FSH for some time. A key reaction of this downregulation is the phosphorylation of the intracellular (or cytoplasmic) receptor domain by protein kinases. This process uncouples the Gs protein from the FSHR. Another way to desensitize is to uncouple the regulatory and catalytic units of the cAMP system.
Downregulation
Downregulation refers to the decrease in the number of receptor sites. This can be accomplished by metabolizing bound FSHR sites. The bound FSH-receptor complex is brought by lateral migration to a "coated pit," where such units are concentrated and then stabilized by a framework of clathrins. A pinched-off coated pit is internalized and degraded by lysosomes. Proteins may be metabolized or the receptor can be recycled.
Modulators
Antibodies to FSHR can interfere with FSHR activity.
FSH abnormalities
Some patients with ovarian hyperstimulation syndrome may have mutations in the gene for FSHR, making them more sensitive to gonadotropin stimulation.
Women with 46,XX gonadal dysgenesis experience primary amenorrhea with hypergonadotropic hypogonadism. In some forms of 46,XX gonadal dysgenesis, abnormalities in the FSH receptor have been reported and are thought to be the cause of the hypogonadism.
Polymorphism may affect FSH receptor populations and lead to poorer responses in infertile women receiving FSH medication for IVF.
Alternative splicing of the FSHR gene may be implicated in subfertility in males.
Ligands
Follicle-stimulating hormone (FSH) is an agonist of the FSHR.
Small-molecule positive allosteric modulators of the FSHR have been developed.
History
Alfred G. Gilman and Martin Rodbell received the 1994 Nobel Prize in Medicine and Physiology for "their discovery of G-proteins and the role of these proteins in signal transduction in cells".
See also
Luteinizing hormone/choriogonadotropin receptor
References
G protein-coupled receptors
Gonadotropin-releasing hormone and gonadotropins
Signal transduction
LRR proteins
Human female endocrine system | Follicle-stimulating hormone receptor | [
"Chemistry",
"Biology"
] | 1,526 | [
"G protein-coupled receptors",
"Neurochemistry",
"Biochemistry",
"Signal transduction"
] |
5,747,450 | https://en.wikipedia.org/wiki/Heronian%20mean | In mathematics, the Heronian mean H of two non-negative real numbers A and B is given by the formula H = (A + √(AB) + B) / 3.
It is named after Hero of Alexandria.
Properties
Just like all means, the Heronian mean is symmetric (it does not depend on the order in which its two arguments are given) and idempotent (the mean of any number with itself is the same number).
The Heronian mean of the numbers A and B is a weighted mean of their arithmetic and geometric means: H = (2/3) · (A + B)/2 + (1/3) · √(AB) = (A + √(AB) + B) / 3.
Therefore, it lies between these two means, and between the two given numbers.
Application in solid geometry
The Heronian mean may be used in finding the volume of a frustum of a pyramid or cone. The volume is equal to the product of the height of the frustum and the Heronian mean of the areas of the opposing parallel faces.
A version of this formula, for square frusta, appears in the Moscow Mathematical Papyrus from Ancient Egyptian mathematics, whose content dates to roughly 1850 BC.
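A minimal sketch of the mean and the frustum-volume formula (illustrative code, not from the article; the function names are arbitrary):

```python
import math

def heronian_mean(a: float, b: float) -> float:
    """Heronian mean of two non-negative numbers: (a + sqrt(a*b) + b) / 3."""
    return (a + math.sqrt(a * b) + b) / 3

def frustum_volume(area_bottom: float, area_top: float, height: float) -> float:
    """Volume of a frustum of a pyramid or cone: the height times the
    Heronian mean of the areas of the two parallel faces."""
    return height * heronian_mean(area_bottom, area_top)

# Example: a square frustum with a 4 x 4 base, a 2 x 2 top and height 3.
print(heronian_mean(16, 4))       # 9.333..., since (16 + 8 + 4) / 3
print(frustum_volume(16, 4, 3))   # 28.0, matching (h/3) * (a**2 + a*b + b**2)
```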
References
Means | Heronian mean | [
"Physics",
"Mathematics"
] | 205 | [
"Means",
"Mathematical analysis",
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
5,748,142 | https://en.wikipedia.org/wiki/CALPUFF |
CALPUFF is an advanced, integrated Lagrangian puff modeling system for the simulation of atmospheric pollution dispersion distributed by the Atmospheric Studies Group at TRC Solutions.
It is maintained by the model developers and distributed by TRC.
The model has been adopted by the United States Environmental Protection Agency (EPA) in its Guideline on Air Quality Models as a preferred model for assessing long range transport of pollutants and their impacts on Federal Class I areas and on a case-by-case basis for certain near-field applications involving complex meteorological conditions.
The integrated modeling system consists of three main components and a set of preprocessing and postprocessing programs. The main components of the modeling system are CALMET (a diagnostic 3-dimensional meteorological model), CALPUFF (an air quality dispersion model), and CALPOST (a postprocessing package). Each of these programs has a graphical user interface (GUI). In addition to these components, there are numerous other processors that may be used to prepare geophysical (land use and terrain) data in many standard formats, meteorological data (surface, upper air, precipitation, and buoy data), and interfaces to other models such as the Penn State/NCAR Mesoscale Model (MM5), the National Centers for Environmental Prediction (NCEP) Eta model and the RAMS meteorological model.
The CALPUFF model is designed to simulate the dispersion of buoyant, puff or continuous point and area pollution sources as well as the dispersion of buoyant, continuous line sources. The model also includes algorithms for handling the effect of downwash by nearby buildings in the path of the pollution plumes.
History
The CALPUFF model was originally developed by the Sigma Research Corporation (SRC) in the late 1980s under contract with the California Air Resources Board (CARB) and it was first issued in about 1990.
The Sigma Research Corporation subsequently became part of Earth Tech, Inc. After the US EPA designated CALPUFF as a preferred model in their Guideline on Air Quality Models, Earth Tech served as the designated distributor of the model.
In April 2006, ownership of the model switched from Earth Tech to the TRC Environmental Corporation. More recently ownership transferred to Exponent, who are currently (December 2015) responsible for maintaining and distributing the model.
See also
Air pollution dispersion terminology
Atmospheric dispersion modeling
List of atmospheric dispersion models
References
Further reading
External links
src.com: Official CALPUFF website — ASG at TRC.
EPA.gov: Preferred and Recommended Models by the U.S. EPA
Air pollution
Atmospheric dispersion modeling
Air pollution in California
Air pollution in the United States | CALPUFF | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 554 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
5,748,599 | https://en.wikipedia.org/wiki/PUFF-PLUME | PUFF-PLUME is a model used to help predict how air pollution disperses in the atmosphere. It is a Gaussian atmospheric transport chemical/radionuclide dispersion model that includes wet and dry deposition, real-time input of meteorological observations and forecasts, dose estimates from inhalation and gamma shine (i.e., radiation), and puff or continuous plume dispersion modes. It was first developed by the Pacific Northwest National Laboratory (PNNL) in the 1970s.
It is the primary model for emergency response use for atmospheric releases at the Savannah River Site of the United States Department of Energy. It is one of a suite of codes for atmospheric releases and is used primarily for first-cut results in emergency situations. (Other codes containing more detailed mathematical and physical models are available for use when a short response time is not the over-riding consideration.)
See also
Bibliography of atmospheric dispersion modeling
Atmospheric dispersion modeling
List of atmospheric dispersion models
Further reading
External links
OFCM Directory of Atmospheric Transport and Diffusion Consequence Assessment Models
General and Specific Characteristics for Model: PUFF-PLUME
Atmospheric dispersion modeling
Savannah River Site | PUFF-PLUME | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 277 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
11,184,941 | https://en.wikipedia.org/wiki/Cotton%E2%80%93Mouton%20effect | In physical optics, the Cotton–Mouton effect is the birefringence in a liquid in the presence of a constant transverse magnetic field. It is a similar but stronger effect than the Voigt effect (in which the medium is a gas). Its electric analog is the Kerr effect.
It was discovered in 1905 by Aimé Cotton and Henri Mouton, working in collaboration and publishing in Comptes rendus hebdomadaires des séances de l'Académie des sciences.
When a linearly polarized wave propagates perpendicularly to a magnetic field (e.g. in a magnetized plasma), it can become elliptically polarized. Because a linearly polarized wave is some combination of in-phase X and O modes, and because the X and O waves propagate with different phase velocities, the emerging beam becomes elliptically polarized. As the waves propagate, the phase difference (δ) between EX and EO increases.
See also
Cotton effect
References
Magneto-optic effects
Liquids | Cotton–Mouton effect | [
"Physics",
"Chemistry",
"Materials_science"
] | 220 | [
"Physical phenomena",
"Phases of matter",
"Electric and magnetic fields in matter",
"Optical phenomena",
"Magneto-optic effects",
"Matter",
"Liquids"
] |
11,184,994 | https://en.wikipedia.org/wiki/Sara%20Mednick | Sara C. Mednick is a sleep researcher at the University of California, Irvine. Her research focuses on the relationship between napping and performance. She is the author of several papers and a mass market book, Take a Nap! Change Your Life. She graduated with her PhD in psychology from Harvard University studying under Ken Nakayama and Robert Stickgold.
Mednick contends that humans have a biological need for an afternoon nap. "There's actually biological dips in our rhythm and in our alertness that seem to go along with the natural state of the way we used to be, probably from way back when we were allowed to nap more regularly," she told Diane Sawyer on Good Morning America.
"There is something very specific about the timing of the nap," she is quoted as saying in The Times (London). "It should be at about 2pm or 3pm. It's the time when most humans and animals experience what is called a post-prandial dip or low ebb. It's a dip in cogno-processing and physiological responses, when a lot of us actually do feel sleepy."
Coffee is an inferior substitute, Mednick believes. "In all of my research, what I found is that when I have people not drink caffeine but take a nap instead, they actually perform much better on a wide range of memory tasks," she told Neal Conan on NPR's Talk of the Nation. A video of her short Science Network lecture on nap research, at the Salk Institute in February 2007, can be viewed online.
Journalist Gregg Easterbrook named Dr. Mednick "2008 Tuesday Morning Quarterback Person of the Year" (although this does not appear to be an official award of any kind), citing her work to improve people's lives through napping.
References
External links
Take a Nap! Change Your Life (Workman, 2006) .
Siesta Time Article in The Economist about the restorative power of mid-day naps on visual performance tests.
Take a Nap, Dr. Mednick's website
Year of birth missing (living people)
Living people
American neuroscientists
American women neuroscientists
Sleep researchers
University of California, Riverside faculty
Harvard Graduate School of Arts and Sciences alumni
21st-century American psychologists
21st-century American women scientists | Sara Mednick | [
"Biology"
] | 476 | [
"Sleep researchers",
"Behavior",
"Sleep"
] |
11,185,249 | https://en.wikipedia.org/wiki/Hydrophobic-polar%20protein%20folding%20model | The hydrophobic-polar protein folding model is a highly simplified model for examining protein folds in space. First proposed by Ken Dill in 1985, it is the best-known type of lattice protein: it stems from the observation that hydrophobic interactions between amino acid residues are the driving force for proteins folding into their native state. All amino acid types are classified as either hydrophobic (H) or polar (P), and the folding of a protein sequence is defined as a self-avoiding walk in a 2D or 3D lattice. The HP model imitates the hydrophobic effect by assigning a negative (favorable) weight to interactions between adjacent, non-covalently bound H residues. Proteins that have minimum energy are assumed to be in their native state.
The HP model can be expressed in both two and three dimensions, generally with square lattices, although triangular lattices have been used as well. It has also been studied on general regular lattices.
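As a rough, illustrative sketch (not part of the model's original presentation; the data representation and function name are ad hoc), the energy of a 2D square-lattice conformation can be scored by counting contacts between H residues that sit on adjacent lattice sites but are not consecutive in the chain, each contributing -1:

```python
def hp_energy(sequence, coords):
    """Energy of an HP-model fold on the 2D square lattice.

    sequence : string of 'H'/'P' residues
    coords   : list of (x, y) lattice points forming a self-avoiding walk
    Each pair of non-consecutive H residues on adjacent sites contributes -1.
    """
    assert len(sequence) == len(coords)
    assert len(set(coords)) == len(coords)  # self-avoiding
    energy = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):  # skip covalently bonded neighbours
            if sequence[i] == 'H' and sequence[j] == 'H':
                (x1, y1), (x2, y2) = coords[i], coords[j]
                if abs(x1 - x2) + abs(y1 - y2) == 1:  # lattice neighbours
                    energy -= 1
    return energy

# A 4-residue chain folded into a unit square: one H-H contact between
# residues 0 and 3, giving energy -1.
print(hp_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -1
```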
Randomized search algorithms are often used to tackle the HP folding problem. This includes stochastic, evolutionary algorithms like the Monte Carlo method, genetic algorithms, and ant colony optimization. While no method has been able to calculate the experimentally determined minimum energetic state for long protein sequences, the most advanced methods today are able to come close.
For some model variants/lattices, it is possible to compute optimal structures (with maximal number of H-H contacts) using constraint programming techniques as e.g. implemented within the CPSP-tools webserver.
Even though the HP model abstracts away many of the details of protein folding, it is still an NP-hard problem on both 2D and 3D square lattices.
A Monte Carlo method, named FRESS, was developed and appears to perform well on HP models.
See also
Protein structure prediction
Lattice proteins
References
External links
CPSP-tools webserver for optimal structure prediction in unrestricted 3D lattices
Protein structure | Hydrophobic-polar protein folding model | [
"Chemistry"
] | 387 | [
"Protein structure",
"Structural biology"
] |
13,739,906 | https://en.wikipedia.org/wiki/Immunosenescence | Immunosenescence is the gradual deterioration of the immune system, brought on by natural age advancement. A 2020 review concluded that the adaptive immune system is affected more than the innate immune system. Immunosenescence involves both the host's capacity to respond to infections and the development of long-term immune memory. Age-associated immune deficiency is found in both long- and short-lived species as a function of their age relative to life expectancy rather than elapsed time.
It has been studied in animal models including mice, marsupials and monkeys. Immunosenescence is a contributory factor to the increased frequency of morbidity and mortality among the elderly. Along with anergy and T-cell exhaustion, immunosenescence belongs among the major immune system dysfunctional states. However, while T-cell anergy is a reversible condition, as of 2020 no techniques for immunosenescence reversal had been developed.
Immunosenescence is not a random deteriorative phenomenon, rather it appears to inversely recapitulate an evolutionary pattern. Most of the parameters affected by immunosenescence appear to be under genetic control. Immunosenescence can be envisaged as the result of the continuous challenge of the unavoidable exposure to a variety of antigens such as viruses and bacteria.
Age-associated decline in immune function
Aging of the immune system is a controversial phenomenon. Senescence refers to replicative senescence from cell biology, which describes the condition when the upper limit of cell divisions (Hayflick limit) has been exceeded, and such cells commit apoptosis or lose their functional properties. Immunosenescence generally means a robust shift in both structural and functional parameters that has a clinically relevant outcome. Thymus involution is probably the most relevant factor responsible for immunosenescence. Thymic involution is common in most mammals; in humans it begins after puberty, as the immunological defense against most novel antigens is necessary mainly during infancy and childhood.
The major characteristic of the immunosenescent phenotype is a shift in T-cell subpopulation distribution. As the thymus involutes, the number of naive T cells (especially CD8+) decreases, and naive T cells homeostatically proliferate into memory T cells as compensation. It is believed that the conversion to the memory phenotype can be accelerated by restimulation of the immune system by persistent pathogens such as CMV and HSV. By age 40, an estimated 50% to 85% of adults have contracted human cytomegalovirus (HCMV). Recurring infections by latent herpes viruses can exhaust the immune system of elderly persons. Consistent, repeated stimulation by such pathogens leads to preferential differentiation toward the T-cell memory phenotype, and a 2020 review reported that the CD8+ T-cell precursors specific for the rarest and least frequently encountered antigens are depleted the most. Such a distribution shift leads to increased susceptibility to non-persistent infections, cancer, autoimmune diseases, cardiovascular health conditions and many others.
T cells are not the only immune cells affected by aging:
Hematopoietic stem cells (HSC), which provide the regulated lifelong supply of leukocyte progenitors that differentiate into specialised immune cells, diminish in their self-renewal capacity. This is due to the accumulation of oxidative DNA damage from aging and cellular metabolic activity, and to telomere shortening.
The number of phagocytes declines in aged hosts, coupled with an intrinsic reduction of bactericidal activity.
Natural killer (NK) cell cytotoxicity and the antigen-presenting function of dendritic cells diminishes with age. The age-associated impairment of dendritic antigen-presenting cells (APCs) translates into a deficiency in cell-mediated immunity and thus, the inability for effector T-lymphocytes to modulate an adaptive immune response.
Humoral immunity declines, caused by a reduction in the population of antibody producing B-cells along with a smaller immunoglobulin diversity and affinity.
In addition to changes in immune responses, the beneficial effects of inflammation devoted to the neutralisation of dangerous and harmful agents early in life and in adulthood become detrimental late in life in a period largely not foreseen by evolution, according to the antagonistic pleiotropy theory of aging. Changes in the lymphoid compartment are not solely responsible for the malfunctioning of the immune system. Although myeloid cell production does not seem to decline with age, macrophages become dysregulated as a consequence of environmental changes.
T-cell biomarkers of age-dependent dysfunction
T cells' functional capacity is most influenced by the effects of aging. Age-related alterations are evident in all T-cell development stages, making them a significant factor in immunosenescence. The decline in T-cell function begins with the progressive involution of the thymus, the organ essential for T-cell maturation. This involution reduces IL-2 production and leads to a reduction and exhaustion of the number of thymocytes (i.e. immature T cells), thus reducing the output of peripheral naïve T cells. Once matured and circulating throughout the peripheral system, T cells undergo deleterious age-dependent changes. This leaves the body practically devoid of virgin T cells, which makes it more prone to a variety of diseases.
shift in the CD4+/CD8+ ratio
the accumulation and clonal expansion of memory and effector T cells
impaired development of CD4+ T follicular helper cells (specialized in facilitating peripheral B cell maturation, and the generation of antibody-producing plasma cells and memory B cells)
deregulation of intracellular signal transduction capabilities
diminished capacity to produce effector lymphokines
shrinkage of antigen-recognition repertoire of T-cell receptor (TcR) diversity
down-regulation of CD28 costimulatory molecules
cytotoxic activity of Natural Killer T cells (NKTs) decreases due to reduction of the expression of cytotoxicity activating receptors (NKp30, NKp46, etc.) and (simultaneously) increase in the expression of the inhibitory (KIR, NKG2C, etc.) receptors of NK cells
reduction of cytotoxic activity due to impaired expression of associated molecules such as IFN-γ, granzyme B or perforin
impaired proliferation in response to antigenic stimulation
accumulation and clonal expansion of memory and effector T cells
hampered immune defences against viral pathogens, especially by cytotoxic CD8+ T cells
changes in cytokine profile, e.g., increased pro-inflammatory cytokines milieu present in the elderly (IL-6)
increased PD-1 expression
glycolysis as a preferential pathway of energetic metabolism - functionally impaired mitochondria produce ROS excessively
presence of T cell-specific biomarkers of senescence (circular RNA100783, micro-RNAs MiR-181a)
Challenges
The elderly frequently present with non-specific signs and symptoms, and clues of focal infection are often absent or obscured by chronic conditions. This complicates diagnosis and treatment.
Vaccination in the elderly
The reduced efficacy of vaccination in the elderly stems from their restricted ability to respond to immunization with novel non-persistent pathogens, and correlates with both CD4:CD8 alterations and impaired dendritic cell function. Therefore, vaccination in earlier life stages seems more likely to be effective, although the duration of the effect varies by pathogen.
Rescue of the advanced-age phenotype
Removal of senescent cells with senolytic compounds has been proposed as a method of enhancing immunity during aging.
Immune system aging in mice can be partly restricted by restoring thymus growth, which can be achieved by transplantation of proliferative thymic epithelial cells from young mice. Metformin has been shown to moderate aging in preclinical studies. Its protective effect is probably caused primarily by altered mitochondrial metabolism, particularly decreased reactive oxygen production, an increased AMP:ATP ratio and a lower NAD+/NADH ratio. The coenzyme NAD+ is reduced in various tissues in an age-dependent manner, and thus redox-potential-associated changes seem to be critical in the aging process; NAD+ supplements may have protective effects. Rapamycin, an antitumor agent and immunosuppressant, acts similarly.
References
Ageing processes
Immunology
Senescence | Immunosenescence | [
"Chemistry",
"Biology"
] | 1,783 | [
"Senescence",
"Immunology",
"Ageing processes",
"Cellular processes",
"Metabolism"
] |
13,743,194 | https://en.wikipedia.org/wiki/Carbonylation | In chemistry, carbonylation refers to reactions that introduce carbon monoxide (CO) into organic and inorganic substrates. Carbon monoxide is abundantly available and conveniently reactive, so it is widely used as a reactant in industrial chemistry. The term carbonylation also refers to oxidation of protein side chains.
Organic chemistry
Several industrially useful organic chemicals are prepared by carbonylations, which can be highly selective reactions. Carbonylations produce organic carbonyls, i.e., compounds that contain the C=O functional group, such as aldehydes (RCHO), carboxylic acids (RCO2H) and esters (RCO2R'). Carbonylations are the basis of many types of reactions, including hydroformylation and Reppe reactions. These reactions require metal catalysts, which bind and activate the CO. These processes involve transition metal acyl complexes as intermediates. Much of this theme was developed by Walter Reppe.
Hydroformylation
Hydroformylation entails the addition of both carbon monoxide and hydrogen to unsaturated organic compounds, usually alkenes. The usual products are aldehydes, for example: RCH=CH2 + CO + H2 → RCH2CH2CHO
The reaction requires metal catalysts that bind CO, forming intermediate metal carbonyls. Many of the commodity carboxylic acids, i.e. propionic, butyric, valeric, etc, as well as many of the commodity alcohols, i.e. propanol, butanol, amyl alcohol, are derived from aldehydes produced by hydroformylation. In this way, hydroformylation is a gateway from alkenes to oxygenates.
Decarbonylation
Few organic carbonyls undergo spontaneous decarbonylation, but many can be induced to do so with appropriate catalysts. A common transformation involves the conversion of aldehydes to alkanes, usually catalyzed by metal complexes: RCHO → RH + CO
Few catalysts are highly active or exhibit broad scope.
Acetic acid and acetic anhydride
Large-scale applications of carbonylation are the Monsanto acetic acid process and Cativa process, which convert methanol to acetic acid. In another major industrial process, acetic anhydride is prepared by a related carbonylation of methyl acetate.
Oxidative carbonylation
Dimethyl carbonate and dimethyl oxalate are produced industrially by the oxidative carbonylation of methanol, using carbon monoxide and an oxidant.
The oxidative carbonylation of methanol is catalyzed by copper(I) salts, which form transient carbonyl complexes. For the oxidative carbonylation of alkenes, palladium complexes are used.
Hydrocarboxylation, hydroxycarbonylation, and hydroesterification
In hydrocarboxylation, alkenes and alkynes are the substrates. This method is used to produce propionic acid from ethylene using nickel carbonyl as the catalyst: C2H4 + CO + H2O → CH3CH2CO2H
The above reaction is also referred to as hydroxycarbonylation, in which case hydrocarboxylation refers to the same net conversion but using carbon dioxide in place of CO and H2 in place of water: C2H4 + CO2 + H2 → CH3CH2CO2H
Acrylic acid was once mainly prepared by the hydrocarboxylation of acetylene.
The carbomethoxylation of ethylene to give methyl propionate:
C2H4 + CO + MeOH → MeO2CC2H5
Methyl propionate ester is a precursor to methyl methacrylate.
Hydroesterification is like hydrocarboxylation, but it uses alcohols in place of water.
The process is catalyzed by Herrmann's catalyst. Under similar conditions, other Pd-diphosphine catalysts catalyze the formation of polyketones.
Koch carbonylation
The Koch reaction is a special case of the hydrocarboxylation reaction that does not rely on metal catalysts. Instead, the process is catalyzed by strong acids such as sulfuric acid or the combination of phosphoric acid and boron trifluoride. The reaction is less applicable to simple alkenes. The industrial synthesis of glycolic acid from formaldehyde is achieved in this way: CH2O + CO + H2O → HOCH2CO2H
The conversion of isobutene to pivalic acid is also illustrative: (CH3)2C=CH2 + CO + H2O → (CH3)3CCO2H
Other reactions
Alkyl, benzyl, vinyl, aryl, and allyl halides can also be carbonylated in the presence of carbon monoxide and suitable catalysts such as manganese, iron, or nickel powders.
In the industrial synthesis of ibuprofen, a benzylic alcohol is converted to the corresponding arylacetic acid via a Pd-catalyzed carbonylation:
Carbonylation in inorganic chemistry
Metal carbonyls, compounds with the formula M(CO)xLy (M = metal; L = other ligands), are prepared by carbonylation of transition metals. Iron and nickel powder react directly with CO to give Fe(CO)5 and Ni(CO)4, respectively. Most other metals form carbonyls less directly, such as from their oxides or halides. Metal carbonyls are widely employed as catalysts in the hydroformylation and Reppe processes discussed above. Inorganic compounds that contain CO ligands can also undergo decarbonylation, often via a photochemical reaction.
References
Chemical reactions
Carbon monoxide | Carbonylation | [
"Chemistry"
] | 1,087 | [
"nan"
] |
13,751,165 | https://en.wikipedia.org/wiki/Coherent%20diffraction%20imaging | Coherent diffractive imaging (CDI) is a "lensless" technique for 2D or 3D reconstruction of the image of nanoscale structures such as nanotubes, nanocrystals, porous nanocrystalline layers, defects, potentially proteins, and more. A comprehensive review titled Computational microscopy with coherent diffractive imaging and ptychography was published by Miao in Nature in 2025.
In CDI, a highly coherent beam of X-rays, electrons or other wavelike particle or photon is incident on an object. The beam scattered by the object produces a diffraction pattern downstream which is then collected by a detector. This recorded pattern is then used to reconstruct an image via an iterative feedback algorithm. Effectively, the objective lens in a typical microscope is replaced with software to convert from the reciprocal space diffraction pattern into a real space image. The advantage in using no lenses is that the final image is aberration–free and so resolution is only diffraction and dose limited (dependent on wavelength, aperture size and exposure). Applying a simple inverse Fourier transform to information with only intensities is insufficient for creating an image from the diffraction pattern due to the missing phase information. This is called the phase problem.
Imaging process
The overall imaging process can be broken down in four simple steps:
1. Coherent beam scatters from sample
2. Modulus of Fourier transform measured
3. Computational algorithms used to retrieve phases
4. Image recovered by Inverse Fourier transform
In CDI, the objective lens used in a traditional microscope is replaced with computational algorithms and software which are able to convert from the reciprocal space into the real space. The diffraction pattern picked up by the detector is in reciprocal space while the final image must be in real space to be of any use to the human eye.
To begin, a highly coherent source of X-rays, electrons, or other wavelike particles must be incident on an object. Although X-rays are the most popular choice, the beam can also be made up of electrons, whose shorter wavelength allows for higher resolution and, thus, a clearer final image. However, electron beams are limited in penetration depth compared to X-rays, as electrons have an inherent mass. The incident beam illuminates a spot on the object and is scattered from its surface, producing a diffraction pattern representative of the Fourier transform of the object. The complex diffraction pattern is then collected by the detector, and the Fourier transform of all the features on the object's surface is evaluated. Because the diffraction information is recorded in the frequency domain, it is not interpretable by the human eye and is thus very different from what we are used to observing with normal microscopy techniques.
A reconstructed image is then made through utilization of an iterative feedback phase-retrieval algorithm where a few hundred of these incident rays are detected and overlapped to provide sufficient redundancy in the reconstruction process. Lastly, a computer algorithm transforms the diffraction information into the real space and produces an image observable by the human eye; this image is what we would likely see by means of traditional microscopy techniques. The hope is that using CDI would produce a higher resolution image due to its aberration-free design and computational algorithms.
The phase problem
There are two relevant parameters for diffracted waves: amplitude and phase. In typical microscopy using lenses there is no phase problem, as phase information is retained when waves are refracted. When a diffraction pattern is collected, the data is described in terms of absolute counts of photons or electrons, a measurement which describes amplitudes but loses phase information. This results in an ill-posed inverse problem as any phase could be assigned to the amplitudes prior to an inverse Fourier transform to real space.
Three ideas developed that enabled the reconstruction of real space images from diffraction patterns. The first was the realization by Sayre in 1952 that Bragg diffraction under-samples the diffracted intensity relative to Shannon's theorem. If the diffraction pattern is sampled at twice the Nyquist frequency (the inverse of the sample size) or denser, it can yield a unique real space image. The second was the increase in computing power in the 1980s, which enabled the iterative hybrid input-output (HIO) algorithm for phase retrieval to optimize and extract phase information using adequately sampled intensity data with feedback. This method was introduced by Fienup in the 1980s. In 1998, Miao, Sayre and Chapman used numerical simulations to demonstrate that when the number of independently measured intensity points exceeds the number of unknown variables, the phase can in principle be retrieved from the diffraction pattern via iterative algorithms. Finally, Miao and collaborators reported the first experimental demonstration of CDI in 1999, using a secondary image to provide low resolution information. Reconstruction methods were later developed that removed the need for a secondary image.
Reconstruction
In a typical reconstruction the first step is to generate random phases and combine them with the amplitude information from the reciprocal space pattern. Then a Fourier transform is applied back and forth to move between real space and reciprocal space with the modulus squared of the diffracted wave field set equal to the measured diffraction intensities in each cycle. By applying various constraints in real and reciprocal space the pattern evolves into an image after enough iterations of the HIO process. To ensure reproducibility the process is typically repeated with new sets of random phases with each run having typically hundreds to thousands of cycles. The constraints imposed in real and reciprocal space typically depend on the experimental setup and the sample to be imaged. The real space constraint is to restrict the imaged object to a confined region called the "support". For example, the object to be imaged can be initially assumed to reside in a region no larger than roughly the beam size. In some cases this constraint may be more restrictive, such as in a periodic support region for a uniformly spaced array of quantum dots. Other researchers have investigated imaging extended objects, that is, objects that are larger than the beam size, by applying other constraints.
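A heavily simplified sketch of such a loop is shown below (illustrative only: it assumes adequately oversampled data, a real and non-negative object, and a known fixed support, and it omits the error metrics, shrink-wrap supports and solution averaging used in practice):

```python
import numpy as np

def hio_reconstruct(measured_amplitude, support, n_iter=500, beta=0.9):
    """Basic hybrid input-output (HIO) phase retrieval.

    measured_amplitude : square root of the measured diffraction intensities
    support            : boolean array, True where the object may be non-zero
    """
    rng = np.random.default_rng(0)
    # Combine the measured amplitudes with random starting phases.
    g = np.fft.ifft2(measured_amplitude *
                     np.exp(2j * np.pi * rng.random(measured_amplitude.shape)))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        # Reciprocal-space constraint: keep the phases, impose measured amplitudes.
        G = measured_amplitude * np.exp(1j * np.angle(G))
        g_prime = np.fft.ifft2(G)
        # Real-space constraint: accept g' inside the support where it is
        # non-negative; elsewhere apply the HIO feedback g - beta * g'.
        good = support & (g_prime.real >= 0)
        g = np.where(good, g_prime, g - beta * g_prime)
    return np.abs(g) * support

# Usage sketch (hypothetical data):
#   amplitude = np.sqrt(intensity_pattern)
#   image = hio_reconstruct(amplitude, support_mask)
```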
In most cases the support constraint imposed is a priori in that it is modified by the researcher based on the evolving image. In theory this is not necessarily required and algorithms have been developed which impose an evolving support based on the image alone using an auto-correlation function. This eliminates the need for a secondary image (support) thus making the reconstruction autonomic.
The diffraction pattern of a perfect crystal is symmetric so the inverse Fourier transform of that pattern is entirely real valued. The introduction of defects in the crystal leads to an asymmetric diffraction pattern with a complex valued inverse Fourier transform. It has been shown that the crystal density can be represented as a complex function where its magnitude is electron density and its phase is the "projection of the local deformations of the crystal lattice onto the reciprocal lattice vector Q of the Bragg peak about which the diffraction is measured". Therefore, it is possible to image the strain fields associated with crystal defects in 3D using CDI and it has been reported in one case. Unfortunately, the imaging of complex-valued functions (which for brevity represents the strained field in crystals) is accompanied by complementary problems namely, the uniqueness of the solutions, stagnation of the algorithm etc. However, recent developments that overcame these problems (particularly for patterned structures) were addressed. On the other hand, if the diffraction geometry is insensitive to strain, such as in GISAXS, the electron density will be real valued and positive. This provides another constraint for the HIO process, thus increasing the efficiency of the algorithm and the amount of information that can be extracted from the diffraction pattern.
Algorithms
One of the most important aspects of coherent diffraction imaging is the algorithm that recovers the phase from Fourier magnitudes and reconstructs the image. Several algorithms exist for this purpose, though they each follow a similar format of iterating between the real and reciprocal space of the object (Pham 2020). Furthermore, a support region is frequently defined to separate the object from its surrounding zero-density region (Pham 2020). As mentioned earlier, Fienup developed the initial algorithms of Error Reduction (ER) and Hybrid Input-Output (HIO) which both utilized a support constraint for real space and Fourier magnitudes as a constraint in reciprocal space (Fienup 1978). The ER algorithm sets both the zero-density region and the negative densities inside the support to zero for each iteration (Fienup 1978). The HIO algorithm relaxes the conditions of ER by gradually reducing the negative densities of the support to zero with each iteration (Fienup 1978). While HIO allowed for the reconstruction of an image from a noise-free diffraction pattern, it struggled to recover the phase in actual experiments where the Fourier magnitudes were corrupted by noise. This led to further development of algorithms that could better handle noise in image reconstruction. In 2010, a new algorithm called oversampling smoothness (OSS) was created to use a smoothness constraint on the imaged object. OSS would utilize Gaussian filters to apply a smoothness constraint to the zero-density region which was found to increase robustness to noise and reduce oscillations in reconstruction (Rodriguez 2013).
Generalized proximal smoothness (GPS)
Building upon the success of OSS, a new algorithm called generalized proximal smoothness (GPS) has been developed. GPS addresses noise in the real and reciprocal space by incorporating principles of Moreau-Yosida regularization, which is a method of turning a convex function into a smooth convex function (Moreau 1965) (Yosida 1964). The magnitude constraint is relaxed into a least-fidelity squares term as a means of lessening the noise in the reciprocal space (Pham 2020). Overall, GPS was found to perform better than OSS and HIO in consistency, convergence speed, and robustness to noise. Using R-factor (relative error) as a measurement for effectiveness, GPS was found to have a lower R-factor in both real and reciprocal spaces (Pham 2020). Moreover, it took fewer iterations for GPS to converge towards a lower R-factor when compared to OSS and HIO in both spaces (Pham 2020).
Coherence
Two wave sources are coherent when their frequency and waveforms are identical; this property of waves allows for stationary interference in which the wave is temporally or spatially constant and the waves are either added or subtracted from one another. Coherence is important in the context of CDI as the coherence of the two sources allows for the continuous emission of waves to occur. A constant phase difference and the coherence of a wave are necessary in order to obtain any type of interference pattern.
Clearly a highly coherent beam of waves is required for CDI to work since the technique requires interference of diffracted waves. Coherent waves must be generated at the source (synchrotron, field emitter, etc.) and must maintain coherence until diffraction. It has been shown that the coherence width of the incident beam needs to be approximately twice the lateral width of the object to be imaged.
However, determining the size of the coherent patch to decide whether the object does or does not meet the criterion is subject to debate. As the coherence width is decreased, the size of the Bragg peaks in reciprocal space grows and they begin to overlap, leading to decreased image resolution.
Energy sources
X-ray
Coherent x-ray diffraction imaging (CXDI or CXD) uses x-rays (typically .5-4keV) to form a diffraction pattern which may be more attractive for 3D applications than electron diffraction since x-rays typically have better penetration. For imaging surfaces, the penetration of X-rays may be undesirable, in which case a glancing angle geometry may be used such as GISAXS. A typical x-ray CCD is used to record the diffraction pattern. If the sample is rotated about an axis perpendicular to the beam a 3-Dimensional image may be reconstructed.
Due to radiation damage, resolution is limited (for continuous illumination set-ups) to about 10 nm for frozen-hydrated biological samples but resolutions of as high as 1 to 2 nm should be possible for inorganic materials less sensitive to damage (using modern synchrotron sources). It has been proposed that radiation damage may be avoided by using ultra short pulses of x-rays where the time scale of the destruction mechanism is longer than the pulse duration. This may enable higher energy and therefore higher resolution CXDI of organic materials such as proteins. However, without the loss of information "the linear number of detector pixels fixes the energy spread needed in the beam" which becomes increasingly difficult to control at higher energies.
In a 2006 report, resolution was 40 nm using the Advanced Photon Source (APS) but the authors suggest this could be improved with higher power and more coherent X-ray sources such as the X-ray free electron laser.
Electrons
Coherent electron diffraction imaging works the same as CXDI in principle only electrons are the diffracted waves and an imaging plate is used to detect electrons rather than a CCD. In one published report a double walled carbon nanotube (DWCNT) was imaged using nano area electron diffraction (NAED) with atomic resolution. In principle, electron diffraction imaging should yield a higher resolution image because the wavelength of electrons can be much smaller than photons without going to very high energies. Electrons also have much weaker penetration so they are more surface sensitive than X-rays. However, typically electron beams are more damaging than x-rays so this technique may be limited to inorganic materials.
In Zuo's approach, a low resolution electron image is used to locate a nanotube. A field emission electron gun generates a beam with high coherence and high intensity. The beam size is limited to nano area with the condenser aperture in order to ensure scattering from only a section of the nanotube of interest. The diffraction pattern is recorded in the far field using electron imaging plates to a resolution of 0.0025 1/Å. Using a typical HIO reconstruction method an image is produced with Å resolution in which the DWCNT chirality (lattice structure) can be directly observed. Zuo found that it is possible to start with non-random phases based on a low resolution image from a TEM to improve the final image quality.
In 2007, Podorov et al. proposed an exact analytical solution of CDXI problem for particular cases.
In 2016 using the coherent diffraction imaging (CXDI) beamline at ESRF (Grenoble, France), the researchers quantified the porosity of large faceted nanocrystalline layers at the origin of photoluminescence emission band in the infrared. It has been shown that phonons can be confined in sub-micron structures, which could help enhance the output of photonic and photovoltaic (PV) applications.
In situ CDI
Incomplete measurements have been a problem observed across all algorithms in CDI. Since the detector is too sensitive to absorb a particle beam directly, a beamstop or hole must be placed at its center to prevent direct contact (Pham 2020). Furthermore, detectors are often constructed with multiple panels with gaps between them where data again cannot be collected (Pham 2020). Ultimately, these qualities of the detector result in missing data within the diffraction patterns. In situ CDI is a new method of this imaging technology that could increase resistance to incomplete measurements. In situ CDI images a static region and a dynamic region that changes over time as a result of external stimuli (Hung Lo 2018). A series of diffraction patterns are collected over time with interference from the static and dynamic regions (Hung Lo 2018). Because of this interference, the static region acts as a time invariant constraint that phases patterns together in fewer iterations (Hung Lo 2018). Enforcing this static region as a constraint makes in situ CDI more robust to incomplete data and noise interference in the diffraction patterns (Hung Lo 2018). Overall, in situ CDI provides clearer data collection in fewer iterations than other CDI techniques.
Related techniques
Various techniques for CDI have been developed over the years and utilized to study samples in physics, chemistry, materials science, nanoscience, geology, and biology (6); these include, but are not limited to, plane-wave CDI, Bragg CDI, ptychography, reflection CDI, Fresnel CDI, and sparsity CDI.
Ptychography is a technique which is closely related to coherent diffraction imaging. Instead of recording just one coherent diffraction pattern, several – and sometimes hundreds or thousands – of diffraction patterns are recorded from the same object. Each pattern is recorded from a different area of the object, although the areas must partially overlap with one another. Ptychography is only applicable to specimens that can survive irradiation in the illuminating beam for these multiple exposures. However, it has the advantage that a large field of view can be imaged. The extra translational diversity in the data also means the reconstruction procedure can be faster and ambiguities in the solution space are reduced.
See also
Diffraction
X-ray diffraction computed tomography
List of materials analysis methods
Nanotechnology
Surface physics
Synchrotron
References
External links
Ian Robinson X-Ray Studies Group Page
Jian-Min (Jim) Zuo Electron Microscopy Group Page
Diffraction
Materials science
Microscopes
Microscopy
Nanotechnology
Scientific techniques | Coherent diffraction imaging | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 3,665 | [
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Materials science",
"Measuring instruments",
"Diffraction",
"Crystallography",
"Microscopes",
"nan",
"Microscopy",
"Nanotechnology",
"Spectroscopy"
] |
638,425 | https://en.wikipedia.org/wiki/Depth%20gauge | A depth gauge is an instrument for measuring depth below a vertical reference surface. They include depth gauges for underwater diving and similar applications.
A diving depth gauge is a pressure gauge that displays the equivalent depth below the free surface in water. The relationship between depth and pressure is linear and accurate enough for most practical purposes, and for many purposes, such as diving, it is actually the pressure that is important. It is a piece of diving equipment used by underwater divers, submarines and submersibles.
Most modern diving depth gauges have an electronic mechanism and digital display. Earlier types used a mechanical mechanism and analogue display. Digital depth gauges used by divers commonly also include a timer showing the interval of time that the diver has been submerged. Some show the diver's rate of ascent and descent, which can be useful for avoiding barotrauma. This combination instrument is also known as a bottom timer. An electronic depth gauge is an essential component of a dive computer.
As the gauge only measures water pressure, there is an inherent inaccuracy in the depth displayed by gauges that are used in both fresh water and seawater, because the densities of fresh water and seawater differ with salinity and temperature.
A depth gauge that measures the pressure of air bubbling out of an open ended hose to the diver is called a pneumofathometer. They are usually calibrated in metres of seawater or feet of seawater.
History
Experiments in 1659 by Robert Boyle of the Royal Society were made using a barometer underwater, and led to Boyle's law. The French physicist, mathematician and inventor Denis Papin published Recueil de diverses Pièces touchant quelques nouvelles Machines in 1695, in which he proposed a depth gauge for a submarine.
A "sea-gage" for measuring ocean depth was described in Philosophia Britannica in 1747. But it wasn't until 1775 and the development of a depth gauge by the inventor, scientific instrument, and clock maker Isaac Doolittle of New Haven, Connecticut, for David Bushnell's submarine the Turtle, that one was deployed in an underwater craft. By the early nineteenth century, "the depth gauge was a standard feature on diving bells".
Mode of operation
With increasing water depth, the ambient pressure rises by about 1 bar for every 10 m in fresh water at 4 °C. Therefore, the depth can be determined by measuring the pressure and comparing it to the pressure at the surface. Atmospheric pressure varies with altitude and weather, and for accuracy the depth gauge should be calibrated to correct for local atmospheric pressure. This can be important for decompression safety at altitude. Water density varies with temperature and salinity, so for an accurate depth measurement by this method, the temperature and salinity profiles must be known. These are easily measured, but must be measured directly.
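As a simple illustration of this conversion, a hedged sketch in Python (the density values and the neglect of local atmospheric-pressure variation are assumptions):

```python
def depth_from_pressure(p_ambient_bar, p_atm_bar=1.013, density_kg_m3=1025.0, g=9.81):
    """Estimate depth in metres from absolute ambient pressure in bar.
    Use a density of roughly 1025 kg/m^3 for seawater, 1000 kg/m^3 for fresh water."""
    gauge_pressure_pa = (p_ambient_bar - p_atm_bar) * 1.0e5   # bar -> Pa
    return gauge_pressure_pa / (density_kg_m3 * g)

print(round(depth_from_pressure(3.0), 1))   # about 19.8 m of seawater at 3 bar absolute
```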
Types
Boyle-Mariotte depth gauge
The Boyle-Mariotte depth gauge consists of a transparent tube open at one end. It has no moving parts, and the tube is commonly part of a circle or a flat spiral to compactly fit onto a support. While diving, water goes into the tube and compresses an air bubble inside proportionally to the depth. The edge of the bubble indicates the depth on a scale. For a depth up to 10 m, this depth gauge is quite accurate, because in this range, the pressure doubles from 1 bar to 2 bar, and so it uses half of the scale. This type of gauge is also known as a capillary gauge. At greater depths, it becomes inaccurate. The maximum depth cannot be recorded with this type of depth gauge, and accuracy is strongly affected by temperature change of the air bubble while immersed.
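The scale of such a capillary gauge follows directly from Boyle's law: at constant temperature the trapped bubble's length is inversely proportional to absolute pressure. A brief sketch (seawater density and isothermal compression are assumptions):

```python
def bubble_fraction(depth_m, p_surface_pa=1.013e5, density_kg_m3=1025.0, g=9.81):
    """Fraction of the tube still occupied by the trapped air bubble at a given
    depth, assuming isothermal compression (Boyle's law: P * V = constant)."""
    p_depth = p_surface_pa + density_kg_m3 * g * depth_m
    return p_surface_pa / p_depth

print(round(bubble_fraction(10.0), 2))   # about 0.5: the bubble is roughly halved at 10 m
```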
Bourdon tube depth gauge
The Bourdon tube depth gauge consists of a curved tube made of elastic metal, known as a Bourdon tube. Water pressure on the tube may be on the inside or the outside depending on the design. When the pressure increases, the tube stretches, and when it decreases the tube recovers to the original curvature. This movement is transferred to a pointer by a system of gears or levers, and the pointer may have an auxiliary trailing pointer which is pushed along but does not automatically return with the main pointer, which can mark the maximum depth reached. Accuracy can be good. When carried by the diver, these gauges measure the pressure difference directly between the ambient water and the sealed internal air space of the gauge, and therefore can be influenced by temperature changes.
Membrane depth gauge
In a membrane depth gauge, the water presses onto a metal canister with a flexible end, which is deflected proportionally to external pressure. Deflection of the membrane is amplified by a lever and gear mechanism and transferred to an indicator pointer like in an aneroid barometer. The pointer may push a trailing pointer which does not return by itself, and indicates the maximum. This type of gauge can be quite accurate when corrected for temperature variations.
Strain gauges may be used to convert the pressure on a membrane to an electrical resistance, which can be converted to an analog signal by a Wheatstone bridge. This signal can be processed to provide a signal proportional to pressure, which may be digitised for further processing and display.
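The final conversion from bridge output to pressure is typically a simple linear calibration. A hedged sketch (the sensitivity figure and perfect linearity are assumptions, not values from the text):

```python
def pressure_from_bridge(v_out_v, v_excitation_v, full_scale_bar, sensitivity_mv_per_v=2.0):
    """Convert a strain-gauge bridge reading to pressure, assuming a linear sensor
    that outputs `sensitivity_mv_per_v` millivolts per volt of excitation at
    full-scale pressure."""
    ratio_mv_per_v = 1000.0 * v_out_v / v_excitation_v
    return full_scale_bar * ratio_mv_per_v / sensitivity_mv_per_v

print(pressure_from_bridge(0.005, 10.0, full_scale_bar=11.0))   # 0.5 mV/V -> 2.75 bar
```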
Piezoresistive pressure sensors
Piezoresistive pressure sensors use the variation of resistivity of silicon with stress. A piezoresistive sensor consists of a silicon diaphragm on which silicon resistors are diffused during the manufacturing process. The diaphragm is bonded to a silicon wafer. The signal must be corrected for temperature variations. These pressure sensors are commonly used in dive computers.
Pneumofathometer
A pneumofathometer is a depth gauge which indicates the depth of a surface-supplied diver by measuring the pressure of air supplied to the diver. Originally these were pressure gauges mounted on the hand-cranked diver's air pump used to provide breathing air to a diver wearing standard diving dress with a free-flow air supply, in which there was not much back-pressure other than the hydrostatic pressure of depth. Non-return valves added to the system for safety increased the back pressure, which increased further when demand helmets were introduced. An additional small-diameter hose with no added restrictions was therefore added to the diver's umbilical; when a low flow rate of gas is passed through it to produce bubbles at the diver, the pressure in the hose gives an accurate, reliable and rugged measure of the diver's depth, and this remains the standard depth monitoring equipment for surface-supplied divers. The pneumofathometer gauges are mounted on the diver's breathing gas supply panel and are activated by a valve. The "pneumo line", as it is generally called by divers, can be used as an emergency breathing air supply, by tucking the open end into the bottom of the helmet or full face mask and opening the valve to provide free-flow air. A "gauge snubber" needle valve or orifice is fitted between the pneumo line and the gauge to reduce shock loads on the delicate mechanism, and an overpressure valve protects the gauge from pressures beyond its operating range. The type of high-precision gauge used is also known as a caisson gauge. Precision is typically 1% to 0.25% of full scale.
Dive computer
Dive computers have an integrated depth gauge, with digitized output which is used in the calculation of the current decompression status of the diver. The dive depth is displayed along with other values on the display and recorded by the computer for continuous simulation of the decompression model. Most dive computers contain a piezoresistive pressure sensor. Rarely, capacitive or inductive pressure sensors are used.
Uses
A diver uses a depth gauge with decompression tables and a watch to avoid decompression sickness. A common alternative to the depth gauge, watch and decompression tables is a dive computer, which has an integral depth gauge, and displays the current depth as a standard function.
Light based depth gauges in Biology
A depth gauge can also be based on light: brightness decreases with depth, but it also depends on the weather (e.g. whether it is sunny or cloudy) and on the time of day. The colour of the light also changes with water depth.
In water, light of each wavelength attenuates differently. The UV and violet (< 420 nm) wavelengths and the red (> 500 nm) wavelengths disappear before blue light (470 nm), which penetrates clear water the deepest. The wavelength composition at a given depth is nearly constant and almost independent of the time of day and the weather. To gauge depth, an animal would need two photopigments sensitive to different wavelengths, so that different ranges of the spectrum can be compared. Such pigments may be expressed in different structures.
Such different structures are found in the polychaete Torrea candida. Its eyes have a main and two accessory retinae. The accessory retinae sense UV-light (λmax = 400 nm) and the main retina senses blue-green light (λmax = 560 nm). If the light sensed from all retinae is compared, the depth can be estimated, and so for Torrea candida such a ratio-chromatic depth gauge has been proposed.
A ratio-chromatic depth gauge has been found in larvae of the polychaete Platynereis dumerilii. The larvae have two structures: the rhabdomeric photoreceptor cells of the eyes and, in the deep brain, the ciliary photoreceptor cells. The ciliary photoreceptor cells express a ciliary opsin, a photopigment maximally sensitive to UV light (λmax = 383 nm). Thus, the ciliary photoreceptor cells react to UV light and make the larvae swim down gravitactically. The gravitaxis here is countered by phototaxis, which makes the larvae swim up towards the light coming from the surface. Phototaxis is mediated by the rhabdomeric eyes. The eyes express at least three opsins (at least in the older larvae), one of which is maximally sensitive to cyan light (λmax = 483 nm), so that the eyes cover a broad wavelength range with phototaxis. When phototaxis and gravitaxis balance out, the larvae have reached their preferred depth.
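The ratio-chromatic principle described above can be reduced to a one-line formula: if each spectral band attenuates exponentially with its own coefficient, the ratio of two bands decays exponentially with depth. A hedged sketch (the attenuation coefficients below are placeholders, not measured values for any species or water type):

```python
import math

def ratio_chromatic_depth(uv_blue_ratio, surface_ratio, k_uv_per_m, k_blue_per_m):
    """Depth from the UV/blue irradiance ratio, assuming Beer-Lambert attenuation
    I(d) = I(0) * exp(-k * d) for each band, with k_uv > k_blue in clear water."""
    return math.log(surface_ratio / uv_blue_ratio) / (k_uv_per_m - k_blue_per_m)

# Placeholder coefficients: the ratio has fallen to one tenth of its surface value.
print(round(ratio_chromatic_depth(0.1, 1.0, k_uv_per_m=0.15, k_blue_per_m=0.02), 1))   # ~17.7 m
```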
See also
References
External links
on depth gauges hosted by the Rubicon Foundation
Underwater diving safety equipment
Pressure gauges
Vertical position | Depth gauge | [
"Physics",
"Technology",
"Engineering"
] | 2,185 | [
"Vertical position",
"Physical quantities",
"Distance",
"Measuring instruments",
"Pressure gauges"
] |
638,899 | https://en.wikipedia.org/wiki/Vertex%20%28graph%20theory%29 | In discrete mathematics, and more specifically in graph theory, a vertex (plural vertices) or node is the fundamental unit of which graphs are formed: an undirected graph consists of a set of vertices and a set of edges (unordered pairs of vertices), while a directed graph consists of a set of vertices and a set of arcs (ordered pairs of vertices). In a diagram of a graph, a vertex is usually represented by a circle with a label, and an edge is represented by a line or arrow extending from one vertex to another.
From the point of view of graph theory, vertices are treated as featureless and indivisible objects, although they may have additional structure depending on the application from which the graph arises; for instance, a semantic network is a graph in which the vertices represent concepts or classes of objects.
The two vertices forming an edge are said to be the endpoints of this edge, and the edge is said to be incident to the vertices. A vertex w is said to be adjacent to another vertex v if the graph contains an edge (v,w). The neighborhood of a vertex v is an induced subgraph of the graph, formed by all vertices adjacent to v.
Types of vertices
The degree of a vertex in a graph, denoted 𝛿(v), is the number of edges incident to it. An isolated vertex is a vertex with degree zero; that is, a vertex that is not an endpoint of any edge (the example image illustrates one isolated vertex). A leaf vertex (also pendant vertex) is a vertex with degree one. In a directed graph, one can distinguish the outdegree (number of outgoing edges), denoted 𝛿+(v), from the indegree (number of incoming edges), denoted 𝛿−(v); a source vertex is a vertex with indegree zero, while a sink vertex is a vertex with outdegree zero. A simplicial vertex is one whose neighbors form a clique: every two neighbors are adjacent. A universal vertex is a vertex that is adjacent to every other vertex in the graph.
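The degree-based definitions above are straightforward to compute from an edge list. A small illustrative sketch (the example graph is made up for illustration):

```python
from collections import defaultdict

# Undirected graph as an edge list; vertex "e" is deliberately isolated.
vertices = ["a", "b", "c", "d", "e"]
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

isolated = [v for v in vertices if degree[v] == 0]   # degree zero -> ["e"]
leaves = [v for v in vertices if degree[v] == 1]     # degree one  -> ["d"]

# For a directed graph, count outgoing and incoming arcs separately.
arcs = [("a", "b"), ("a", "c"), ("b", "c")]
outdeg, indeg = defaultdict(int), defaultdict(int)
for u, v in arcs:
    outdeg[u] += 1
    indeg[v] += 1
sources = [v for v in "abc" if indeg[v] == 0]    # indegree zero  -> ["a"]
sinks = [v for v in "abc" if outdeg[v] == 0]     # outdegree zero -> ["c"]
```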
A cut vertex is a vertex the removal of which would disconnect the remaining graph; a vertex separator is a collection of vertices the removal of which would disconnect the remaining graph into small pieces. A k-vertex-connected graph is a graph in which removing fewer than k vertices always leaves the remaining graph connected. An independent set is a set of vertices no two of which are adjacent, and a vertex cover is a set of vertices that includes at least one endpoint of each edge in the graph. The vertex space of a graph is a vector space having a set of basis vectors corresponding with the graph's vertices.
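Vertex covers and independent sets are likewise easy to verify, although finding minimum covers or maximum independent sets is NP-hard in general. A small sketch on the same kind of made-up example:

```python
def is_vertex_cover(cover, edges):
    """Every edge must have at least one endpoint in the cover."""
    return all(u in cover or v in cover for u, v in edges)

def is_independent_set(vertex_set, edges):
    """No two vertices in the set may be adjacent."""
    return not any(u in vertex_set and v in vertex_set for u, v in edges)

edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
print(is_vertex_cover({"b", "c"}, edges))      # True: b or c touches every edge
print(is_independent_set({"a", "d"}, edges))   # True: a and d are not adjacent
```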
A graph is vertex-transitive if it has symmetries that map any vertex to any other vertex. In the context of graph enumeration and graph isomorphism it is important to distinguish between labeled vertices and unlabeled vertices. A labeled vertex is a vertex that is associated with extra information that enables it to be distinguished from other labeled vertices; two graphs can be considered isomorphic only if the correspondence between their vertices pairs up vertices with equal labels. An unlabeled vertex is one that can be substituted for any other vertex based only on its adjacencies in the graph and not based on any additional information.
Vertices in graphs are analogous to, but not the same as, vertices of polyhedra: the skeleton of a polyhedron forms a graph, the vertices of which are the vertices of the polyhedron, but polyhedron vertices have additional structure (their geometric location) that is not assumed to be present in graph theory. The vertex figure of a vertex in a polyhedron is analogous to the neighborhood of a vertex in a graph.
See also
Node (computer science)
Graph theory
Glossary of graph theory
References
Berge, Claude, Théorie des graphes et ses applications. Collection Universitaire de Mathématiques, II Dunod, Paris 1958, viii+277 pp. (English edition, Wiley 1961; Methuen & Co, New York 1962; Russian, Moscow 1961; Spanish, Mexico 1962; Roumanian, Bucharest 1969; Chinese, Shanghai 1963; Second printing of the 1962 first English edition. Dover, New York 2001)
External links
Graph theory | Vertex (graph theory) | [
"Mathematics"
] | 884 | [
"Mathematical relations",
"Graph theory objects",
"Graph theory"
] |
638,971 | https://en.wikipedia.org/wiki/MOLPRO | MOLPRO is a software package used for accurate ab initio quantum chemistry calculations. It is developed by Peter Knowles at Cardiff University and Hans-Joachim Werner at Universität Stuttgart in collaboration with other authors.
The emphasis in the program is on highly accurate computations, with extensive treatment of the electron correlation problem through the multireference configuration interaction, coupled cluster and associated methods. Integral-direct local electron correlation methods reduce the increase of the computational cost with molecular size. Accurate ab initio calculations can then be performed for larger molecules. With new explicitly correlated methods the basis set limit can be very closely approached.
History
Molpro was originally designed and maintained by Wilfried Meyer and Peter Pulay in the late 1960s. At that time, Pulay developed the first analytical gradient code for Hartree–Fock (HF), and Meyer was developing his PNO-CEPA (pseudo-natural orbital coupled-electron pair approximation) methods. In 1980, Werner and Meyer developed a new state-averaged, quadratically convergent multiconfiguration self-consistent field (MCSCF) method, which provided geometry optimization for multireference cases. In the same year, the first internally contracted multireference configuration interaction (IC-MRCI) program was developed by Werner and Reinsch. About four years later (1984), Werner and Knowles developed a new-generation complete active space SCF (CASSCF) program. It combined fast orbital optimization algorithms with determinant-based full CI code and additional, more general, unitary group configuration interaction (CI) code. This resulted in the quadratically convergent MCSCF/CASSCF code called MULTI, which allowed the orbitals to be optimized for a weighted energy average of several states and could treat completely general configuration expansions. This method is still available today. In addition to these developments, Knowles and Werner began to cooperate on a new, more efficient IC-MRCI method, which made accurate treatments of excited states possible; in what follows, the present IC-MRCI is referred to simply as MRCI. These MCSCF and MRCI methods formed the basis of the modern Molpro. In the following years, a number of new programs were added: analytic energy gradients can be evaluated for coupled-cluster calculations, density functional theory (DFT), and many other methods. Structural changes have made the code more modular, easier to use and maintain, and less prone to input error.
See also
References
External links
MOLPRO Official Site
Computational chemistry software | MOLPRO | [
"Chemistry"
] | 542 | [
"Computational chemistry",
"Computational chemistry software",
"Chemistry software"
] |
639,706 | https://en.wikipedia.org/wiki/Swampland%20%28physics%29 | In physics, the term swampland refers to effective low-energy physical theories which are not compatible with quantum gravity. This is in contrast with the so-called "string theory landscape" that are known to be compatible with string theory, which is hypothesized to be a consistent quantum theory of gravity. In other words, the Swampland is the set of consistent-looking theories with no consistent ultraviolet completion with the addition of gravity.
Developments in string theory also suggest that the string theory landscape of false vacua is vast, so it is natural to ask if the landscape is as vast as allowed by anomaly-free effective field theories. The Swampland program aims to delineate the theories of quantum gravity by identifying the universal principles shared among all theories compatible with gravitational UV completion. The program was initiated by Cumrun Vafa, who argued that string theory suggests the Swampland is in fact much larger than the string theory landscape.
Quantum gravity differs from quantum field theory in several key ways, including locality and UV/IR decoupling. In quantum gravity, a local structure of observables is emergent rather than fundamental. A concrete example of the emergence of locality is AdS/CFT, where the local quantum field theory description in the bulk is only an approximation that emerges within certain limits of the theory. Moreover, in quantum gravity, it is believed that different spacetime topologies can contribute to the gravitational path integral, which suggests that spacetime emerges when one saddle becomes dominant. Moreover, in quantum gravity, UV and IR are closely related. This connection is manifested in black hole thermodynamics, where a semiclassical IR theory calculates the black hole entropy, which captures the density of gravitational UV states known as black holes. In addition to general arguments based on black hole physics, developments in string theory also suggest that there are universal principles shared among all the theories in the string landscape.
The swampland conjectures are a set of conjectured criteria for theories in the quantum gravity landscape. The criteria are often motivated by black hole physics, universal patterns in string theory, and non-trivial self-consistencies among each other.
No global symmetry conjecture
The no global symmetry conjecture states that any symmetry in quantum gravity is either broken or gauged. In other words, there are no accidental symmetries in quantum gravity. The original motivation for the conjecture goes back to black holes. Hawking radiation of a generic black hole is only sensitive to charges that can be measured outside of the black hole, which are charges under gauge symmetries. Therefore, it is believed that the process of black hole formation and evaporation violates any conservation law that is not protected by a gauge symmetry. The no global symmetry conjecture can also be derived from the AdS/CFT correspondence for quantum gravity in AdS space.
Generalization to higher-form symmetries
The modern understanding of global and gauge symmetries allows for a natural generalization of the no-global symmetry conjecture to higher-form symmetries. A conventional symmetry (0-form symmetry) is a map that acts on point-like operators. For example, a free complex scalar field φ has a U(1) symmetry which acts on the operator as φ → exp(iα)φ, where α is a constant. One can use the symmetry to associate an operator U_α(Σ) to any symmetry element exp(iα) and codimension-1 hypersurface Σ such that U_α(Σ) maps any charged local operator such as φ(x) to exp(iα)φ(x) if the point x is enclosed (or linked) by Σ. By definition, the action of the operator U_α(Σ) does not change under a continuous deformation of Σ as long as Σ does not hit a charged operator. Due to this feature, the operator U_α(Σ) is called a topological operator. If the algebra governing the fusion of the symmetry operators has an element without an inverse, the corresponding symmetry is called a non-invertible symmetry.
The above definitions can be generalized to higher-dimensional charged operators. A collection of codimension-(p+1) topological operators which act non-trivially on p-dimensional operators and are closed under fusion is called a p-form symmetry. Compactification of a higher-dimensional theory with a p-form symmetry on a k-dimensional torus can map the higher-form symmetry to lower-form symmetries (for example a (p−k)-form symmetry) in the lower-dimensional theory. Therefore, it is believed that higher-form global symmetries are also excluded from quantum gravity.
Note that gauge symmetry does not satisfy this definition since, in the process of gauging, any local charged operator is excluded from the physical spectrum.
Cobordism conjecture
Global symmetries are closely connected to conservation laws. The no-global symmetry conjecture essentially states that any conservation law that is not protected by a gauge symmetry can be violated via a dynamical process. This intuition leads to the cobordism conjecture.
Consider a gravitational theory that can be put on two backgrounds with the same number of non-compact dimensions but different internal geometries. The cobordism conjecture states that there must be a dynamical process which connects the two backgrounds to each other. In other words, there must exist a domain wall in the lower-dimensional theory which separates the two backgrounds. This resembles the idea of cobordism in mathematics, which interpolates between two manifolds by connecting them using a higher-dimensional manifold.
Completeness of spectrum hypothesis
The completeness of spectrum hypothesis conjectures that in quantum gravity, the spectrum of charges under any gauge symmetry is completely realized. This conjecture is universally satisfied in string theory, but is also motivated by black hole physics. The entropy of charged black holes is non-zero. Since the exponential of entropy counts the number of states, the non-zero entropy of black holes suggests that for sufficiently high charges, any charge is realized by at least one black hole state.
Relation to no-global symmetry conjecture
The completeness of spectrum hypothesis is closely related to the no global symmetry conjecture.
Example:
Consider a U(1) gauge symmetry. In the absence of charged particles, the theory has a 1-form global symmetry. For any number α and any codimension-2 surface Σ, the symmetry operator U_α(Σ) multiplies a Wilson line that links with Σ by exp(inα), where the charge associated with the Wilson line is n units of the fundamental charge.
In the presence of charged particles, Wilson lines can break up. Suppose there is a charged particle with charge q; the Wilson lines can then change their charges by multiples of q. Therefore, some of the symmetry operators are no longer well-defined. However, if we take q to be the smallest charge, the values α = 2πk/q (for integer k) give rise to well-defined symmetry operators. Therefore, a part of the global symmetry survives. To avoid any global symmetry, q must be 1, which means all charges appear in the spectrum.
The above argument can be generalized to discrete and higher-dimensional symmetries. The completeness of spectrum follows from the absence of generalized global symmetry which also includes non-invertible symmetries.
Weak gravity conjecture
The weak gravity conjecture (WGC) is a conjecture regarding the strength gravity can have in a theory of quantum gravity relative to the gauge forces in that theory. It roughly states that gravity should be the weakest force in any consistent theory of quantum gravity.
Original conjecture
The weak gravity conjecture postulates that every black hole must be able to decay unless it is protected by supersymmetry. Suppose there is a U(1) gauge symmetry; then there is an upper bound on the charge of black holes with a given mass. The black holes that saturate that bound are extremal black holes. Extremal black holes have zero Hawking temperature. However, whether or not a black hole with a charge and a mass that exactly satisfy the extremality condition exists depends on the quantum theory. But given the high entropy of large extremal black holes, there must exist many states with charges and masses that are arbitrarily close to the extremality condition. Suppose the black hole emits a particle with charge q and mass m. For the remaining black hole to remain subextremal, we must have q ≥ m in Planck units, where the extremality condition takes the form Q = M.
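Schematically, in four-dimensional Planck units and with a normalization in which the extremality bound reads Q = M (order-one factors depend on conventions), the bookkeeping behind this statement is:

```latex
(M,\,Q)\ \longrightarrow\ (M-m,\ Q-q), \qquad
Q-q \,\le\, M-m \quad\text{(subextremal remnant)}
\ \Longrightarrow\ \frac{q}{m} \,\ge\, \frac{Q}{M}\bigg|_{\text{extremal}} = 1 .
```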
Mild version
Given that black holes are the natural extension of particles beyond a certain mass, it is natural to assume that there must also be black holes with a charge-to-mass ratio that is greater than that of very large black holes. In other words, the correction to the extremality condition must be such that the charge-to-mass ratio of extremal black holes increases as the black holes become smaller, so that large extremal black holes can decay into smaller ones.
Higher dimensional generalization
Weak gravity conjecture can be generalized to higher-form gauge symmetries. The generalization postulates that for any higher-form gauge symmetry, there exists a brane which has a charge-to-mass ratio that exceeds the charge-to-mass ratio of the extremal branes.
Distance conjecture
String dualities have played a crucial role in developing the modern understanding of string theory by providing a non-perturbative window into UV physics. In string theory, when one takes the vacuum expectation values of the scalar fields of a theory to a certain limit, a dual description always emerges. An example of this is T-duality, where there are two dual descriptions to understand a string theory with an internal geometry of a circle. However, each perturbative description becomes valid in a different regime of the parameter space. The circle's radius manifests itself as a scalar field in the lower dimensional theory. If one takes the value of this scalar field to infinity, the resulting theory can be described by the original higher dimensional theory. The new description includes a tower of light states corresponding to the Kaluza-Klein (KK) particles. On the other hand, if we take the size of the circle to zero, the strings that wind around the circle will become light. T-duality is the statement that there exists an alternative description which captures these light winding states as KK particles. Note that in the absence of a string, there is no reason to believe any states should become light in the limit where the size of the circle goes to zero. Distance conjecture quantifies the above observation and states that it must happen at any infinite distance limit of the parameter space.
Original conjecture
If one takes the vacuum expectation value of the scalar fields to infinity, there exists a tower of light and weakly coupled states whose mass in Planck units goes to zero. Moreover, the mass of the particles depends on the canonical distance d travelled in the moduli space as m ~ m₀ exp(−αd), where m₀ and α are positive constants. Moreover, there is a universal dimension-dependent lower bound on α.
The canonical distance between two points in the target space for the scalar expectation values (the moduli space) is measured using the canonical metric, which is defined by the kinetic term in the action.
Emergent string conjecture
A stronger version of the original distance conjecture additionally postulates that the lightest tower of states at any infinite distance limit is either a KK tower or a string tower. In other words, the leading tower of states can either be understood via dimensional reduction of a higher dimensional theory (just like the example provided above) or as excitations of a weakly coupled string.
This conjecture is often further strengthened by requiring that the string be a fundamental string.
The sharpened distance conjecture
The sharpened distance conjecture states that in d spacetime dimensions, α ≥ 1/√(d−2).
References
External links
Lecture by Cumrun Vafa, String Landscape and the Swampland, March 2018
String theory
Quantum gravity | Swampland (physics) | [
"Physics",
"Astronomy"
] | 2,271 | [
"Astronomical hypotheses",
"Unsolved problems in physics",
"Quantum gravity",
"String theory",
"Physics beyond the Standard Model"
] |
639,767 | https://en.wikipedia.org/wiki/Holmdel%20Horn%20Antenna | The Holmdel Horn Antenna is a large microwave horn antenna that was used as a satellite communication antenna and radio telescope during the 1960s at the Bell Telephone Laboratories facility located on Crawford Hill in Holmdel Township, New Jersey, United States. It was designated a National Historic Landmark in 1989 because of its association with the research work of two radio astronomers, Arno Penzias and Robert Wilson.
In 1965, while using this antenna, Penzias and Wilson discovered the cosmic microwave background radiation (CMBR) that permeates the universe. This was one of the most important discoveries in physical cosmology since Edwin Hubble demonstrated in the 1920s that the universe was expanding. It provided the evidence that confirmed George Gamow's and Georges Lemaître's "Big Bang" theory of the creation of the universe. This helped change the science of cosmology, the study of the universe's history, from a field for unlimited theoretical speculation into a discipline of direct observation. In 1978 Penzias and Wilson received the Nobel Prize for Physics for their discovery.
Description
The horn antenna at Bell Telephone Laboratories in Holmdel, New Jersey, was constructed on Crawford Hill in 1959 to support Project Echo, the National Aeronautics and Space Administration's passive communications satellite project, which used large aluminized plastic balloon satellites as reflectors to bounce radio signals from one point on the Earth to another.
The antenna is in length with a radiating aperture of and is constructed of aluminum. The antenna's elevation wheel, which surrounds the midsection of the horn, is in diameter and supports the structure's weight using rollers mounted on a base frame. All axial or thrust loads are taken by a large ball bearing at the narrow apex end of the horn. The horn continues through this bearing into the equipment building or cab. The ability to locate receiver equipment at the horn apex, thus eliminating the noise contribution of a connecting line, is an important feature of the antenna. A radiometer for measuring the intensity of radiant energy is located in the cab.
The triangular base frame of the antenna is made from structural steel. It rotates on wheels about a center pintle ball bearing on a turntable track in diameter. The track consists of stress-relieved, planed steel plates individually adjusted to produce a track that is flat to about . The faces of the wheels are cone-shaped to minimize contact friction. A tangential force of 100 pounds (400 N) is sufficient to start the antenna rotating on the turntable. The antenna beam can be directed to any part of the sky using the turntable for azimuth adjustments and the elevation wheel to change the elevation angle or altitude above the horizon.
Except for the steel base frame, which a local steel company made, the Holmdel Laboratory shops fabricated and assembled the antenna under the direction of Mr. H. W. Anderson, who also collaborated on the design. Assistance in the design was also given by Messrs. R. O'Regan and S. A. Darby. Construction of the antenna was completed under the direction of Arthur Crawford.
When not in use, the turntable azimuth sprocket drive is disengaged, allowing the structure to "weathervane" and seek a position of minimum wind resistance. The antenna was designed to withstand winds of , and the entire structure weighs 18 short tons (16 tonnes).
A plastic clapboarded utility shed with two windows, a double door, and a sheet-metal roof, is located on the ground next to the antenna. This structure houses equipment and controls for the antenna and is included as a part of the designation as a National Historic Landmark.
The antenna has not been used for several decades.
Technical
This type of antenna is called a Hogg or horn-reflector antenna, invented by Alfred C. Beck and Harald T. Friis in 1941. It was built by David C. Hogg. It consists of a flaring metal horn with a curved reflecting surface mounted in its mouth at a 45° angle to the long axis of the horn. The reflector is a segment of a parabolic reflector, so the antenna is a parabolic antenna that is fed off-axis. A Hogg horn combines several characteristics useful for radio astronomy. It is extremely broad-band, has calculable aperture efficiency, and the walls of the horn shield it from radiation coming from angles outside the main beam axis. Therefore, the back and side lobes are so minimal that scarcely any thermal energy is received from the ground. Consequently, it is an ideal radio telescope for accurately measuring low levels of weak background radiation. The antenna has a gain of about 43.3 dBi and a beamwidth of about 1.5° at 2.39 GHz and an aperture efficiency of 76%.
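The quoted gain and beamwidth are consistent with the standard aperture-antenna formulas. A hedged check in Python (the roughly 6 m square aperture is an assumed value, not taken from the text above):

```python
import math

frequency_hz = 2.39e9
wavelength_m = 3.0e8 / frequency_hz            # about 0.126 m
aperture_area_m2 = 6.0 * 6.0                   # assumed ~6 m x 6 m aperture
efficiency = 0.76                              # aperture efficiency quoted above

gain_linear = efficiency * 4.0 * math.pi * aperture_area_m2 / wavelength_m ** 2
gain_dbi = 10.0 * math.log10(gain_linear)      # about 43 dBi, close to the quoted 43.3 dBi
beamwidth_deg = 70.0 * wavelength_m / 6.0      # rule-of-thumb estimate, about 1.5 degrees

print(round(gain_dbi, 1), round(beamwidth_deg, 2))
```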
Preservation
In 2021, the Crawford Hill site was sold to a developer who was interested in building a residential development. This triggered a "Save Holmdel's Horn Antenna" petition to preserve the property as a park. Advocates argued that this would be a better fate than the destruction of the horn antenna or its site to make way for the planned real estate development.
As of October 2023, the site is now planned to be preserved. After public support for the preservation of the horn antenna emerged—demonstrated in part by more than 8,000 signatures on a petition disseminated by community groups—the Holmdel Township Committee agreed to pay $5.5 million for of land, including that which the antenna sits on. The town plans to turn the land into a public park.
See also
Andover Earth Station, location of another large Hogg horn antenna
References
Footnotes
Aaronson, Steve. "The Light of Creation: An Interview with Arno A. Penzias and Robert W. Wilson." Bell Laboratories Record. January 1979, pp. 12–18.
Abell, George O. Exploration of the Universe. 4th ed., Philadelphia: Saunders College Publishing, 1982.
Asimov, Isaac. Asimov's Biographical Encyclopedia of Science and Technology. 2nd ed., New York: Doubleday & Company, Inc., 1982.
Bernstein, Jeremy. Three Degrees Above Zero: Bell Labs in the Information Age. New York: Charles Scribner's Sons, 1984.
Chown, Marcus. "A Cosmic Relic in Three Degrees," New Scientist, September 29, 1988, pp. 51–55.
Crawford, A.B., D.C. Hogg and L.E. Hunt. "Project Echo: A Horn-Reflector Antenna for Space Communication," The Bell System Technical Journal, July 1961, pp. 1095–1099.
Disney, Michael. The Hidden Universe. New York: Macmillan Publishing Company, 1984.
Ferris, Timothy. The Red Limit: The Search for the Edge of the Universe. 2nd ed., New York: Quill Press, 1978.
Friedman, Herbert. The Amazing Universe. Washington, DC: National Geographic Society, 1975.
Hey, J.S. The Evolution of Radio Astronomy. New York: Neale Watson Academic Publications, Inc., 1973.
Jastrow, Robert. God and the Astronomers. New York : W. W. Norton & Company, Inc., 1978.
H.T. Kirby-Smith U.S. Observatories: A Directory and Travel Guide. New York: Van Nostrand Reinhold Company, 1976.
Penzias, A.A., and R. W. Wilson. "A Measurement of the Flux Density of CAS A At 4080 Mc/s," Astrophysical Journal Letters, May 1965, pp. 1149–1154.
Further reading
External links
Buildings and structures in Monmouth County, New Jersey
Holmdel Township, New Jersey
National Historic Landmarks in New Jersey
Physical cosmology
Radio telescopes
National Register of Historic Places in Monmouth County, New Jersey | Holmdel Horn Antenna | [
"Physics",
"Astronomy"
] | 1,598 | [
"Astrophysics",
"Theoretical physics",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
639,790 | https://en.wikipedia.org/wiki/Discovery%20of%20cosmic%20microwave%20background%20radiation | The discovery of cosmic microwave background radiation constitutes a major development in modern physical cosmology. In 1964, US physicist Arno Allan Penzias and radio astronomer Robert Woodrow Wilson discovered the cosmic microwave background (CMB), estimating its temperature as 3.5 K, as they experimented with the Holmdel Horn Antenna. The new measurements were accepted as important evidence for a hot early Universe (the Big Bang theory) and as evidence against the rival steady state theory, since theoretical work around 1950 had shown the need for a CMB for consistency with the simplest relativistic universe models. In 1978, Penzias and Wilson were awarded the Nobel Prize for Physics for their joint measurement. There had been a prior measurement of the cosmic background radiation by Andrew McKellar in 1941, at an effective temperature of 2.3 K, using CN stellar absorption lines observed by W. S. Adams. McKellar made no reference to the CMB, and it was not until much later, after the Penzias and Wilson measurements, that the significance of his measurement was understood.
History
By the middle of the 20th century, cosmologists had developed two different theories to explain the creation of the universe. Some supported the steady-state theory, which states that the universe has always existed and will continue to survive without noticeable change. Others believed in the Big Bang theory, which states that the universe was created in a massive explosion-like event billions of years ago (later determined to be approximately 13.8 billion years).
In 1941, Andrew McKellar used W. S. Adams' spectroscopic observations of CN absorption lines in the spectrum of a B type star to measure a blackbody background temperature of 2.3 K. McKellar referred to his detection as a "'rotational' temperature of interstellar molecules", without reference to a cosmological interpretation, stating that the temperature "will have its own, perhaps limited, significance".
Over two decades later, working at a Bell Telephone Laboratories facility atop Crawford Hill in Holmdel, New Jersey, in 1964, Arno Penzias and Robert Wilson were experimenting with a supersensitive, 6 meter (20 ft) horn antenna originally built to detect radio waves bounced off Echo balloon satellites. To measure these faint radio waves, they had to eliminate all recognizable interference from their receiver. They removed the effects of radar and radio broadcasting, and suppressed interference from the heat in the receiver itself by cooling it with liquid helium to −269 °C, only 4 K above absolute zero.
When Penzias and Wilson reduced their data, they found a low, steady, mysterious noise that persisted in their receiver. This residual noise was 100 times more intense than they had expected, was evenly spread over the sky, and was present day and night. They were certain that the radiation they detected on a wavelength of 7.35 centimeters did not come from the Earth, the Sun, or our galaxy. After thoroughly checking their equipment, removing some pigeons nesting in the antenna and cleaning out the accumulated droppings, the noise remained. Both concluded that this noise was coming from outside our own galaxy—although they were not aware of any radio source that would account for it.
At that same time, Robert H. Dicke, Jim Peebles, and David Wilkinson, astrophysicists at Princeton University just away, were preparing to search for microwave radiation in this region of the spectrum. Dicke and his colleagues reasoned that the Big Bang must have scattered not only the matter that condensed into galaxies, but also must have released a tremendous blast of radiation. With the proper instrumentation, this radiation should be detectable, albeit as microwaves, due to a massive redshift.
When his friend Bernard F. Burke, a professor of physics at MIT, told Penzias about a preprint paper he had seen by Jim Peebles on the possibility of finding radiation left over from an explosion that filled the universe at the beginning of its existence, Penzias and Wilson began to realize the significance of what they believed was a new discovery. The characteristics of the radiation detected by Penzias and Wilson fit exactly the radiation predicted by Robert H. Dicke and his colleagues at Princeton University. Penzias called Dicke at Princeton, who immediately sent him a copy of the still-unpublished Peebles paper. Penzias read the paper and called Dicke again and invited him to Bell Labs to look at the horn antenna and listen to the background noise. Dicke, Peebles, Wilkinson and P. G. Roll interpreted this radiation as a signature of the Big Bang.
To avoid potential conflict, they decided to publish their results jointly. Two notes were rushed to the Astrophysical Journal Letters. In the first, Dicke and his associates outlined the importance of cosmic background radiation as substantiation of the Big Bang Theory. In a second note, jointly signed by Penzias and Wilson titled, "A Measurement of Excess Antenna Temperature at 4080 Megacycles per Second," they reported the existence of a 3.5 K residual background noise, remaining after accounting for a sky absorption component of 2.3 K and a 0.9 K instrumental component, and attributed a "possible explanation" as that given by Dicke in his companion letter.
In 1978, Penzias and Wilson were awarded the Nobel Prize for Physics for their joint detection. They shared the prize with Pyotr Kapitsa, who won it for unrelated work. In 2019, Jim Peebles was also awarded the Nobel Prize for Physics, “for theoretical discoveries in physical cosmology”.
Bibliography
References
External links
"Astronomy and Astrophysics Horn Antenna.". National Park Service, Department of the Interior.
Radio astronomy
Physical cosmology
1960s in science | Discovery of cosmic microwave background radiation | [
"Physics",
"Astronomy"
] | 1,167 | [
"Theoretical physics",
"Astrophysics",
"Radio astronomy",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
640,046 | https://en.wikipedia.org/wiki/Norden%20bombsight | The Norden Mk. XV, known as the Norden M series in U.S. Army service, is a bombsight that was used by the United States Army Air Forces (USAAF) and the United States Navy during World War II, and the United States Air Force in the Korean and the Vietnam Wars. It was an early tachometric design, which combined optics, a mechanical computer, and an autopilot for the first time to not merely identify a target but fly the airplane to it. The bombsight directly measured the aircraft's ground speed and direction, which older types could only estimate with lengthy manual procedures. The Norden further improved on older designs by using an analog computer that continuously recalculated the bomb's impact point based on changing flight conditions, and an autopilot that reacted quickly and accurately to changes in the wind or other effects.
Together, these features promised unprecedented accuracy for daytime bombing from high altitudes. During prewar testing the Norden demonstrated a circular error probable (CEP), an astonishing performance for that period. This precision would enable direct attacks on ships, factories, and other point targets. Both the Navy and the USAAF saw it as a means to conduct successful high-altitude bombing. For example, an invasion fleet could be destroyed long before it could reach U.S. shores.
To protect these advantages, the Norden was granted the utmost secrecy well into the war, and was part of a production effort on a similar scale to the Manhattan Project: the overall cost (both R&D and production) was $1.1 billion, as much as 2/3 of the latter or over a quarter of the production cost of all B-17 bombers. The Norden was not as secret as believed; both the British SABS and German Lotfernrohr 7 worked on similar principles, and details of the Norden had been passed to Germany even before the war started.
Under combat conditions the Norden did not achieve its expected precision, yielding an average CEP in 1943 of , similar to other Allied and German results. Both the Navy and Air Forces had to give up using pinpoint attacks. The Navy turned to dive bombing and skip bombing to attack ships, while the Air Forces developed the lead bomber procedure to improve accuracy, and adopted area bombing techniques for ever-larger groups of aircraft. Nevertheless, the Norden's reputation as a pin-point device endured, due in no small part to Norden's own advertising of the device after secrecy was reduced late in the war.
The Norden saw reduced use in the post–World War II period after radar-based targeting was introduced, but the need for accurate daytime attacks kept it in service, especially during the Korean War. The last combat use of the Norden was in the U.S. Navy's VO-67 squadron, which used it to drop sensors onto the Ho Chi Minh Trail in 1967. The Norden remains one of the best-known bombsights.
History and development
Early work
The Norden sight was designed by Carl Norden, a Dutch engineer educated in Switzerland who immigrated to the U.S. in 1904. In 1911, Norden joined Sperry Gyroscope to work on ship gyrostabilizers, and then moved to work directly for the U.S. Navy as a consultant. At the Navy, Norden worked on a catapult system for a proposed flying bomb that was never fully developed, but this work introduced various Navy personnel to Norden's expertise with gyro stabilization.
World War I bomb sight designs had improved rapidly, with the ultimate development being the Course Setting Bomb Sight, or CSBS. This was essentially a large mechanical calculator that directly represented the wind triangle using three long pieces of metal in a triangular arrangement. The hypotenuse of the triangle was the line the aircraft needed to fly along in order to arrive over the target in the presence of wind, which, before the CSBS, was an intractable problem. Almost all air forces adopted some variation of the CSBS as their standard inter-war bomb sight, including the U.S. Navy, who used a modified version known as the Mark III.
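The wind triangle that the CSBS represented mechanically can also be solved directly. A hedged sketch (variable names and the simple flat-Earth treatment are assumptions, not how any particular sight was implemented):

```python
import math

def wind_triangle(true_airspeed, track_deg, wind_speed, wind_from_deg):
    """Heading to fly and resulting ground speed for a desired track over the
    ground, given the wind speed and the direction the wind blows FROM."""
    track = math.radians(track_deg)
    wind_to = math.radians(wind_from_deg + 180.0)        # direction the wind blows toward
    cross = wind_speed * math.sin(wind_to - track)       # crosswind component on the track
    wca = math.asin(cross / true_airspeed)               # wind correction ("crab") angle
    ground_speed = true_airspeed * math.cos(wca) + wind_speed * math.cos(wind_to - track)
    heading = math.degrees(track - wca) % 360.0
    return heading, ground_speed

# 100 kn airspeed, desired track due north, 20 kn wind from the west:
print(wind_triangle(100.0, 0.0, 20.0, 270.0))   # heading ~348.5 deg, ground speed ~98 kn
```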
It was already realized that one major source of error in bombing was levelling the aircraft precisely enough that the bombsight pointed straight down; even small errors in levelling could produce dramatic errors in accuracy. The US Army did not adopt the CSBS and instead used a simpler design, the Estoppey D-series, which automatically levelled the sight during use. Navy experiments showed these roughly doubled accuracy, so the Navy began a series of developments to add a gyroscopic stabilizer to its bombsights. In addition to new designs like the Inglis (working with Sperry) and Seversky, Norden was asked to provide an external stabilizer for the Navy's existing Mark III designs.
Mark III-A
Although the CSBS and similar designs allowed the calculation of the proper flight angle needed to correct for windage, they did so by looking downward out of the aircraft. Very simple bombsights could be operated by the pilot, but as their sophistication grew they demanded full-time operators. This task was often given to the front or rear gunner. In Army aircraft they would sit near enough to the pilot to indicate any required directional adjustments using hand signals, or if they sat behind the pilot, using strings attached to the pilot's jacket.
The Navy's first bombers were large flying boats, where the pilot sat well away from the front of the fuselage, and one could not simply cut a hole for the bombsight to view through. Instead, the bombs were normally aimed by an observer in the nose of the aircraft. This made communications with the pilot very difficult. To address this, the Navy developed the concept of the pilot direction indicator, or PDI, an electrically-driven pointer that the observer used to indicate which direction to turn. The bombardier used switches to move the pointer on his unit to indicate the direction of the target, which was duplicated on the unit in front of the pilot so he could maneuver the aircraft to follow suit.
Norden's attempt to fit a stabilizer to the Mark III, the Mark III-A, also included a separate contract to develop a new automatic PDI. Norden proposed removing the electrical switches used to move the pointer and using the entire bombsight itself as the indicator. In place of the thin metal wires that formed the sights on the Mark III, a small low-power telescope would be used in its place. The bombardier would rotate the telescope left or right to follow the target. This motion would cause the gyros to precess, and this signal would drive the PDI automatically. The pilot would follow the PDI as before.
Norden initially delivered three prototypes of the stabilized bombsight without the automatic PDI. In testing, the Navy found that while the system did improve accuracy when it worked, it was complicated to use and often failed, leaving the real-world accuracy no better than before. They asked Norden for suggestions on ways to improve this. They were still interested in the PDI work, and the contract was allowed to continue.
Mark XI
Norden suggested that the only way to improve accuracy would be to directly measure the ground speed, as opposed to calculating it using the CSBS's wind triangle. To time the drop, Norden used an idea already in use on other bombsights, the "equal distance" concept. This was based on the observation that the time needed to travel a certain distance over the ground would remain relatively constant during the bomb run, as the wind would not be expected to change dramatically over a short period of time. If a distance on the ground, or in practice an angle in the sky, could be accurately marked out, then timing the passage over that distance would give all the information needed to time the drop.
Norden's version of the system was very similar to the Army's Estoppey D-4 of the same era, differing largely in the physical details of the actual sights. The D-4 used thin wires as the sights, while Norden's would use the small telescope of the Mark III-A. To use the system, the bombardier looked up the expected time it would take for the bombs to fall from the current altitude. This time was set into a countdown stopwatch, and the sights were set to the angle that the bombs would fall if there was no wind. The bombardier waited for the target to line up with a crosshair in the telescope. When it did, the timer was started, and the bombardier rotated the telescope around its vertical axis to track the target as they flew toward it. This movement was linked to a second crosshair through a gearing system. The bombardier continued moving the telescope until the timer ran out. The second crosshair was now at the correct aiming angle, or range angle, after accounting for any difference between groundspeed and airspeed. The bombardier then waited for the target to pass through the second crosshair to time the drop.
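The geometry behind such an equal-distance sight can be sketched with a drag-free trajectory (a simplifying assumption; real sights relied on ballistic tables that include air resistance):

```python
import math

def range_angle_deg(altitude_m, ground_speed_ms, g=9.81):
    """Dropping ('range') angle from the vertical for a drag-free bomb:
    how far ahead of the target the aircraft must release."""
    fall_time = math.sqrt(2.0 * altitude_m / g)     # time to fall from altitude
    forward_travel = ground_speed_ms * fall_time    # distance the bomb carries forward
    return math.degrees(math.atan2(forward_travel, altitude_m))

# From 6,000 m at 80 m/s ground speed: fall time ~35 s, range angle ~25 degrees.
print(round(range_angle_deg(6000.0, 80.0), 1))
```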
In 1924, the first prototype of this design, known to the Navy as the Mark XI, was delivered to the Navy's proving grounds in Virginia. In testing, the system proved disappointing. The circular error probable (CEP), a circle into which 50% of the bombs would fall, was wide from only altitude. This was an error of over 3.6%, somewhat worse than existing systems. Moreover, bombardiers universally complained that the device was far too hard to use. Norden worked tirelessly on the design, and by 1928, the accuracy had improved to 2% of altitude. This was enough for the Navy's Bureau of Ordnance to place a US$348,000 contract (equivalent to $ million in ) for 80 production examples.
Norden was known for his confrontational and volatile nature. He often worked 16-hour days and thought little of anyone who did not. Navy officers began to refer to him as "Old Man Dynamite". During development, the Navy asked Norden to consider taking on a partner to handle the business and leave Norden free to develop the engineering side. They recommended former Army colonel Theodore Barth, an engineer who had been in charge of gas mask production during World War I. The match-up was excellent, as Barth had the qualities Norden lacked: charm, diplomacy, and a head for business. The two became close friends.
Initial U.S. Army interest
In December 1927, the United States Department of War was granted permission to use a bridge over the Pee Dee River in North Carolina for target practice, as it would soon be sunk in the waters of a new dam. The 1st Provisional Bombardment Squadron, equipped with Keystone LB-5 bombers, attacked the bridge over a period of five days, flying 20 missions a day in perfect weather and attacking at altitudes from . After this massive effort, the middle section of the bridge finally fell on the last day. However, the effort as a whole was clearly a failure in any practical sense.
About the same time as the operation was being carried out, General James Fechet replaced General Mason Patrick as commander of the USAAC. He received a report on the results of the test, and on 6 January 1928 sent out a lengthy memo to Brigadier General William Gillmore, chief of the Material Division at Wright Field, stating:
He went on to request information on every bombsight then used at Wright, as well as "the Navy's newest design". However, the Mark XI was so secret that Gillmore was not aware Fechet was referring to the Norden. Gillmore produced contracts for twenty-five examples of an improved version of the Seversky C-1, the C-3, and six prototypes of a new design known as the Inglis L-1. The L-1 never matured, and Inglis later helped Seversky to design the improved C-4.
The wider Army establishment became aware of the Mark XI in 1929 and was eventually able to buy an example in 1931. Their testing mirrored the Navy's experience; they found that the gyro stabilization worked and the sight was accurate, but it was also "entirely too complicated" to use. The Army turned its attention to further upgraded versions of their existing prototypes, replacing the older vector bombsight mechanisms with the new synchronous method of measuring the proper dropping angle.
Fully automatic bombsight
While the Mk. XI was reaching its final design, the Navy learned of the Army's efforts to develop a synchronous bombsight, and asked Norden to design one for them. Norden was initially unconvinced this was workable, but the Navy persisted and offered him a development contract in June 1929. Norden retreated to his mother's house in Zürich and returned in 1930 with a working prototype. Lieutenant Frederick Entwistle, the Navy's chief of bombsight development, judged it revolutionary.
The new design, the Mark XV, was delivered in production quality in the summer of 1931. In testing, it proved to eliminate all of the problems of the earlier Mk. XI design. From altitude the prototype delivered a CEP of , while even the latest production Mk. XI's were . At higher altitudes, a series of 80 bomb runs demonstrated a CEP of . In a test on 7 October 1931, the Mk. XV dropped 50% of its bombs on a static target, the USS Pittsburgh, while a similar aircraft with the Mk. XI had only 20% of its bombs hit.
Moreover, the new system was dramatically simpler to use. After locating the target in the sighting system, the bombardier simply made fine adjustments using two control wheels throughout the bomb run. There was no need for external calculation, lookup tables or pre-run measurements; everything was carried out automatically through an internal wheel-and-disc calculator. The calculator took a short time to settle on a solution, with setups as short as six seconds, compared to the 50 seconds needed for the Mk. XI to measure its ground speed. In most cases, the bomb run needed to be only 30 seconds long.
Despite this success, the design also demonstrated several serious problems. In particular, the gyroscopic platform had to be levelled before use with the aid of several spirit levels, and then checked and repeatedly reset for accuracy. Worse, the gyros had a limited degree of movement, and if the plane banked far enough the gyro would reach its limit and have to be re-set from scratch – something that could happen even due to strong turbulence. If the gyros were found to be off, the levelling procedure took as long as eight minutes. Other minor problems were the direct current electric motors which drove the gyroscopes, whose brushes wore down quickly and left carbon dust throughout the interior of the device, and the positioning of the control knobs, which meant the bombardier could adjust either the side-to-side or the up-and-down aim at any one time, but not both. But despite all of these problems, the Mark XV was so superior to any other design that the Navy ordered it into production.
Carl L. Norden Company was incorporated in 1931, supplying the sights under a dedicated source contract. In effect, the company was owned by the Navy. In 1934 the newly-forming GHQ Air Force, the purchasing arm of the U.S. Army Air Corps, selected the Norden for their bombers as well, referring to it as the M-1. However, due to the dedicated source contract, the Army had to buy the sights from the Navy. This was not only annoying for inter-service rivalry reasons, but the Air Corps' higher-speed bombers demanded several changes to the design, notably the ability to aim the sighting telescope further forward to give the bombardier more time to set up. The Navy was not interested in these changes, and would not promise to work them into the production lines. Worse, Norden's factories were having serious problems keeping up with demand for the Navy alone, and in January 1936, the Navy suspended all shipments to the Army.
Autopilot
Mk. XVs were initially installed with the same automatic pilot direction indicator (PDI) as the earlier Mk. XI. In practice, it was found that the pilots had a very difficult time keeping the aircraft stable enough to match the accuracy of the bombsight. Starting in 1932 and proceeding in fits and starts for the next six years, Norden developed the Stabilized Bombing Approach Equipment (SBAE), a mechanical autopilot that attached to the bombsight. However, it was not a true "autopilot", in that it could not fly the aircraft by itself. By rotating the bombsight in relation to the SBAE, the SBAE could account for wind and turbulence and calculate the appropriate directional changes needed to bring the aircraft onto the bomb run far more precisely than a human pilot. The minor adaptations needed on the bombsight itself produced what the Army referred to as the M-4 model.
In 1937 the Army, faced with the continuing supply problems with the Norden, once again turned to Sperry Gyroscope to see if they could come up with a solution. Their earlier models had all proved unreliable, but they had continued working with the designs throughout this period and had addressed many of the problems. By 1937, Orland Esval had introduced a new AC-powered electrical gyroscope that spun at 30,000 RPM, compared to the Norden's 7,200 RPM, which dramatically improved the performance of the inertial platform. The use of three-phase AC power and inductive pickup eliminated the carbon brushes, and further simplified the design. Carl Frische had developed a new system to automatically level the platform, eliminating the time-consuming process needed on the Norden. The two collaborated on a new design, adding a second gyro to handle heading changes, and named the result the Sperry S-1. Existing stocks of Nordens continued to go to the USAAC's B-17s, while the S-1 equipped the B-24Es being sent to the 15th Air Force.
Some B-17s had been equipped with a simple heading-only autopilot, the Sperry A-3. The company had also been working on an all-electronic model, the A-5, which stabilized in all three directions. By the early 1930s, it was being used in a variety of Navy aircraft to excellent reviews. By connecting the outputs of the S-1 bombsight to the A-5 autopilot, Sperry produced a system similar to the M-4/SBAE, but it reacted far more quickly. The combination of the S-1 and A-5 so impressed the Army that on 17 June 1941 they authorized the construction of a factory and noted that "in the future all production models of bombardment airplanes be equipped with the A-5 Automatic Pilot and have provisions permitting the installation of either the M-Series [Norden] Bombsight or the S-1 Bombsight".
British interest, Tizard mission
By 1938, information about the Norden had worked its way up the Royal Air Force chain of command and was well known within that organization. The British had been developing a tachometric bombsight of their own known as the Automatic Bomb Sight, but combat experience in 1939 demonstrated the need for it to be stabilized. Work was underway on a stabilized version, the Stabilized Automatic Bomb Sight (SABS), but it would not be available until 1940 at the earliest, and likely later. Even then, it did not feature the autopilot linkage of the Norden, and would thus find it difficult to match the Norden's performance in anything but smooth air. Acquiring the Norden became a major goal.
The RAF's first attempt, in the spring of 1938, was rebuffed by the U.S. Navy. Air Chief Marshal Edgar Ludlow-Hewitt, commanding RAF Bomber Command, demanded Air Ministry action. They wrote to George Pirie, the British air attaché in Washington, suggesting he approach the U.S. Army with an offer of an information exchange with their own SABS. Pirie replied that he had already looked into this, and was told that the U.S. Army had no licensing rights to the device as it was owned by the U.S. Navy. The matter was not helped by a minor diplomatic issue that flared up in July when a French air observer was found to be on board a crashed Douglas Aircraft Company bomber, forcing President Roosevelt to promise no further information exchanges with foreign powers.
Six months later, after a change of leadership within the U.S. Navy's Bureau of Aeronautics, on 8 March 1939, Pirie was once again instructed to ask the U.S. Navy about the Norden, this time enhancing the deal with offers of British power-operated turrets. However, Pirie expressed concern, noting that the Norden had become as much a political issue as a technical one; its relative merits were being publicly debated in Congress weekly while the U.S. Navy continued to say the Norden was "the United States' most closely guarded secret".
The RAF's desires were only further goaded on 13 April 1939, when Pirie was invited to watch an air demonstration at Fort Benning where the painted outline of a battleship was the target.
The lead B-17 hit the outline, as did the three that followed it, and then a flight of a dozen Douglas B-18 Bolos placed most of their bombs in a separate square outlined on the ground.
Another change of management within the Bureau of Aeronautics had the effect of making the U.S. Navy more friendly to British overtures, but no one was willing to fight the political battle needed to release the design. The Navy brass was concerned that giving the Norden to the RAF would increase its chances of falling into German hands, which could put the U.S.'s own fleet at risk. The UK Air Ministry continued increasing pressure on Pirie, who eventually stated there was simply no way for him to succeed, and suggested the only way forward would be through the highest diplomatic channels in the Foreign Office. Initial probes in this direction were also rebuffed. When a report stated that the Norden's results were three to four times as good as their own bombsights, the Air Ministry decided to sweeten the pot and suggested they offer information on radar in exchange. This too was rebuffed.
The matter eventually worked its way to the Prime Minister, Neville Chamberlain, who wrote personally to President Roosevelt asking for the Norden, but even this was rejected. The reason for these rejections was more political than technical, but the U.S. Navy's demands for secrecy were certainly important. They repeated that the design would be released only if the British could demonstrate the basic concept was common knowledge, and therefore not a concern if it fell into German hands. The British failed to convince them, even after offering to equip their examples with a variety of self-destruct devices.
This may have been ameliorated by the winter of 1939, at which point a number of articles about the Norden appeared in the U.S. popular press with reasonably accurate descriptions of its basic workings. But when these were traced back to the press corps at the U.S. Army Air Corps, the U.S. Navy was apoplectic. Instead of accepting it was now in the public domain, any discussion about the Norden was immediately shut down. This drove both the British Air Ministry and Royal Navy to increasingly anti-American attitudes when they considered sharing their own developments, notably newer ASDIC systems. By 1940 the situation on scientific exchange was entirely deadlocked as a result.
Looking for ways around the deadlock, Henry Tizard sent Archibald Vivian Hill to the U.S. to take a survey of U.S. technical capability in order to better assess what technologies the U.S. would be willing to exchange. This effort was the start on the path that led to the famous Tizard Mission in late August 1940. Ironically, by the time the Mission was being planned, the Norden had been removed from the list of items to be discussed, and Roosevelt personally noted this was due largely to political reasons. Ultimately, although Tizard was unable to convince the U.S. to release the design, he was able to request information about its external dimensions and details on the mounting system so it could be easily added to British bombers if it were released in the future.
Production, problems, and Army standardization
The conversion of Norden Laboratories Corporation's New York City engineering lab to a production factory was a long process. Before the war, skilled craftsmen, most of them German or Italian immigrants, hand-made almost every part of the 2,000-part machine. Between 1932 and 1938, the company produced only 121 bombsights per year. During the first year after the Attack on Pearl Harbor, Norden produced 6,900 bombsights, three-quarters of which went to the U.S. Navy.
When Norden heard of the U.S. Army's dealings with Sperry, Theodore Barth called a meeting with the U.S. Army and U.S. Navy at their factory in New York City. Barth offered to build an entirely new factory just to supply the U.S. Army, but the U.S. Navy refused this. Instead, the U.S. Army suggested that Norden adapt their sight to work with Sperry's A-5, which Barth refused. Norden actively attempted to make the bombsight incompatible with the A-5.
It was not until 1942 that the impasse was finally solved by farming out autopilot production to Honeywell Regulator, which combined features of the Norden-mounted SBAE with the aircraft-mounted A-5 to produce what the U.S. Army referred to as "Automatic Flight Control Equipment" (AFCE); the unit was later redesignated the C-1. The Norden, now connected with the aircraft's built-in autopilot, allowed the bombardier alone to fully control minor movements of the aircraft during the bombing run.
By May 1943 the U.S. Navy was complaining that they had a surplus of devices, with full production turned over to the USAAF. After investing more than $100 million in Sperry bombsight manufacturing plants, the USAAF concluded that the Norden M-series was far superior in accuracy, dependability, and design. Sperry contracts were cancelled in November 1943. When production ended a few months later, 5,563 Sperry bombsight-autopilot combinations had been built, most of which were installed in Consolidated B-24 Liberator bombers.
Expansion of Norden bombsight production to a final total of six factories took several years. The USAAF demanded additional production to meet their needs, and eventually arranged for the Victor Adding Machine company to gain a manufacturing license, and then Remington Rand. Ironically, during this period the U.S. Navy abandoned the Norden in favor of dive bombing, reducing the demand. By the end of the war, Norden and its subcontractors had produced 72,000 M-9 bombsights for the U.S. Army Air Force alone, costing $8,800 each.
Description and operation
Background
Typical bombsights of the pre-war era worked on the "vector bombsight" principle introduced with the World War I Course Setting Bomb Sight. These systems consisted of a slide rule-type calculator that was used to calculate the effects of the wind on the bomber based on simple vector arithmetic. The mathematical principles are identical to those on the E6B calculator used to this day.
In operation, the bombardier would first take a measurement of the wind speed using one of a variety of methods, and then dial that speed and direction into the bombsight. This would move the sights to indicate the direction the plane should fly to take it directly over the target with any cross-wind taken into account, and also set the angle of the iron sights to account for the wind's effect on ground speed.
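The following is a minimal sketch of the wind-triangle arithmetic that such sights (and the E6B) mechanize; the function name and the sample numbers are illustrative only and do not come from the sources cited in this article.

```python
import math

def wind_triangle(true_airspeed, heading_deg, wind_speed, wind_from_deg):
    """Ground vector = air vector + wind vector (all speeds in the same units).

    heading_deg: direction the aircraft's nose points, degrees clockwise from north.
    wind_from_deg: direction the wind blows FROM, degrees clockwise from north.
    Returns (ground_speed, track_deg): the speed and direction actually made good.
    """
    hdg = math.radians(heading_deg)
    wind_to = math.radians(wind_from_deg + 180.0)  # direction the wind blows toward
    # North and east components of the combined motion
    vn = true_airspeed * math.cos(hdg) + wind_speed * math.cos(wind_to)
    ve = true_airspeed * math.sin(hdg) + wind_speed * math.sin(wind_to)
    ground_speed = math.hypot(vn, ve)
    track_deg = math.degrees(math.atan2(ve, vn)) % 360.0
    return ground_speed, track_deg

# Illustrative numbers: 150 kn airspeed heading due north, 30 kn wind from the west.
gs, track = wind_triangle(150.0, 0.0, 30.0, 270.0)
print(round(gs, 1), round(track, 1))   # ~153.0 kn, track ~11.3 degrees (drift to the east)
```

The ground speed and drift angle found this way are exactly the quantities the bombardier had to dial into a vector sight before the run.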
These systems had two primary problems in terms of accuracy. The first was that there were several steps that had to be carried out in sequence in order to set up the bombsight correctly, and there was limited time to do all of this during the bomb run. As a result, the accuracy of the wind measurement was always limited, and errors in setting the equipment or making the calculations were common. The second problem was that the sight was attached to the aircraft, and thus moved about during maneuvers, during which time the bombsight would not point at the target. As the aircraft had to maneuver in order to make the proper approach, this limited the time allowed to accurately make corrections. This combination of issues demanded a long bomb run.
Experiments had shown that adding a stabilizer system to a vector bombsight would roughly double the accuracy of the system. This would allow the bombsight to remain level while the aircraft maneuvered, giving the bombardier more time to make his adjustments, as well as reducing or eliminating mis-measurements when sighting off of non-level sights. However, this would not have any effect on the accuracy of the wind measurements, nor the calculation of the vectors. The Norden attacked all of these problems.
Basic operation
To improve the calculation time, the Norden used a mechanical computer inside the bombsight to calculate the range angle of the bombs. By simply dialing in the aircraft's altitude and heading, along with estimates of the wind speed and direction (in relation to the aircraft), the computer would automatically, and quickly, calculate the aim point. This not only reduced the time needed for the bombsight setup but also dramatically reduced the chance for errors. This attack on the accuracy problem was by no means unique; several other bombsights of the era used similar calculators. It was the way the Norden used these calculations that differed.
Conventional bombsights are set up pointing at a fixed angle, the range angle, which accounts for the various effects on the trajectory of the bomb. To the operator looking through the sights, the crosshairs indicate the location on the ground the bombs would impact if released at that instant. As the aircraft moves forward, the target approaches the crosshairs from the front, moving rearward, and the bombardier releases the bombs as the target passes through the line of the sights. One example of a highly automated system of this type was the RAF's Mark XIV bomb sight.
The Norden worked in an entirely different fashion, based on the "synchronous" or "tachometric" method. Internally, the calculator continually computed the impact point, as was the case for previous systems. However, the resulting range angle was not displayed directly to the bombardier or dialed into the sights. Instead, the bombardier used the sighting telescope to locate the target long in advance of the drop point. A separate section of the calculator used the inputs for altitude and airspeed to determine the angular velocity of the target, the speed at which it would be seen drifting backward due to the forward motion of the aircraft. The output of this calculator drove a rotating prism at that angular speed in order to keep the target centered in the telescope. In a properly adjusted Norden, the target remains motionless in the sights.
The Norden thus calculated two angles: the range angle based on the altitude, airspeed and ballistics; and the current angle to the target, based on the ground speed and heading of the aircraft. The difference between these two angles represented the "correction" that needed to be applied to bring the aircraft over the proper drop point. If the aircraft was properly aligned with the target on the bomb run, the difference between the range and target angles would be continually reduced, eventually to zero (within the accuracy of the mechanisms). At this moment the Norden automatically dropped the bombs.
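The sketch below illustrates the synchronous principle in a deliberately simplified form: it assumes level flight, a vacuum trajectory (the real sight's "trail" setting corrected for air resistance), and illustrative numbers. None of the function names or values come from the Norden itself.

```python
import math

G = 9.81  # m/s^2

def vacuum_range_angle(altitude_m, ground_speed_ms):
    """Dropping angle measured from the vertical, ignoring air resistance ("trail")."""
    fall_time = math.sqrt(2.0 * altitude_m / G)
    forward_travel = ground_speed_ms * fall_time
    return math.atan2(forward_travel, altitude_m)

def simulate_bomb_run(altitude_m, ground_speed_ms, initial_distance_m, dt=0.1):
    """Release when the sight angle to the target closes to the computed range angle."""
    range_angle = vacuum_range_angle(altitude_m, ground_speed_ms)
    x = initial_distance_m            # horizontal distance to the target, ahead of the aircraft
    t = 0.0
    while x > 0:
        sight_angle = math.atan2(x, altitude_m)   # current angle to the target, from the vertical
        if sight_angle <= range_angle:            # the two indices "meet": release
            return t, x
        x -= ground_speed_ms * dt
        t += dt
    return None

# Illustrative only: 6,000 m altitude, 110 m/s ground speed, target sighted 20 km out.
release_time, release_distance = simulate_bomb_run(6000.0, 110.0, 20000.0)
print(round(release_time, 1), "s,", round(release_distance), "m short of the target")
```

The key design point is visible even in this toy version: the release decision depends only on the two computed angles meeting, so any refinement of the ground-speed estimate made during the run directly improves the drop point.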
In practice, the target failed to stay centered in the sighting telescope when it was first set up. Instead, due to inaccuracies in the estimated wind speed and direction, the target would drift in the sight. To correct for this, the bombardier would use fine-tuning controls to slowly cancel out any motion through trial and error. These adjustments had the effect of updating the measured ground speed used to calculate the motion of the prisms, slowing the visible drift. Over a short period of time of continual adjustments, the drift would stop, and the bombsight would now hold an extremely accurate measurement of the exact ground speed and heading. Better yet, these measurements were being carried out on the bomb run, not before it, and helped eliminate inaccuracies due to changes in the conditions as the aircraft moved. And by eliminating the manual calculations, the bombardier was left with much more time to adjust his measurements, and thus settle at a much more accurate result.
The angular speed of the prism changes with the range of the target: consider the reverse situation, the apparent high angular speed of an aircraft passing overhead compared to its apparent speed when it is seen at a longer distance. In order to properly account for this non-linear effect, the Norden used a system of slip-disks similar to those used in differential analysers. However, this slow change at long distances made it difficult to fine-tune the drift early in the bomb run. In practice, bombardiers would often set up their ground speed measurements in advance of approaching the target area by selecting a convenient "target" on the ground that was closer to the bomber and thus had more obvious motion in the sight. These values would then be used as the initial setting when the target was later sighted.
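A short derivation, assuming level flight at constant ground speed and ignoring trail, makes this non-linearity explicit:

```latex
% Sight angle \theta, measured from the vertical, to a target a horizontal
% distance x ahead, seen from altitude h while closing at ground speed v
% (so dx/dt = -v):
\theta = \arctan\!\left(\frac{x}{h}\right),
\qquad
\left|\frac{d\theta}{dt}\right| = \frac{v\,h}{h^{2}+x^{2}} .
```

The drift rate is greatest directly overhead (x = 0, where it equals v/h) and falls off with the square of the distance, which is exactly the slow apparent motion at long range that made early fine-tuning difficult.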
System description
The Norden bombsight consisted of two primary parts, the gyroscopic stabilization platform on the left side, and the mechanical calculator and sighting head on the right side. They were essentially separate instruments, connecting through the sighting prism. The sighting eyepiece was located in the middle, between the two, in a less than convenient location that required some dexterity to use.
Before use, the Norden's stabilization platform had to be righted, as it slowly drifted over time and no longer kept the sight pointed vertically. Righting was accomplished through a time-consuming process of comparing the platform's attitude to small spirit levels seen through a glass window on the front of the stabilizer. In practice, this could take as long as eight and a half minutes. This problem was made worse by the fact that the platform's range of motion was limited, and could be tumbled even by strong turbulence, requiring it to be reset again. This problem seriously upset the usefulness of the Norden, and led the RAF to reject it once they received examples in 1942. Some versions included a system that quickly righted the platform, but this "Automatic Gyro Leveling Device" proved to be a maintenance problem, and was removed from later examples.
Once the stabilizer was righted, the bombardier would then dial in the initial setup for altitude, speed, and direction. The prism would then be "clutched out" of the computer, allowing it to be moved rapidly to search for the target on the ground. Later Nordens were equipped with a reflector sight to aid in this step. Once the target was located the computer was clutched in and started moving the prism to follow the target. The bombardier would begin making adjustments to the aim. As all of the controls were located on the right and had to be operated while sighting through the telescope, another problem with the Norden was that the bombardier could adjust only the vertical or the horizontal aim at a given time; his other arm was normally busy holding himself up above the telescope.
On top of the device, to the right of the sight, were two final controls. The first was the setting for "trail", which was pre-set at the start of the mission for the type of bombs being used. The second was the "index window" which displayed the aim point in numerical form. The bombsight calculated the current aim point internally and displayed this as a sliding pointer on the index. The current sighting point, where the prism was aimed, was also displayed against the same scale. In operation, the sight would be set far in advance of the aim point, and as the bomber approached the target the sighting point indicator would slowly slide toward the aim point. When the two met, the bombs were automatically released. The aircraft was moving over , so even minor interruptions in timing could dramatically affect aim.
Early examples, and most used by the Navy, had an output that directly drove a Pilot Direction Indicator meter in the cockpit. This eliminated the need to manually signal the pilot, as well as eliminating the possibility of error.
In U.S. Army Air Forces use, the Norden bombsight was attached to its autopilot base, which was in turn connected with the aircraft's autopilot. The Honeywell C-1 could be used as a conventional autopilot by the flight crew during the journey to the target area through a control panel in the cockpit, but was more commonly used under direct command of the bombardier. The Norden's box-like autopilot unit sat behind and below the sight and attached to it at a single rotating pivot. After control of the aircraft was passed to the bombardier during the bomb run, he would first rotate the entire Norden so the vertical line in the sight passed through the target. From that point on, the autopilot would attempt to guide the bomber so it followed the course set by the bombsight, adjusting the heading to zero out the drift rate that was fed to it through a coupling. As the aircraft turned onto the correct angle, a belt and pulley system rotated the sight back to match the changing heading. The autopilot was another reason for the Norden's accuracy, as it ensured the aircraft quickly followed the correct course and kept it on that course much more accurately than the pilots could.
Later in the war, the Norden was combined with other systems to widen the range of conditions under which bombing could succeed. Notable among these was the radar system called H2X (Mickey), which was used directly with the Norden bombsight. The radar proved most accurate in coastal regions, as the water surface and the coastline produced a distinctive radar echo.
Combat use
Early tests
The Norden bombsight was developed during a period of United States non-interventionism when the dominant U.S. military strategy was the defense of the U.S. and its possessions. A considerable amount of this strategy was based on stopping attempted invasions by sea, both with direct naval power, and starting in the 1930s, with USAAC airpower. Most air forces of the era invested heavily in dive bombers or torpedo bombers for these roles, but these aircraft generally had limited range; long-range strategic reach would require the use of an aircraft carrier. The Army felt the combination of the Norden and B-17 Flying Fortress presented an alternate solution, believing that small formations of B-17s could successfully attack shipping at long distances from the USAAC's widespread bases. The high altitudes the Norden allowed would help increase the range of the aircraft, especially if equipped with a turbocharger, as with each of the four Wright Cyclone 9 radial engines of the B-17.
In 1940, Barth claimed that "we do not regard a 15 foot (4.6 m) square... as being a very difficult target to hit from an altitude of ". At some point the company started using pickle barrel imagery to reinforce the bombsight's reputation. After the device became publicly known in 1942, the Norden company in 1943 rented Madison Square Garden and inserted its own show between the presentations of the Ringling Bros. and Barnum & Bailey Circus. The show involved dropping a wooden "bomb" into a pickle barrel, at which point a pickle popped out.
These claims were greatly exaggerated; in 1940 the average score for an Air Corps bombardier was a circular error of from , not from . Real-world performance was poor enough that the Navy de-emphasized level attacks in favor of dive bombing almost immediately. The Grumman TBF Avenger could mount the Norden, like the preceding Douglas TBD Devastator, but combat use was disappointing and eventually described as "hopeless" during the Guadalcanal Campaign. Although the Navy gave up on the device in 1942, bureaucratic inertia meant the sights were supplied as standard equipment until 1944.
USAAF anti-shipping operations in the Far East were generally unsuccessful. In early operations during the Battle of the Philippines, B-17s claimed to have sunk one minesweeper and damaged two Japanese transports, the cruiser , and the destroyer . However, all of these ships are known to have suffered no damage from air attack during that period. In other early battles, including the Battle of the Coral Sea and the Battle of Midway, no claims were made at all, although some hits were seen on docked targets. The USAAF eventually replaced all of its anti-shipping B-17s with other aircraft, and came to use the skip bombing technique in direct low-level attacks.
Air war in Europe
As U.S. participation in the war started, the U.S. Army Air Forces drew up widespread and comprehensive bombing plans based on the Norden. They believed the B-17 had a 1.2% probability of hitting a target from , meaning that 220 bombers would be needed for a 93% probability of one or more hits. This was not considered a problem, and the USAAF forecast the need for 251 combat groups to provide enough bombers to fulfill their comprehensive pre-war plans.
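Those two figures are consistent with treating each bomber's drop as an independent trial; the quick arithmetic check below uses only the 1.2% per-bomber probability quoted above.

```python
p_single = 0.012                      # quoted probability that a single B-17 hits the target
n_bombers = 220
p_at_least_one_hit = 1 - (1 - p_single) ** n_bombers
print(round(p_at_least_one_hit, 2))   # 0.93, matching the 93% figure
```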
After earlier combat trials proved troublesome, the Norden bombsight and its associated AFCE were used on a wide scale for the first time on the 18 March 1943 mission to Bremen-Vegesack, Germany. The 303d Bombardment Group dropped 76% of its load within a ring, representing a CEP well under . As at sea, many early missions over Europe demonstrated varied results; on wider inspection, only 50% of American bombs fell within a of the target, and American flyers estimated that as many as 90% of bombs could miss their targets. The average CEP in 1943 was , meaning that only 16% of the bombs fell within of the aiming point. A bomb, standard for precision missions after 1943, had a lethal radius of only .
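The percent-within-a-radius figures and CEP values quoted in this section can be related under the common assumption of a circularly symmetric normal error distribution. The ring radius used below is purely illustrative, since the specific distances are omitted in this text; the relation itself, not the numbers, is the point.

```python
import math

def fraction_within(radius, cep):
    """Fraction of bombs expected inside `radius`, assuming circular-normal errors."""
    return 1.0 - 0.5 ** ((radius / cep) ** 2)

def cep_from_fraction(radius, fraction):
    """CEP implied by observing `fraction` of bombs inside `radius` (same assumption)."""
    return radius / math.sqrt(math.log2(1.0 / (1.0 - fraction)))

# Illustrative only: if 76% of a load falls inside a 1,000 ft ring, the implied CEP
# is about 0.7 times the ring radius, i.e. roughly 700 ft.
print(round(cep_from_fraction(1000.0, 0.76)))   # ~697
```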
Faced with these poor results, Curtis LeMay started a series of reforms in an effort to address the problems. In particular, he introduced the "combat box" formation in order to provide maximum defensive firepower by densely packing the bombers. As part of this change, he identified the best bombardiers in his command and assigned them to the lead bomber of each box. Instead of every bomber in the box using their Norden individually, the lead bombardiers were the only ones actively using the Norden, and the rest of the box's aircraft dropped their bombs when they saw the lead's bombs leaving his aircraft. Although this spread the bombs over the area of the combat box, it could still improve accuracy over individual efforts. It also helped stop a problem where various aircraft, all slaved to their autopilots on the same target, would drift into each other. These changes did improve accuracy, which suggests that much of the problem was attributable to bombardier skill. However, true "precision" attacks still proved difficult or impossible.
When Jimmy Doolittle took over command of the 8th Air Force from Ira Eaker in early 1944, precision bombing attempts were dropped. Area bombing, like the RAF efforts, was widely used, with 750- and then 1,000-bomber raids against large targets. The main targets were railroad marshaling yards (27.4% of the bomb tonnage dropped), airfields (11.6%), oil refineries (9.5%), and military installations (8.8%). To some degree, the targets themselves were secondary; Doolittle used the bombers as an irresistible target to draw up Luftwaffe fighters into the ever-increasing swarms of Allied long-range escort fighters. As these missions broke the Luftwaffe, raids could be carried out at lower altitudes, and especially in bad weather, when the H2X radar could be used. Despite the move away from precision attacks, accuracy nevertheless improved. By 1945, the 8th was putting up to 60% of its bombs within , a CEP of about .
In continued pursuit of precision attacks, various remotely guided weapons were developed, notably the AZON and VB-3 Razon guided bombs.
Adaptations
The Norden operated by mechanically turning the viewpoint so the target remained stationary in the sight. The mechanism was designed for the low angular rates encountered at high altitudes, and thus had a relatively narrow range of tracking speeds. The Norden could not rotate the sight fast enough for bombing at low altitude, for instance. Typically this was solved by removing the Norden completely and replacing it with simpler sighting systems.
A good example of its replacement was the refitting of the Doolittle Raiders with a simple iron sight. Designed by Capt. C. Ross Greening, the sight was mounted to the existing pilot direction indicator, allowing the bombardier to make corrections remotely, like the bombsights of an earlier era.
However, the Norden combined two functions, aiming and stabilization. While the former was not useful at low altitudes, the latter could be even more useful, especially if flying in rough air near the surface. This led James "Buck" Dozier to mount a Doolittle-like sight on top of the stabilizer in the place of the sighting head in order to attack German submarines in the Caribbean Sea. This proved extraordinarily useful and was soon used throughout the fleet.
Postwar use
In the postwar era, the United States mostly stopped developing new precision bombsights. Initially this was because of the war's end but, as budgets increased during the Cold War, the deployment of nuclear weapons meant accuracies of around were sufficient, well within the capabilities of existing radar bombing systems. Only one major bombsight of note was developed, the Y-4, used on the Boeing B-47 Stratojet. This sight combined the images of the radar and a lens system in front of the aircraft, allowing them to be directly compared at once through a binocular eyepiece.
Bombsights on older aircraft, like the Boeing B-29 Superfortress and the later B-50, were left in their wartime state. When the Korean War began, these aircraft were pressed into service and the Norden once again became the USAF's primary bombsight. This occurred again when the Vietnam War started; in this case retired World War II technicians had to be called up in order to make the bombsights operational again. Its last use in combat was by the Naval Air Observation Squadron Sixty-Seven (VO-67), during the Vietnam War. The bombsights were used in Operation Igloo White for implanting Air-Delivered Seismic Intrusion Detectors (ADSID) along the Ho Chi Minh Trail.
Wartime security
Since the Norden was considered a critical wartime instrument, bombardiers were required to take an oath during their training stating that they would defend its secret with their own life if necessary. In case the plane should make an emergency landing on enemy territory, the bombardier would have to shoot the important parts of the Norden with a gun to disable it. The Douglas TBD Devastator torpedo bomber was originally equipped with flotation bags in the wings to aid the aircrew's escape after ditching, but they were removed once the Pacific War began; this ensured that the aircraft would sink, taking the Norden with it.
After each completed mission, bomber crews left the aircraft with a bag which they deposited in a safe ("the Bomb Vault"). This secure facility ("the AFCE and Bombsight Shop") was typically in one of the base's Nissen hut (Quonset hut) support buildings. The Bombsight Shop was manned by enlisted men who were members of a Supply Depot Service Group ("Sub Depot") attached to each USAAF bombardment group. These shops not only guarded the bombsights but performed critical maintenance on the Norden and related control equipment. This was probably the most technically skilled ground-echelon job, and certainly the most secret, of all the work performed by Sub Depot personnel. The non-commissioned officer in charge and his staff had to have a high aptitude for understanding and working with mechanical devices.
As the end of World War II neared, the bombsight was gradually downgraded in its secrecy; however, it was not until 1944 that the first public display of the instrument occurred.
Espionage
Despite the security precautions, the entire Norden system had been passed to the Germans before the war started. Herman W. Lang, a German spy, had been employed by the Carl L. Norden Company. During a visit to Germany in 1938, Lang conferred with German military authorities and reconstructed plans of the confidential materials from memory. In 1941, Lang, along with the 32 other German agents of the Duquesne Spy Ring, was arrested by the FBI and convicted in the largest espionage prosecution in U.S. history. He received a sentence of 18 years in prison on espionage charges and a two-year concurrent sentence under the Foreign Agents Registration Act.
German instruments were fairly similar to the Norden, even before World War II. A similar set of gyroscopes provided a stabilized platform for the bombardier to sight through, although the complex interaction between the bombsight and autopilot was not used. The Carl Zeiss Lotfernrohr 7, or Lotfe 7, was an advanced mechanical system similar to the Norden bombsight, although in form it was more similar to the Sperry S-1. It started replacing the simpler Lotfernrohr 3 and BZG 2 in 1942, and emerged as the primary late-war bombsight used in most Luftwaffe level bombers. The use of the autopilot allowed single-handed operation, and was key to bombing use of the single-crewed Arado Ar 234.
Japanese forces captured examples of the Norden, primarily from North American B-25 Mitchell bombers. They developed a simplified and more compact version known as the Type 4 Automatic Bombing Sight, but found it too complex to mass produce. Further development led to the Type 1 Model 2 Automatic Bombing Sight which began limited production just before the end of the war. Approximately 20 were in service at the end of the war.
Postwar analysis
Postwar analysis placed the overall accuracy of daylight precision attacks with the Norden at about the same level as radar bombing efforts. The 8th Air Force put 31.8% of its bombs within from an average altitude of , the 15th Air Force averaged 30.78% from , and the 20th Air Force against Japan averaged 31% from .
Many factors have been put forth to explain the Norden's poor real-world performance. Over Europe, the cloud cover was a common explanation, although performance did not improve even in favorable conditions. Over Japan, bomber crews soon discovered strong winds at high altitudes, the so-called jet streams, but the Norden bombsight worked only for wind speeds with minimal wind shear. Additionally, the bombing altitude over Japan reached up to , but most of the testing had been done well below . This extra altitude compounded factors that could previously be ignored; the shape of and even the paint on the bomb changed its aerodynamic properties and, at that time, nobody knew how to calculate the trajectory of bombs that reached supersonic speeds during their fall.
The RAF developed their own designs. Having moved to night bombing, where visual accuracy was difficult under even the best conditions, they introduced the much simpler Mark XIV bomb sight. This was designed not for accuracy above all, but ease of use in operational conditions. In testing in 1944, it was found to offer a CEP of , about what the Norden was offering at that time. This led to a debate within the RAF whether to use their own tachometric design, the Stabilized Automatic Bomb Sight, or use the Mk. XIV on future bombers. The Mk. XIV ultimately served into the 1960s while the SABS faded from service as the Lancaster and Lincoln bombers fitted with it were retired by the late 1940s.
See also
Mary Babnik Brown, whose hair, donated in 1944, is often said to have been used for the bombsight crosshairs, though this is incorrect
Glasgow Army Airfield Norden Bombsight Storage Vault
Lotfernrohr 7, a similar German design of late-war vintage
Stabilized Automatic Bomb Sight, a British bomb sight
Mark XIV bomb sight, a British bomb sight
Notes
References
Bibliography
Further reading
Stewart Halsey Ross: "Strategic Bombing by the United States in World War II"
"Bombardier: A History", Turner Publishing, 1998
"The Norden Bombsight
"Bombing – Students' Manual"
"Bombardier's Information File"
Stephen McFarland: "America's Pursuit of Precision Bombing, 1910–1945"
Charles Babbage Institute, University of Minnesota. Pasinski produced the prototype for the bombsight. He designed production tools and supervised production of the bombsight at Burroughs Corporation.
Charles Babbage Institute, University of Minnesota. Information on the Norden bombsight, which Burroughs produced beginning in 1942.
External links
Flight 1945 Norden Bomb Sight
How the Norden Bombsight Does Its Job by V. Torrey, June 1945 Popular Science
Norden bombsight images and information from twinbeech.com
Optical bombsights
World War II military equipment of the United States
Military computers
Mechanical computers
American inventions
Fire-control computers of World War II
Military equipment introduced in the 1930s | Norden bombsight | [
"Physics",
"Technology"
] | 11,011 | [
"Physical systems",
"Machines",
"Mechanical computers"
] |
640,290 | https://en.wikipedia.org/wiki/Paradoxical%20reaction | A paradoxical reaction (or paradoxical effect) is an effect of a chemical substance, such as a medical drug, that is opposite to what would usually be expected. An example of a paradoxical reaction is pain caused by a pain relief medication.
Substances
Amphetamines
Amphetamines are a class of psychoactive drugs that are stimulants. Paradoxical drowsiness can sometimes occur in adults. Research from the 1980s popularized the belief that ADHD stimulants such as amphetamine have a calming effect in individuals with ADHD, but opposite effects in the general population. Research in the early 2000s, however, disputes this claim, suggesting that ADHD stimulants have similar effects in adults with and without ADHD.
Antibiotics
The paradoxical effect or Eagle effect (named after Harry Eagle, who first described it) refers to an increase in the number of surviving bacteria observed when testing the activity of an antimicrobial agent. Initially, when an antibiotic agent is added to a culture medium, the number of bacteria that survive drops, as one would expect. But after increasing the concentration beyond a certain point, the number of bacteria that survive, paradoxically, increases.
Antidepressants
In a minority of cases, antidepressants can lead to violent thoughts of suicide or self-harm, as observed in some patients during and after treatment, which is in marked contrast to their intended effect. A 1991 study found that children and adolescents were more sensitive to paradoxical reactions of self-harm and suicidal ideation while taking fluoxetine (commonly known as Prozac). This can be regarded as a paradoxical reaction but, especially in the case of suicide, may in at least some cases be merely due to differing rates of effect with respect to different symptoms of depression: if generalized overinhibition of a patient's actions enters remission before that patient's dysphoria does, and if the patient was already suicidal but too depressed to act on their inclinations, the patient may find themselves still dysphoric enough to want to commit suicide but newly free of endogenous barriers against doing so.
Antipsychotics
Chlorpromazine, an antipsychotic and antiemetic drug which is classed as a "major" tranquilizer, may cause paradoxical effects such as agitation, hallucinations, excitement, insomnia, bizarre dreams, aggravation of psychotic symptoms and toxic confusional states.
These may be more common in elderly dementia patients. The apparent worsening of dementia may be due to the anticholinergic side effects of many antipsychotics.
Barbiturates
Phenobarbital can cause hyperactivity in children. This may occur after a dose as small as 20 mg, provided no phenobarbital was administered in the preceding days. A prerequisite for this reaction is a continued sense of tension. The mechanism of action is not known, but it may be triggered by the anxiolytic action of the phenobarbital.
Barbiturates such as pentobarbital have been shown to cause paradoxical hyperactivity in an estimated 1% of children, who display symptoms similar to the hyperactive-impulsive subtype of attention deficit hyperactivity disorder. Intravenous caffeine administration can return these patients' behavior to baseline levels. Some case reports postulate a high rate (10-20%) of paradoxical response to anesthesia in ADHD patients, though this has not been objectively corroborated in controlled studies.
Benzodiazepines
Benzodiazepines, a class of psychoactive drugs called the "minor" tranquilizers, have varying hypnotic, sedative, anxiolytic, anticonvulsant, and muscle relaxing properties, but they may create the exact opposite effects. Susceptible individuals may respond to benzodiazepine treatment with an increase in anxiety, aggressiveness, agitation, confusion, disinhibition, loss of impulse control, talkativeness, violent behavior, and even convulsions. Paradoxical adverse effects may even lead to criminal behavior. Severe behavioral changes resulting from benzodiazepines have been reported including mania, hypomania, psychosis, anger and impulsivity.
Paradoxical rage reactions due to benzodiazepines occur as a result of an altered level of consciousness, which generates automatic behaviors, anterograde amnesia and uninhibited aggression. These aggressive reactions may be caused by a disinhibiting serotonergic mechanism.
Paradoxical effects of benzodiazepines appear to be dose-related, that is, likelier to occur with higher doses.
In a letter to the British Medical Journal, it was reported that a high proportion of parents referred for actual or threatened child abuse were taking medication at the time, often a combination of benzodiazepines and tricyclic antidepressants. Many mothers described that instead of feeling less anxious or depressed, they became more hostile and openly aggressive towards the child as well as to other family members while consuming tranquilizers. The author warned that environmental or social stresses such as difficulty coping with a crying baby combined with the effects of tranquilizers may precipitate a child abuse event.
Self-aggression has been reported and also demonstrated in laboratory conditions in a clinical study. Diazepam was found to increase people's willingness to harm themselves.
Benzodiazepines can sometimes cause a paradoxical worsening of EEG readings in patients with seizure disorders.
Caffeine
Caffeine is believed by many to cause paradoxical calmness or sedation in individuals with ADHD. There is insufficient evidence to determine whether sedation caused by caffeine is due to a true paradoxical reaction or rather to dehydration and sleep deprivation caused by the caffeine. Furthermore, there are no conclusive studies showing a differential effect of caffeine in individuals with ADHD compared to the general population.
Naltrexone
Naltrexone blocks the opioid receptors, acting opposite to most opioid pain medications. It can be used to negate the effects of opioid painkillers. At doses around one-tenth of the typical dose, naltrexone has been used for pain relief. Low-dose naltrexone is believed to have an anti-inflammatory effect. This is an off-label use and not widely accepted by the medical and scientific community.
Diphenhydramine
Diphenhydramine (often referred to by the trade name Benadryl) is an anticholinergic antihistamine medicine commonly used to treat allergic reactions and symptoms of a common cold, such as coughing. Its central antihistaminergic properties also cause it to act as a sedative, and for this reason it is also used to treat insomnia. Diphenhydramine is also used off-label for its sedative properties, particularly by parents seeking to sedate their children or help them sleep during long-haul flights. This use of diphenhydramine has been criticized for a number of reasons, ranging from ethical concerns to safety concerns, including the risk of diphenhydramine's paradoxical reaction, which induces hyperactivity and irritability. This phenomenon can also be observed in adults who use the medication as a sleep aid. The prevalence of this paradoxical reaction is unknown, but research into the phenomenon suggests that it may be a result of the medicine's interaction with the CYP2D6 enzyme, and that a metabolite of diphenhydramine may be to blame.
Causes
As of 2019, the mechanism of paradoxical reactions had not been fully clarified, in no small part because the signal transfer of single neurons in subcortical areas of the human brain is usually not accessible to direct measurement.
There are, however, multiple indications that paradoxical reactions to, for example, benzodiazepines, barbiturates, inhalational anesthetics, propofol, neurosteroids, and alcohol are associated with structural deviations of GABAA receptors. The combination of the five subunits of the receptor can be altered in such a way that, for example, the receptor's response to GABA remains unchanged but the response to one of the named substances is dramatically different from the normal one.
See also
Adverse drug reaction (ADR)
Drug-drug interaction (DDI)
Idiosyncratic drug reaction
Iatrogenesis
References
Clinical pharmacology
Drug-induced diseases
Health paradoxes | Paradoxical reaction | [
"Chemistry"
] | 1,795 | [
"Drug-induced diseases",
"Pharmacology",
"Drug safety",
"Clinical pharmacology"
] |
640,637 | https://en.wikipedia.org/wiki/Aplysia | Aplysia () is a genus of medium-sized to extremely large sea slugs, specifically sea hares, which are a kind of marine gastropod mollusk.
These benthic herbivorous creatures can become rather large compared with most other mollusks. They graze in tidal and subtidal zones of tropical waters, mostly in the Indo-Pacific Ocean (23 species); but they can also be found in the Atlantic Ocean (12 species), with a few species occurring in the Mediterranean.
Aplysia species, when threatened, frequently release clouds of ink, which are believed to blind the attacker (though they are in fact considered edible by relatively few species).
Following the lead of Eric R. Kandel, neurobiologists have studied the genus as a model organism because its gill and siphon withdrawal reflex, as studied in Aplysia californica, is mediated by electrical synapses, which allow several neurons to fire synchronously. This quick neural response is necessary for a speedy reaction to danger by the animal. Aplysia has only about 20,000 neurons, making it a favorite subject for investigation by neuroscientists. Also, the 'tongue' on the underside is controlled by only two neurons, which allowed complete mapping of the innervation network to be carried out.
Long-term memory
In neurons that mediate several forms of long-term memory in Aplysia, the DNA repair enzyme poly ADP ribose polymerase 1 (PARP-1) is activated. In virtually all eukaryotic cells tested, the addition of polyADP-ribosyl groups to proteins (polyADP-ribosylation) occurs as a response to DNA damage. Thus the finding of activation of PARP-1 during learning and its requirement for long-term memory was surprising. Cohen-Armon et al. suggested that fast and transient decondensation of chromatin structure by polyADP-ribosylation enables the transcription needed to form long-term memory without strand-breaks in DNA. Subsequent to these findings in Aplysia, further research with mice found that polyADP-ribosylation is also required for long-term memory formation in mammals.
In 2018, scientists from the University of California, Los Angeles, showed that the behavioral modifications characteristic of a form of nonassociative long-term memory in Aplysia can be transferred by RNA.
Operant conditioning
Operant conditioning is considered a form of associative learning. Because operant conditioning involves an intricate interaction between an action and a stimulus (in this case food), it is closely associated with the acquisition of compulsive behavior. Aplysia species serve as an ideal model system for the physiological study of food-reward learning, due to "the neuronal components of parts of its ganglionic nervous system that are responsible for the generation of feeding movements." As a result, Aplysia has been used in associative learning studies to derive certain aspects of feeding and operant conditioning in the context of compulsive behavior.
In Aplysia, the primary reflex examined in studies of operant conditioning is the gill and siphon withdrawal reflex. The gill and siphon withdrawal reflex allows the Aplysia to pull back its siphon and gill for protection. The links between the synapses during the gill and siphon withdrawal reflex are directly correlated with many behavioral traits in the Aplysia such as its habits, reflexes, and conditioning. Scientists have studied the conditioning of the Aplysia to identify correlations with conditioning in mammals, mainly regarding behavioral responses such as addiction. Through experiments on the conditioning of the Aplysia, links have been discovered with the synaptic plasticity underlying reward functions involved in the trait of addiction within mammals. Synaptic plasticity is the idea that synapses will become stronger or weaker depending on how much those specific synapses are used. Conditioning of these synapses can lead them to become stronger or weaker by causing the neurons to fire or not fire when influenced by a stimulus. The conditioning of behavioral traits is based on the idea of a reward function. A reward function arises when a response becomes conditioned to a certain stimulus: the neurons adapt to that stimulus and fire more easily, even if the stimulus has a negative effect on the subject (in this case the Aplysia). In mammals, the reward function is mainly controlled by ventral tegmental area (VTA) dopamine neurons. During conditioning (in mammals), the VTA dopamine neurons have an increased effect on the stimuli being conditioned, and a decreased effect on the stimuli not being conditioned. This induces the synapses to form an expectation of reward for the stimuli being conditioned. The properties of the synapses displayed in conditioning experiments on the Aplysia (which has dopamine neurons but not a ventral tegmental area) are proposed to be directly comparable to those underlying behavioral responses such as addiction in mammals.
Reproduction
Aplysia are simultaneous hermaphrodites, meaning each adult individual sea hare possesses both male and female reproductive structures that may be mature at the same time.
Aplysia have the ability to store and digest allosperm (sperm from a partner) and often mate with multiple partners. A potent sex pheromone, the water-borne protein attractin, is employed in promoting and maintaining mating in Aplysia. Attractin interacts with three other Aplysia protein pheromones (enticin, temptin or seductin) in a binary fashion to stimulate mate attraction.
Studies of multiple matings in the California sea hare, Aplysia californica, have provided insights on how conflicts between the sexes are resolved.
Self-defense
Aplysia species were once thought to use ink to escape from predators, much like the octopus. Instead, recent research has made it clear that these sea slugs are able to produce and secrete multiple compounds within their ink, including the chemodeterrent aplysioviolin and toxic substances such as ammonia for self-defense. The ability of the Aplysia species to hold toxins within their bodies without poisoning themselves is a result of the unique way that the toxin is stored within the slug. Different molecules essential to the creation of the toxin are accumulated in separate parts of the body of the slug, rendering them benign, as only the mixing of all the molecules can result in a toxic chemical cloud. When the sea hare feels threatened it immediately begins to defend itself by mixing the distinct molecules in an additional part of the body used specifically for that purpose. At that point, enzymes within the sea slug make the substance toxic, and the mixture is ejected at the predator in self-defense.
Species
Species within the genus Aplysia are as follows.
This list follows the studies of Medina et al. who established a phylogenetic hypothesis for the genus Aplysia through study of the partial mitochondrial DNA (mtDNA) sequence data of ribosomal genes (rDNA).
Aplysia argus Rüppell & Leuckart, 1830
Aplysia atromarginata Bergh, 1905
Aplysia brasiliana Rang, 1828
Aplysia californica (J.G. Cooper, 1863) California sea hare
Distribution: Northeast Pacific
Aplysia cedrocensis (Bartsch & Rehder, 1939)
Distribution: Northeast Pacific
Aplysia cervina (Dall & Simpson, 1901)
Distribution: West Atlantic
Aplysia cornigera Sowerby, 1869
Distribution: Indian Ocean, West Pacific
Aplysia cronullae Eales, 1960: synonym of Aplysia extraordinaria (Allan, 1932) (uncertain synonym)
Distribution: Southwest Pacific
Aplysia dactylomela (Rang, 1828) spotted sea hare
Distribution: Cosmopolitan; tropical and temperate seas.
Color: from pale gray to green to dark brown.
Description: large black rings on the mantle; good swimmer
Aplysia denisoni Smith, 1884: synonym of Aplysia extraordinaria (Allan, 1932) (possible senior synonym)
Distribution: Indian Ocean, West Pacific
Aplysia depilans (Gmelin, 1791)
Distribution: Northeast Atlantic, Mediterranean.
Description: thin, yellow inner shell
Aplysia dura Eales, 1960
Distribution: Southeast Atlantic, Southwest Pacific
Aplysia elongata (Pease, 1860)
Aplysia euchlora Adams in M.E.Gray, 1850: represented as Aplysia (Phycophila) euchlora (Gray, 1850) (alternate representation)
Distribution: Northwest Pacific
Aplysia extraordinaria (Allan, 1932) (possibly = Aplysia gigantea)
Distribution: Western Australia, New Zealand.
Length: more than 40 cm
Aplysia fasciata (Poiret, 1798) ( Aplysia brasiliana Rang, 1828 is a junior synonym).
Distribution: East Atlantic, Mediterranean, West Africa, Red Sea
Length: 40 cm
Color: dark brown to black.
Description: sometimes has a red border to the parapodia and oral tentacles
Aplysia ghanimii Golestani, Crocetta, Padula, Camacho, Langeneck, Poursanidis, Pola, Yokeş, Cervera, Jung, Gosliner, Araya, Hooker, Schrödl & Valdés, 2019
Aplysia gigantea Sowerby, 1869
Distribution: Indian Ocean, West Pacific
Aplysia hooveri Golestani, Crocetta, Padula, Camacho, Langeneck, Poursanidis, Pola, Yokeş, Cervera, Jung, Gosliner, Araya, Hooker, Schrödl & Valdés, 2019
Aplysia inca d'Orbigny, 1837
Distribution: Southeast Pacific
Aplysia japonica G. B. Sowerby II, 1869
Aplysia juanina (Bergh, 1898)
Aplysia juliana (Quoy & Gaimard, 1832) Walking sea hare
Distribution: cosmopolitan, circumtropical in all warm seas
Color: various, from uniform to pale brown
Description: no purple gland, therefore no ink secretions; posterior end of the foot can act as a sucker
Aplysia keraudreni Rang, 1828
Distribution: South Pacific
Length: 25 cm
Color: dark brown
Aplysia kurodai (Baba, 1937)
Distribution: NW Pacific
Length: 30 cm
Color: dark brown to purplish black, dotted with white spots
Aplysia lineolata A. Adams & Reeve, 1850: synonym of Aplysia oculifera A. Adams & Reeve, 1850
Aplysia maculata Rang, 1828
Distribution : Western Indian Ocean
Aplysia morio (A. E. Verrill, 1901) Atlantic black sea hare, sooty sea hare
Distribution: Northwest Atlantic
Length: 40 cm
Color: black to deep brown; no spots
Aplysia nigra d'Orbigny, 1837
Distribution: Southwest Atlantic, South Pacific
Aplysia nigra brunnea Hutton, 1875
Distribution: New Zealand
Length: 10 cm
Color: dark brown
Aplysia nigrocincta von Martens, 1880
Aplysia oculifera (Adams & Reeve, 1850) spotted sea hare
Distribution: Indian Ocean; West Pacific; common along the north, east and south coast of South Africa
Length: 15 cm
Description: greenish brown, with small brown to black spots with white centres
Habitat: shallow bays and estuaries
Behaviour: hides by day; emerges at night to feed on seaweed
Aplysia parva Pruvot-Fol, 1953
Aplysia parvula (Guilding in Moerch, 1863) pygmy sea hare, dwarf sea hare
Distribution: worldwide in warm to temperate seas
Length: 6 cm
Color: brown to green spots
Aplysia peasei (Tryon, 1895) (taxon inquirendum)
Aplysia perviridis (Pilsbry, 1895)
Aplysia pilsbryi (Letson, 1898)
Aplysia pulmonica Gould, 1852
Aplysia punctata (Cuvier, 1803)
Distribution: NE Atlantic
Length: 20 cm
Color: very variable
Aplysia rehderi Eales, 1960
Distribution: Northeast Pacific
Aplysia reticulata Eales, 1960
Distribution: Southwest Pacific
Aplysia reticulopoda (Beeman, 1960) net-foot sea hare
Distribution: Northeast Pacific
Aplysia robertsi Pilsbry, 1895
Distribution: Northeast Pacific
Aplysia rudmani Bebbington, 1974
Distribution: Indian Ocean
Aplysia sagamiana (Baba, 1949)
Distribution: East Australia, Japan; Northwest Pacific
Aplysia sowerbyi Pilsbry, 1895
Distribution: Southwest Pacific
Aplysia sydneyensis (Sowerby, 1869)
Distribution: Australia
Length: 15 cm
Description: not clearly defined
Aplysia tanzanensis Bebbington, 1974
Distribution: Indian Ocean
Aplysia vaccaria (Winkler, 1955) California black sea hare (possibly ?= Aplysia cedrocensis)
Distribution: Pacific Coast of California
Length: very big – up to 75 cm
Color: black
Description: no purple ink; huge internal shell
Aplysia venosa Hutton, 1875 (taxon inquirendum)
Aplysia vistosa Pruvot-Fol, 1953
Species brought into synonymy
Aplysia aequorea Heilprin, 1888: synonym of Aplysia dactylomela Rang, 1828
Aplysia albopunctata Deshayes, 1853: synonym of Aplysia punctata (Cuvier, 1803)
Aplysia angasi G.B. Sowerby II, 1869: synonym of Aplysia dactylomela Rang, 1828
Aplysia annulifera Thiele, 1930: synonym of Aplysia dactylomela Rang, 1828
Aplysia ascifera Rang, 1828: synonym of Dolabrifera dolabrifera (Rang, 1828)
Aplysia benedicti Eliot, 1899: synonym of Aplysia dactylomela Rang, 1828
Aplysia bourailli Risbec, 1951: synonym of Aplysia dactylomela Rang, 1828
Aplysia brasiliana (Rang, 1828) mottled sea hare, sooty sea hare (junior synonym of Aplysia fasciata; different geographical populations of the same species): synonym of Aplysia fasciata Poiret, 1789
Aplysia cirrhifera Quoy & Gaimard, 1832: synonym of Barnardaclesia cirrhifera (Quoy & Gaimard, 1832)
Aplysia concava Sowerby, 1869: synonym of Aplysia parvula Mørch, 1863
Aplysia depressa Cantraine, 1835: synonym of Phyllaplysia depressa (Cantraine, 1835)
Aplysia dolabrifera Rang, 1828: synonym of Dolabrifera dolabrifera (Rang, 1828)
Aplysia donca (Ev. Marcus & Er. Marcus, 1960): synonym of Aplysia morio (A. E. Verrill, 1901)
Aplysia eusiphonata Bergh, 1908: synonym of Aplysia maculata Rang, 1828
Aplysia fimbriata Adams & Reeve, 1850: synonym of Aplysia dactylomela Rang, 1828
Aplysia gargantua Bergh, 1908: synonym of Aplysia maculata Rang, 1828
Aplysia geographica (Adams & Reeve, 1850): synonym of Syphonota geographica (A. Adams & Reeve, 1850)
Aplysia gilchristi Bergh, 1908: synonym of Aplysia maculata Rang, 1828
Aplysia gracilis Eales, 1960: synonym of Aplysia fasciata Poiret, 1789
Aplysia griffithsiana Leach, 1852 synonym of Aplysia punctata (Cuvier, 1803)
Aplysia guttata Sars M., 1840 synonym of Aplysia punctata (Cuvier, 1803)
Aplysia hamiltoni Kirk, 1882: synonym of Aplysia juliana Quoy & Gaimard, 1832
Aplysia hybrida Sowerby, 1806: synonym of Aplysia punctata (Cuvier, 1803)
Aplysia longicauda Quoy & Gaimard, 1825: synonym of Stylocheilus longicauda (Quoy & Gaimard, 1825)
Aplysia megaptera Verrill, 1900: synonym of Aplysia dactylomela Rang, 1828
Aplysia nettiae Winkler, 1959: synonym of Aplysia californica J. G. Cooper, 1863
Aplysia norfolkensis Sowerby, 1869: synonym of Aplysia parvula Mørch, 1863
Aplysia oahouensis Souleyet, 1852: synonym of Dolabrifera dolabrifera (Rang, 1828)
Aplysia ocellata d'Orbigny, 1839: synonym of Aplysia dactylomela Rang, 1828
Aplysia odorata Risbec, 1928: synonym of Aplysia dactylomela Rang, 1828
Aplysia operta Burne, 1906: synonym of Aplysia dactylomela Rang, 1828
Aplysia petalifera Rang, 1828: synonym of Petalifera petalifera (Rang, 1828)
Aplysia poikilia Bergh, 1908: synonym of Aplysia maculata Rang, 1828
Aplysia protea Rang, 1828: synonym of Aplysia dactylomela Rang, 1828
Aplysia pulmonica Gould, 1852: synonym of Aplysia argus Rüppell & Leuckart, 1830
Aplysia radiata Ehrenberg, 1831: synonym of Aplysia dactylomela Rang, 1828
Aplysia rosea Rathke, 1799: synonym of Aplysia punctata (Cuvier, 1803)
Aplysia schrammi Deshayes, 1857: synonym of Aplysia dactylomela Rang, 1828
Aplysia scutellata Ehrenberg, 1831: synonym of Aplysia dactylomela Rang, 1828
Aplysia sibogae Bergh, 1905: synonym of Aplysia juliana Quoy & Gaimard, 1832
Aplysia striata Quoy & Gaimard, 1832: synonym of Stylocheilus longicauda (Quoy & Gaimard, 1825)
Aplysia tigrina Rang, 1828: synonym of Aplysia dactylomela Rang, 1828
Aplysia tigrinella Gray, 1850: synonym of Aplysia maculata Rang, 1828
Aplysia velifer Bergh, 1905: synonym of Aplysia dactylomela Rang, 1828
Aplysia willcoxi (Heilprin, 1886): synonym of Aplysia fasciata Poiret, 1789
Aplysia winneba Eales, 1957: synonym of Aplysia fasciata Poiret, 1789
References
Kandel, E.R.; Schwartz, J.H.; Jessell, T.M. (2000). Principles of Neural Science, 4th ed., p. 180. McGraw-Hill, New York.
Howson, C.M.; Picton, B.E. (Eds.) (1997). The species directory of the marine fauna and flora of the British Isles and surrounding seas. Ulster Museum Publication, 276. The Ulster Museum: Belfast, UK. vi, 508 pp. (+ CD-ROM)
Gofas, S.; Le Renard, J.; Bouchet, P. (2001). Mollusca, in: Costello, M.J. et al. (Ed.) (2001). European register of marine species: a check-list of the marine species in Europe and a bibliography of guides to their identification. Collection Patrimoines Naturels, 50: pp. 180–213
External links
Photos of Aplysia - MondoMarino.net
Cunha, C. M.; Rosenberg, G. (2019). Type specimens of Aplysiida (Gastropoda, Heterobranchia) in the Academy of Natural Sciences of Philadelphia, with taxonomic remarks. Zoosystematics and Evolution. 95(2): 361-372
Animal models
Animal models in neuroscience
Taxa named by Carl Linnaeus
Gastropod genera | Aplysia | [
"Biology"
] | 4,310 | [
"Model organisms",
"Animal models"
] |
640,714 | https://en.wikipedia.org/wiki/Hales%E2%80%93Jewett%20theorem | In mathematics, the Hales–Jewett theorem is a fundamental combinatorial result of Ramsey theory named after Alfred W. Hales and Robert I. Jewett, concerning the degree to which high-dimensional objects must necessarily exhibit some combinatorial structure.
An informal geometric statement of the theorem is that for any positive integers n and c there is a number H such that if the cells of a H-dimensional n×n×n×...×n cube are colored with c colors, there must be one row, column, or certain diagonal (more details below) of length n all of whose cells are the same color. In other words, assuming n and c are fixed, the higher-dimensional, multi-player, n-in-a-row generalization of a game of tic-tac-toe with c players cannot end in a draw, no matter how large n is, no matter how many people c are playing, and no matter which player plays each turn, provided only that it is played on a board of sufficiently high dimension H. By a standard strategy-stealing argument, one can thus conclude that if two players alternate, then the first player has a winning strategy when H is sufficiently large, though no practical algorithm for obtaining this strategy is known.
Formal statement
Let W be the set of words of length H over an alphabet with n letters; that is, the set of sequences of {1, 2, ..., n} of length H. This set forms the hypercube that is the subject of the theorem.
A variable word w(x) over W still has length H but includes the special element x in place of at least one of the letters. The words w(1), w(2), ..., w(n) obtained by replacing all instances of the special element x with 1, 2, ..., n, form a combinatorial line in the space W; combinatorial lines correspond to rows, columns, and (some of the) diagonals of the hypercube. The Hales–Jewett theorem then states that for given positive integers n and c, there exists a positive integer H, depending on n and c, such that for any partition of W into c parts, there is at least one part that contains an entire combinatorial line.
For example, take n = 3, H = 2, and c = 2. The hypercube W in this case
is just the standard tic-tac-toe board, with nine positions: 11, 12, 13, 21, 22, 23, 31, 32, 33.
A typical combinatorial
line would be the word 2x, which corresponds to the line 21, 22, 23; another combinatorial line is xx, which is the line
11, 22, 33. (Note that the line 13, 22, 31, while a valid line for the game tic-tac-toe, is not considered a combinatorial line.) In this particular case, the Hales–Jewett theorem does not apply; it is possible to divide
the tic-tac-toe board into two sets, e.g. {11, 22, 23, 31} and {12, 13, 21, 32, 33}, neither of which contain
a combinatorial line (and would correspond to a draw in the game of tic-tac-toe). On the other hand, if we increase
H to, say, 8 (so that the board is now eight-dimensional, with 38 = 6561 positions), and partition this board
into two sets (the "noughts" and "crosses"), then one of the two sets must contain a combinatorial line (i.e. no draw is possible in this variant of tic-tac-toe). For a proof, see below.
Proof of Hales–Jewett theorem (in a special case)
We now prove the Hales–Jewett theorem in the special case n = 3, c = 2, H = 8 discussed above. The idea is to
reduce this task to that of proving simpler versions of the Hales–Jewett theorem (in this particular case, to the cases n = 2, c = 2, H = 2 and n = 2, c = 6, H = 6). One can prove the general case of the Hales–Jewett theorem by similar methods, using mathematical induction.
Each element of the hypercube W is a string of eight numbers from 1 to 3, e.g. 13211321 is an element of the hypercube. We are assuming that this hypercube is completely filled with "noughts" and "crosses". We shall use a proof by contradiction and assume that neither the set of noughts nor the set of crosses contains a combinatorial line. If we fix the first six elements of such a string and let the last two vary, we obtain an ordinary tic-tac-toe board, for instance "132113??" gives such a board. For each such board "abcdef??", we consider the positions
"abcdef11", "abcdef12", "abcdef22". Each of these must be filled with either a nought or a cross, so by the pigeonhole principle two of them must be filled with the same symbol. Since any two of these positions are part of
a combinatorial line, the third element of that line must be occupied by the opposite symbol (since we are assuming that no combinatorial line has all three elements filled with the same symbol). In other words, for each choice of "abcdef" (which can be thought of as an element of the six-dimensional hypercube W), there are six (overlapping) possibilities:
abcdef11 and abcdef12 are noughts; abcdef13 is a cross.
abcdef11 and abcdef22 are noughts; abcdef33 is a cross.
abcdef12 and abcdef22 are noughts; abcdef32 is a cross.
abcdef11 and abcdef12 are crosses; abcdef13 is a nought.
abcdef11 and abcdef22 are crosses; abcdef33 is a nought.
abcdef12 and abcdef22 are crosses; abcdef32 is a nought.
Thus we can partition the six-dimensional hypercube W into six classes, corresponding to each of the above six possibilities. (If an element abcdef obeys multiple possibilities, we can choose one arbitrarily, e.g. by choosing the highest one on the above list).
Now consider the seven elements 111111, 111112, 111122, 111222, 112222, 122222, 222222 in W. By the pigeonhole principle, two of these elements must fall into the same class. Suppose for instance
111112 and 112222 fall into class (5), thus 11111211, 11111222, 11222211, 11222222 are crosses and 11111233, 11222233 are noughts. But now consider the position 11333233, which must be filled with either a cross or a nought. If it is filled with a cross, then the combinatorial line 11xxx2xx is filled entirely with crosses, contradicting our hypothesis. If instead it is filled with a nought, then the combinatorial line 11xxx233 is filled entirely with noughts, again contradicting our hypothesis. Similarly if any other two of the above seven elements of W fall into the same class. Since we have a contradiction in all cases, the original hypothesis must be false; thus there must exist at least one combinatorial line consisting entirely of noughts or entirely of crosses.
The above argument was somewhat wasteful; in fact the same theorem holds for H = 4.
If one extends the above argument to general values of n and c, then H will grow very fast; even when c = 2 (which corresponds to two-player tic-tac-toe) the H given by the above argument grows as fast as the Ackermann function. The first primitive recursive bound is due to Saharon Shelah, and is still the best known bound in general for the Hales–Jewett number H = H(n, c).
Connections with other theorems
Observe that the above argument also gives the following corollary: if we let A be the set of all
eight-digit numbers whose digits are all either 1, 2, 3 (thus A contains numbers such as 11333233),
and we color A with two colors, then A contains at least one arithmetic progression of length three, all of whose elements are the same color. This is simply because all of the combinatorial lines appearing in the above proof of the Hales–Jewett theorem also form arithmetic progressions in decimal notation. A more general formulation of this argument can be used to show that the Hales–Jewett theorem generalizes van der Waerden's theorem. Indeed, the Hales–Jewett theorem is a substantially stronger theorem.
Just as van der Waerden's theorem has a stronger density version in Szemerédi's theorem, the Hales–Jewett theorem also has a density version. In this strengthened version of the Hales–Jewett theorem, instead of coloring the entire hypercube W into c colors, one is given an arbitrary subset A of the hypercube W with some given density 0 < δ < 1. The theorem states that if H is sufficiently large depending on n and δ, then the set A must necessarily contain an entire combinatorial line.
The density Hales–Jewett theorem was originally proved by Furstenberg and Katznelson using ergodic theory. In 2009, the Polymath Project developed a new proof of the density Hales–Jewett theorem based on ideas from the proof of the corners theorem. Dodos, Kanellopoulos, and Tyros gave a simplified version of the Polymath proof.
The Hales–Jewett theorem is generalized by the Graham–Rothschild theorem on higher-dimensional combinatorial cubes.
References
External links
Full proof of HJT - begins on slide 57
Science News article on the collaborative proof of the density Hales-Jewett theorem
A blog post by Steven Landsburg discussing how the proof of this theorem was improved collaboratively on a blog
Higher-Dimensional Tic-Tac-Toe | Infinite Series by PBS
Ramsey theory
Theorems in discrete mathematics
Articles containing proofs
Positional games | Hales–Jewett theorem | [
"Mathematics"
] | 2,219 | [
"Discrete mathematics",
"Mathematical theorems",
"Combinatorics",
"Theorems in discrete mathematics",
"Articles containing proofs",
"Mathematical problems",
"Ramsey theory"
] |
641,019 | https://en.wikipedia.org/wiki/James%20A.%20Yorke | James A. Yorke (born August 3, 1941) is a Distinguished University Research Professor of Mathematics and Physics and former chair of the Mathematics Department at the University of Maryland, College Park.
Life and career
Born in Plainfield, New Jersey, United States, Yorke attended The Pingry School, then located in Hillside, New Jersey. Yorke is now a Distinguished University Research Professor of Mathematics and Physics with the Institute for Physical Science and Technology at the University of Maryland. In June 2013, Yorke retired as chair of the University of Maryland's Math department. He devotes his university efforts to collaborative research in chaos theory and genomics.
He and Benoit Mandelbrot were the recipients of the 2003 Japan Prize in Science and Technology: Yorke was selected for his work in chaotic systems. In 2003 he was elected a Fellow of the American Physical Society, and in 2012 he became a fellow of the American Mathematical Society.
He received the Doctor Honoris Causa degree from the Universidad Rey Juan Carlos, Madrid, Spain, in January 2014. In June 2014, he received the Doctor Honoris Causa degree from Le Havre University, Le Havre, France. He was a 2016 Thomson Reuters Citations Laureate in Physics.
Contributions
Period three implies chaos
He and his co-author T.Y. Li coined the mathematical term chaos in a paper they published in 1975 entitled Period three implies chaos, in which it was proved that every one-dimensional continuous map
F: R → R
that has a period-3 orbit must have two properties:
(1) For each positive integer p, there is a point in R that returns to where it started after p applications of the map and not before.
This means there are infinitely many periodic points (any of which may or may not be stable): different sets of points for each period p. This turned out to be a special case of Sharkovskii's theorem.
The second property requires some definitions. A pair of points x and y is called “scrambled” if as the map is applied repeatedly to the pair, they get closer together and later move apart and then get closer together and move apart, etc., so that they get arbitrarily close together without staying close together. The analogy is to an egg being scrambled forever, or to typical pairs of atoms behaving in this way. A set S is called a scrambled set if every pair of distinct points in S is scrambled. Scrambling is a kind of mixing.
(2) There is an uncountably infinite set S that is scrambled.
A map satisfying Property 2 is sometimes called "chaotic in the sense of Li and Yorke". Property 2 is often stated succinctly as their article's title phrase "Period three implies chaos". The uncountable set of chaotic points may, however, be of measure zero (see for example the article Logistic map), in which case the map is said to have unobservable nonperiodicity or unobservable chaos.
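As a concrete illustration (not taken from the Li–Yorke paper), the logistic map f(x) = r·x·(1 − x) has a stable period-3 orbit for parameter values near r = 3.83, so by the result above it also has periodic points of every period and an uncountable scrambled set. The short Python sketch below, with an assumed r = 3.83 and starting point 0.5, simply exhibits the period-3 orbit numerically.

```python
# Illustrative sketch (not from the article): for r near 3.83 the logistic map
# f(x) = r*x*(1 - x) has a stable period-3 orbit, so by the Li-Yorke theorem it
# also has periodic points of every period and an uncountable scrambled set.
def logistic(x, r=3.83):
    return r * x * (1.0 - x)

x = 0.5
for _ in range(2000):            # discard the transient
    x = logistic(x)

orbit = [x]
for _ in range(2):               # record three successive iterates
    x = logistic(x)
    orbit.append(x)
x = logistic(x)                  # one more step closes the cycle

print([round(v, 6) for v in orbit])
print(abs(x - orbit[0]) < 1e-6)  # True (up to rounding): the orbit has period 3
```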
O.G.Y control method
He and his colleagues Edward Ott and Celso Grebogi showed with a numerical example that chaotic motion can be converted into periodic motion by a suitable time-dependent perturbation of a system parameter. This article is considered a classic among the works in the control theory of chaos, and their control method is known as the O.G.Y. method.
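A minimal one-dimensional sketch of the idea is given below; it is not the original O.G.Y. computation (which treated a higher-dimensional map), and the nominal parameter, the starting point, and the perturbation bound are all illustrative assumptions. The orbit of the chaotic logistic map is nudged onto its unstable fixed point by tiny, state-dependent adjustments of the parameter.

```python
# Hypothetical 1-D illustration of O.G.Y.-style control: stabilize the unstable
# fixed point x* of the chaotic logistic map x -> r x (1 - x) by tiny
# state-dependent adjustments of r. All numbers here are illustrative.
r0, eps = 3.9, 0.03               # nominal parameter and maximum allowed tweak

def f(x, r):
    return r * x * (1.0 - x)

x_star = 1.0 - 1.0 / r0           # unstable fixed point of the nominal map
dfdx = r0 * (1.0 - 2.0 * x_star)  # multiplier at x* (|dfdx| > 1: unstable)
dfdr = x_star * (1.0 - x_star)    # sensitivity of the map to r near x*
window = eps * abs(dfdr / dfdx)   # region in which an allowed tweak can capture

x = 0.4
for _ in range(10000):
    dr = 0.0
    if abs(x - x_star) < window:
        # choose dr so the linearized next iterate lands exactly on x*
        dr = -dfdx * (x - x_star) / dfdr
    x = f(x, r0 + dr)

print(abs(x - x_star) < 1e-9)     # True: the chaotic orbit has been pinned
```

Outside the small capture window the map runs freely; ergodicity of the chaotic motion eventually brings the orbit close enough to the fixed point for the tiny parameter tweak to take over, which is the essential economy of the method.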
Books
Together with Kathleen T. Alligood and Tim D. Sauer, he co-authored the book Chaos: An Introduction to Dynamical Systems.
References
External links
Website at the University of Maryland
1941 births
Living people
20th-century American mathematicians
21st-century American mathematicians
Chaos theorists
Columbia University alumni
Fellows of the American Physical Society
Fellows of the American Mathematical Society
Theoretical physicists
University of Maryland, College Park alumni
University of Maryland, College Park faculty
Fellows of the Society for Industrial and Applied Mathematics
People from Plainfield, New Jersey
Mathematicians from New Jersey | James A. Yorke | [
"Physics"
] | 808 | [
"Theoretical physics",
"Theoretical physicists"
] |
641,565 | https://en.wikipedia.org/wiki/Hydraulic%20accumulator | A hydraulic accumulator is a pressure storage reservoir in which an incompressible hydraulic fluid is held under pressure that is applied by an external source of mechanical energy. The external source can be an engine, a spring, a raised weight, or a compressed gas. An accumulator enables a hydraulic system to cope with extremes of demand using a less powerful pump, to respond more quickly to a temporary demand, and to smooth out pulsations. It is a type of energy storage device.
Compressed gas accumulators, also called hydro-pneumatic accumulators, are by far the most common type.
Types of accumulator
Towers
The first accumulators for William Armstrong's hydraulic dock machinery were simple raised water towers. Water was pumped to a tank at the top of these towers by steam pumps. When dock machinery required hydraulic power, the hydrostatic head of the water's height above ground provided the necessary pressure.
These simple accumulators, such as Grimsby Dock Tower built in 1852, were extremely tall. Because of their size, they were costly, and so were constructed for less than a decade. Around the same time, John Fowler was working on the construction of the ferry quay at nearby New Holland but could not use similar hydraulic power, as the poor ground conditions did not permit a tall accumulator tower to be built. By the time Grimsby was opened, it was already obsolete, as Armstrong had developed the more complex, but much smaller, weighted accumulator for use at New Holland. In 1892 the original Grimsby tower's function was replaced, on Fowler's advice, by a smaller weighted accumulator on an adjacent dock, although the tower remains to this day as a well-known landmark.
Other surviving towers include one adjacent to East Float in Birkenhead, England, and another located at the Bramley-Moore Dock, Liverpool, England. The latter tower is to be renovated as part of plans for the proposed development of the area associated with the construction of a new football stadium for Everton F.C.
Raised weight
A raised weight accumulator consists of a vertical cylinder containing fluid connected to the hydraulic line. The cylinder is closed by a piston on which a series of weights are placed that exert a downward force on the piston and thereby pressurizes the fluid in the cylinder. In contrast to compressed gas and spring accumulators, this type delivers a nearly constant pressure, regardless of the volume of fluid in the cylinder, until it is empty. (The pressure will decline somewhat as the cylinder is emptied due to the decline in weight of the remaining fluid.)
A working example of this type of accumulator may be found at the hydraulic engine house, Bristol Harbour. The original 1887 accumulator is in place in its tower, an external accumulator was added in 1954 and this system was used until 2010 to power the Cumberland Basin (Bristol) lock gates. The water is pumped from the harbour into a header tank and then fed by gravity to the pumps. The working pressure is 750 psi (5.2 MPa, or 52 bar) which was used to power the cranes, bridges and locks of Bristol Harbour.
The original operating mechanism of Tower Bridge, London, also used this type of accumulator. Although no longer in use, two of the six accumulators may still be seen in situ in the bridge's museum.
Regent's Canal Dock, now named Limehouse Basin, has the remains of a hydraulic accumulator dating from 1869, a fragment of the oldest remaining such facility in the world. It was the second accumulator at the dock and was installed later than the one at Poplar Dock. Originally listed incorrectly as a signal box for the London and Blackwall Railway, once correctly identified it was restored as a tourist attraction by the now defunct London Docklands Development Corporation. Now owned by the Canal & River Trust, it is open to large groups on application to the Dockmaster's Office at the basin, and on both afternoons of London Open House Weekend, held on the third weekend of September each year.
London had an extensive public hydraulic power system from the mid-nineteenth century, with five hydraulic power stations operated by the London Hydraulic Power Company; it finally closed in the 1970s. Railway goods yards and docks often had their own separate systems.
Air-filled accumulator
A simple form of accumulator is an enclosed volume filled with air. A vertical section of pipe, often of enlarged diameter, may be enough; it fills itself with air, which is trapped as the pipework fills.
Such accumulators typically do not have enough capacity to be useful for storing significant energy since they cannot be pre-charged with high pressure gas, but they can act as a buffer to absorb fluctuations in pressure. They are used to smooth out the delivery from piston pumps. Another use is as a shock absorber to damp out water hammer; this application is an integral part of most ram pumps. Loss of air will result in loss of effectiveness. If air is lost over time, the design must include some way to replenish the accumulator.
Compressed gas (or gas-charged) closed accumulator
A compressed gas accumulator consists of a cylinder with two chambers that are separated by an elastic diaphragm, a totally enclosed bladder, or a floating piston. One chamber contains the fluid and is connected to the hydraulic line. The other chamber contains an inert gas (typically nitrogen), usually under pressure, that provides the compressive force on the hydraulic fluid. Inert gas is used because oxygen and oil can form an explosive mixture when combined under high pressure. As the volume of the compressed gas changes, the pressure of the gas (and the pressure on the fluid) changes inversely.
For low-pressure water systems, the water usually fills a rubber bladder within the tank, preventing contact with the tank, which would otherwise need to be corrosion resistant. Units designed for high-pressure applications such as hydraulic systems are usually pre-charged to a very high pressure (approaching the system operating pressure) and are designed to prevent the bladder or membrane being damaged by this internal pressure when the system pressure is low. For bladder types this generally requires the bladder to be filled with gas so that when system pressure is zero the bladder is fully expanded rather than being crushed by the gas charge. To prevent the bladder being forced out of the device when the system pressure is low, there is typically either an anti-extrusion plate attached to the bladder that presses against and seals the entrance, or a spring-loaded plate on the entrance that closes when the bladder presses against it.
It is possible to increase the gas volume of the accumulator by coupling a gas bottle to the gas side of the accumulator. For the same swing in system pressure this will result in a larger portion of the accumulator volume being used. If the pressure does not vary over a very wide range this can be a cost effective way to reduce the size of the accumulator needed. If the accumulator is not of the piston type care must be taken that the bladder or membrane will not be damaged in any expected over-pressure situation, many bladder-type accumulators cannot tolerate the bladder being crushed under pressure.
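The inverse pressure–volume relationship described above allows a rough, isothermal (Boyle's-law) estimate of how much fluid a gas-charged accumulator can deliver over a working pressure range. The Python sketch below is only an illustration with made-up figures; real sizing normally uses a polytropic exponent and manufacturer correction factors.

```python
# Illustrative isothermal (Boyle's law) sizing of a gas-charged accumulator.
# The figures are hypothetical; real sizing normally uses a polytropic
# exponent and manufacturer correction factors.
def usable_fluid_volume(v_acc, p_precharge, p_max, p_min):
    """Fluid volume delivered as system pressure falls from p_max to p_min.

    v_acc        -- gas volume at precharge pressure (litres)
    p_precharge  -- gas precharge pressure (bar, absolute)
    p_max, p_min -- working pressure range (bar, absolute)
    """
    v_gas_at_max = v_acc * p_precharge / p_max   # gas compressed by the fluid
    v_gas_at_min = v_acc * p_precharge / p_min   # gas re-expanded
    return v_gas_at_min - v_gas_at_max           # fluid pushed back out

print(round(usable_fluid_volume(10.0, 90.0, 200.0, 120.0), 2))  # ~3.0 litres
```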
A compressed gas accumulator was invented by Jean Mercier for use in variable-pitch propellers.
Spring type
A spring type accumulator is similar in operation to the gas-charged accumulator above, except that a heavy spring (or springs) is used to provide the compressive force. According to Hooke's law the magnitude of the force exerted by a spring is linearly proportional to its change of length. Therefore, as the spring compresses, the force it exerts on the fluid is increased linearly.
Metal bellows type
The metal bellows accumulators function similarly to the compressed gas type, except that the elastic diaphragm or floating piston is replaced by a hermetically sealed welded metal bellows. Fluid may be internal or external to the bellows. The advantages to the metal bellows type include exceptionally low spring rate, allowing the gas charge to do all the work with little change in pressure from full to empty, a long stroke that allows efficient usage of the casing volume, and the bellows can be built to be resistant to overpressure that would crush a bladder-type separator. The welded metal bellows accumulator provides an exceptionally high level of accumulator performance, and can be produced with a broad spectrum of alloys, resulting in a broad range of fluid compatibility. Other advantages to this type are that it does not face issues with high pressure operation, may be built to be resistant to very high or low temperatures or certain aggressive chemicals, and may be longer lasting in some situations. Metal bellows tend to be much more costly to produce than other common types.
Functioning of an accumulator
In modern, often mobile, hydraulic systems the preferred item is a gas charged accumulator, but simple systems may be spring-loaded. There may be more than one accumulator in a system. The exact type and placement of each may be a compromise due to its effects and the costs of manufacture.
An accumulator is placed close to the pump with a non-return valve preventing flow back to the pump. In the case of piston-type pumps this accumulator is placed in the ideal location to absorb pulsations of energy from the multi-piston pump. It also helps protect the system from fluid hammer. This protects system components, particularly pipework, from both potentially destructive forces.
An additional benefit is the additional energy that can be stored while the pump is subject to low demand. The designer can use a smaller-capacity pump. The large excursions of system components, such as landing gear on a large aircraft, that require a considerable volume of fluid can also benefit from one or more accumulators. These are often placed close to the demand to help overcome restrictions and drag from long pipework runs. The outflow of energy from a discharging accumulator is much greater, for a short time, than even large pumps could generate.
An accumulator can maintain the pressure in a system for periods when there are slight leaks without the pump being cycled on and off constantly. When temperature changes cause pressure excursions the accumulator helps absorb them. Its size helps absorb fluid that might otherwise be locked in a small fixed system with no room for expansion due to valve arrangement.
The gas precharge in an accumulator is set so that the separating bladder, diaphragm or piston does not reach or strike either end of the operating cylinder. The design precharge normally ensures that the moving parts do not foul the ends or block fluid passages. Poor maintenance of precharge can destroy an operating accumulator. A properly designed and maintained accumulator should operate trouble-free for years.
See also
Accumulator (energy)
Expansion tank
Notes
References
External links
Common Applications for Hydraulic Accumulators
Accumulator Applications and Compatibility
Online Accumulator Sizing Calculator
How To Repair A Hydraulic Accumulator –
Video footage of a hydraulic accumulator tower
Accumulator, hydraulic
Fluid dynamics
Energy storage
Accumulator | Hydraulic accumulator | [
"Physics",
"Chemistry",
"Engineering"
] | 2,300 | [
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Hydraulic machinery",
"Piping",
"Hydraulic accumulators",
"Fluid dynamics"
] |
4,313,204 | https://en.wikipedia.org/wiki/Glyceraldehyde%203-phosphate%20dehydrogenase | Glyceraldehyde 3-phosphate dehydrogenase (abbreviated GAPDH) () is an enzyme of about 37kDa that catalyzes the sixth step of glycolysis and thus serves to break down glucose for energy and carbon molecules. In addition to this long established metabolic function, GAPDH has recently been implicated in several non-metabolic processes, including transcription activation, initiation of apoptosis, ER-to-Golgi vesicle shuttling, and fast axonal, or axoplasmic transport. In sperm, a testis-specific isoenzyme GAPDHS is expressed.
Structure
Under normal cellular conditions, cytoplasmic GAPDH exists primarily as a tetramer. This form is composed of four identical 37-kDa subunits, each containing a single catalytic thiol group critical to the enzyme's catalytic function. Nuclear GAPDH has an increased isoelectric point (pI) of 8.3–8.7. Of note, the cysteine residue C152 in the enzyme's active site is required for the induction of apoptosis by oxidative stress. Notably, post-translational modifications of cytoplasmic GAPDH contribute to its functions outside of glycolysis.
GAPDH is encoded by a single gene that produces a single mRNA transcript with 8 splice variants, though an isoform does exist as a separate gene that is expressed only in spermatozoa.
Reaction
Two-step conversion of G3P
The first reaction is the oxidation of glyceraldehyde 3-phosphate (G3P) at position 1 (the carbon that was the fourth carbon of glucose), in which an aldehyde is converted into a carboxylic acid (ΔG°' = −50 kJ/mol (−12 kcal/mol)) and NAD+ is simultaneously reduced endergonically to NADH.
The energy released by this highly exergonic oxidation reaction drives the endergonic second reaction (ΔG°' = +50 kJ/mol (+12 kcal/mol)), in which a molecule of inorganic phosphate is transferred to the GAP intermediate to form a product with high phosphoryl-transfer potential: 1,3-bisphosphoglycerate (1,3-BPG).
This is an example of phosphorylation coupled to oxidation, and the overall reaction is somewhat endergonic (ΔG°' = +6.3 kJ/mol (+1.5 kcal/mol)). Energy coupling here is made possible by GAPDH.
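To put the quoted overall ΔG°' in perspective, a short calculation (not from the article, and assuming a temperature of 25 °C) converts it into an equilibrium constant via K = exp(−ΔG°'/RT).

```python
import math

# Convert the quoted standard free-energy change of the overall GAPDH reaction
# into an equilibrium constant, assuming 25 degrees C.
R = 8.314          # J/(mol*K), gas constant
T = 298.15         # K, assumed temperature
dG = 6.3e3         # J/mol, overall standard free-energy change quoted above

K_eq = math.exp(-dG / (R * T))
print(round(K_eq, 3))   # ~0.079: at standard conditions the reaction lies toward
                        # reactants; in vivo it is pulled forward by removal of
                        # the products (1,3-BPG and NADH) in subsequent reactions
```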
Mechanism
GAPDH uses covalent catalysis and general base catalysis to decrease the very large activation energy of the second step (phosphorylation) of this reaction.
1: Oxidation
First, a cysteine residue in the active site of GAPDH attacks the carbonyl group of G3P, creating a hemithioacetal intermediate (covalent catalysis).
The hemithioacetal is deprotonated by a histidine residue in the enzyme's active site (general base catalysis). Deprotonation encourages the reformation of the carbonyl group in the subsequent thioester intermediate and ejection of a hydride ion.
Next, an adjacent, tightly bound molecule of NAD+ accepts the hydride ion, forming NADH while the hemithioacetal is oxidized to a thioester.
This thioester species is much higher in energy (less stable) than the carboxylic acid species that would result if G3P were oxidized in the absence of GAPDH (the carboxylic acid species is so low in energy that the energy barrier for the second step of the reaction (phosphorylation) would be too high, and the reaction, therefore, too slow and unfavorable for a living organism).
2: Phosphorylation
NADH leaves the active site and is replaced by another molecule of NAD+, the positive charge of which stabilizes the negatively charged carbonyl oxygen in the transition state of the next and ultimate step. Finally, a molecule of inorganic phosphate attacks the thioester and forms a tetrahedral intermediate, which then collapses to release 1,3-bisphosphoglycerate, and the thiol group of the enzyme's cysteine residue.
Regulation
This protein may use the morpheein model of allosteric regulation.
Function
Metabolic
As its name indicates, glyceraldehyde 3-phosphate dehydrogenase (GAPDH) catalyses the conversion of glyceraldehyde 3-phosphate to D-glycerate 1,3-bisphosphate. This is the 6th step in the glycolytic breakdown of glucose, an important pathway of energy and carbon molecule supply which takes place in the cytosol of eukaryotic cells. The conversion occurs in two coupled steps. The first is favourable and allows the second unfavourable step to occur.
Adhesion
One of the GAPDH moonlighting functions is its role in adhesion and binding to other partners. Bacterial GAPDH from Mycoplasma and Streptococcus and fungal GAPDH from Paracoccidioides brasiliensis are known to bind human extracellular matrix components and act in adhesion. GAPDH has been found to be surface bound, contributing to adhesion and also to competitive exclusion of harmful pathogens. GAPDH from Candida albicans is found to be cell-wall associated and binds fibronectin and laminin. GAPDH from probiotic species is known to bind human colonic mucin and ECM, resulting in enhanced colonization by probiotics in the human gut. Patel et al. showed that Lactobacillus acidophilus GAPDH binds mucin, acting in adhesion.
Transcription and apoptosis
GAPDH can itself activate transcription. The OCA-S transcriptional coactivator complex contains GAPDH and lactate dehydrogenase, two proteins previously only thought to be involved in metabolism. GAPDH moves between the cytosol and the nucleus and may thus link the metabolic state to gene transcription.
In 2005, Hara et al. showed that GAPDH initiates apoptosis. This is not a third function, but can be seen as an activity mediated by GAPDH binding to DNA, as in the transcription activation discussed above. The study demonstrated that GAPDH is S-nitrosylated by NO in response to cell stress, which causes it to bind to the protein SIAH1, a ubiquitin ligase. The complex moves into the nucleus where Siah1 targets nuclear proteins for degradation, thus initiating controlled cell shutdown. In a subsequent study the group demonstrated that deprenyl, which has been used clinically to treat Parkinson's disease, strongly reduces the apoptotic action of GAPDH by preventing its S-nitrosylation and might thus be used as a drug.
Metabolic switch
GAPDH acts as a reversible metabolic switch under oxidative stress. When cells are exposed to oxidants, they need excessive amounts of the antioxidant cofactor NADPH. In the cytosol, NADPH is reduced from NADP+ by several enzymes, three of which catalyze the first steps of the pentose phosphate pathway. Oxidant treatments cause an inactivation of GAPDH. This inactivation temporarily re-routes the metabolic flux from glycolysis to the pentose phosphate pathway, allowing the cell to generate more NADPH. Under stress conditions, NADPH is needed by some antioxidant systems including glutaredoxin and thioredoxin, as well as being essential for the recycling of glutathione.
ER-to-Golgi transport
GAPDH also appears to be involved in vesicle transport from the endoplasmic reticulum (ER) to the Golgi apparatus, which is part of the shipping route for secreted proteins. GAPDH was found to be recruited by Rab2 to the vesicular-tubular clusters of the ER, where it helps to form COPI vesicles. GAPDH is activated via tyrosine phosphorylation by Src.
Additional functions
GAPDH, like many other enzymes, has multiple functions. In addition to catalysing the 6th step of glycolysis, recent evidence implicates GAPDH in other cellular processes. GAPDH has been described as exhibiting higher-order multifunctionality in the context of maintaining cellular iron homeostasis, specifically as a chaperone protein for labile heme within cells. This came as a surprise to researchers, but it makes evolutionary sense to re-use and adapt existing proteins instead of evolving a novel protein from scratch.
Use as loading control
Because the GAPDH gene is often stably and constitutively expressed at high levels in most tissues and cells, it is considered a housekeeping gene. For this reason, GAPDH is commonly used by biological researchers as a loading control for western blots and as a control for qPCR. However, researchers have reported different regulation of GAPDH under specific conditions. For example, the transcription factor MZF-1 has been shown to regulate the GAPDH gene. Hypoxia also strongly upregulates GAPDH. Therefore, the use of GAPDH as a loading control has to be considered carefully.
Cellular distribution
All steps of glycolysis take place in the cytosol and so does the reaction catalysed by GAPDH. In red blood cells, GAPDH and several other glycolytic enzymes assemble in complexes on the inside of the cell membrane. The process appears to be regulated by phosphorylation and oxygenation. Bringing several glycolytic enzymes close to each other is expected to greatly increase the overall speed of glucose breakdown. Recent studies have also revealed that GAPDH is expressed in an iron-dependent fashion on the exterior of the cell membrane, where it plays a role in the maintenance of cellular iron homeostasis.
Clinical significance
Cancer
GAPDH is overexpressed in multiple human cancers, such as cutaneous melanoma, and its expression is positively correlated with tumor progression. Its glycolytic and antiapoptotic functions contribute to the proliferation and protection of tumor cells, promoting tumorigenesis. Notably, GAPDH protects against telomere shortening induced by chemotherapeutic drugs that stimulate the sphingolipid ceramide. Meanwhile, conditions like oxidative stress impair GAPDH function, leading to cellular aging and death. Moreover, depletion of GAPDH has been shown to induce senescence in tumor cells, thus presenting a novel therapeutic strategy for controlling tumor growth.
Neurodegeneration
GAPDH has been implicated in several neurodegenerative diseases and disorders, largely through interactions with other proteins specific to that disease or disorder. These interactions may affect not only energy metabolism but also other GAPDH functions. For example, GAPDH interactions with beta-amyloid precursor protein (betaAPP) could interfere with its function regarding the cytoskeleton or membrane transport, while interactions with huntingtin could interfere with its function regarding apoptosis, nuclear tRNA transport, DNA replication, and DNA repair. In addition, nuclear translocation of GAPDH has been reported in Parkinson's disease (PD), and several anti-apoptotic PD drugs, such as rasagiline, function by preventing the nuclear translocation of GAPDH. It is proposed that hypometabolism may be one contributor to PD, but the exact mechanisms underlying GAPDH involvement in neurodegenerative disease remain to be clarified. The SNP rs3741916 in the 5' UTR of the GAPDH gene may be associated with late onset Alzheimer's disease.
Interactions
Protein binding partners
GAPDH participates in a number of biological functions through its protein–protein interactions with:
tubulin to facilitate microtubule bundling;
actin to facilitate actin polymerization;
VDAC1 to induce mitochondrial membrane permeabilization (MMP) and apoptosis;
Inositol 1,4,5-trisphosphate receptor to regulate intracellular Ca2+ signaling;
Oct-1 to form the coactivator complex OCA-S, which is required for histone H2B synthesis during S phase of the cell cycle;
p22 to aid microtubule organization;
Rab2 to facilitate endoplasmic reticulum (ER)–golgi transport;
Transferrin on the surface of diverse cells and in extracellular fluid;
Lactate dehydrogenase;
Lactoferrin;
Apurinic/apyrimidinic endonuclease (APE1), thus converting oxidized APE1 to its reduced form, to restart its endonuclease activity;
Promyelocytic leukaemia protein (PML) in an RNA-dependent fashion;
Rheb to sequester the GTPase during low glucose conditions;
Siah1 to form a complex that translocates to the nucleus, where it ubiquitinates and degrades nuclear proteins during nitrosative stress conditions;
GAPDH's competitor of Siah protein enhances life (GOSPEL) to block GAPDH interaction with Siah1 and, thus, cell death in response to oxidative stress;
p300/CREB binding protein (CBP), which acetylates GAPDH and, in turn, enhances the acetylation of additional apoptotic targets;
skeletal muscle-specific Ca2+/calmodulin-dependent protein kinase;
Akt;
Beta-amyloid precursor protein (betaAPP);
Huntingtin.
GAPDH can self-associate into homotypic oligomers/aggregates.
Nucleic acid binding partners
GAPDH binds to single-stranded RNA and DNA and a number of nucleic acid binding partners have been identified:
tRNA,
Hepatitis A viral RNA,
Hepatitis B viral RNA,
Hepatitis C viral RNA,
HPIV3,
lymphokine mRNA,
IFN-γ mRNA,
JEV mRNA, and
telomeric DNA.
Inhibitors
Desmethylselegiline
Koningic acid
Rasagiline
Selegiline
Interactive pathway map
References
Further reading
diagram of the GAPDH reaction mechanism from Lodish MCB at NCBI bookshelf
similar diagram from Alberts The Cell at NCBI bookshelf
External links
PDBe-KB provides an overview of all the structure information available in the PDB for Human Glyceraldehyde-3-phosphate dehydrogenase
EC 1.2.1
Glycolysis enzymes
Glycolysis | Glyceraldehyde 3-phosphate dehydrogenase | [
"Chemistry"
] | 3,068 | [
"Carbohydrate metabolism",
"Glycolysis"
] |
4,318,654 | https://en.wikipedia.org/wiki/Function%20point | The function point is a "unit of measurement" to express the amount of business functionality an information system (as a product) provides to a user. Function points are used to compute a functional size measurement (FSM) of software. The cost (in dollars or hours) of a single unit is calculated from past projects.
Standards
There are several recognized standards and/or public specifications for sizing software based on Function Point.
1. ISO Standards
FiSMA: ISO/IEC 29881:2010 Information technology – Systems and software engineering – FiSMA 1.1 functional size measurement method.
IFPUG: ISO/IEC 20926:2009 Software and systems engineering – Software measurement – IFPUG functional size measurement method.
Mark-II: ISO/IEC 20968:2002 Software engineering – Mk II Function Point Analysis – Counting Practices Manual
Nesma: ISO/IEC 24570:2018 Software engineering – Nesma functional size measurement method version 2.3 – Definitions and counting guidelines for the application of Function Point Analysis
COSMIC: ISO/IEC 19761:2011 Software engineering. A functional size measurement method.
OMG: ISO/IEC 19515:2019 Information technology — Object Management Group Automated Function Points (AFP), 1.0
The first five standards are implementations of the over-arching standard for Functional Size Measurement, ISO/IEC 14143. The OMG Automated Function Point (AFP) specification, led by the Consortium for IT Software Quality, provides a standard for automating the Function Point counting according to the guidelines of the International Function Point User Group (IFPUG). However, current implementations of this standard cannot distinguish External Outputs (EO) from External Inquiries (EQ) out of the box, without some upfront configuration.
Introduction
Function points were defined in 1979 in Measuring Application Development Productivity by Allan J. Albrecht at IBM. The functional user requirements of the software are identified and each one is categorized into one of five types: outputs, inquiries, inputs, internal files, and external interfaces. Once the function is identified and categorized into a type, it is then assessed for complexity and assigned a number of function points. Each of these functional user requirements maps to an end-user business function, such as a data entry for an Input or a user query for an Inquiry. This distinction is important because it tends to make the functions measured in function points map easily into user-oriented requirements, but it also tends to hide internal functions (e.g. algorithms), which also require resources to implement.
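As a concrete illustration (not part of the original text), the sketch below tallies an unadjusted function point count from the five function types. The complexity weights are commonly cited IFPUG values and should be checked against the current counting manual; the example application and its counts are hypothetical.

```python
# Illustrative unadjusted function point (UFP) tally. The weight table uses
# commonly cited IFPUG values (low/average/high) -- verify against the current
# IFPUG Counting Practices Manual before relying on them.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},    # External Inputs
    "EO":  {"low": 4, "average": 5, "high": 7},    # External Outputs
    "EQ":  {"low": 3, "average": 4, "high": 6},    # External Inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # Internal Logical Files
    "EIF": {"low": 5, "average": 7, "high": 10},   # External Interface Files
}

def unadjusted_fp(counts):
    """counts: {(function type, complexity): number of functions}."""
    return sum(n * WEIGHTS[ftype][cplx] for (ftype, cplx), n in counts.items())

# Hypothetical small application: 5 simple inputs, 3 average outputs,
# 2 simple inquiries, 1 average internal file, 1 simple interface file.
example = {("EI", "low"): 5, ("EO", "average"): 3,
           ("EQ", "low"): 2, ("ILF", "average"): 1, ("EIF", "low"): 1}
print(unadjusted_fp(example))   # 51 unadjusted function points
```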
There is currently no ISO recognized FSM Method that includes algorithmic complexity in the sizing result. Recently there have been different approaches proposed to deal with this perceived weakness, implemented in several commercial software products. The variations of the Albrecht-based IFPUG method designed to make up for this (and other weaknesses) include:
Early and easy function points – Adjusts for problem and data complexity with two questions that yield a somewhat subjective complexity measurement; simplifies measurement by eliminating the need to count data elements.
Engineering function points – Elements (variable names) and operators (e.g., arithmetic, equality/inequality, Boolean) are counted. This variation highlights computational function. The intent is similar to that of the operator/operand-based Halstead complexity measures.
Bang measure – Defines a function metric based on twelve primitive (simple) counts that affect or show Bang, defined as "the measure of true function to be delivered as perceived by the user." Bang measure may be helpful in evaluating a software unit's value in terms of how much useful function it provides, although there is little evidence in the literature of such application. The use of Bang measure could apply when re-engineering (either complete or piecewise) is being considered, as discussed in Maintenance of Operational Systems—An Overview.
Feature points – Adds changes to improve applicability to systems with significant internal processing (e.g., operating systems, communications systems). This allows accounting for functions not readily perceivable by the user, but essential for proper operation.
Weighted Micro Function Points – One of the newer models (2009) which adjusts function points using weights derived from program flow complexity, operand and operator vocabulary, object usage, and algorithm.
Fuzzy Function Points – Proposes a fuzzy and gradual transition between the low–medium and medium–high complexity classes
Contrast
The use of function points in favor of lines of code seeks to address several additional issues:
The risk of "inflation" of the created lines of code, which would reduce the value of the measurement system, if developers are incentivized to be more productive. FP advocates refer to this as measuring the size of the solution instead of the size of the problem.
Lines of Code (LOC) measures reward low-level languages, because more lines of code are needed than in a higher-level language to deliver a similar amount of functionality. C. Jones offers a method of correcting this in his work.
LOC measures are not useful during early project phases where estimating the number of lines of code that will be delivered is challenging. However, Function Points can be derived from requirements and therefore are useful in methods such as estimation by proxy.
Criticism
Albrecht observed in his research that Function Points were highly correlated to lines of code, which has resulted in a questioning of the value of such a measure if a more objective measure, namely counting lines of code, is available. In addition, there have been multiple attempts to address perceived shortcomings with the measure by augmenting the counting regimen. Others have offered solutions to circumvent the challenges by developing alternative methods which create a proxy for the amount of functionality delivered.
See also
COCOMO (Constructive Cost Model)
Comparison of development estimation software
COSMIC functional size measurement
Mark II method
Object point
Software development effort estimation
Software Sizing
Source lines of code
Use Case Points
The Simple Function Point method
References
External links
The International Function Point Users Group (IFPUG)
Software metrics
Software engineering costs
"Mathematics",
"Engineering"
] | 1,223 | [
"Software engineering",
"Quantity",
"Metrics",
"Software metrics"
] |
4,319,349 | https://en.wikipedia.org/wiki/Quantum%20reflection | Quantum reflection is a uniquely quantum phenomenon in which an object, such as a neutron or a small molecule, reflects smoothly and in a wavelike fashion from a much larger surface, such as a pool of mercury. A classically behaving neutron or molecule will strike the same surface much like a thrown ball, hitting only at one atomic-scale location where it is either absorbed or scattered. Quantum reflection provides a powerful experimental demonstration of particle-wave duality, since it is the extended quantum wave packet of the particle, rather than the particle itself, that reflects from the larger surface. It is similar to reflection high-energy electron diffraction, where electrons reflect and diffraction from surfaces, and grazing incidence atom scattering, where the fact that atoms (and ions) can also be waves is used to diffract from surfaces.
Definition
In a workshop about quantum reflection, the following definition of quantum reflection was suggested:
Quantum reflection is a classically counterintuitive phenomenon whereby the motion of particles is reverted "against the force" acting on them. This effect manifests the wave nature of particles and influences collisions of ultracold atoms and interaction of atoms with solid surfaces.
Observation of quantum reflection has become possible thanks to recent advances in trapping and cooling atoms.
Reflection of slow atoms
Although the principles of quantum mechanics apply to any particles, usually the term "quantum reflection" means reflection of atoms from a surface of condensed matter (liquid or solid). The full potential experienced by the incident atom does become repulsive at a very small distance from the surface (of the order of the size of atoms). This is when the atom becomes aware of the discrete character of the material. This repulsion is responsible for the classical scattering one would expect for particles incident on a surface. Such scattering can be diffuse rather than specular, so this component of the reflection is easy to distinguish. To reduce this part of the physical process, a grazing angle of incidence is used; this enhances the quantum reflection. This requirement of small incident velocities for the particles means that a non-relativistic approximation to quantum mechanics is appropriate.
Single-dimensional approximation
So far, one usually considers the single-dimensional case of this phenomenon, that is, when the potential has translational symmetry in two directions (x and y), such that only a single coordinate (z) is important. In this case one can examine the specular reflection of a slow neutral atom from a solid-state surface. Where one has an atom in a region of free space close to a material capable of being polarized, a combination of the pure van der Waals interaction and the related Casimir-Polder interaction attracts the atom to the surface of the material. The latter force dominates when the atom is comparatively far from the surface, and the former when the atom comes closer to the surface. The intermediate region is controversial, as it is dependent upon the specific nature and quantum state of the incident atom.
The condition for a reflection to occur as the atom experiences the attractive potential can be given by the presence of regions of space where the WKB approximation to the atomic wave-function breaks down. In accordance with this approximation the wavelength of the gross motion of the atom toward the surface, treated as a quantity local to every region along the z axis, is
λ(z) = 2πħ / √(2m(E − V(z))),
where m is the atomic mass, E is its energy, and V(z) is the potential it experiences. It is then clear that we cannot give meaning to this quantity where
|dλ(z)/dz| ≳ 1.
That is, in regions of space where the variation of the atomic wavelength is significant over its own length (i.e. the gradient of λ(z) is steep), there is no meaning in the approximation of a local wavelength. This breakdown occurs irrespective of the sign of the potential, V(z). In such regions part of the incident atom wave-function may become reflected. Such a reflection may occur for slow atoms experiencing the comparatively rapid variation of the Van der Waals potential near the material surface. This is just the same kind of phenomenon as occurs when light passes from a material of one refractive index to another of a significantly different index over a small region of space. Irrespective of the sign of the difference in index, there will be a reflected component of the light from the interface. Indeed, quantum reflection from the surface of a solid-state wafer allows one to make the quantum optical analogue of a mirror - the atomic mirror - to a high precision.
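A short numerical sketch (not from the article) of this breakdown criterion is given below, in reduced units with ħ = m = 1 and a hypothetical attractive van der Waals tail V(z) = −C3/z³ with C3 = 1. It simply evaluates |dλ/dz| on a grid and shows that a slow atom encounters a region where the criterion is violated, while a fast atom does not.

```python
import numpy as np

# Reduced-units model (hbar = m = 1, hypothetical C3 = 1): an atom of energy
# E = k**2 / 2 approaches an attractive van der Waals tail V(z) = -C3 / z**3.
# The WKB approximation fails wherever |d(lambda)/dz| becomes of order one,
# which is where quantum reflection can originate.
def local_wavelength(z, k):
    return 2.0 * np.pi / np.sqrt(k**2 + 2.0 / z**3)   # 2*pi/sqrt(2*(E - V))

z = np.logspace(-3, 3, 20001)              # distances from the surface
for k in (0.001, 1000.0):                  # slow atom vs fast atom
    lam = local_wavelength(z, k)
    badness = np.abs(np.gradient(lam, z))  # |d(lambda)/dz|
    print(k, round(float(badness.max()), 2))
# Roughly: the slow atom meets a region where |d(lambda)/dz| is of order tens
# (WKB breaks down, strong reflection expected); for the fast atom the maximum
# stays below one and the WKB description remains valid everywhere.
```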
Experiments with grazing incidence
In practice, many experiments on quantum reflection from Si use a grazing incidence angle (figure A).
The set-up is mounted in a vacuum chamber to provide a several-meter path free of atoms; a good vacuum (at the level of 10−7 Torr, i.e. about 10−5 Pa) is required. The magneto-optical trap (MOT) is used to collect cold atoms, usually excited He or Ne, approximating a point-like source of atoms. The excitation of the atoms is not essential for the quantum reflection, but it allows efficient trapping and cooling using optical frequencies. In addition, the excitation of the atoms allows their registration at the micro-channel plate (MCP) detector (bottom of the figure). Movable edges are used to stop atoms which do not go toward the sample (for example a Si plate), providing a collimated atomic beam. A He-Ne laser was used to control the orientation of the sample and to measure the grazing angle θ. At the MCP one observes a relatively intense strip of atoms which come straight (without reflection) from the MOT, by-passing the sample, a strong shadow of the sample (the thickness of this shadow could be used for rough control of the grazing angle), and a relatively weak strip produced by the reflected atoms. The ratio of the density of atoms registered at the center of this strip to the density of atoms in the directly illuminated region was taken as the efficiency of quantum reflection, i.e., the reflectivity. This reflectivity strongly depends on the grazing angle and the speed of the atoms.
In the experiments with Ne atoms, the atoms usually just fall down when the MOT is suddenly switched off. The speed of the atoms arriving at the sample is then determined as v = √(2gh), where g is the acceleration of free fall and h is the distance from the MOT to the sample. The transversal wavenumber can then be calculated as k = (mv/ħ)·sin(θ), where m is the mass of the atom, θ is the grazing angle, and ħ is the reduced Planck constant.
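A worked-numbers sketch of these two formulas, with an assumed drop height and grazing angle (illustrative values only, not the actual parameters of the experiments):

```python
import math

amu = 1.66053906660e-27
hbar = 1.054571817e-34

m = 20 * amu        # kg, Ne atom
g = 9.81            # m/s^2, acceleration of free fall
h = 0.3             # m, assumed MOT-to-sample drop distance
theta = 1e-3        # rad, assumed grazing angle

v = math.sqrt(2 * g * h)               # arrival speed after free fall
k_t = m * v * math.sin(theta) / hbar   # transversal wavenumber

print(f"v   = {v:.2f} m/s")            # about 2.4 m/s for a 0.3 m drop
print(f"k_t = {k_t:.2e} 1/m")          # about 8e5 1/m for a 1 mrad grazing angle
```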
In the case with He, an additional resonant laser could be used to release the atoms and provide them with an additional velocity; the delay between the release of the atoms and their registration allowed this additional velocity to be estimated: roughly, it is the flight path divided by the time delay from the release of the atoms to the click at the detector. In practice, this speed could be varied over a wide range.
Although the scheme in the figure looks simple, an extended facility is necessary to slow the atoms, trap them and cool them to millikelvin temperatures, providing a micrometre-sized source of cold atoms. Practically, the mounting and maintenance of this facility (not shown in the figure) is the heaviest job in experiments on quantum reflection of cold atoms. The possibility of an experiment on quantum reflection with just a pinhole instead of a MOT has been discussed in the literature.
Casimir and van der Waals attraction
Despite this, there is some doubt as to the physical origin of quantum reflection from solid surfaces. As was briefly mentioned above, the potential in the intermediate region between the regions dominated by the Casimir-Polder and Van der Waals interactions requires an explicit quantum electrodynamical calculation for the particular state and type of atom incident on the surface. Such a calculation is very difficult. Indeed, there is no reason to suppose that this potential is solely attractive within the intermediate region. Thus the reflection could simply be explained by a repulsive force, which would make the phenomenon not quite so surprising. Furthermore, a similar dependence of reflectivity on the incident velocity is observed in the case of the absorption of particles in the vicinity of a surface. In the simplest case, such absorption could be described with a non-Hermitian potential (i.e. one where probability is not conserved). Until 2006, the published papers interpreted the reflection in terms of a Hermitian potential; this assumption allows a quantitative theory to be built.
Efficient quantum reflection
A qualitative estimate for the efficiency of quantum reflection can be made using dimensional analysis. Letting m be the mass of the atom and k the normal component of its wave-vector, the energy of the normal motion of the particle, E = ħ²k²/(2m), should be compared to the potential of interaction, U(z). The distance |z| at which E = |U(z)| can be considered as the distance at which the atom comes across a troublesome discontinuity in the potential; this is the point at which the WKB method truly becomes nonsense. The condition for efficient quantum reflection can be written as k|z| < 1; in other words, the distance at which the atom may become reflected from the surface should be small compared to the wavelength of its normal motion. If this condition holds, the aforementioned effect of the discrete character of the surface may be neglected. This argument produces a simple estimate for the reflectivity in terms of the single parameter k|z|, which shows good agreement with experimental data for excited neon and helium atoms reflected from a flat silicon surface (fig. 1); see the cited works and references therein. Such a fit is also in good agreement with a one-dimensional analysis of the scattering of atoms from an attractive potential. Such agreement indicates that, at least in the case of noble gases and a Si surface, quantum reflection can be described with a one-dimensional Hermitian potential, as the result of attraction of the atoms to the surface.
Ridged mirror
The effect of quantum reflection can be enhanced using ridged mirrors. If one produces a surface consisting of a set of narrow ridges, the resulting non-uniformity of the material allows the reduction of the effective van der Waals constant; this extends the working range of the grazing angle. For this reduction to be valid, the distance between the ridges must be small. Where this distance becomes large, the non-uniformity is such that the ridged mirror must be interpreted in terms of multiple Fresnel diffraction or the Zeno effect; these interpretations give similar estimates for the reflectivity. See ridged mirror for the details.
A similar enhancement of quantum reflection takes place where one has particles incident on an array of pillars. This was observed with very slow atoms (a Bose–Einstein condensate) at almost normal incidence.
Application of quantum reflection
Quantum reflection makes the idea of solid-state atomic mirrors and atomic-beam imaging systems (atomic nanoscope) possible. The use of quantum reflection in the production of atomic traps has also been suggested.
References
See also
Atom optics
Ridged mirror
Casimir force
van der Waals potential
Quantum optics | Quantum reflection | [
"Physics"
] | 2,172 | [
"Quantum optics",
"Quantum mechanics"
] |
4,319,802 | https://en.wikipedia.org/wiki/Methyl%20isothiocyanate | Methyl isothiocyanate is the organosulfur compound with the formula CH3N=C=S. This low melting colorless solid is a powerful lachrymator. As a precursor to a variety of valuable bioactive compounds, it is the most important organic isothiocyanate in industry.
Synthesis
It is prepared industrially by two routes. Annual production in 1993 was estimated to be 4,000 tonnes. The main method involves the thermal rearrangement of methyl thiocyanate:
CH3S−C≡N → CH3N=C=S
It is also prepared via the reaction of methylamine with carbon disulfide followed by oxidation of the resulting dithiocarbamate with hydrogen peroxide. A related method is useful for preparing this compound in the laboratory.
MITC forms naturally upon the enzymatic degradation of glucocapparin, a glucoside found in capers.
Reactions
A characteristic reaction is with amines to give methyl thioureas:
CH3NCS + R2NH → R2NC(S)NHCH3
Other nucleophiles add similarly.
Applications
Solutions of MITC are used in agriculture as soil fumigants, mainly for protection against fungi and nematodes.
MITC is a building block for the synthesis of 1,3,4-thiadiazoles, which are heterocyclic compounds used as herbicides. Commercial products include "Spike", "Ustilan," and "Erbotan."
Well known pharmaceuticals prepared using MITC include Zantac and Tagamet. Suritozole is a third example.
MITC is used in the Etasuline patent (Ex2), although the compound in question (Ex6) is prepared with EITC.
Safety
MITC is a dangerous lachrymator as well as being poisonous.
See also
6-MITC
Bhopal disaster
References
Methyl esters
Isothiocyanates
Lachrymatory agents | Methyl isothiocyanate | [
"Chemistry"
] | 413 | [
"Isothiocyanates",
"Lachrymatory agents",
"Functional groups",
"Chemical weapons"
] |
9,715,483 | https://en.wikipedia.org/wiki/Collectionwise%20normal%20space | In mathematics, a topological space X is called collectionwise normal if for every discrete family Fi (i ∈ I) of closed subsets of X there exists a pairwise disjoint family of open sets Ui (i ∈ I), such that Fi ⊆ Ui. Here a family of subsets of X is called discrete when every point of X has a neighbourhood that intersects at most one of the sets from the family.
An equivalent definition of collectionwise normal demands that the above Ui (i ∈ I) themselves form a discrete family, which is stronger than pairwise disjoint.
Some authors assume that is also a T1 space as part of the definition, but no such assumption is made here.
The property is intermediate in strength between paracompactness and normality, and occurs in metrization theorems.
Properties
A collectionwise normal space is collectionwise Hausdorff.
A collectionwise normal space is normal.
A Hausdorff paracompact space is collectionwise normal. In particular, every metrizable space is collectionwise normal (a direct construction for metric spaces is sketched below). Note: The Hausdorff condition is necessary here, since for example an infinite set with the cofinite topology is compact, hence paracompact, and T1, but is not even normal.
Every normal countably compact space (hence every normal compact space) is collectionwise normal. Proof: Use the fact that in a countably compact space any discrete family of nonempty subsets is finite.
An Fσ-set in a collectionwise normal space is also collectionwise normal in the subspace topology. In particular, this holds for closed subsets.
The Bing metrization theorem states that a collectionwise normal Moore space is metrizable.
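To illustrate the property above that every metrizable space is collectionwise normal, here is a minimal sketch of the standard construction of the separating open sets for a metric space (X, d):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch: every metric space $(X,d)$ is collectionwise normal.
% Given a discrete family $(F_i)_{i\in I}$ of closed sets, put
% (with the convention $d(x,\emptyset)=\infty$)
\[
  U_i \;=\; \Bigl\{\, x \in X : d(x,F_i) < d\bigl(x,\textstyle\bigcup_{j\neq i}F_j\bigr) \Bigr\}.
\]
% Each $U_i$ is open, because $x \mapsto d(x,F_i) - d(x,\bigcup_{j\neq i}F_j)$ is continuous.
% Each $U_i$ contains $F_i$: a discrete family is locally finite, so
% $\bigcup_{j\neq i}F_j$ is closed and disjoint from $F_i$, hence for $x\in F_i$
% the left-hand distance is $0$ while the right-hand one is positive.
% The $U_i$ are pairwise disjoint: $x\in U_i\cap U_j$ with $i\neq j$ would give
% $d(x,F_i) < d(x,F_j)$ and $d(x,F_j) < d(x,F_i)$ at the same time.
\end{document}
```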
Hereditarily collectionwise normal space
A topological space X is called hereditarily collectionwise normal if every subspace of X with the subspace topology is collectionwise normal.
In the same way that hereditarily normal spaces can be characterized in terms of separated sets, there is an equivalent characterization for hereditarily collectionwise normal spaces. A family of subsets of X is called a separated family if for every i we have Fi ∩ cl(⋃j≠i Fj) = ∅, with cl denoting the closure operator in X; in other words, if the family of the Fi is discrete in its union. The following conditions are equivalent:
X is hereditarily collectionwise normal.
Every open subspace of X is collectionwise normal.
For every separated family (Fi) of subsets of X, there exists a pairwise disjoint family of open sets (Ui), such that Fi ⊆ Ui.
Examples of hereditarily collectionwise normal spaces
Every linearly ordered topological space (LOTS)
Every generalized ordered space (GO-space)
Every metrizable space. This follows from the fact that metrizable spaces are collectionwise normal and being metrizable is a hereditary property.
Every monotonically normal space
Notes
References
Properties of topological spaces | Collectionwise normal space | [
"Mathematics"
] | 580 | [
"Properties of topological spaces",
"Topological spaces",
"Topology",
"Space (mathematics)"
] |
9,715,761 | https://en.wikipedia.org/wiki/Henry%20Primakoff | Henry Primakoff (; February 12, 1914 – July 25, 1983) was an American theoretical physicist who is famous for his discovery of the Primakoff effect.
Primakoff contributed to the understanding of weak interactions, double beta decay, spin waves in ferromagnetism, and the interaction between neutrinos and the atomic nucleus. Along with Theodore Holstein, Primakoff also developed the Holstein–Primakoff transformation which is designed to treat spin waves as bosonic excitations.
Life
Henry Primakoff was born in 1914 in Odesa, Russian Empire, into a Jewish family. His father Chaim Primakov (a pharmacist) and his mother Maryem Primakova (nee Katz) were married in the office of the Municipal Rabbi of Odesa on June 30, 1913. His mother and his grandparents decided to escape from Russia to the United States, through Romania and later Germany, where they finally took a steamship. They settled in New York City in 1922
Primakoff graduated from Columbia University in 1936, and obtained his PhD in physics from New York University in 1938.
During his university studies he met the biochemist Mildred Cohn, whom he married in 1938.
In 1940 he worked at the Polytechnic Institute of Brooklyn, subsequently at the Queens College, and then at Washington University in St. Louis starting in 1946.
During World War II, J. Robert Oppenheimer tried to convince him to join the Manhattan Project, but Primakoff declined due to the short time frame for making the atomic bomb.
Primakoff was the first Donner Professor of Physics in the University of Pennsylvania in 1960.
Primakoff died of cancer in 1983 in Philadelphia, United States.
Fellowships, awards and honors
In 1968 he was elected a member of the U.S. National Academy of Sciences.
In 2011 the American Physical Society established the Henry Primakoff Award for Early-Career Particle Physics.
References
External links
Henry Primakoff National Academy of Sciences biographical memoirs.
Henry Primakoff, Array of Contemporary American Physicists, AIP
publications of primakoff,h - INSPIRE-HEP
1914 births
1983 deaths
Scientists from Odesa
Theoretical physicists
Soviet emigrants to the United States
Washington University in St. Louis physicists
Members of the United States National Academy of Sciences
University of Pennsylvania faculty
20th-century Ukrainian physicists
Odesa Jews
Jewish American physicists
Fellows of the American Physical Society | Henry Primakoff | [
"Physics"
] | 471 | [
"Theoretical physics",
"Theoretical physicists"
] |
9,716,092 | https://en.wikipedia.org/wiki/Codeine | Codeine is an opiate and prodrug of morphine mainly used to treat pain, coughing, and diarrhea. It is also commonly used as a recreational drug. It is found naturally in the sap of the opium poppy, Papaver somniferum. It is typically used to treat mild to moderate degrees of pain. Greater benefit may occur when combined with paracetamol (acetaminophen) or a nonsteroidal anti-inflammatory drug (NSAID) such as aspirin or ibuprofen. Evidence does not support its use for acute cough suppression in children. In Europe, it is not recommended as a cough medicine for those under 12 years of age. It is generally taken by mouth. It typically starts working after half an hour, with maximum effect at two hours. Its effects last for about four to six hours. Codeine exhibits abuse potential similar to other opioid medications, including a risk of addiction and overdose.
Common side effects include vomiting, constipation, itchiness, lightheadedness, and drowsiness. Serious side effects may include breathing difficulties and addiction. Whether its use in pregnancy is safe is unclear. Care should be used during breastfeeding, as it may result in opiate toxicity in the baby. Its use as of 2016 is not recommended in children. Codeine works following being broken down by the liver into morphine; how quickly this occurs depends on a person's genetics.
Codeine was discovered in 1832 by Pierre Jean Robiquet. In 2013, about 361,000 kg (795,000 lb) of codeine were produced while 249,000 kg (549,000 lb) were used, which made it the most commonly taken opiate. It is on the World Health Organization's List of Essential Medicines. Codeine occurs naturally and makes up about 2% of opium.
Medical uses
Pain
Codeine is used to treat mild to moderate pain. It is commonly used to treat post-surgical dental pain.
Weak evidence indicates that it is useful in cancer pain, but it may have increased adverse effects, especially constipation, compared to other opioids. The American Academy of Pediatrics does not recommend its use in children due to side effects. The Food and Drug Administration (FDA) lists age under 12 years old as a contraindication to use.
Cough
Codeine is used to relieve coughing. Evidence does not support its use for acute cough suppression in children. In Europe, it is not recommended as a cough medicine for those under 12 years of age. Some tentative evidence shows it can reduce a chronic cough in adults.
Diarrhea
It is used to treat diarrhea and diarrhea-predominant irritable bowel syndrome, although loperamide (which is available without a prescription for milder diarrhea), diphenoxylate, paregoric, or even laudanum are more frequently used to treat severe diarrhea.
Formulations
Codeine is marketed as both a single-ingredient drug and in combination preparations with paracetamol (as co-codamol: e.g., brands Paracod, Panadeine, and the Tylenol-with-codeine series, including Tylenol 3 and 1, 2, and 4); with aspirin (as co-codaprin); or with ibuprofen (as Nurofen Plus). These combinations provide greater pain relief than either agent alone (drug synergy).
Codeine is also commonly marketed in products containing codeine with other pain killers or muscle relaxers, as well as codeine mixed with phenacetin (Emprazil with codeine No. 1, 2, 3, 4, and 5), naproxen, indomethacin, diclofenac, and others, as well as more complex mixtures, including such mixtures as aspirin + paracetamol + codeine ± caffeine ± antihistamines and other agents, such as those mentioned above.
Codeine-only products can be obtained with a prescription as a time-release tablet. Codeine is also marketed in cough syrups with zero to a half-dozen other active ingredients, and a linctus (e.g., Paveral) for all of the uses for which codeine is indicated.
Injectable codeine is available for subcutaneous or intramuscular injection only; intravenous injection is contraindicated, as this can result in nonimmune mast-cell degranulation and resulting anaphylactoid reaction. Codeine suppositories are also marketed in some countries.
Side effects
Common adverse effects associated with the use of codeine include drowsiness and constipation. Less common are itching, nausea, vomiting, dry mouth, miosis, orthostatic hypotension, urinary retention, euphoria, and dysphoria. Rare adverse effects include anaphylaxis, seizure, acute pancreatitis, and respiratory depression. As with all opiates, long-term effects can vary, but can include diminished libido, apathy, and memory loss. Some people may have allergic reactions to codeine, such as the swelling of the skin and rashes.
Tolerance to many of the effects of codeine, including its therapeutic effects, develops with prolonged use. This occurs at different rates for different effects, with tolerance to the constipation-inducing effects developing particularly slowly for instance.
As with other opioids, a potentially serious adverse drug reaction is respiratory depression. This depression is dose-related and is a mechanism for the potentially fatal consequences of overdose. As codeine is metabolized to morphine, morphine can be passed through breast milk in potentially lethal amounts, fatally depressing the respiration of a breastfed baby.
In August 2012, the United States Food and Drug Administration issued a warning about deaths in pediatric patients less than 6 years old after ingesting "normal" doses of paracetamol with codeine after tonsillectomy; this warning was upgraded to a black box warning in February 2013.
Some patients are very effective converters of codeine to its active form, morphine, resulting in lethal blood levels. The FDA is presently recommending very cautious use of codeine in young tonsillectomy patients; the drug should be used in the lowest amount that can control the pain, "as needed" and not "around the clock", and immediate medical attention is needed if the user responds negatively.
Withdrawal and dependence
As with other opiates, chronic use of codeine can cause physical dependence which can lead to severe withdrawal symptoms if a person suddenly stops the medication. Withdrawal symptoms include drug craving, runny nose, yawning, sweating, insomnia, weakness, stomach cramps, nausea, vomiting, diarrhea, muscle spasms, chills, irritability, and pain. These side effects also occur in acetaminophen/aspirin combinations, though to a lesser extent. To minimize withdrawal symptoms, long-term users should gradually reduce their codeine medication under the supervision of a healthcare professional.
Also, no evidence indicates that CYP2D6 inhibition is useful in treating codeine dependence, though the metabolism of codeine to morphine (and hence further metabolism to glucuronide morphine conjugates) does have an effect on the abuse potential of codeine. However, CYP2D6 has been implicated in the toxicity and death of neonates when codeine is administered to lactating mothers, particularly those with increased enzyme activity ("ultra-rapid" metabolizers).
In 2019 Ireland was said to be on the verge of a codeine addiction epidemic, according to a paper in the Irish Medical Journal. Under Irish law, codeine can be bought over the counter under the supervision of a pharmacist, but there is no mechanism to detect patients travelling to different pharmacies to purchase codeine.
Pharmacology
Pharmacodynamics
Codeine is a nonsynthetic opioid. It is a selective agonist of the μ-opioid receptor (MOR). Codeine itself has relatively weak affinity for the MOR. Instead of acting directly on the MOR, codeine functions as a prodrug of its major active metabolites morphine and codeine-6-glucuronide, which are far more potent MOR agonists in comparison.
Codeine has been found as an endogenous compound, along with morphine, in the brains of nonhuman primates with depolarized neurons, indicating that codeine may function as a neurotransmitter or neuromodulator in the central nervous system. Like morphine, codeine causes TLR4 signaling which causes allodynia and hyperalgesia. It does not need to be converted to morphine to increase pain sensitivity.
Mechanism of action
Codeine is an opiate and an agonist of the mu opioid receptor (MOR). It acts on the central nervous system to have an analgesic effect. It is metabolised in the liver to produce morphine which is ten times more potent against the mu receptor. Opioid receptors are G protein-coupled receptors that positively and negatively regulate synaptic transmission through downstream signalling. Binding of codeine or morphine to the mu-opioid receptor results in hyperpolarization of the neuron leading to the inhibition of the release of nociceptive neurotransmitters, causing an analgesic effect and increased pain tolerance due to reduced neuronal excitability.
Pharmacokinetics
The conversion of codeine to morphine occurs in the liver and is catalyzed by the cytochrome P450 enzyme CYP2D6. CYP3A4 produces norcodeine, and UGT2B7 conjugates codeine, norcodeine, and morphine to the corresponding 3- and 6-glucuronides. Srinivasan, Wielbo, and Tebbett speculate that codeine-6-glucuronide is responsible for a large percentage of the analgesia of codeine, and thus patients who metabolize codeine poorly via CYP2D6 should still experience some analgesia. Many of the adverse effects will still be experienced in poor metabolizers. Conversely, between 0.5% and 2% of the population are "extensive metabolizers"; multiple copies of the gene for 2D6 produce high levels of CYP2D6 and will metabolize drugs through that pathway more quickly than others.
Some medications are CYP2D6 inhibitors and reduce or even completely block the conversion of codeine to morphine. The best-known of these are two of the selective serotonin reuptake inhibitors, paroxetine (Paxil) and fluoxetine (Prozac) as well as the antihistamine diphenhydramine (Benadryl) and the antidepressant bupropion (Wellbutrin, also known as Zyban). Other drugs, such as rifampicin and dexamethasone, induce CYP450 isozymes and thus increase the conversion rate.
CYP2D6 converts codeine into morphine, which then undergoes glucuronidation. Life-threatening intoxication, including respiratory depression requiring intubation, can develop over a matter of days in patients who have multiple functional alleles of CYP2D6, resulting in ultrarapid metabolism of opioids such as codeine into morphine.
Studies on codeine's analgesic effect are consistent with the idea that metabolism by CYP2D6 to morphine is important, but some studies show no major differences between those who are poor metabolizers and extensive metabolizers. Evidence supporting the hypothesis that ultrarapid metabolizers may get greater analgesia from codeine due to increased morphine formation is limited to case reports.
Due to the increased metabolism of codeine to morphine, ultrarapid metabolizers (those possessing more than two functional copies of the CYP2D6 allele) are at increased risk of adverse drug effects related to morphine toxicity. Guidelines released by the Clinical Pharmacogenomics Implementation Consortium (CPIC) advise against administering codeine to ultrarapid metabolizers, where this genetic information is available. The CPIC also suggests that codeine use be avoided in poor metabolizers, due to its lack of efficacy in this group.
Codeine and its salts are readily absorbed from the gastrointestinal tract, and ingestion of codeine phosphate produces peak plasma concentrations in about one hour. Plasma half-life is between 3 and 4 hours, and the oral/intramuscular analgesic potency ratio is approximately 1:1.5. The most common conversion ratio, given on equianalgesia charts used in the United States, Canada, the UK, the Republic of Ireland, the European Union, Russia and elsewhere, is that 130 mg IM equals 200 mg PO—both of which are equivalent to 10 mg of morphine sulphate IV and 60 mg of morphine sulphate PO. The salt:freebase ratios of the salts of both drugs in use are roughly equivalent, and do not generally make a clinical difference.
Codeine is metabolised by O- and N-demethylation in the liver to morphine and norcodeine. Hydrocodone is also a metabolite of codeine in humans. Codeine and its metabolites are mostly removed from the body by the kidneys, primarily as conjugates with glucuronic acid.
The active metabolites of codeine, notably morphine, exert their effects by binding to and activating the μ-opioid receptor. In people that can extensively metabolize the codeine, a 30 mg dose could yield up to 4 mg of morphine.
Chemistry
While codeine can be directly extracted from opium, its source, most codeine is synthesized from the much more abundant morphine through the process of O-methylation, first completed in the late 20th century by Robert C. Corcoran and Junning Ma.
Relation to other opioids
Codeine has been used in the past as the starting material and prototype of a large class of mainly mild to moderately strong opioids, such as hydrocodone (1920 in Germany), oxycodone (1916 in Germany), dihydrocodeine (1908 in Germany), and its derivatives such as nicocodeine (1956 in Austria). However, these opioids are no longer synthesized from codeine and are usually synthesized from other opium alkaloids, specifically thebaine. Other series of codeine derivatives include isocodeine and its derivatives, which were developed in Germany starting around 1920. In general, the various classes of morphine derivatives such as ketones, semisynthetics like dihydromorphine, halogeno-morphides, esters, ethers, and others have codeine, dihydrocodeine, and isocodeine analogues. The codeine ester acetylcodeine is a common active impurity in street heroin as some codeine tends to dissolve with the morphine when it is extracted from opium in underground heroin and morphine base labs.
As an analgesic, codeine compares weakly to other opiates. Related to codeine in other ways are codoxime, thebacon, codeine-N-oxide (genocodeine), related to the nitrogen morphine derivatives as is codeine methobromide, and heterocodeine, which is a drug six times stronger than morphine and 72 times stronger than codeine due to a small re-arrangement of the molecule, namely moving the methyl group from the 3 to the 6 position on the morphine carbon skeleton.
Drugs bearing resemblance to codeine in effects due to close structural relationship are variations on the methyl groups at the 3 position including ethylmorphine, also known as codethyline (Dionine), and benzylmorphine (Peronine). While having no narcotic effects of its own, the important opioid precursor thebaine differs from codeine only slightly in structure. Pseudocodeine and some other similar alkaloids not currently used in medicine are found in trace amounts in opium as well.
History
Codeine, or 3-methylmorphine, is an alkaloid found in the opium poppy, Papaver somniferum var. album, a plant in the family Papaveraceae. Opium poppy has been cultivated and utilized throughout human history for a variety of medicinal (analgesic, anti-tussive and anti-diarrheal) and hypnotic properties linked to the diversity of its active components, which include morphine, codeine and papaverine.
Codeine is found in concentrations of 1% to 3% in opium prepared by the latex method from unripe pods of Papaver somniferum. The name codeine is derived from the Ancient Greek κώδεια (kṓdeia, "poppy head"). The relative proportion of codeine to morphine, the most common opium alkaloid at 4% to 23%, tends to be somewhat higher in the poppy straw method of preparing opium alkaloids.
Until the beginning of the 19th century, raw opium was used in diverse preparations known as laudanum (see Thomas de Quincey's Confessions of an English Opium-Eater, 1821) and paregoric elixirs, several which were popular in England since the beginning of the 18th century; the original preparation seems to have been elaborated in Leiden, the Netherlands around 1715 by a chemist Jakob Le Mort; in 1721 the London Pharmacopoeia mentions an Elixir Asthmaticum, replaced by the term Elixir Paregoricum ("pain soother") in 1746.
The progressive isolation of opium's several active components opened the path to improved selectivity and safety of the opiates-based pharmacopeia.
Morphine had already been isolated in Germany by Friedrich Sertürner in 1804. Codeine was first isolated in 1832 in France by Pierre Jean Robiquet, already famous for the discovery of alizarin, the most widespread red dye, while working on refined morphine extraction processes. Robiquet is also credited with discovering caffeine independently of Pelletier, Caventou, and Runge. Thomas Anderson determined the correct composition in 1853 but a chemical structure was proposed only in 1925 by J. M. Gulland and Robert Robinson. The first crystal structure would have to wait until 1954.
Codeine and morphine, as well as opium, were used in an attempt to treat diabetes in the 1880s and thereafter, as recently as the 1950s.
Numerous codeine salts have been prepared since the drug was discovered. The most commonly used are the hydrochloride (freebase conversion ratio 0.805, i.e. 10 mg of the hydrochloride salt is equivalent in effect to 8.05 mg of the freebase form), phosphate (0.736), sulphate (0.859), and citrate (0.842). Others include a salicylate NSAID, codeine salicylate (0.686), a bromide (codeine methylbromide, 0.759), and at least five codeine-based barbiturates, the phenylethylbarbiturate (0.56), cyclohexenylethylbarbiturate (0.559), cyclopentenylallylbarbiturate (0.561), (0.561), and diethylbarbiturate (0.619). The latter was introduced as Codeonal in 1912, indicated for pain with nervousness. Codeine methylbromide is also considered a separate drug for various purposes.
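A small illustrative sketch of the freebase conversion arithmetic above; the ratios are the ones quoted in this paragraph, and the helper function and its name are arbitrary rather than part of any standard pharmacology library:

```python
# Freebase conversion: multiply the labelled salt dose by the salt's
# freebase conversion ratio to get the equivalent amount of codeine base.
SALT_TO_FREEBASE = {
    "hydrochloride": 0.805,
    "phosphate": 0.736,
    "sulphate": 0.859,
    "citrate": 0.842,
}

def freebase_equivalent(dose_mg: float, salt: str) -> float:
    """Return the codeine freebase equivalent (mg) of a given salt dose."""
    return dose_mg * SALT_TO_FREEBASE[salt]

# Example from the text: 10 mg of the hydrochloride salt ~ 8.05 mg freebase.
print(round(freebase_equivalent(10, "hydrochloride"), 2))  # 8.05
print(round(freebase_equivalent(30, "phosphate"), 2))      # 22.08
```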
Society and culture
Codeine is the most widely used opiate in the world, and is one of the most commonly used drugs overall according to numerous reports by organizations including the World Health Organization and its League of Nations predecessor agency.
Names
It is often sold as a salt in the form of either codeine sulfate or codeine phosphate in the United States, United Kingdom, and Australia. Codeine hydrochloride is more common worldwide and the citrate, hydroiodide, hydrobromide, tartrate, and other salts are also seen. The chemical name for codeine is morphinan-6-ol, 7,8-didehydro-4,5-epoxy-3-methoxy-17-methyl-, (5α,6α)-
Recreational use
A heroin (diamorphine) or other opiate/opioid addict may use codeine to ward off the effects of withdrawal during periods where their preferred drug is unavailable or unaffordable.
Codeine is also available in conjunction with the anti-nausea medication promethazine in the form of a syrup. Brand named as Phenergan with Codeine or in generic form as promethazine with Codeine, it began to be mixed with soft drinks in the 1990s as a recreational drug, called 'syrup', 'lean', or 'purple drank'. Rapper Pimp C, from the group UGK, died from an overdose of this combination.
Codeine is used in illegal drug laboratories to make morphine.
Detection
Codeine and its major metabolites may be quantitated in blood, plasma, or urine to monitor therapy, confirm a diagnosis of poisoning, or assist in a medico-legal death investigation. Drug abuse screening programs generally test urine, hair, sweat or saliva. Many commercial opiate screening tests directed at morphine cross-react appreciably with codeine and its metabolites, but chromatographic techniques can easily distinguish codeine from other opiates and opioids. It is important to note that codeine usage results in significant amounts of morphine as an excretion product. Furthermore, heroin contains codeine (or acetyl codeine) as an impurity and its use will result in the excretion of small amounts of codeine. Poppy seed foods represent yet another source of low levels of codeine in one's biofluids. Blood or plasma codeine concentrations are typically in the 50–300 μg/L range in persons taking the drug therapeutically, 700–7,000 μg/L in chronic users, and 1,000–10,000 μg/L in cases of acute fatal over dosage.
Codeine is produced in the human body along the same biosynthetic pathway as morphine. Urinary concentrations of endogenous codeine and morphine have been found to significantly increase in individuals taking L-DOPA for the treatment of Parkinson's disease.
Legal status
Around the world, codeine is, contingent on its concentration, a Schedule II and III drug under the Single Convention on Narcotic Drugs. In Australia, Canada, New Zealand, Sweden, the United Kingdom, the United States and many other countries, codeine is regulated under various narcotic control laws. In some countries, it is available without a medical prescription in combination preparations from licensed pharmacists in doses up to 20 mg, or 30 mg when sold combined with 500 mg paracetamol.
As of 2015, of the European Union member states, 11 countries (Bulgaria, Cyprus, Denmark, Estonia, Ireland, Latvia, Lithuania, Malta, Poland, Romania, and Slovenia) allow the sale of OTC codeine solid dosage forms.
Australia
In Australia, since 1 February 2018, preparations containing codeine are not available without a prescription.
Preparations containing pure codeine (e.g., codeine phosphate tablets or codeine phosphate linctus) are available on prescription and are considered S8 (Schedule 8, or "Controlled Drug Possession without authority illegal"). Schedule 8 preparations are subject to the strictest regulation of all medications available to consumers.
Prior to 1 February 2018, Codeine was available over-the-counter (OTC).
Canada
In Canada, codeine is regulated under the Narcotic Control Regulations (NCR), which falls under the Controlled Drugs and Substances Act (CDSA). Regulations state the pharmacists may, without a prescription, sell low-dose codeine products (containing up to 8 mg of codeine per tablet or up to 20 mg per 30 ml in liquid preparation) if the preparation contains at least two
additional medicinal ingredients other than a narcotic (S.36.1 NCR).
In Canada tablets containing 8 mg of codeine combined with 15 mg of caffeine and 300 mg of acetaminophen are sold as T1s (Tylenol Number 1) without a prescription. A similar tablet called "A.C. & C." (which stands for Acetylsalicylic acid with Caffeine and Codeine) containing 325–375 mg of acetylsalicylic acid (Aspirin) instead of acetaminophen is also available without a prescription. Codeine combined with an antihistamine, and often caffeine, is sold under various trade names and is available without a prescription. These products are kept behind the counter and must be dispensed by a pharmacist who may limit quantities.
Names of many codeine and dihydrocodeine products in Canada tend to follow the narcotic content number system (Tylenol With Codeine No. 1, 2, 3, 4 &c) mentioned below in the section on the United States; it came to be in its current form with the Pure Food & Drug Act of 1906.
Under the Controlled Drugs and Substances Act (S.C. 1996, c. 19), effective 28 July 2020, codeine is now classified under Schedule 1, giving it a higher priority in the treatment of offenders of the law.
Codeine became a prescription-only medication in the province of Manitoba on 1 February 2016. The number of low-dose codeine tablets sold in Manitoba decreased by 94 percent from 52.5 million tablets sold in the year prior to the policy change to 3.3 million in the year after. A pharmacist may issue a prescription, and all purchases are logged to a central database to prevent overprescribing. Saskatchewan's pharmacy college is considering enacting a similar ban to Manitoba's.
On 9 May 2019, the Canadian Pharmacists Association wrote to Health Canada proposing regulations amending the NCR, the BOTSR, and the FDR - Part G, which included requiring that all products containing codeine be available by prescription only.
New safety measures were issued by Health Canada on 28 July 2016; "codeine should no longer be used (contraindicated) in patients under 18 years of age to treat pain after surgery to remove tonsils or adenoids, as these patients are more susceptible to the risk of serious breathing problems. Codeine (prescription and non-prescription) is already not recommended for children under the age of 12, for any use."
Denmark
In Denmark codeine is sold over the counter in dosages up to 9.6 mg (with aspirin, brand name Kodimagnyl); anything stronger requires a prescription.
Estonia
Until 2023, in Estonia codeine was sold over the counter in dosages up to 8 mg (with paracetamol, brand name Co-Codamol).
Ethiopia
Approximately 30% of the Ethiopian population carry an extra copy of the gene CYP2D6, and are classified as codeine ultrametabolizers. These individuals metabolize codeine to morphine at a dangerously fast rate, leading to adverse events and potentially death. As a consequence the Ethiopian Food, Medicine and Health Care Administration and Control Authority has entirely banned the use of codeine as unsafe for the general population.
France
In France, most preparations containing codeine only began requiring a doctor's prescription in 2017. Products containing codeine include Néocodion (codeine and camphor), Tussipax (ethylmorphine and codeine), Paderyl (codeine alone), Codoliprane (codeine with paracetamol), Prontalgine and Migralgine (codeine, paracetamol and caffeine). The 2017 law change made a prescription mandatory for all codeine products, along with those containing ethylmorphine and dextromethorphan.
Greece
Codeine is classed as an illegal drug in Greece, and individuals possessing it could conceivably be arrested, even if they were legitimately prescribed it in another country. It is sold only with a doctor's prescription (Lonarid-N, Lonalgal).
Hong Kong
In Hong Kong, codeine is regulated under the Laws of the Hong Kong, Dangerous Drugs Ordinance, Chapter 134, Schedule 1. It can be used legally only by health professionals and for university research purposes. The substance can be given by pharmacists under a prescription. Anyone who supplies the substance without a prescription can be fined $10,000 (HKD). The maximum penalty for trafficking or manufacturing the substance is a $5,000,000 (HKD) fine and life imprisonment. Possession of the substance for consumption without license from the Department of Health is illegal with a $1,000,000 (HKD) fine and/or 7 years of jail time.
However, codeine is available without prescription from licensed pharmacists in doses up to 0.1% (i.e. 5 mg/5 ml).
India
Codeine preparations require a prescription in India. A preparation of paracetamol and codeine is available in India. Codeine is also present in various cough syrups as codeine phosphate, including combinations with chlorpheniramine maleate. Pure codeine is also available as codeine sulphate tablets. Codeine-containing cough medicine has been banned in India with effect from 14 March 2016. The Ministry of Health and Family Welfare has found no proof of its efficacy against cough control.
Ireland
In Ireland, new regulations came into effect on 1 August 2010 concerning codeine, due to worries about the overuse of the drug. Codeine remains a semi non-prescriptive, over-the-counter drug up to a limit of 12.8 mg per pill, but codeine products must be out of the view of the public to facilitate the legislative requirement that these products "are not accessible to the public for self-selection". In practice, this means customers must ask pharmacists for the product containing codeine in name, and the pharmacist makes a judgement whether it is suitable for the patient to be using codeine, and that patients are fully advised of the correct use of these products. Products containing more than 12.8 mg codeine are available on prescription only.
Italy
Codeine tablets or preparations require a prescription in Italy. Preparations of paracetamol and codeine are available in Italy as Co-Efferalgan and Tachidol.
Japan
Codeine is available over the counter at pharmacies, allowing up to 50 mg of codeine phosphate per day for adults.
Latvia
In Latvia codeine is sold over the counter in dosages up to 8 mg (with paracetamol, brand name Co-Codamol).
Nigeria
In 2018, Nigeria planned to ban the manufacture and import of cough syrups that include codeine as an ingredient. This is due to concerns regarding their use to get intoxicated.
South Africa
Codeine is available over the counter in South Africa. Certain pharmacies require people to write down their name and address to ensure they are not buying too much over a short period although many do not require this at all. According to Lochan Naidoo, the former president of the National Narcotics Control Board, making the drugs more difficult to obtain could lead to even worse problems where people in withdrawal would turn to illicit drugs to get their fix. Although codeine is freely available, South Africa has a fairly low annual prevalence rate of opiate use at 0.3% compared to the United States at 0.57% where all opiates are strictly regulated.
United Arab Emirates
The UAE takes an exceptionally strict line on medicines, with many common drugs, notably anything containing codeine being banned unless one has a notarized and authenticated doctor's prescription. Visitors breaking the rules, even inadvertently, have been deported or imprisoned. The US Embassy to the UAE maintains an unofficial list of what may not be imported.
United Kingdom
In the United Kingdom, the sale and possession of codeine are restricted separately under law.
Neat codeine and higher-strength codeine formulations are generally prescription-only medicines (POM) meaning that the sale of such products is restricted under the Medicines Act 1968. Lower-strength products containing combinations of up to 12.8 mg of codeine per dosage unit, combined with paracetamol, ibuprofen or aspirin are available over the counter at pharmacies. Codeine linctus of 15 mg per 5 ml is also available at some pharmacies, although a purchaser would have to request it specifically from the pharmacist.
Under the Misuse of Drugs Act 1971 codeine is a Class B controlled substance or a Class A drug when prepared for injection. The possession of controlled substances without a prescription is a criminal offence. However, certain preparations of codeine are exempt from this restriction under Schedule 5 of the Misuse of Drugs Regulations 2001. It is thus legal to possess codeine without a prescription, provided that it is compounded with at least one other active or inactive ingredient and that the dosage of each tablet, capsule, etc. does not exceed 100 mg or a 2.5% concentration in the case of liquid preparations. The exemptions do not apply to any preparation of codeine designed for injection.
United States
In the United States, codeine is regulated by the Controlled Substances Act. Federal law dictates that codeine be a Schedule II controlled substance when used in products for pain relief that contain codeine alone or more than 80 mg per dosage unit. Codeine without aspirin or acetaminophen (Tylenol) is very rarely available or prescribed to discourage abuse. Tablets of codeine in combination with aspirin or acetaminophen (paracetamol) and intended for pain relief are listed as Schedule III.
Cough syrups are classed as Schedule III, IV, or V, depending on formulation. For example, the acetaminophen/codeine antitussive liquid is a Schedule IV controlled substance.
Some states have chosen to reclassify codeine preparations at a more restrictive schedule to lower the instances of its abuse. Minnesota, for instance, has chosen to reclassify Schedule V some codeine preparations (e.g. Cheratussin) as a Schedule III controlled substance.
Schedule V controlled substances
Substances in this schedule have a low potential for abuse relative to substances listed in Schedule IV and consist primarily of preparations containing limited quantities of certain narcotics.
Examples of Schedule V substances include cough preparations containing not more than 200 milligrams of codeine per 100 milliliters or per 100 grams (Robitussin AC, Phenergan with Codeine).
References
Notes
Further reading
External links
Benzylisoquinoline alkaloids
Antidiarrhoeals
Antitussives
Catechol ethers
4,5-Epoxymorphinans
Glycine receptor antagonists
Natural opium alkaloids
Opiates
Prodrugs
Secondary metabolites
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Codeine | [
"Chemistry"
] | 7,434 | [
"Chemical ecology",
"Secondary metabolites",
"Prodrugs",
"Chemicals in medicine",
"Metabolism"
] |
9,717,200 | https://en.wikipedia.org/wiki/Thymol%20blue | Thymol blue (thymolsulfonephthalein) is a brownish-green or reddish-brown crystalline powder that is used as a pH indicator. It is insoluble in water but soluble in alcohol and dilute alkali solutions.
It transitions from red to yellow at pH 1.2–2.8 and from yellow to blue at pH 8.0–9.6. It is usually a component of Universal indicator.
At wavelengths of 378–382 nm the extinction coefficient is greater than 8,000, and at 298–302 nm it is greater than 12,000.
Structures
Thymol blue has different structures at different pH.
Safety
It may cause irritation. Its toxicological properties have not been fully investigated. It is harmful if swallowed (acute toxicity), and is only classified as hazardous at concentrations above 10%.
Bibliography
Merck. "Thymol Blue." The Merck Index. 14th ed. 2006. Accessed via web on 2007-02-25.
References
External links
PubChem entry
PH indicators
Triarylmethane dyes
Benzenesulfonates
Phenol dyes
Isopropyl compounds | Thymol blue | [
"Chemistry",
"Materials_science"
] | 238 | [
"Titration",
"PH indicators",
"Chromism",
"Chemical tests",
"Equilibrium chemistry"
] |
9,717,637 | https://en.wikipedia.org/wiki/Myomesin | Myomesin is a protein family found in the M-line of the sarcomere structure. Myomesin has various forms throughout the body in striated muscles with specialized functions. This includes both slow and fast muscle fibers. Myomesins are made of 13 domains, including a unique N-terminal domain followed by two immunoglobulin-like (Ig) domains, five fibronectin type III (Fn) domains, and five more Ig domains. These domains all promote binding, which indicates that myomesin is regulated through binding.
Functions
Sarcomere structure
Myomesin plays an important role in the structure of sarcomeres. They are found in the M-band region of the sarcomere, between the thick filaments (myosin). Its main purpose in this setting is to provide structural integrity by linking the antiparallel myosin fibers and titin filaments which are connected to the Z-discs. These myosin filaments form a hexagonal lattice with titin and myomesin. This shape allows the M-band to withstand large conformational changes during muscle contraction and return to their original shape upon relaxation. Since the Z-disc region of the sarcomere is very stiff and unable to bend for contraction, the elastic activity of myomesin in the M-band is what makes muscle contraction possible as it acts as a molecular spring.
Sarcomere assembly
In addition to sarcomere activity, it has been shown that myomesin also plays a role in the assembly of the sarcomere. In order for myomesin to be implemented into the sarcomere, myosin and titin must be present, indicating that myomesin is the last component to be added during assembly of the lattice. It is believed that this postponed addition is due to the role of myomesin to act as an "integrity check" to ensure the sarcomere has been formed correctly and monitor its integrity. This is extremely important as if even one piece of the M-line is missing, the A-band of the sarcomere will collapse and the muscle will be paralyzed.
Response to injury
Myomesin has also been shown to play a role in injury response and expression. It was previously thought that myosin chaperones were the first alert of sarcomere damage, but recent studies show a flux of expression of the gene myomesin1a much earlier than that of the myosin, suggesting that there is a myomesin-dependent injury response pathway in striated muscles. Additionally, it is thought that this gene could be used as an enhanced biomarker for sarcomere damage compared to the current biomarker, muscle creatine kinase (CKM). When tested in vivo in zebrafish, myom1a expression was displayed much earlier than creatine kinase, indicating that the latter is less specific to muscle diseases. This supports the use of myomesin assays for detection of muscular pathologies earlier than the current practices.
Myomesin variants
There are three types of myomesin that are found in various striated muscles of the body: myomesin 1, myomesin 2, and myomesin 3. It is thought that each myomesin binds to myosin in a different spot, regulating the formation of the M-band.
Myomesin 1
Myomesin 1 is the most researched of the forms of myomesin due to its presence in all striated muscles and the fact that it is the largest of the myomesin class. It is sometimes simply called myomesin because of its widespread expression. Myomesin 1 is found mainly on the M4/M4' lines of the M-band. It is encoded by the MYOM1 gene. There are two variants of myomesin 1, one located between the My6 and My7 domains, and the other at the end of the C-terminus after the My13 domain. The former is known as the embryonic heart (EH)-sequence and the latter, which has only been found in birds, is called the H or S splice variant (H is for heart and S is for skeletal). EH-myomesin can be found during embryonic development of the human heart (later replaced by myomesin 2). As the muscle matures, EH-myomesin is downregulated in favor of myomesin 1 with no genetic variations.
Myomesin 2
Myomesin 2 (also known as M-protein) is located in the M1 line of the M-band. It is encoded by the MYOM2 gene. There is currently only one known variant of myomesin 2 and it can be found in fast skeletal muscles and adult cardiac muscles. Myomesin 2 has been shown to have an inverse relationship with the expression of EH-myomesin; as cardiac muscles mature, EH-myomesin is downregulated while myomesin 2 is upregulated.
Myomesin 3
Myomesin 3 is the least researched in the myomesin class due to it being the most recently discovered. It is encoded by the MYOM3 gene. It is located in the M6/M6' lines of the M-band and is expressed in intermediate skeletal muscles and adult cardiac muscles (specifically in the left ventricle and left atrium). MYOM3 is especially expressed in neonatal skeletal muscles, extraocular muscles, slow muscles, and IIA skeletal fibers. Myomesin 3 is the only member of the myomesin protein family to be completely absent from cardiac expression. Myomesin 3 displays an inverse relationship with myomesin 2.
Pathologies
Myocardial atrophy
Deficiency in myomesin 1 causes atrophy and dysfunction in its tissue. In cardiomyocytes, sarcomere length and uniformity are decreased when MYOM1 is absent, resulting in smaller cardiomyocytes. This is also linked to issues in contractile function due to the disruption of calcium levels in the tissue.
Dilated cardiomyopathy (DCM)
Re-emergence of EH-myomesin in adult cardiac muscles has been associated with dilated cardiomyopathy. It is still uncertain if this expression is to help stabilize the sarcomere during strenuous contractions or if it is a result of misaligned sarcomere filaments due to lessened contractile forces. It has been shown that this uncommon expression is the result of altered alternative splicing.
References
Proteins | Myomesin | [
"Chemistry"
] | 1,344 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
9,718,351 | https://en.wikipedia.org/wiki/Quantum%20state%20space | In physics, a quantum state space is an abstract space in which different "positions" represent not literal locations, but rather quantum states of some physical system. It is the quantum analog of the phase space of classical mechanics.
Relative to Hilbert space
In quantum mechanics a state space is a separable complex Hilbert space. The dimension of this Hilbert space depends on the system we choose to describe. The different states that could come out of any particular measurement form an orthonormal basis, so any state vector in the state space can be written as a linear combination of these basis vectors. Having a nonzero component along multiple dimensions is called a superposition. In the formalism of quantum mechanics these state vectors are often written using Dirac's compact bra–ket notation.
Examples
The spin state of a silver atom in the Stern–Gerlach experiment can be represented in a two-state space. The spin can be aligned with a measuring apparatus (arbitrarily called 'up') or oppositely ('down'). In Dirac's notation these two states can be written as |up⟩ and |down⟩. The space of a two-spin system has four states: |up, up⟩, |up, down⟩, |down, up⟩, and |down, down⟩.
The spin state is a discrete degree of freedom; quantum state spaces can have continuous degrees of freedom. For example, a particle in one space dimension has one degree of freedom ranging from −∞ to +∞. In Dirac notation, the states in this space might be written as position kets |x⟩, labelled by the continuous coordinate x.
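A minimal numerical sketch of the two-state and two-spin examples above, using ordinary complex vectors for the state space (an illustration only; the variable names are arbitrary):

```python
import numpy as np

up = np.array([1, 0], dtype=complex)    # |up>
down = np.array([0, 1], dtype=complex)  # |down>

# A superposition has nonzero components along several basis vectors.
psi = (up + down) / np.sqrt(2)
print(np.vdot(psi, psi).real)  # ~1.0: the state is normalized

# Two-spin system: the state space is the tensor product, dimension 2*2 = 4.
up_up = np.kron(up, up)
up_down = np.kron(up, down)
print(up_up, up_down)  # two of the four basis states |up,up> and |up,down>
```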
Relative to 3D space
Even in the early days of quantum mechanics, the state space (or configurations as they were called at first) was understood to be essential for understanding simple quantum-mechanical problems. In 1929, Nevill Mott showed that "tendency to picture the wave as existing in ordinary three dimensional space, whereas we are really dealing with wave functions in multispace" makes analysis of simple interaction problems more difficult. Mott analyzes -particle emission in a cloud chamber. The emission process is isotropic, a spherical wave in quantum mechanics, but the tracks observed are linear.
As Mott says, "it is a little difficult to picture how it is that an outgoing spherical wave can produce a straight track; we think intuitively that it should ionise atoms at random throughout space". This issue became known as the Mott problem. Mott then derives the straight track by considering correlations between the positions of the source and two representative atoms, showing that consecutive ionization results from just that state in which all three positions are co-linear.
Relative to classical phase space
Classical mechanics for multiple objects describes their motion in terms of a list or vector of every object's coordinates and velocity. As the objects move, the values in the vector change; the set of all possible values is called a phase space. In quantum mechanics a state space is similar, however in the state space two vectors which are scalar multiples of each other represent the same state. Furthermore, the character of values in the quantum state differ from the classical values: in the quantum case the values can only be measured statistically (by repetition over many examples) and thus do not have well defined values at every instant of time.
See also
References
Further reading
Concepts in physics
Hilbert spaces | Quantum state space | [
"Physics"
] | 646 | [
"Hilbert spaces",
"Quantum mechanics",
"nan"
] |
9,718,355 | https://en.wikipedia.org/wiki/Eilenberg%27s%20inequality | Eilenberg's inequality, also known as the coarea inequality is a mathematical inequality for Lipschitz-continuous functions between metric spaces. Informally, it gives an upper bound on the average size of the fibers of a Lipschitz map in terms of the Lipschitz constant of the function and the measure of the domain.
The Eilenberg's inequality has applications in geometric measure theory and manifold theory. It is also a key ingredient in the proof of the coarea formula.
Formal statement
Let ƒ : X → Y be a Lipschitz-continuous function between metric spaces whose Lipschitz constant is denoted by Lip ƒ. Let s and t be nonnegative real numbers. Then, Eilenberg's inequality states that

∫*_Y H^s(A ∩ ƒ^{−1}(y)) dH^t(y) ≤ (v_s v_t / v_{s+t}) (Lip ƒ)^t H^{s+t}(A)

for any A ⊂ X, where
the asterisk denotes the upper integral,
vt are universal constants. If t=n, then vt equals the volume of the unit ball in Rn,
Ht is the t-dimensional Hausdorff measure.
The use of the upper integral is necessary because in general the function y ↦ H^s(A ∩ ƒ^{−1}(y)) may fail to be H^t measurable.
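As a sanity check, here is a worked special case, assuming the constant takes the form v_s v_t / v_{s+t} used in Federer's formulation:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Special case: $f:\mathbb{R}^2\to\mathbb{R}$, $f(x_1,x_2)=x_1$, so $\operatorname{Lip} f = 1$.
% Take $A=[0,1]^2$ and $s=t=1$; each fibre $A\cap f^{-1}(y)$ with $y\in[0,1]$ is a unit segment.
\[
\int_{\mathbb{R}} \mathcal{H}^1\bigl(A \cap f^{-1}(y)\bigr)\, d\mathcal{H}^1(y) = 1,
\qquad
\frac{v_1\, v_1}{v_2}\,(\operatorname{Lip} f)^1\, \mathcal{H}^2(A)
  = \frac{2\cdot 2}{\pi} = \frac{4}{\pi}\approx 1.27,
\]
% so the inequality $1 \le 4/\pi$ holds, with near-equality because the fibres
% of the projection slice the square efficiently.
\end{document}
```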
History
The inequality was first proved by Eilenberg in 1938 for the case when the function was the distance to a fixed point in the metric space. Then it was generalized in 1943 by Eilenberg and Harold to the case of any real-valued Lipschitz function on a metric space.
The inequality in the form above was proved by Federer in 1954, except that he could prove it only under additional assumptions that he conjectured were unnecessary. Years later, Davies proved some deep results about Hausdorff contents and this conjecture was proved as a consequence. But recently a new proof, independent of Davies's result, has been found as well.
About the proof
In many texts the inequality is proved for the case where the target space is a Euclidean space or a manifold. This is because the isodiametric inequality is available (locally in the case of manifolds), which allows for a straightforward proof. The isodiametric inequality is not available in general metric spaces. The proof of Eilenberg's inequality in the general case is quite involved and requires the notion of the so-called weighted integrals.
References
Inequalities | Eilenberg's inequality | [
"Mathematics"
] | 452 | [
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Mathematical theorems"
] |
9,719,512 | https://en.wikipedia.org/wiki/Bonnesen%27s%20inequality | Bonnesen's inequality is an inequality relating the length, the area, the radius of the incircle and the radius of the circumcircle of a Jordan curve. It is a strengthening of the classical isoperimetric inequality.
More precisely, consider a planar simple closed curve of length L bounding a domain of area A. Let r and R denote the radii of the incircle and the circumcircle, respectively. Bonnesen proved the following inequality.
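In its most commonly quoted form it reads
$$L^2 - 4\pi A \;\ge\; \pi^2 (R - r)^2 .$$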
The quantity L^2 − 4πA appearing in the inequality is known as the isoperimetric defect.
Loewner's torus inequality with isosystolic defect is a systolic analogue of Bonnesen's inequality.
References
Geometric inequalities | Bonnesen's inequality | [
"Mathematics"
] | 149 | [
"Geometric inequalities",
"Geometry",
"Geometry stubs",
"Inequalities (mathematics)",
"Theorems in geometry"
] |
9,721,524 | https://en.wikipedia.org/wiki/Cunningham%20correction%20factor | In fluid dynamics, the Cunningham correction factor, or Cunningham slip correction factor (denoted C), is used to account for non-continuum effects when calculating the drag on small particles. The derivation of Stokes' law, which is used to calculate the drag force on small particles, assumes a no-slip condition which is no longer correct at high Knudsen numbers. The Cunningham slip correction factor allows prediction of the drag force on a particle moving in a fluid with Knudsen number between the continuum regime and free molecular flow.
The drag coefficient calculated with standard correlations is divided by the Cunningham correction factor, C, given below.
Ebenezer Cunningham derived the correction factor in 1910 and, with Robert Andrews Millikan, verified the correction in the same year.
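A widely used form of the correction, consistent with the Davies coefficients listed below, is
$$C = 1 + \frac{2\lambda}{d}\left(A_1 + A_2\, e^{-A_3\, d/\lambda}\right),$$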
where
C is the correction factor,
λ is the mean free path,
d is the particle diameter, and
A1, A2, A3 are experimentally determined coefficients.
For air (Davies, 1945):
A1 = 1.257
A2 = 0.400
A3 = 0.55
The Cunningham correction factor becomes significant when particles become smaller than 15 micrometers, for air at ambient conditions.
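As a rough numerical illustration (a sketch only: the mean free path of air at ambient conditions is taken here to be about 68 nm, and the Davies coefficients above are assumed), the correction can be evaluated as follows:
    import math

    def cunningham_factor(d, mfp=68e-9, a1=1.257, a2=0.400, a3=0.55):
        """Slip correction C = 1 + (2*mfp/d) * (a1 + a2*exp(-a3*d/mfp))."""
        return 1.0 + (2.0 * mfp / d) * (a1 + a2 * math.exp(-a3 * d / mfp))

    for d in (10e-6, 1e-6, 0.1e-6):  # particle diameters in metres
        print(f"d = {d * 1e6:g} um -> C = {cunningham_factor(d):.2f}")
With these assumptions the correction is a few percent for a 10-micrometre particle, of order 15–20 percent at 1 micrometre, and roughly a factor of three at 0.1 micrometre, which is why it matters mainly for particles smaller than about 15 micrometres.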
For sub-micrometer particles, Brownian motion must be taken into account.
References
Fluid dynamics
Dimensionless numbers of physics
Aerosols | Cunningham correction factor | [
"Chemistry",
"Engineering"
] | 243 | [
"Chemical engineering",
"Colloids",
"Aerosols",
"Piping",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
9,722,260 | https://en.wikipedia.org/wiki/Chemical%20substance | A chemical substance is a unique form of matter with constant chemical composition and characteristic properties. Chemical substances may take the form of a single element or chemical compounds. If two or more chemical substances can be combined without reacting, they may form a chemical mixture. If a mixture is separated to isolate one chemical substance to a desired degree, the resulting substance is said to be chemically pure.
Chemical substances can exist in several different physical states or phases (e.g. solids, liquids, gases, or plasma) without changing their chemical composition. Substances transition between these phases of matter in response to changes in temperature or pressure. Some chemical substances can be combined or converted into new substances by means of chemical reactions. Chemicals that do not possess this ability are said to be inert.
Pure water is an example of a chemical substance, with a constant composition of two hydrogen atoms bonded to a single oxygen atom (i.e. H2O). The atomic ratio of hydrogen to oxygen is always 2:1 in every molecule of water. Pure water will tend to boil near 100 °C, an example of one of the characteristic properties that define it. Other notable chemical substances include diamond (a form of the element carbon), table salt (NaCl; an ionic compound), and refined sugar (C12H22O11; an organic compound).
Definitions
In addition to the generic definition offered above, there are several niche fields where the term "chemical substance" may take alternate usages that are widely accepted, some of which are outlined in the sections below.
Inorganic chemistry
Chemical Abstracts Service (CAS) lists several alloys of uncertain composition within their chemical substance index. While an alloy could be more closely defined as a mixture, referencing them in the chemical substances index allows CAS to offer specific guidance on standard naming of alloy compositions. Non-stoichiometric compounds are another special case from inorganic chemistry, which violate the requirement for constant composition. For these substances, it may be difficult to draw the line between a mixture and a compound, as in the case of palladium hydride. Broader definitions of chemicals or chemical substances can be found, for example: "the term 'chemical substance' means any organic or inorganic substance of a particular molecular identity, including – (i) any combination of such substances occurring in whole or in part as a result of a chemical reaction or occurring in nature".
Geology
In the field of geology, inorganic solid substances of uniform composition are known as minerals. When two or more minerals are combined to form mixtures (or aggregates), they are defined as rocks. Many minerals, however, mutually dissolve into solid solutions, such that a single rock is a uniform substance despite being a mixture in stoichiometric terms. Feldspars are a common example: anorthoclase is an alkali aluminum silicate, where the alkali metal is interchangeably either sodium or potassium.
Law
In law, "chemical substances" may include both pure substances and mixtures with a defined composition or manufacturing process. For example, the EU regulation REACH defines "monoconstituent substances", "multiconstituent substances" and "substances of unknown or variable composition". The latter two consist of multiple chemical substances; however, their identity can be established either by direct chemical analysis or reference to a single manufacturing process. For example, charcoal is an extremely complex, partially polymeric mixture that can be defined by its manufacturing process. Therefore, although the exact chemical identity is unknown, identification can be made with a sufficient accuracy. The CAS index also includes mixtures.
Polymer chemistry
Polymers almost always appear as mixtures of molecules of multiple molar masses, each of which could be considered a separate chemical substance. However, the polymer may be defined by a known precursor or reaction(s) and the molar mass distribution. For example, polyethylene is a mixture of very long chains of -CH2- repeating units, and is generally sold in several molar mass distributions, LDPE, MDPE, HDPE and UHMWPE.
History
The concept of a "chemical substance" became firmly established in the late eighteenth century after work by the chemist Joseph Proust on the composition of some pure chemical compounds such as basic copper carbonate. He deduced that, "All samples of a compound have the same composition; that is, all samples have the same proportions, by mass, of the elements present in the compound." This is now known as the law of constant composition. Later with the advancement of methods for chemical synthesis particularly in the realm of organic chemistry; the discovery of many more chemical elements and new techniques in the realm of analytical chemistry used for isolation and purification of elements and compounds from chemicals that led to the establishment of modern chemistry, the concept was defined as is found in most chemistry textbooks. However, there are some controversies regarding this definition mainly because the large number of chemical substances reported in chemistry literature need to be indexed.
Isomerism caused much consternation to early researchers, since isomers have exactly the same composition, but differ in configuration (arrangement) of the atoms. For example, there was much speculation about the chemical identity of benzene, until the correct structure was described by Friedrich August Kekulé. Likewise, the idea of stereoisomerism – that atoms have rigid three-dimensional structure and can thus form isomers that differ only in their three-dimensional arrangement – was another crucial step in understanding the concept of distinct chemical substances. For example, tartaric acid has three distinct isomers, a pair of diastereomers with one diastereomer forming two enantiomers.
Chemical elements
An element is a chemical substance made up of a particular kind of atom and hence cannot be broken down or transformed by a chemical reaction into a different element, though it can be transmuted into another element through a nuclear reaction. This is because all of the atoms in a sample of an element have the same number of protons, though they may be different isotopes, with differing numbers of neutrons.
As of 2019, there are 118 known elements, about 80 of which are stable – that is, they do not change by radioactive decay into other elements. Some elements can occur as more than a single chemical substance (allotropes). For instance, oxygen exists as both diatomic oxygen (O2) and ozone (O3). The majority of elements are classified as metals. These are elements with a characteristic lustre such as iron, copper, and gold. Metals typically conduct electricity and heat well, and they are malleable and ductile. Around 14 to 21 elements, such as carbon, nitrogen, and oxygen, are classified as non-metals. Non-metals lack the metallic properties described above, they also have a high electronegativity and a tendency to form negative ions. Certain elements such as silicon sometimes resemble metals and sometimes resemble non-metals, and are known as metalloids.
Chemical compounds
A chemical compound is a chemical substance that is composed of a particular set of atoms or ions. Two or more elements combined into one substance through a chemical reaction form a chemical compound. All compounds are substances, but not all substances are compounds.
A chemical compound can be either atoms bonded together in molecules or crystals in which atoms, molecules or ions form a crystalline lattice. Compounds based primarily on carbon and hydrogen atoms are called organic compounds, and all others are called inorganic compounds. Compounds containing bonds between carbon and a metal are called organometallic compounds.
Compounds in which components share electrons are known as covalent compounds. Compounds consisting of oppositely charged ions are known as ionic compounds, or salts.
Coordination complexes are compounds where a dative bond keeps the substance together without a covalent or ionic bond. Coordination complexes are distinct substances with distinct properties different from a simple mixture. Typically these have a metal, such as a copper ion, in the center, and a nonmetal atom, such as the nitrogen atom in an ammonia molecule or the oxygen atom in a water molecule, forms a dative bond to the metal center, e.g. tetraamminecopper(II) sulfate [Cu(NH3)4]SO4·H2O. The metal is known as a "metal center" and the substance that coordinates to the center is called a "ligand". However, the center does not need to be a metal, as exemplified by boron trifluoride etherate BF3OEt2, where the highly Lewis acidic, but non-metallic boron center takes the role of the "metal". If the ligand bonds to the metal center with multiple atoms, the complex is called a chelate.
In organic chemistry, there can be more than one chemical compound with the same composition and molecular weight. Generally, these are called isomers. Isomers usually have substantially different chemical properties, and often may be isolated without spontaneously interconverting. A common example is glucose vs. fructose. The former is an aldehyde, the latter is a ketone. Their interconversion requires either enzymatic or acid-base catalysis.
However, tautomers are an exception: the isomerization occurs spontaneously in ordinary conditions, such that a pure substance cannot be isolated into its tautomers, even if these can be identified spectroscopically or even isolated in special conditions. A common example is glucose, which has open-chain and ring forms. One cannot manufacture pure open-chain glucose because glucose spontaneously cyclizes to the hemiacetal form.
Substances versus mixtures
All matter consists of various elements and chemical compounds, but these are often intimately mixed together. Mixtures contain more than one chemical substance, and they do not have a fixed composition. Butter, soil and wood are common examples of mixtures. Sometimes, mixtures can be separated into their component substances by mechanical processes, such as chromatography, distillation, or evaporation.
Grey iron metal and yellow sulfur are both chemical elements, and they can be mixed together in any ratio to form a yellow-grey mixture. No chemical process occurs, and the material can be identified as a mixture by the fact that the sulfur and the iron can be separated by a mechanical process, such as using a magnet to attract the iron away from the sulfur.
In contrast, if iron and sulfur are heated together in a certain ratio (1 atom of iron for each atom of sulfur, or by weight, 56 grams (1 mol) of iron to 32 grams (1 mol) of sulfur), a chemical reaction takes place and a new substance is formed, the compound iron(II) sulfide, with chemical formula FeS. The resulting compound has all the properties of a chemical substance and is not a mixture. Iron(II) sulfide has its own distinct properties such as melting point and solubility, and the two elements cannot be separated using normal mechanical processes; a magnet will be unable to recover the iron, since there is no metallic iron present in the compound.
Chemicals versus chemical substances
While the term chemical substance is a precise technical term that is synonymous with chemical for chemists, the word chemical is used in general usage to refer to both (pure) chemical substances and mixtures (often called compounds), and especially when produced or purified in a laboratory or an industrial process. In other words, the chemical substances of which fruits and vegetables, for example, are naturally composed even when growing wild are not called "chemicals" in general usage. In countries that require a list of ingredients in products, the "chemicals" listed are industrially produced "chemical substances". The word "chemical" is also often used to refer to addictive, narcotic, or mind-altering drugs.
Within the chemical industry, manufactured "chemicals" are chemical substances, which can be classified by production volume into bulk chemicals, fine chemicals and chemicals found in research only:
Bulk chemicals are produced in very large quantities, usually with highly optimized continuous processes and to a relatively low price.
Fine chemicals are produced at a high cost in small quantities for special low-volume applications such as biocides, pharmaceuticals and speciality chemicals for technical applications.
Research chemicals are produced individually for research, such as when searching for synthetic routes or screening substances for pharmaceutical activity. In effect, their price per gram is very high, although they are not sold.
The cause of the difference in production volume is the complexity of the molecular structure of the chemical. Bulk chemicals are usually much less complex. While fine chemicals may be more complex, many of them are simple enough to be sold as "building blocks" in the synthesis of more complex molecules targeted for single use, as named above. The production of a chemical includes not only its synthesis but also its purification to eliminate by-products and impurities involved in the synthesis. The last step in production should be the analysis of batch lots of chemicals in order to identify and quantify the percentages of impurities for the buyer of the chemicals. The required purity and analysis depends on the application, but higher tolerance of impurities is usually expected in the production of bulk chemicals. Thus, the user of the chemical in the US might choose between the bulk or "technical grade" with higher amounts of impurities or a much purer "pharmaceutical grade" (labeled "USP", United States Pharmacopeia). "Chemicals" in the commercial and legal sense may also include mixtures of highly variable composition, as they are products made to a technical specification instead of particular chemical substances. For example, gasoline is not a single chemical compound or even a particular mixture: different gasolines can have very different chemical compositions, as "gasoline" is primarily defined through source, properties and octane rating.
Naming and indexing
Every chemical substance has one or more systematic names, usually named according to the IUPAC rules for naming. An alternative system is used by the Chemical Abstracts Service (CAS).
Many compounds are also known by their more common, simpler names, many of which predate the systematic name. For example, the long-known sugar glucose is now systematically named 6-(hydroxymethyl)oxane-2,3,4,5-tetrol. Natural products and pharmaceuticals are also given simpler names, for example the mild pain-killer Naproxen is the more common name for the chemical compound (S)-6-methoxy-α-methyl-2-naphthaleneacetic acid.
Chemists frequently refer to chemical compounds using chemical formulae or molecular structure of the compound. There has been a phenomenal growth in the number of chemical compounds being synthesized (or isolated), and then reported in the scientific literature by professional chemists around the world. An enormous number of chemical compounds are possible through the chemical combination of the known chemical elements. As of Feb 2021, about "177 million organic and inorganic substances" (including 68 million defined-sequence biopolymers) are in the scientific literature and registered in public databases. The names of many of these compounds are often nontrivial and hence not very easy to remember or cite accurately. Also, it is difficult to keep track of them in the literature. Several international organizations like IUPAC and CAS have initiated steps to make such tasks easier. CAS provides the abstracting services of the chemical literature, and provides a numerical identifier, known as CAS registry number to each chemical substance that has been reported in the chemical literature (such as chemistry journals and patents). This information is compiled as a database and is popularly known as the Chemical substances index. Other computer-friendly systems that have been developed for substance information are: SMILES and the International Chemical Identifier or InChI.
Isolation, purification, characterization, and identification
Often a pure substance needs to be isolated from a mixture, for example from a natural source (where a sample often contains numerous chemical substances) or after a chemical reaction (which often gives mixtures of chemical substances).
Measurement
See also
Hazard symbol
Homogeneous and heterogeneous mixtures
Prices of chemical elements
Dedicated bio-based chemical
Fire diamond
Research chemical
References
External links
General chemistry
Artificial materials | Chemical substance | [
"Physics",
"Chemistry"
] | 3,286 | [
"Artificial materials",
"Materials",
"nan",
"Chemical substances",
"Matter"
] |
9,724,528 | https://en.wikipedia.org/wiki/Special%20Sensor%20Ultraviolet%20Limb%20Imager | The Special Sensor Ultraviolet Limb Imager (SSULI) is an imaging spectrometer that is used to observe the earth's ionosphere and thermosphere. These sensors provide vertical intensity profiles of airglow emissions in the extreme ultraviolet and far ultraviolet spectral range of 800 to 1700 Angstrom (80 to 170 nanometre) and scan from 75 km to 750 km tangent altitude. The data from these sensors will be used to infer altitude profiles of ion, electron and neutral density.
The Naval Research Laboratory (NRL) developed five ultraviolet remote sensing instruments for the Air Force Defense Meteorological Satellite Program (DMSP). These instruments known as SSULI (Special Sensor Ultraviolet Limb Imager) launched aboard the DMSP block of 5D3 satellites, which started in 2003. SSULI measures vertical profiles of the natural airglow radiation from atoms, molecules and ions in the upper atmosphere and ionosphere by viewing the Earth's limb at a tangent altitude of approximately 50 km to 750 km.
Overview
The United States Naval Research Laboratory (NRL) built five of these ultraviolet spectrographs for the United States Air Force (USAF) Defense Meteorological Satellite Program (DMSP) block of 5D3 satellites.
Launch
The first sensor was launched on the DMSP F16 spacecraft in October 2003 into a Sun-synchronous 830 km circular orbit at a local time of 0800-2000 UT. Three of the remaining four SSULIs were launched on the following DMSP Block 5D3 satellites:
DMSP F17 - November 4, 2006
DMSP F18 - October 18, 2009
DMSP F19 - April 3, 2014
The last SSULI is at NRL awaiting a new "ride" due to the cancellation and preservation of the last DMSP satellite.
Mission details
Measurements are made from the extreme ultraviolet (EUV) to the far ultraviolet (FUV) over the wavelength range of 80 nanometers to 170 nanometers, with 1.8 nanometer resolution. The satellites will be launched in a near-polar, Sun-synchronous orbit at an altitude of approximately 830 km. The Low Resolution Airglow and Auroral Spectrograph (LORAAS), a SSULI prototype, was launched on board the Advanced Research and Global Observation Satellite (ARGOS) on February 23, 1999. LORAAS data was used to validate SSULI algorithms that convert raw measurements into useful environmental parameters that characterize the upper atmosphere.
Software
An extensive operational data processing system has been developed to generate environmental data from SSULI spectral data. Spectral data from the LORAAS instrument is also part of this platform. This system, known as the Ground Data Analysis Software (GDAS), includes operational data reduction software using advanced science algorithms also developed at NRL, a customized graphical user interface (GUI), and comprehensive validation techniques. Programs are designed to generate a SSULI Prep file from multiple data sources including Raw Sensor Data Records (RSDR) at the Air Force Weather Agency (AFWA), HIRAAS real-time data assembled at US Space Command, and an extensive HIRAAS infobase on site at the Naval Research Laboratory.
Technical information
The sensor has a field-of-view of 2.4°x0.15° and sweeps out a 2.4°x17° field-of-regard during each 90 second scan, with wavelength coverage between 800Å and 1700Å at 23Å resolution. The field of view scans ahead of the spacecraft in the orbital plane through a 17° field of regard, corresponding to approximately 75–750 km altitude.
References
External links
Special Sensor Ultraviolet Limb Imager
Spectrometers
Earth observation satellite sensors | Special Sensor Ultraviolet Limb Imager | [
"Physics",
"Chemistry"
] | 772 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
9,724,850 | https://en.wikipedia.org/wiki/Tiny%20Ionospheric%20Photometer | The tiny ionospheric photometer (TIP) is a small space-based photometer that observes the Earth's ionosphere at 135.6 nm. The TIP instruments were designed and built by the US Naval Research Laboratory (NRL) and are a part of the COSMIC program.
Operation
Although each TIP instrument is fairly simple in design and operation, the value of this instrument is that six of them were launched at once, and they observe the Earth simultaneously from three orbital planes spaced equally apart around the Earth. The data from these instruments, when combined with the data from the other COSMIC payloads, allow a 3D tomographic analysis of the Earth's ionosphere to be performed.
See also
Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC)
References
External links
Official Website
Electromagnetic radiation meters
Ionosphere
Optical instruments | Tiny Ionospheric Photometer | [
"Physics",
"Materials_science",
"Astronomy",
"Technology",
"Engineering"
] | 167 | [
"Materials science stubs",
"Spectrum (physical sciences)",
"Plasma physics",
"Electromagnetic radiation meters",
"Spacecraft stubs",
"Electromagnetic spectrum",
"Astrophysics",
"Astronomy stubs",
"Measuring instruments",
"Astrophysics stubs",
"Plasma physics stubs",
"Electromagnetism stubs"
] |
1,074,990 | https://en.wikipedia.org/wiki/Clebsch%E2%80%93Gordan%20coefficients | In physics, the Clebsch–Gordan (CG) coefficients are numbers that arise in angular momentum coupling in quantum mechanics. They appear as the expansion coefficients of total angular momentum eigenstates in an uncoupled tensor product basis. In more mathematical terms, the CG coefficients are used in representation theory, particularly of compact Lie groups, to perform the explicit direct sum decomposition of the tensor product of two irreducible representations (i.e., a reducible representation into irreducible representations, in cases where the numbers and types of irreducible components are already known abstractly). The name derives from the German mathematicians Alfred Clebsch and Paul Gordan, who encountered an equivalent problem in invariant theory.
From a vector calculus perspective, the CG coefficients associated with the SO(3) group can be defined simply in terms of integrals of products of spherical harmonics and their complex conjugates. The addition of spins in quantum-mechanical terms can be read directly from this approach as spherical harmonics are eigenfunctions of total angular momentum and projection thereof onto an axis, and the integrals correspond to the Hilbert space inner product. From the formal definition of angular momentum, recursion relations for the Clebsch–Gordan coefficients can be found. There also exist complicated explicit formulas for their direct calculation.
The formulas below use Dirac's bra–ket notation and the Condon–Shortley phase convention is adopted.
Review of the angular momentum operators
Angular momentum operators are self-adjoint operators , , and that satisfy the commutation relations
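Written out, with ℏ kept explicit, these are
$$[\mathrm{j}_k, \mathrm{j}_l] = i\hbar \sum_m \varepsilon_{klm}\, \mathrm{j}_m, \qquad k, l, m \in \{x, y, z\},$$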
where ε_klm is the Levi-Civita symbol. Together the three operators define a vector operator, a rank one Cartesian tensor operator,
It is also known as a spherical vector, since it is also a spherical tensor operator. It is only for rank one that spherical tensor operators coincide with the Cartesian tensor operators.
By developing this concept further, one can define another operator j^2 as the inner product of the vector operator j with itself:
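In components this is
$$\mathbf{j}^2 = \mathrm{j}_x^2 + \mathrm{j}_y^2 + \mathrm{j}_z^2 .$$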
This is an example of a Casimir operator. It is diagonal and its eigenvalue characterizes the particular irreducible representation of the angular momentum algebra . This is physically interpreted as the square of the total angular momentum of the states on which the representation acts.
One can also define raising (j_+) and lowering (j_−) operators, the so-called ladder operators,
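defined in the usual way as
$$\mathrm{j}_\pm = \mathrm{j}_x \pm i\,\mathrm{j}_y .$$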
Spherical basis for angular momentum eigenstates
It can be shown from the above definitions that j^2 commutes with j_x, j_y, and j_z: [j^2, j_k] = 0 for k ∈ {x, y, z}.
When two Hermitian operators commute, a common set of eigenstates exists. Conventionally, j^2 and j_z are chosen. From the commutation relations, the possible eigenvalues can be found. These eigenstates are denoted |j m⟩, where j is the angular momentum quantum number and m is the angular momentum projection onto the z-axis.
They comprise the spherical basis, are complete, and satisfy the following eigenvalue equations,
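namely (with ℏ kept explicit)
$$\mathbf{j}^2 |j\,m\rangle = \hbar^2\, j(j+1)\,|j\,m\rangle, \qquad \mathrm{j}_z |j\,m\rangle = \hbar m\,|j\,m\rangle .$$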
The raising and lowering operators can be used to alter the value of m,
where the ladder coefficient is given by:
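In the phase convention adopted in this article, the action and the coefficient take the standard form
$$\mathrm{j}_\pm |j\,m\rangle = \hbar\, C_\pm(j,m)\, |j\,(m\pm 1)\rangle, \qquad C_\pm(j,m) = \sqrt{j(j+1) - m(m\pm 1)} .$$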
In principle, one may also introduce a (possibly complex) phase factor in the definition of . The choice made in this article is in agreement with the Condon–Shortley phase convention. The angular momentum states are orthogonal (because their eigenvalues with respect to a Hermitian operator are distinct) and are assumed to be normalized,
Here the italicized and denote integer or half-integer angular momentum quantum numbers of a particle or of a system. On the other hand, the roman , , , , , and denote operators. The symbols are Kronecker deltas.
Tensor product space
We now consider systems with two physically different angular momenta j1 and j2. Examples include the spin and the orbital angular momentum of a single electron, or the spins of two electrons, or the orbital angular momenta of two electrons. Mathematically, this means that the angular momentum operators act on a space of dimension 2j1 + 1 and also on a space of dimension 2j2 + 1. We are then going to define a family of "total angular momentum" operators acting on the tensor product space, which has dimension (2j1 + 1)(2j2 + 1). The action of the total angular momentum operator on this space constitutes a representation of the SU(2) Lie algebra, but a reducible one. The reduction of this reducible representation into irreducible pieces is the goal of Clebsch–Gordan theory.
Let V1 be the (2j1 + 1)-dimensional vector space spanned by the states |j1 m1⟩ with m1 = −j1, −j1 + 1, …, j1,
and V2 the (2j2 + 1)-dimensional vector space spanned by the states |j2 m2⟩ with m2 = −j2, −j2 + 1, …, j2.
The tensor product of these spaces, V3 ≡ V1 ⊗ V2, has a (2j1 + 1)(2j2 + 1)-dimensional uncoupled basis |j1 m1⟩ ⊗ |j2 m2⟩ ≡ |j1 m1 j2 m2⟩.
Angular momentum operators are defined to act on states in V3 in the following manner:
and
where 1 denotes the identity operator.
The total angular momentum operators are defined by the coproduct (or tensor product) of the two representations acting on V3,
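Labelling the operators that act on the first and second factors as j^(1) and j^(2) respectively (a notational choice made here), the total angular momentum can be written as
$$\mathbf{J} \equiv \mathbf{j}^{(1)} \otimes 1 + 1 \otimes \mathbf{j}^{(2)} .$$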
The total angular momentum operators can be shown to satisfy the very same commutation relations,
where k, l, m ∈ {x, y, z}. Indeed, the preceding construction is the standard method for constructing an action of a Lie algebra on a tensor product representation.
Hence, a set of coupled eigenstates exist for the total angular momentum operator as well,
for M ∈ {−J, −J + 1, …, J}. Note that it is common to omit the [j1 j2] part.
The total angular momentum quantum number must satisfy the triangular condition that
such that the three nonnegative integer or half-integer values could correspond to the three sides of a triangle.
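Explicitly, the condition is
$$|j_1 - j_2| \;\le\; J \;\le\; j_1 + j_2 .$$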
The total number of total angular momentum eigenstates is necessarily equal to the dimension of V3:
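Indeed,
$$\sum_{J=|j_1-j_2|}^{j_1+j_2} (2J+1) \;=\; (2j_1+1)(2j_2+1).$$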
As this computation suggests, the tensor product representation decomposes as the direct sum of one copy of each of the irreducible representations of dimension 2J + 1, where J ranges from |j1 − j2| to j1 + j2 in increments of 1. As an example, consider the tensor product of the three-dimensional representation corresponding to j1 = 1 with the two-dimensional representation corresponding to j2 = 1/2. The possible values of J are then J = 1/2 and J = 3/2. Thus, the six-dimensional tensor product representation decomposes as the direct sum of a two-dimensional representation (J = 1/2) and a four-dimensional representation (J = 3/2).
The goal is now to describe the preceding decomposition explicitly, that is, to explicitly describe basis elements in the tensor product space for each of the component representations that arise.
The total angular momentum states form an orthonormal basis of V3:
These rules may be iterated to, e.g., combine doublets (=1/2) to obtain the Clebsch-Gordan decomposition series, (Catalan's triangle),
where is the integer floor function; and the number preceding the boldface irreducible representation dimensionality () label indicates multiplicity of that representation in the representation reduction. For instance, from this formula, addition of three spin 1/2s yields a spin 3/2 and two spin 1/2s, .
Formal definition of Clebsch–Gordan coefficients
The coupled states can be expanded via the completeness relation (resolution of identity) in the uncoupled basis
The expansion coefficients
are the Clebsch–Gordan coefficients. Note that some authors write them in a different order such as . Another common notation is
.
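In bra–ket notation, the expansion and its coefficients can be written as follows (the ordering of the labels follows the convention used above):
$$|[j_1\,j_2]\,J\,M\rangle = \sum_{m_1=-j_1}^{j_1}\;\sum_{m_2=-j_2}^{j_2} |j_1\,m_1\,j_2\,m_2\rangle\,\langle j_1\,m_1\,j_2\,m_2|J\,M\rangle ,$$
and the numbers ⟨j1 m1 j2 m2|J M⟩ are the Clebsch–Gordan coefficients.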
Applying the total angular momentum operator J_z = j_z ⊗ 1 + 1 ⊗ j_z to both sides of the defining equation shows that the Clebsch–Gordan coefficients can only be nonzero when M = m1 + m2.
Recursion relations
The recursion relations were discovered by physicist Giulio Racah from the Hebrew University of Jerusalem in 1941.
Applying the total angular momentum raising and lowering operators
to the left hand side of the defining equation gives
Applying the same operators to the right hand side gives
Combining these results gives recursion relations for the Clebsch–Gordan coefficients, where was defined in :
Taking the upper sign with the condition that M = J gives the initial recursion relation. In the Condon–Shortley phase convention, one adds the constraint that
(and is therefore also real).
The Clebsch–Gordan coefficients can then be found from these recursion relations. The normalization is fixed by the requirement that the sum of the squares of the coefficients equals one, which is equivalent to the requirement that the norm of the state must be one.
The lower sign in the recursion relation can be used to find all the Clebsch–Gordan coefficients with . Repeated use of that equation gives all coefficients.
This procedure to find the Clebsch–Gordan coefficients shows that they are all real in the Condon–Shortley phase convention.
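For concrete values, computer algebra systems tabulate these coefficients. A minimal sketch using SymPy's sympy.physics.quantum.cg module is shown below; the convention used there is expected to match Condon–Shortley, but this should be verified for the cases of interest.
    from sympy import Rational
    from sympy.physics.quantum.cg import CG

    # <j1 m1; j2 m2 | J M> with j1 = j2 = 1/2, m1 = 1/2, m2 = -1/2, J = 1, M = 0
    coeff = CG(Rational(1, 2), Rational(1, 2), Rational(1, 2), Rational(-1, 2), 1, 0)
    print(coeff.doit())  # sqrt(2)/2, i.e. 1/sqrt(2)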
Explicit expression
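A closed-form expression, due to Racah, can be written as follows (a standard form; conventions for grouping the square-root factors differ between references):
$$\begin{aligned}
\langle j_1\,m_1\,j_2\,m_2 | J\,M\rangle = {}& \delta_{M,\,m_1+m_2}\,
\sqrt{\frac{(2J+1)\,(J+j_1-j_2)!\,(J-j_1+j_2)!\,(j_1+j_2-J)!}{(j_1+j_2+J+1)!}} \\
&\times \sqrt{(J+M)!\,(J-M)!\,(j_1-m_1)!\,(j_1+m_1)!\,(j_2-m_2)!\,(j_2+m_2)!} \\
&\times \sum_{k} \frac{(-1)^k}{k!\,(j_1+j_2-J-k)!\,(j_1-m_1-k)!\,(j_2+m_2-k)!\,(J-j_2+m_1+k)!\,(J-j_1-m_2+k)!},
\end{aligned}$$
where the sum runs over all integers k for which every factorial argument is nonnegative.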
Orthogonality relations
These are most clearly written down by introducing the alternative notation
The first orthogonality relation is
(derived from the fact that ) and the second one is
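With real coefficients (Condon–Shortley convention), the two relations take the form
$$\sum_{J=|j_1-j_2|}^{j_1+j_2}\;\sum_{M=-J}^{J} \langle j_1\,m_1\,j_2\,m_2|J\,M\rangle\,\langle j_1\,m_1'\,j_2\,m_2'|J\,M\rangle = \delta_{m_1,m_1'}\,\delta_{m_2,m_2'},$$
$$\sum_{m_1,m_2} \langle j_1\,m_1\,j_2\,m_2|J\,M\rangle\,\langle j_1\,m_1\,j_2\,m_2|J'\,M'\rangle = \delta_{J,J'}\,\delta_{M,M'},$$
the second holding for J and J′ in the range allowed by the triangular condition.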
Special cases
For the Clebsch–Gordan coefficients are given by
For and we have
For and we have
For we have
For , we have
For we have
Symmetry properties
A convenient way to derive these relations is by converting the Clebsch–Gordan coefficients to Wigner 3-j symbols using . The symmetry properties of Wigner 3-j symbols are much simpler.
Rules for phase factors
Care is needed when simplifying phase factors: a quantum number may be a half-integer rather than an integer, therefore (−1)^(2j) is not necessarily 1 for a given quantum number j unless j can be proven to be an integer. Instead, it is replaced by the following weaker rule:
(−1)^(4j) = 1 for any angular-momentum-like quantum number j.
Nonetheless, a combination of j and m, namely j + m, is always an integer, so the stronger rule applies for these combinations: (−1)^(2(j + m)) = 1.
This identity also holds if the sign of either j or m or both is reversed.
It is useful to observe that any phase factor for a given pair can be reduced to the canonical form:
where and (other conventions are possible too). Converting phase factors into this form makes it easy to tell whether two phase factors are equivalent. (Note that this form is only locally canonical: it fails to take into account the rules that govern combinations of pairs such as the one described in the next paragraph.)
An additional rule holds for combinations of j1, j2, and j3 that are related by a Clebsch-Gordan coefficient or Wigner 3-j symbol: (−1)^(2(j1 + j2 + j3)) = 1.
This identity also holds if the sign of any j_k is reversed, or if any of them is substituted with an m_k instead.
Relation to Wigner 3-j symbols
Clebsch–Gordan coefficients are related to Wigner 3-j symbols which have more convenient symmetry relations.
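In a commonly used convention the relation reads
$$\langle j_1\,m_1\,j_2\,m_2 | j_3\,m_3 \rangle = (-1)^{\,j_1-j_2+m_3}\,\sqrt{2j_3+1}\,
\begin{pmatrix} j_1 & j_2 & j_3 \\ m_1 & m_2 & -m_3 \end{pmatrix}.$$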
The factor is due to the Condon–Shortley constraint that , while is due to the time-reversed nature of .
This allows to reach the general expression:
The summation is performed over those integer values for which the argument of each factorial in the denominator is non-negative, i.e. summation limits and are taken equal: the lower one the upper one Factorials of negative numbers are conventionally taken equal to zero, so that the values of the 3j symbol at, for example, or are automatically set to zero.
Relation to Wigner D-matrices
Relation to spherical harmonics
In the case where integers are involved, the coefficients can be related to integrals of spherical harmonics:
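One such relation, stated here in a standard form with the Condon–Shortley phase, is
$$\int Y_{\ell_1 m_1}(\Omega)\, Y_{\ell_2 m_2}(\Omega)\, Y_{\ell m}^{*}(\Omega)\, d\Omega
= \sqrt{\frac{(2\ell_1+1)(2\ell_2+1)}{4\pi(2\ell+1)}}\;\langle \ell_1\,0\,\ell_2\,0|\ell\,0\rangle\,\langle \ell_1\,m_1\,\ell_2\,m_2|\ell\,m\rangle .$$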
It follows from this and orthonormality of the spherical harmonics that CG coefficients are in fact the expansion coefficients of a product of two spherical harmonics in terms of a single spherical harmonic:
Other properties
Clebsch–Gordan coefficients for specific groups
For arbitrary groups and their representations, Clebsch–Gordan coefficients are not known in general. However, algorithms to produce Clebsch–Gordan coefficients for the special unitary group SU(n) are known. In particular, SU(3) Clebsch-Gordan coefficients have been computed and tabulated because of their utility in characterizing hadronic decays, where a flavor-SU(3) symmetry exists that relates the up, down, and strange quarks. A web interface for tabulating SU(N) Clebsch–Gordan coefficients is readily available.
See also
3-j symbol
6-j symbol
9-j symbol
Racah W-coefficient
Spherical harmonics
Spherical basis
Tensor products of representations
Associated Legendre polynomials
Angular momentum
Angular momentum coupling
Total angular momentum quantum number
Azimuthal quantum number
Table of Clebsch–Gordan coefficients
Wigner D-matrix
Wigner–Eckart theorem
Angular momentum diagrams (quantum mechanics)
Clebsch–Gordan coefficient for SU(3)
Littlewood–Richardson coefficient
Remarks
Notes
References
Albert Messiah (1966). Quantum Mechanics (Vols. I & II), English translation from French by G. M. Temmer. North Holland, John Wiley & Sons.
External links
Clebsch–Gordan, 3-j and 6-j Coefficient Web Calculator
Downloadable Clebsch–Gordan Coefficient Calculator for Mac and Windows
Web interface for tabulating SU(N) Clebsch–Gordan coefficients
Further reading
Rotation in three dimensions
Rotational symmetry
Representation theory of Lie groups
Quantum mechanics
Mathematical physics | Clebsch–Gordan coefficients | [
"Physics",
"Mathematics"
] | 2,574 | [
"Applied mathematics",
"Theoretical physics",
"Quantum mechanics",
"Mathematical physics",
"Symmetry",
"Rotational symmetry"
] |
1,074,997 | https://en.wikipedia.org/wiki/Lamb%20shift | In physics, the Lamb shift, named after Willis Lamb, is an anomalous difference in energy between two electron orbitals in a hydrogen atom. The difference was not predicted by theory and it cannot be derived from the Dirac equation, which predicts identical energies. Hence the Lamb shift is a deviation from theory seen in the differing energies contained by the 2S1/2 and 2P1/2 orbitals of the hydrogen atom.
The Lamb shift is caused by interactions between the virtual photons created through vacuum energy fluctuations and the electron as it moves around the hydrogen nucleus in each of these two orbitals. The Lamb shift has since played a significant role through vacuum energy fluctuations in theoretical prediction of Hawking radiation from black holes.
This effect was first measured in 1947 in the Lamb–Retherford experiment on the hydrogen microwave spectrum and this measurement provided the stimulus for renormalization theory to handle the divergences. It was the harbinger of modern quantum electrodynamics developed by Julian Schwinger, Richard Feynman, Ernst Stueckelberg, Sin-Itiro Tomonaga and Freeman Dyson. Lamb won the Nobel Prize in Physics in 1955 for his discoveries related to the Lamb shift. Victor Weisskopf regretted that his insecurity about his mathematical abilities may have cost him a Nobel Prize when he did not publish results (which turned out to be correct) about what is now known as the Lamb shift.
Importance
In 1978, on Lamb's 65th birthday, Freeman Dyson addressed him as follows: "Those years, when the Lamb shift was the central theme of physics, were golden years for all the physicists of my generation. You were the first to see that this tiny shift, so elusive and hard to measure, would clarify our thinking about particles and fields."
Derivation
This heuristic derivation of the electrodynamic level shift follows Theodore A. Welton's approach.
The fluctuations in the electric and magnetic fields associated with the QED vacuum perturbs the electric potential due to the atomic nucleus. This perturbation causes a fluctuation in the position of the electron, which explains the energy shift. The difference of potential energy is given by
Since the fluctuations are isotropic,
So one can obtain
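Keeping terms through second order in the displacement, the three steps above can be summarized (a sketch of Welton's argument) as
$$\langle \Delta V\rangle = \left\langle V(\vec r + \delta \vec r) - V(\vec r)\right\rangle
\approx \langle \delta \vec r\rangle\cdot\nabla V + \tfrac12 \sum_{i,j}\langle \delta r_i\,\delta r_j\rangle\,\partial_i\partial_j V
= \tfrac{1}{6}\,\langle (\delta \vec r)^2\rangle\, \nabla^2 V ,$$
since ⟨δr⟩ = 0 and ⟨δr_i δr_j⟩ = (1/3) δ_ij ⟨(δr)^2⟩ for isotropic fluctuations.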
The classical equation of motion for the electron displacement (δr) induced by a single mode of the field of wave vector and frequency ν is
and this is valid only when the frequency ν is greater than ν0 in the Bohr orbit, . The electron is unable to respond to the fluctuating field if the fluctuations are smaller than the natural orbital frequency in the atom.
For the field oscillating at ν,
therefore
where is some large normalization volume (the volume of the hypothetical "box" containing the hydrogen atom), and denotes the hermitian conjugate of the preceding term. By the summation over all
This result diverges if the integral is taken without limits (at both large and small frequencies). As mentioned above, this method is expected to be valid only when , or equivalently . It is also valid only for wavelengths longer than the Compton wavelength, or equivalently . Therefore, one can choose the upper and lower limits of the integral, and these limits make the result converge.
.
For the atomic orbital and the Coulomb potential,
since it is known that
For p orbitals, the nonrelativistic wave function vanishes at the origin (at the nucleus), so there is no energy shift. But for s orbitals there is some finite value at the origin,
where the Bohr radius is
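For hydrogen the ns wave functions satisfy the standard expressions
$$|\psi_{ns}(0)|^2 = \frac{1}{\pi n^3 a_0^3}, \qquad a_0 = \frac{4\pi\varepsilon_0 \hbar^2}{m_e e^2}.$$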
Therefore,
.
Finally, the difference of the potential energy becomes:
where α is the fine-structure constant. This shift is about 500 MHz, within an order of magnitude of the observed shift of 1057 MHz. The observed 1057 MHz shift corresponds to an energy of only 7.00 x 10^-25 J, or 4.37 x 10^-6 eV.
Welton's heuristic derivation of the Lamb shift is similar to, but distinct from, the calculation of the Darwin term using Zitterbewegung, a contribution to the fine structure that is of lower order in than the Lamb shift.
Lamb–Retherford experiment
In 1947 Willis Lamb and Robert Retherford carried out an experiment using microwave techniques to stimulate radio-frequency transitions between
2S1/2 and 2P1/2 levels of hydrogen. By using lower frequencies than for optical transitions the Doppler broadening could be neglected (Doppler broadening is proportional to the frequency). The energy difference Lamb and Retherford found was a rise of about 1000 MHz (0.03 cm−1) of the 2S1/2 level above the 2P1/2 level.
This particular difference is a one-loop effect of quantum electrodynamics, and can be interpreted as the influence of virtual photons that have been emitted and re-absorbed by the atom. In quantum electrodynamics the electromagnetic field is quantized and, like the harmonic oscillator in quantum mechanics, its lowest state is not zero. Thus, there exist small zero-point oscillations that cause the electron to execute rapid oscillatory motions. The electron is "smeared out" and each radius value is changed from r to r + δr (a small but finite perturbation).
The Coulomb potential is therefore perturbed by a small amount and the degeneracy of the two energy levels is removed. The new potential can be approximated (using atomic units) as follows:
The Lamb shift itself is given by
with k(n, 0) around 13 varying slightly with n, and
with log(k(n, ℓ)) a small number (approximately −0.05) making k(n, ℓ) close to unity.
For a derivation of ΔELamb see for example:
In the hydrogen spectrum
In 1947, Hans Bethe was the first to explain the Lamb shift in the hydrogen spectrum, and he thus laid the foundation for the modern development of quantum electrodynamics. Bethe was able to derive the Lamb shift by implementing the idea of mass renormalization, which allowed him to calculate the observed energy shift as the difference between the shift of a bound electron and the shift of a free electron. The Lamb shift currently provides a measurement of the fine-structure constant α to better than one part in a million, allowing a precision test of quantum electrodynamics.
See also
Uehling potential, first approximation to the Lamb shift
Shelter Island Conference
Zeeman effect used to measure the Lamb shift
References
Further reading
External links
Hans Bethe talking about Lamb-shift calculations on Web of Stories
Nobel Prize biography of Willis Lamb
Nobel lecture of Willis Lamb: Fine Structure of the Hydrogen Atom
Quantum electrodynamics
Physical quantities | Lamb shift | [
"Physics",
"Mathematics"
] | 1,389 | [
"Physical phenomena",
"Quantity",
"Physical quantities",
"Physical properties"
] |
1,075,022 | https://en.wikipedia.org/wiki/Cluster%20decomposition | In physics, the cluster decomposition property states that experiments carried out far from each other cannot influence each other. Usually applied to quantum field theory, it requires that vacuum expectation values of operators localized in bounded regions factorize whenever these regions becomes sufficiently distant from each other. First formulated by Eyvind Wichmann and James H. Crichton in 1963 in the context of the S-matrix, it was conjectured by Steven Weinberg that in the low energy limit the cluster decomposition property, together with Lorentz invariance and quantum mechanics, inevitably lead to quantum field theory. String theory satisfies all three of the conditions and so provides a counter-example against this being true at all energy scales.
Formulation
The S-matrix describes the amplitude for a process with an initial state α evolving into a final state β. If the initial and final states consist of two clusters, with α1 and β1 close to each other but far from the pair α2 and β2, then the cluster decomposition property requires the S-matrix to factorize
as the distance between the two clusters increases. The physical interpretation of this is that any two spatially well separated experiments cannot influence each other. This condition is fundamental to the ability to do physics without having to know the state of the entire universe. By expanding the S-matrix into a sum of products of connected S-matrix elements, which at the perturbative level are equivalent to connected Feynman diagrams, the cluster decomposition property can be restated as demanding that connected S-matrix elements must vanish whenever some of their clusters of particles are far apart from each other.
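Schematically, writing α = α1 + α2 and β = β1 + β2 for the combined multi-particle states (notation assumed here), the factorization can be expressed as
$$S_{\beta\alpha} \;\to\; S_{\beta_1\alpha_1}\, S_{\beta_2\alpha_2} .$$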
This position space formulation can also be reformulated in terms of the momentum space S-matrix . Since its Fourier transformation gives the position space connected S-matrix, this only depends on position through the exponential terms. Therefore, performing a uniform translation in a direction on a subset of particles will effectively change the momentum space S-matrix as
By translational invariance, a translation of all particles cannot change the S-matrix, therefore must be proportional to a momentum conserving delta function to ensure that the translation exponential factor vanishes. If there is an additional delta function of only a subset of momenta corresponding to some cluster of particles, then this cluster can be moved arbitrarily far through a translation without changing the S-matrix, which would violate cluster decomposition. This means that in momentum space the property requires that the S-matrix only has a single delta function.
Cluster decomposition can also be formulated in terms of correlation functions, where for any two operators, each localized to some bounded region, the vacuum expectation values factorize as the two operators become distantly separated
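For two local operators, denoted here O1(x) and O2(y) (labels introduced for illustration), this factorization takes the form
$$\langle \Omega | \mathcal{O}_1(x)\, \mathcal{O}_2(y) | \Omega \rangle \;\to\; \langle \Omega | \mathcal{O}_1(x) | \Omega \rangle\, \langle \Omega | \mathcal{O}_2(y) | \Omega \rangle
\quad \text{as } |x-y| \to \infty ,$$
where |Ω⟩ is the vacuum state.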
This formulation allows for the property to be applied to theories that lack an S-matrix such as conformal field theories. It is in terms of these Wightman functions that the property is usually formulated in axiomatic quantum field theory. In some formulations, such as Euclidean constructive field theory, it is explicitly introduced as an axiom.
Properties
If a theory is constructed from creation and annihilation operators, then the cluster decomposition property automatically holds. This can be seen by expanding out the S-matrix as a sum of Feynman diagrams which allows for the identification of connected S-matrix elements with connected Feynman diagrams. Vertices arise whenever creation and annihilation operators commute past each other leaving behind a single momentum delta function. In any connected diagram with V vertices, I internal lines and L loops, I-L of the delta functions go into fixing internal momenta, leaving V-(I-L) delta functions unfixed. A form of Euler's formula states that any graph with C disjoint connected components satisfies C = V-I+L. Since the connected S-matrix elements correspond to C=1 diagrams, these only have a single delta function and thus the cluster decomposition property, as formulated above in momentum space in terms of delta functions, holds.
Microcausality, the locality condition requiring commutation relations of local operators to vanish for spacelike separations, is a sufficient condition for the S-matrix to satisfy cluster decomposition. In this sense cluster decomposition serves a similar purpose for the S-matrix as microcausality does for fields, preventing causal influence from propagating between regions that are distantly separated. However, cluster decomposition is weaker than having no superluminal causation since it can be formulated for classical theories as well.
One key requirement for cluster decomposition is that it requires a unique vacuum state, with it failing if the vacuum state is a mixed state. The rate at which the correlation functions factorize depends on the spectrum of the theory, where if it has mass gap of mass then there is an exponential falloff while if there are massless particles present then it can be as slow as .
References
Quantum field theory
Axiomatic quantum field theory
Theorems in quantum mechanics | Cluster decomposition | [
"Physics",
"Mathematics"
] | 990 | [
"Theorems in quantum mechanics",
"Quantum field theory",
"Equations of physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Physics theorems"
] |
1,075,071 | https://en.wikipedia.org/wiki/Transcriptome | The transcriptome is the set of all RNA transcripts, including coding and non-coding, in an individual or a population of cells. The term can also sometimes be used to refer to all RNAs, or just mRNA, depending on the particular experiment. The term transcriptome is a portmanteau of the words transcript and genome; it is associated with the process of transcript production during the biological process of transcription.
The early stages of transcriptome annotations began with cDNA libraries published in the 1980s. Subsequently, the advent of high-throughput technology led to faster and more efficient ways of obtaining data about the transcriptome. Two biological techniques are used to study the transcriptome, namely DNA microarray, a hybridization-based technique and RNA-seq, a sequence-based approach. RNA-seq is the preferred method and has been the dominant transcriptomics technique since the 2010s. Single-cell transcriptomics allows tracking of transcript changes over time within individual cells.
Data obtained from the transcriptome is used in research to gain insight into processes such as cellular differentiation, carcinogenesis, transcription regulation and biomarker discovery among others. Transcriptome-obtained data also finds applications in establishing phylogenetic relationships during the process of evolution and in in vitro fertilization. The transcriptome is closely related to other -ome based biological fields of study; it is complementary to the proteome and the metabolome and encompasses the translatome, exome, meiome and thanatotranscriptome which can be seen as ome fields studying specific types of RNA transcripts. There are quantifiable and conserved relationships between the Transcriptome and other -omes, and Transcriptomics data can be used effectively to predict other molecular species, such as metabolites. There are numerous publicly available transcriptome databases.
Etymology and history
The word transcriptome is a portmanteau of the words transcript and genome. It appeared along with other neologisms formed using the suffixes -ome and -omics to denote all studies conducted on a genome-wide scale in the fields of life sciences and technology. As such, transcriptome and transcriptomics were one of the first words to emerge along with genome and proteome. The first study to present a case of a collection of a cDNA library for silk moth mRNA was published in 1979. The first seminal study to mention and investigate the transcriptome of an organism was published in 1997 and it described 60,633 transcripts expressed in S. cerevisiae using serial analysis of gene expression (SAGE). With the rise of high-throughput technologies and bioinformatics and the subsequent increased computational power, it became increasingly efficient and easy to characterize and analyze enormous amount of data. Attempts to characterize the transcriptome became more prominent with the advent of automated DNA sequencing during the 1980s. During the 1990s, expressed sequence tag sequencing was used to identify genes and their fragments. This was followed by techniques such as serial analysis of gene expression (SAGE), cap analysis of gene expression (CAGE), and massively parallel signature sequencing (MPSS).
Transcription
The transcriptome encompasses all the ribonucleic acid (RNA) transcripts present in a given organism or experimental sample. RNA is the main carrier of genetic information that is responsible for the process of converting DNA into an organism's phenotype. A gene can give rise to a single-stranded messenger RNA (mRNA) through a molecular process known as transcription; this mRNA is complementary to the strand of DNA it originated from. The enzyme RNA polymerase II attaches to the template DNA strand and catalyzes the addition of ribonucleotides to the 3' end of the growing sequence of the mRNA transcript.
In order to initiate its function, RNA polymerase II needs to recognize a promoter sequence, located upstream (5') of the gene. In eukaryotes, this process is mediated by transcription factors, most notably Transcription factor II D (TFIID), which recognizes the TATA box and aids in the positioning of RNA polymerase at the appropriate start site. To finish the production of the RNA transcript, termination usually occurs several hundred nucleotides away from the termination sequence, and cleavage takes place. This process occurs in the nucleus of a cell along with RNA processing by which mRNA molecules are capped, spliced and polyadenylated to increase their stability before being subsequently taken to the cytoplasm. The mRNA gives rise to proteins through the process of translation that takes place in ribosomes.
Types of RNA transcripts
Almost all functional transcripts are derived from known genes. The only exceptions are a small number of transcripts that might play a direct role in regulating gene expression near the promoters of known genes. (See Enhancer RNA.)
Genes occupy most of prokaryotic genomes, so most of their genomes are transcribed. Many eukaryotic genomes are very large and known genes may take up only a fraction of the genome. In mammals, for example, known genes only account for 40-50% of the genome. Nevertheless, identified transcripts often map to a much larger fraction of the genome, suggesting that the transcriptome contains spurious transcripts that do not come from genes. Some of these transcripts are known to be non-functional because they map to transcribed pseudogenes or degenerative transposons and viruses. Others map to unidentified regions of the genome that may be junk DNA.
Spurious transcription is very common in eukaryotes, especially those with large genomes that might contain a lot of junk DNA. Some scientists claim that if a transcript has not been assigned to a known gene then the default assumption must be that it is junk RNA until it has been shown to be functional. This would mean that much of the transcriptome in species with large genomes is probably junk RNA. (See Non-coding RNA)
The transcriptome includes the transcripts of protein-coding genes (mRNA plus introns) as well as the transcripts of non-coding genes (functional RNAs plus introns).
Ribosomal RNA/rRNA: Usually the most abundant RNA in the transcriptome.
Long non-coding RNA/lncRNA: Non-coding RNA transcripts that are more than 200 nucleotides long. Members of this group comprise the largest fraction of the non-coding transcriptome other than introns. It is not known how many of these transcripts are functional and how many are junk RNA.
transfer RNA/tRNA
micro RNA/miRNA: 19-24 nucleotides (nt) long. Micro RNAs up- or downregulate expression levels of mRNAs by the process of RNA interference at the post-transcriptional level.
small interfering RNA/siRNA: 20-24 nt
small nucleolar RNA/snoRNA
Piwi-interacting RNA/piRNA: 24-31 nt. They interact with Piwi proteins of the Argonaute family and have a function in targeting and cleaving transposons.
enhancer RNA/eRNA:
Scope of study
In the human genome, all genes get transcribed into RNA because that's how the molecular gene is defined. (See Gene.) The transcriptome consists of coding regions of mRNA plus non-coding UTRs, introns, non-coding RNAs, and spurious non-functional transcripts.
Several factors render the content of the transcriptome difficult to establish. These include alternative splicing, RNA editing and alternative transcription among others. Additionally, transcriptome techniques are capable of capturing transcription occurring in a sample at a specific time point, although the content of the transcriptome can change during differentiation. The main aims of transcriptomics are the following: "catalogue all species of transcript, including mRNAs, non-coding RNAs and small RNAs; to determine the transcriptional structure of genes, in terms of their start sites, 5′ and 3′ ends, splicing patterns and other post-transcriptional modifications; and to quantify the changing expression levels of each transcript during development and under different conditions".
The term can be applied to the total set of transcripts in a given organism, or to the specific subset of transcripts present in a particular cell type. Unlike the genome, which is roughly fixed for a given cell line (excluding mutations), the transcriptome can vary with external environmental conditions. Because it includes all mRNA transcripts in the cell, the transcriptome reflects the genes that are being actively expressed at any given time, with the exception of mRNA degradation phenomena such as transcriptional attenuation. The study of transcriptomics, (which includes expression profiling, splice variant analysis etc.), examines the expression level of RNAs in a given cell population, often focusing on mRNA, but sometimes including others such as tRNAs and sRNAs.
Methods of construction
Transcriptomics is the quantitative science that encompasses the assignment of a list of strings ("reads") to objects ("transcripts") in the genome. To calculate the expression strength, the density of reads corresponding to each object is counted. Initially, transcriptomes were analyzed and studied using expressed sequence tag libraries and serial and cap analysis of gene expression (SAGE).
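As a minimal illustration of the read-counting idea just described, the following Python sketch converts raw read counts per transcript into transcripts-per-million (TPM), one common way of normalising read density into an expression strength. The transcript names, lengths and counts are invented for the example.

# Minimal sketch: convert per-transcript read counts into TPM values.
transcripts = {
    "tx1": {"length": 1500, "reads": 300},   # length in bases, reads assigned to the transcript
    "tx2": {"length": 3000, "reads": 600},
    "tx3": {"length": 500,  "reads": 50},
}

# Step 1: reads per kilobase (length-normalised read density).
rpk = {name: t["reads"] / (t["length"] / 1000) for name, t in transcripts.items()}

# Step 2: rescale so the densities sum to one million (TPM).
scale = sum(rpk.values()) / 1e6
tpm = {name: value / scale for name, value in rpk.items()}

for name, value in sorted(tpm.items()):
    print(f"{name}: {value:.1f} TPM")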
Currently, the two main transcriptomics techniques include DNA microarrays and RNA-Seq. Both techniques require RNA isolation through RNA extraction techniques, followed by its separation from other cellular components and enrichment of mRNA.
There are two general methods of inferring transcriptome sequences. One approach maps sequence reads onto a reference genome, either of the organism itself (whose transcriptome is being studied) or of a closely related species. The other approach, de novo transcriptome assembly, uses software to infer transcripts directly from short sequence reads and is used in organisms with genomes that are not sequenced.
DNA microarrays
The first transcriptome studies were based on microarray techniques (also known as DNA chips). Microarrays consist of thin glass layers with spots on which oligonucleotides, known as "probes" are arrayed; each spot contains a known DNA sequence.
When performing microarray analyses, mRNA is collected from a control and an experimental sample, the latter usually representative of a disease. The RNA of interest is converted to cDNA to increase its stability and marked with fluorophores of two colors, usually green and red, for the two groups. The cDNA is spread onto the surface of the microarray where it hybridizes with oligonucleotides on the chip and a laser is used to scan. The fluorescence intensity on each spot of the microarray corresponds to the level of gene expression and based on the color of the fluorophores selected, it can be determined which of the samples exhibits higher levels of the mRNA of interest.
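As a hedged sketch of the comparison step described above (Python; the spot names and background-corrected intensities are invented, and real analyses add normalisation and replicate handling), the log2 ratio of the two fluorescence channels indicates which sample expresses the corresponding gene more strongly.

import math

# Illustrative background-corrected intensities for a two-colour array:
# red = experimental sample, green = control sample.
spots = {
    "geneA": {"red": 5200.0, "green": 1300.0},
    "geneB": {"red": 800.0,  "green": 900.0},
    "geneC": {"red": 150.0,  "green": 1200.0},
}

for gene, s in spots.items():
    ratio = math.log2(s["red"] / s["green"])
    # Positive values indicate higher expression in the experimental sample.
    print(f"{gene}: log2(red/green) = {ratio:+.2f}")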
One microarray usually contains enough oligonucleotides to represent all known genes; however, data obtained using microarrays does not provide information about unknown genes. During the 2010s, microarrays were almost completely replaced by next-generation techniques that are based on DNA sequencing.
RNA sequencing
RNA sequencing is a next-generation sequencing technology; as such it requires only a small amount of RNA and no previous knowledge of the genome. It allows for both qualitative and quantitative analysis of RNA transcripts, the former allowing discovery of new transcripts and the latter a measure of relative quantities for transcripts in a sample.
The three main steps of sequencing transcriptomes of any biological samples include RNA purification, the synthesis of an RNA or cDNA library and sequencing the library. The RNA purification process is different for short and long RNAs. This step is usually followed by an assessment of RNA quality, with the purpose of avoiding contaminants such as DNA or technical contaminants related to sample processing. RNA quality is measured using UV spectrometry, with an absorbance peak at 260 nm. RNA integrity can also be analyzed quantitatively by comparing the ratio and intensity of 28S RNA to 18S RNA, reported as the RNA Integrity Number (RIN) score. Since mRNA is the species of interest and it represents only about 3% of the total RNA content, the RNA sample should be treated to remove rRNA, tRNA and tissue-specific RNA transcripts.
Library preparation, which aims to produce short cDNA fragments, begins with fragmentation of the RNA into pieces between 50 and 300 base pairs in length. Fragmentation can be enzymatic (RNA endonucleases), chemical (tris-magnesium salt buffer, chemical hydrolysis) or mechanical (sonication, nebulisation). Reverse transcription is used to convert the RNA templates into cDNA, and three priming methods can be used to achieve it: oligo-dT priming, random primers, or ligation of special adaptor oligos.
Single-cell transcriptomics
Transcription can also be studied at the level of individual cells by single-cell transcriptomics. Single-cell RNA sequencing (scRNA-seq) is a recently developed technique that allows the analysis of the transcriptome of single cells, including bacteria. With single-cell transcriptomics, subpopulations of cell types that constitute the tissue of interest are also taken into consideration. This approach makes it possible to identify whether changes in experimental samples are due to phenotypic cellular changes as opposed to proliferation, in which a specific cell type might simply be over-represented in the sample. Additionally, when assessing cellular progression through differentiation, average expression profiles are only able to order cells by time rather than their stage of development and are consequently unable to show trends in gene expression levels specific to certain stages. Single-cell transcriptomic techniques have been used to characterize rare cell populations such as circulating tumor cells, cancer stem cells in solid tumors, and embryonic stem cells (ESCs) in mammalian blastocysts.
Although there are no standardized techniques for single-cell transcriptomics, several steps need to be undertaken. The first step includes cell isolation, which can be performed using low- and high-throughput techniques. This is followed by a qPCR step and then single-cell RNAseq where the RNA of interest is converted into cDNA. Newer developments in single-cell transcriptomics allow for tissue and sub-cellular localization preservation through cryo-sectioning thin slices of tissues and sequencing the transcriptome in each slice. Another technique allows the visualization of single transcripts under a microscope while preserving the spatial information of each individual cell where they are expressed.
Analysis
A number of organism-specific transcriptome databases have been constructed and annotated to aid in the identification of genes that are differentially expressed in distinct cell populations.
RNA-seq is emerging (2013) as the method of choice for measuring transcriptomes of organisms, though the older technique of DNA microarrays is still used. RNA-seq measures the transcription of a specific gene by converting long RNAs into a library of cDNA fragments. The cDNA fragments are then sequenced using high-throughput sequencing technology and aligned to a reference genome or transcriptome which is then used to create an expression profile of the genes.
Applications
Mammals
The transcriptomes of stem cells and cancer cells are of particular interest to researchers who seek to understand the processes of cellular differentiation and carcinogenesis. A pipeline using RNA-seq or gene array data can be used to track genetic changes occurring in stem and precursor cells and requires at least three independent gene expression data from the former cell type and mature cells.
Analysis of the transcriptomes of human oocytes and embryos is used to understand the molecular mechanisms and signaling pathways controlling early embryonic development, and could theoretically be a powerful tool in making proper embryo selection in in vitro fertilisation. Analyses of the transcriptome content of the placenta in the first trimester of pregnancy in in vitro fertilization and embryo transfer (IVF-ET) revealed differences in genetic expression which are associated with a higher frequency of adverse perinatal outcomes. Such insight can be used to optimize the practice. Transcriptome analyses can also be used to optimize cryopreservation of oocytes, by lowering injuries associated with the process.
Transcriptomics is an emerging and continually growing field in biomarker discovery for use in assessing the safety of drugs or chemical risk assessment.
Transcriptomes may also be used to infer phylogenetic relationships among individuals or to detect evolutionary patterns of transcriptome conservation.
Transcriptome analyses were used to discover the incidence of antisense transcription, their role in gene expression through interaction with surrounding genes and their abundance in different chromosomes. RNA-seq was also used to show how RNA isoforms, transcripts stemming from the same gene but with different structures, can produce complex phenotypes from limited genomes.
Plants
Transcriptome analyses have been used to study the evolution and diversification of plant species. In 2014, the 1000 Plant Genomes Project was completed, in which the transcriptomes of 1,124 plant species from the clades Viridiplantae, Glaucophyta and Rhodophyta were sequenced. The protein-coding sequences were subsequently compared to infer phylogenetic relationships between plants and to characterize the time of their diversification in the process of evolution. Transcriptome studies have been used to characterize and quantify gene expression in mature pollen. Genes involved in cell wall metabolism and the cytoskeleton were found to be overexpressed. Transcriptome approaches also made it possible to track changes in gene expression through different developmental stages of pollen, ranging from microspore to mature pollen grains; additionally, such stages could be compared across species of different plants including Arabidopsis, rice and tobacco.
Relation to other ome fields
Similar to other -ome based technologies, analysis of the transcriptome allows for an unbiased approach when validating hypotheses experimentally. This approach also allows for the discovery of novel mediators in signaling pathways. As with other -omics based technologies, the transcriptome can be analyzed within the scope of a multiomics approach. It is complementary to metabolomics but contrary to proteomics, a direct association between a transcript and metabolite cannot be established.
There are several -ome fields that can be seen as subcategories of the transcriptome. The exome differs from the transcriptome in that it includes only those RNA molecules found in a specified cell population, and usually includes the amount or concentration of each RNA molecule in addition to the molecular identities. Additionally, the transcriptome also differs from the translatome, which is the set of RNAs undergoing translation.
The term meiome is used in functional genomics to describe the meiotic transcriptome or the set of RNA transcripts produced during the process of meiosis. Meiosis is a key feature of sexually reproducing eukaryotes, and involves the pairing of homologous chromosomes, synapsis and recombination. Since meiosis in most organisms occurs in a short time period, meiotic transcript profiling is difficult due to the challenge of isolation (or enrichment) of meiotic cells (meiocytes). As with transcriptome analyses, the meiome can be studied at a whole-genome level using large-scale transcriptomic techniques. The meiome has been well-characterized in mammalian and yeast systems and somewhat less extensively characterized in plants.
The thanatotranscriptome consists of all RNA transcripts that continue to be expressed or that start getting re-expressed in internal organs of a dead body 24–48 hours following death. Some genes include those that are inhibited after fetal development. If the thanatotranscriptome is related to the process of programmed cell death (apoptosis), it can be referred to as the apoptotic thanatotranscriptome. Analyses of the thanatotranscriptome are used in forensic medicine.
eQTL mapping can be used to complement genomics with transcriptomics; genetic variants at DNA level and gene expression measures at RNA level.
Relation to proteome
The transcriptome can be seen as a precursor of the proteome, that is, the entire set of proteins expressed by a genome.
However, the analysis of relative mRNA expression levels can be complicated by the fact that relatively small changes in mRNA expression can produce large changes in the total amount of the corresponding protein present in the cell. One analysis method, known as gene set enrichment analysis, identifies coregulated gene networks rather than individual genes that are up- or down-regulated in different cell populations.
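To make the gene-set idea concrete, the Python sketch below computes a deliberately simplified, unweighted enrichment score: genes are ranked by a differential-expression statistic, and a running sum rises whenever a member of the gene set is encountered and falls otherwise, so a large maximum deviation suggests the set is concentrated near one end of the ranking. The gene names, ranking and set membership are invented, and actual gene set enrichment analysis uses a weighted statistic with permutation-based significance testing.

# Simplified, unweighted enrichment score for one gene set.
ranked_genes = ["g1", "g2", "g3", "g4", "g5", "g6", "g7", "g8"]  # sorted by a differential-expression statistic
gene_set = {"g1", "g2", "g4"}                                    # hypothetical coregulated gene set

n_total = len(ranked_genes)
n_hits = len(gene_set)
hit_step = 1.0 / n_hits               # added when a set member is encountered
miss_step = 1.0 / (n_total - n_hits)  # subtracted otherwise

running, best = 0.0, 0.0
for gene in ranked_genes:
    running += hit_step if gene in gene_set else -miss_step
    if abs(running) > abs(best):
        best = running

print(f"enrichment score = {best:+.2f}")  # approaches +1 when the set clusters at the top of the ranking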
Although microarray studies can reveal the relative amounts of different mRNAs in the cell, levels of mRNA are not directly proportional to the expression level of the proteins they code for. The number of protein molecules synthesized using a given mRNA molecule as a template is highly dependent on translation-initiation features of the mRNA sequence; in particular, the ability of the translation initiation sequence to recruit ribosomes for protein translation is a key determinant.
Transcriptome databases
Ensembl:
OmicTools:
Transcriptome Browser:
ArrayExpress:
See also
Notes
References
Further reading
Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, Paulovich A, Pomeroy SL, Golub TR, Lander ES, Mesirov JP. (2005). Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci USA 102(43):15545-50.
Laule O, Hirsch-Hoffmann M, Hruz T, Gruissem W, and P Zimmermann. (2006) Web-based analysis of the mouse transcriptome using Genevestigator. BMC Bioinformatics 7:311
Gene expression
Omics
RNA
RNA splicing | Transcriptome | [
"Chemistry",
"Biology"
] | 4,476 | [
"Gene expression",
"Bioinformatics",
"Omics",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
1,075,211 | https://en.wikipedia.org/wiki/Metabolome | The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism.
In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics.
Origins
The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the more significant early papers on metabolome analysis are listed in the references below.
Measuring the metabolome
The metabolome reflects the interaction between an organism's genome and its environment. As a result, an organism's metabolome can serve as an excellent probe of its phenotype (i.e. the product of its genotype and its environment). Metabolites can be measured (identified, quantified or classified) using a number of different technologies including NMR spectroscopy and mass spectrometry. Most mass spectrometry (MS) methods must be coupled to various forms of liquid chromatography (LC), gas chromatography (GC) or capillary electrophoresis (CE) to facilitate compound separation. Each method is typically able to identify or characterize 50-5,000 different metabolites or metabolite "features" at a time, depending on the instrument or protocol being used. Currently it is not possible to analyze the entire range of metabolites by a single analytical method.
Nuclear magnetic resonance (NMR) spectroscopy is an analytical chemistry technique that measures the absorption of radiofrequency radiation of specific nuclei when molecules containing those nuclei are placed in strong magnetic fields. The frequency (i.e. the chemical shift) at which a given atom or nucleus absorbs is highly dependent on the chemical environment (bonding, chemical structure nearest neighbours, solvent) of that atom in a given molecule. The NMR absorption patterns produce "resonance" peaks at different frequencies or different chemical shifts – this collection of peaks is called an NMR spectrum. Because each chemical compound has a different chemical structure, each compound will have a unique (or almost unique) NMR spectrum. As a result, NMR is particularly useful for the characterization, identification and quantification of small molecules, such as metabolites. The widespread use of NMR for "classical" metabolic studies, along with its exceptional capacity to handle complex metabolite mixtures is likely the reason why NMR was one of the first technologies to be widely adopted for routine metabolome measurements. As an analytical technique, NMR is non-destructive, non-biased, easily quantifiable, requires little or no separation, permits the identification of novel compounds and it needs no chemical derivatization. NMR is particularly amenable to detecting compounds that are less tractable to LC-MS analysis, such as sugars, amines or volatile liquids or GC-MS analysis, such as large molecules (>500 Da) or relatively non-reactive compounds. NMR is not a very sensitive technique with a lower limit of detection of about 5 μM. Typically 50-150 compounds can be identified by NMR-based metabolomic studies.
Mass spectrometry is an analytical technique that measures the mass-to-charge ratio of molecules. Molecules or molecular fragments are typically charged or ionized by spraying them through a charged field (electrospray ionization), bombarding them with electrons from a hot filament (electron ionization) or blasting them with a laser when they are placed on specially coated plates (matrix assisted laser desorption ionization). The charged molecules are then propelled through space using electrodes or magnets and their speed, rate of curvature, or other physical characteristics are measured to determine their mass-to-charge ratio. From these data the mass of the parent molecule can be determined. Further fragmentation of the molecule through controlled collisions with gas molecules or with electrons can help determine the structure of molecules. Very accurate mass measurements can also be used to determine the elemental formulas or elemental composition of compounds. Most forms of mass spectrometry require some form of separation using liquid chromatography or gas chromatography. This separation step is required to simplify the resulting mass spectra and to permit more accurate compound identification. Some mass spectrometry methods also require that the molecules be derivatized or chemically modified so that they are more amenable for chromatographic separation (this is particularly true for GC-MS). As an analytical technique, MS is a very sensitive method that requires very little sample (<1 ng of material or <10 μL of a biofluid) and can generate signals for thousands of metabolites from a single sample. MS instruments can also be configured for very high throughput metabolome analyses (hundreds to thousands of samples a day). Quantification of metabolites and the characterization of novel compound structures is more difficult by MS than by NMR. LC-MS is particularly amenable to detecting hydrophobic molecules (lipids, fatty acids) and peptides while GC-MS is best for detecting small molecules (<500 Da) and highly volatile compounds (esters, amines, ketones, alkanes, thiols).
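A small worked example of the mass-to-charge idea (Python; the peak values are invented for illustration): in positive-mode electrospray ionization, a molecule of neutral mass M carrying z protons is observed at m/z = (M + z × 1.00728)/z, so the neutral mass can be recovered from any observed charge state.

PROTON_MASS = 1.00728  # Da

def neutral_mass(mz, charge):
    # Neutral monoisotopic mass inferred from an [M + zH]z+ peak.
    return charge * mz - charge * PROTON_MASS

# Hypothetical peaks attributed to the same small molecule in two charge states.
observed = [(181.071, 1), (91.039, 2)]
for mz, z in observed:
    print(f"m/z {mz} at charge {z}+ -> neutral mass {neutral_mass(mz, z):.3f} Da")

Both peaks point to a neutral mass near 180.06 Da, illustrating how consistency across charge states supports an assignment.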
Unlike the genome or even the proteome, the metabolome is a highly dynamic entity that can change dramatically, over a period of just seconds or minutes. As a result, there is growing interest in measuring metabolites over multiple time periods or over short time intervals using modified versions of NMR or MS-based metabolomics.
Metabolome databases
Because an organism's metabolome is largely defined by its genome, different species will have different metabolomes. Indeed, the fact that the metabolome of a tomato is different from the metabolome of an apple is the reason why these two fruits taste so different. Furthermore, different tissues, different organs and biofluids associated with those organs and tissues can also have distinctly different metabolomes. The fact that different organisms and different tissues/biofluids have such different metabolomes has led to the development of a number of organism-specific and biofluid-specific metabolome databases. Some of the better known metabolome databases include the Human Metabolome Database or HMDB, the Yeast Metabolome Database or YMDB, the E. coli Metabolome Database or ECMDB, the Arabidopsis metabolome database or AraCyc as well as the Urine Metabolome Database, the Cerebrospinal Fluid (CSF) Metabolome Database and the Serum Metabolome Database. The latter three databases are specific to human biofluids. A number of very popular general metabolite databases also exist including KEGG, MetaboLights, the Golm Metabolome Database, MetaCyc, LipidMaps and Metlin. Metabolome databases can be distinguished from metabolite databases in that metabolite databases contain lightly annotated or synoptic metabolite data from multiple organisms while metabolome databases contain richly detailed and heavily referenced chemical, pathway, spectral and metabolite concentration data for specific organisms.
The Human Metabolome Database
The Human Metabolome Database (HMDB) is a freely available, open-access database containing detailed data on more than 40,000 metabolites that have already been identified or are likely to be found in the human body. The HMDB contains three kinds of information:
Chemical information,
Clinical information and
Biochemical information.
The chemical data includes >40,000 metabolite structures with detailed descriptions, extensive chemical classifications, synthesis information and observed/calculated chemical properties. It also contains nearly 10,000 experimentally measured NMR, GC-MS and LC/MS spectra from more than 1,100 different metabolites. The clinical information includes data on >10,000 metabolite-biofluid concentrations, metabolite concentration information on more than 600 different human diseases and pathway data for more than 200 different inborn errors of metabolism. The biochemical information includes nearly 6,000 protein (and DNA) sequences and more than 5,000 biochemical reactions that are linked to these metabolite entries. The HMDB supports a wide variety of online queries including text searches, chemical structure searches, sequence similarity searches and spectral similarity searches. This makes it particularly useful for metabolomic researchers who are attempting to identify or understand metabolites in clinical metabolomic studies. The first version of the HMDB was released on January 1, 2007 and was compiled by scientists at the University of Alberta and the University of Calgary. At that time, they reported data on 2,500 metabolites, 1,200 drugs and 3,500 food components. Since then these scientists have greatly expanded the collection. Version 3.5 of the HMDB contains >16,000 endogenous metabolites, >1,500 drugs and >22,000 food constituents or food metabolites.
Human biofluid metabolomes
Scientists at the University of Alberta have been systematically characterizing specific biofluid metabolomes including the serum metabolome, the urine metabolome, the cerebrospinal fluid (CSF) metabolome and the saliva metabolome. These efforts have involved both experimental metabolomic analysis (involving NMR, GC-MS, ICP-MS, LC-MS and HPLC assays) as well as extensive literature mining. According to their data, the human serum metabolome contains at least 4,200 different compounds (including many lipids), the human urine metabolome contains at least 3,000 different compounds (including hundreds of volatiles and gut microbial metabolites), the human CSF metabolome contains nearly 500 different compounds while the human saliva metabolome contains approximately 400 different metabolites, including many bacterial products.
Yeast Metabolome Database
The Yeast Metabolome Database is a freely accessible, online database of >2,000 small molecule metabolites found in or produced by Saccharomyces cerevisiae (Baker's yeast). The YMDB contains two kinds of information:
Chemical information and
Biochemical information.
The chemical information in YMDB includes 2,027 metabolite structures with detailed metabolite descriptions, extensive chemical classifications, synthesis information and observed/calculated chemical properties. It also contains nearly 4,000 NMR, GC-MS and LC/MS spectra obtained from more than 500 different metabolites. The biochemical information in YMDB includes >1,100 protein (and DNA) sequences and >900 biochemical reactions. The YMDB supports a wide variety of queries including text searches, chemical structure searches, sequence similarity searches and spectral similarity searches. This makes it particularly useful for metabolomic researchers who are studying yeast as a model organism or who are looking into optimizing the production of fermented beverages (wine, beer).
Secondary electrospray ionization-high resolution mass spectrometry (SESI-HRMS) is a non-invasive analytical technique that allows the metabolic activity of yeast to be monitored. SESI-HRMS has found around 300 metabolites in the yeast fermentation process, which suggests that a large number of glucose metabolites are not reported in the literature.
The Escherichia coli Metabolome Database
The E. Coli Metabolome Database is a freely accessible, online database of >2,700 small molecule metabolites found in or produced by Escherichia coli (E. coli strain K12, MG1655). The ECMDB contains two kinds of information:
Chemical information and
Biochemical information.
The chemical information includes more than 2,700 metabolite structures with detailed metabolite descriptions, extensive chemical classifications, synthesis information and observed/calculated chemical properties. It also contains nearly 5,000 NMR, GC-MS and LC-MS spectra from more than 600 different metabolites. The biochemical information includes >1,600 protein (and DNA) sequences and >3,100 biochemical reactions that are linked to these metabolite entries. The ECMDB supports many different types of online queries including text searches, chemical structure searches, sequence similarity searches and spectral similarity searches. This makes it particularly useful for metabolomic researchers who are studying E. coli as a model organism.
Secondary electrospray ionization mass spectrometry (SESI-MS) can discriminate between eleven E. coli strains on the basis of volatile organic compound profiling.
Metabolome atlas of the aging mouse brain
In 2021, the first brain metabolome atlas of the mouse brain – and of an animal (a mammal) across different life stages – was released online. The data differentiates by brain regions and the metabolic changes could be "mapped to existing gene and protein brain atlases".
Intestinal metabolome
Human intestinal microbiota contribute to the etiology of colorectal cancer via their metabolome. In particular, the conversion of primary bile acids to secondary bile acids as a consequence of bacterial metabolism in the colon promotes carcinogenesis.
See also
Tumor metabolome
Protein electrophoresis
Protein sequencing
References
External links
Metabolism
Systems biology
Bioinformatics | Metabolome | [
"Chemistry",
"Engineering",
"Biology"
] | 3,103 | [
"Biological engineering",
"Bioinformatics",
"Cellular processes",
"Biochemistry",
"Metabolism",
"Systems biology"
] |
1,075,379 | https://en.wikipedia.org/wiki/Vitali%E2%80%93Hahn%E2%80%93Saks%20theorem | In mathematics, the Vitali–Hahn–Saks theorem, introduced by , , and , proves that under some conditions a sequence of measures converging point-wise does so uniformly and the limit is also a measure.
Statement of the theorem
If is a measure space with and a sequence of complex measures. Assuming that each is absolutely continuous with respect to and that a for all the finite limits exist Then the absolute continuity of the with respect to is uniform in that is, implies that uniformly in Also is countably additive on
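For reference, the statement above is commonly written as follows, with μ denoting the finite measure, (λ_n) the sequence of complex measures, and Σ the σ-algebra of measurable sets; this notation is chosen here for illustration and may differ from the source the article follows.

\[
\mu(S) < \infty, \qquad \lambda_n \ll \mu \ \ (n = 1, 2, \dots), \qquad
\lambda(B) := \lim_{n \to \infty} \lambda_n(B) \ \text{ exists and is finite for every } B \in \Sigma
\]
\[
\Longrightarrow \quad \lim_{\mu(B) \to 0} \lambda_n(B) = 0 \ \text{ uniformly in } n,
\quad \text{and } \lambda \text{ is countably additive on } \Sigma .
\]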
Preliminaries
Given a measure space a distance can be constructed on the set of measurable sets with This is done by defining
where is the symmetric difference of the sets
This gives rise to a metric space by identifying two sets when Thus a point with representative is the set of all such that
Proposition: with the metric defined above is a complete metric space.
Proof: Let
Then
This means that the metric space can be identified with a subset of the Banach space .
Let , with
Then we can choose a sub-sequence such that exists almost everywhere and . It follows that for some (furthermore if and only if for large enough, then we have that the limit inferior of the sequence) and hence Therefore, is complete.
Proof of Vitali-Hahn-Saks theorem
Each defines a function on by taking . This function is well defined; that is, it is independent of the representative of the class, due to the absolute continuity of with respect to . Moreover is continuous.
For every the set
is closed in , and by the hypothesis we have that
By Baire category theorem at least one must contain a non-empty open set of . This means that there is and a such that
implies
On the other hand, any with can be represented as with and . This can be done, for example by taking and . Thus, if and then
Therefore, by the absolute continuity of with respect to , and since is arbitrary, we get that implies uniformly in In particular, implies
By the additivity of the limit it follows that is finitely-additive. Then, since it follows that is actually countably additive.
References
Theorems in measure theory | Vitali–Hahn–Saks theorem | [
"Mathematics"
] | 434 | [
"Theorems in mathematical analysis",
"Theorems in measure theory"
] |
1,075,892 | https://en.wikipedia.org/wiki/Wave%20mechanics | Wave mechanics may refer to:
the mechanics of waves
the application of the quantum wave equation, especially in position and momentum spaces
the resonant interaction of three or more waves, which includes the "three-wave equation"
See also
Quantum mechanics
Wave equation
Quantum state
Matter wave
Further reading
Flint H.T., (1929) Wave Mechanics, Methuen & Co. Ltd, London | Wave mechanics | [
"Physics"
] | 79 | [
"Waves",
"Wave mechanics",
"Physical phenomena",
"Classical mechanics"
] |
1,076,110 | https://en.wikipedia.org/wiki/Protein%20sequencing | Protein sequencing is the practical process of determining the amino acid sequence of all or part of a protein or peptide. This may serve to identify the protein or characterize its post-translational modifications. Typically, partial sequencing of a protein provides sufficient information (one or more sequence tags) to identify it with reference to databases of protein sequences derived from the conceptual translation of genes.
The two major direct methods of protein sequencing are mass spectrometry and Edman degradation using a protein sequenator (sequencer). Mass spectrometry methods are now the most widely used for protein sequencing and identification but Edman degradation remains a valuable tool for characterizing a protein's N-terminus.
Determining amino acid composition
It is often desirable to know the unordered amino acid composition of a protein prior to attempting to find the ordered sequence, as this knowledge can be used to facilitate the discovery of errors in the sequencing process or to distinguish between ambiguous results. Knowledge of the frequency of certain amino acids may also be used to choose which protease to use for digestion of the protein. The misincorporation of low levels of non-standard amino acids (e.g. norleucine) into proteins may also be determined. A generalized method often referred to as amino acid analysis for determining amino acid frequency is as follows:
Hydrolyse a known quantity of protein into its constituent amino acids.
Separate and quantify the amino acids in some way.
Hydrolysis
Hydrolysis is done by heating a sample of the protein in 6 M hydrochloric acid to 100–110 °C for 24 hours or longer. Proteins with many bulky hydrophobic groups may require longer heating periods. However, these conditions are so vigorous that some amino acids (serine, threonine, tyrosine, tryptophan, glutamine, and cysteine) are degraded. To circumvent this problem, Biochemistry Online suggests heating separate samples for different times, analysing each resulting solution, and extrapolating back to zero hydrolysis time. Rastall suggests a variety of reagents to prevent or reduce degradation, such as thiol reagents or phenol to protect tryptophan and tyrosine from attack by chlorine, and pre-oxidising cysteine. He also suggests measuring the quantity of ammonia evolved to determine the extent of amide hydrolysis.
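As a rough illustration of the extrapolation idea above (Python; the hydrolysis times and recovered amounts are invented, and a straight-line fit is used for simplicity where first-order decay kinetics would normally be fitted), the measured amount of a labile amino acid is projected back to zero hydrolysis time.

# Fit a straight line to amounts recovered after different hydrolysis times
# and extrapolate back to zero time to estimate the pre-degradation amount.
times   = [24.0, 48.0, 72.0]   # hours of hydrolysis
amounts = [9.2, 8.5, 7.8]      # nmol of a labile amino acid recovered

n = len(times)
mean_t = sum(times) / n
mean_a = sum(amounts) / n
slope = (sum((t - mean_t) * (a - mean_a) for t, a in zip(times, amounts))
         / sum((t - mean_t) ** 2 for t in times))
intercept = mean_a - slope * mean_t  # estimated amount at zero hydrolysis time

print(f"estimated amount before degradation: {intercept:.2f} nmol")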
Separation and quantitation
The amino acids can be separated by ion-exchange chromatography then derivatized to facilitate their detection. More commonly, the amino acids are derivatized then resolved by reversed phase HPLC.
An example of the ion-exchange chromatography is given by the NTRC using sulfonated polystyrene as a matrix, adding the amino acids in acid solution and passing a buffer of steadily increasing pH through the column. Amino acids are eluted when the pH reaches their respective isoelectric points. Once the amino acids have been separated, their respective quantities are determined by adding a reagent that will form a coloured derivative. If the amounts of amino acids are in excess of 10 nmol, ninhydrin can be used for this; it gives a yellow colour when reacted with proline, and a vivid purple with other amino acids. The concentration of amino acid is proportional to the absorbance of the resulting solution. With very small quantities, down to 10 pmol, fluorescent derivatives can be formed using reagents such as ortho-phthaldehyde (OPA) or fluorescamine.
Pre-column derivatization may use the Edman reagent to produce a derivative that is detected by UV light. Greater sensitivity is achieved using a reagent that generates a fluorescent derivative. The derivatized amino acids are subjected to reversed phase chromatography, typically using a C8 or C18 silica column and an optimised elution gradient. The eluting amino acids are detected using a UV or fluorescence detector and the peak areas compared with those for derivatised standards in order to quantify each amino acid in the sample.
N-terminal amino acid analysis
Determining which amino acid forms the N-terminus of a peptide chain is useful for two reasons: to aid the ordering of individual peptide fragments' sequences into a whole chain, and because the first round of Edman degradation is often contaminated by impurities and therefore does not give an accurate determination of the N-terminal amino acid. A generalised method for N-terminal amino acid analysis follows:
React the peptide with a reagent that will selectively label the terminal amino acid.
Hydrolyse the protein.
Determine the amino acid by chromatography and comparison with standards.
There are many different reagents which can be used to label terminal amino acids. They all react with amine groups and will therefore also bind to amine groups in the side chains of amino acids such as lysine - for this reason it is necessary to be careful in interpreting chromatograms to ensure that the right spot is chosen. Two of the more common reagents are Sanger's reagent (1-fluoro-2,4-dinitrobenzene) and dansyl derivatives such as dansyl chloride. Phenylisothiocyanate, the reagent for the Edman degradation, can also be used. The same questions apply here as in the determination of amino acid composition, with the exception that no stain is needed, as the reagents produce coloured derivatives and only qualitative analysis is required. So the amino acid does not have to be eluted from the chromatography column, just compared with a standard. Another consideration to take into account is that, since any amine groups will have reacted with the labelling reagent, ion exchange chromatography cannot be used, and thin-layer chromatography or high-pressure liquid chromatography should be used instead.
C-terminal amino acid analysis
The number of methods available for C-terminal amino acid analysis is much smaller than the number of available methods of N-terminal analysis. The most common method is to add carboxypeptidases to a solution of the protein, take samples at regular intervals, and determine the terminal amino acid by analysing a plot of amino acid concentrations against time. This method is particularly useful for polypeptides and for proteins with blocked N-termini. C-terminal sequencing greatly helps in verifying the primary structures of proteins predicted from DNA sequences and in detecting any posttranslational processing of gene products from known codon sequences.
Edman degradation
The Edman degradation is a very important reaction for protein sequencing, because it allows the ordered amino acid composition of a protein to be discovered. Automated Edman sequencers are now in widespread use, and are able to sequence peptides up to approximately 50 amino acids long. A reaction scheme for sequencing a protein by the Edman degradation follows; some of the steps are elaborated on subsequently.
Break any disulfide bridges in the protein with a reducing agent like 2-mercaptoethanol. A protecting group such as iodoacetic acid may be necessary to prevent the bonds from re-forming.
Separate and purify the individual chains of the protein complex, if there are more than one.
Determine the amino acid composition of each chain.
Determine the terminal amino acids of each chain.
Break each chain into fragments under 50 amino acids long.
Separate and purify the fragments.
Determine the sequence of each fragment.
Repeat with a different pattern of cleavage.
Construct the sequence of the overall protein.
Digestion into peptide fragments
Peptides longer than about 50–70 amino acids long cannot be sequenced reliably by the Edman degradation. Because of this, long protein chains need to be broken up into small fragments that can then be sequenced individually. Digestion is done either by endopeptidases such as trypsin or pepsin or by chemical reagents such as cyanogen bromide. Different enzymes give different cleavage patterns, and the overlap between fragments can be used to construct an overall sequence.
Reaction
The peptide to be sequenced is adsorbed onto a solid surface. One common substrate is glass fibre coated with polybrene, a cationic polymer. The Edman reagent, phenylisothiocyanate (PITC), is added to the adsorbed peptide, together with a mildly basic buffer solution of 12% trimethylamine. This reacts with the amine group of the N-terminal amino acid.
The terminal amino acid can then be selectively detached by the addition of anhydrous acid. The derivative then isomerises to give a substituted phenylthiohydantoin, which can be washed off and identified by chromatography, and the cycle can be repeated. The efficiency of each step is about 98%, which allows about 50 amino acids to be reliably determined.
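To see why a roughly 98% per-cycle efficiency limits runs to about 50 residues, the short Python calculation below gives the cumulative yield after a number of cycles; the figures are simple arithmetic rather than measured values.

# Cumulative yield of the Edman degradation after a number of cycles.
efficiency = 0.98  # assumed fraction of chains that react correctly in each cycle

for cycles in (10, 30, 50, 70):
    cumulative_yield = efficiency ** cycles
    print(f"{cycles:3d} cycles: {cumulative_yield:.1%} of chains still in register")

After about 50 cycles only around 36% of the chains remain in register, which is roughly the point at which out-of-phase background makes further residue assignments unreliable.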
Protein sequencer
A protein sequenator is a machine that performs Edman degradation in an automated manner. A sample of the protein or peptide is immobilized in the reaction vessel of the protein sequenator and the Edman degradation is performed. Each cycle releases and derivatises one amino acid from the protein or peptide's N-terminus and the released amino-acid derivative is then identified by HPLC. The sequencing process is done repetitively for the whole polypeptide until the entire measurable sequence is established or for a pre-determined number of cycles.
Identification by mass spectrometry
Protein identification is the process of assigning a name to a protein of interest (POI), based on its amino-acid sequence. Typically, only part of the protein’s sequence needs to be determined experimentally in order to identify the protein with reference to databases of protein sequences deduced from the DNA sequences of their genes. Further protein characterization may include confirmation of the actual N- and C-termini of the POI, determination of sequence variants and identification of any post-translational modifications present.
Proteolytic digests
A general scheme for protein identification is described.
The POI is isolated, typically by SDS-PAGE or chromatography.
The isolated POI may be chemically modified to stabilise Cysteine residues (e.g. S-amidomethylation or S-carboxymethylation).
The POI is digested with a specific protease to generate peptides. Trypsin, which cleaves selectively on the C-terminal side of Lysine or Arginine residues, is the most commonly used protease. Its advantages include i) the frequency of Lys and Arg residues in proteins, ii) the high specificity of the enzyme, iii) the stability of the enzyme and iv) the suitability of tryptic peptides for mass spectrometry.
The peptides may be desalted to remove ionizable contaminants and subjected to MALDI-TOF mass spectrometry. Direct measurement of the masses of the peptides may provide sufficient information to identify the protein (see Peptide mass fingerprinting) but further fragmentation of the peptides inside the mass spectrometer is often used to gain information about the peptides’ sequences. Alternatively, peptides may be desalted and separated by reversed phase HPLC and introduced into a mass spectrometer via an ESI source. LC-ESI-MS may provide more information than MALDI-MS for protein identification but uses more instrument time.
Depending on the type of mass spectrometer, fragmentation of peptide ions may occur via a variety of mechanisms such as collision-induced dissociation (CID) or post-source decay (PSD). In each case, the pattern of fragment ions of a peptide provides information about its sequence.
Information including the measured mass of the putative peptide ions and those of their fragment ions is then matched against calculated mass values from the conceptual (in-silico) proteolysis and fragmentation of databases of protein sequences. A successful match will be found if its score exceeds a threshold based on the analysis parameters. Even if the actual protein is not represented in the database, error-tolerant matching allows for the putative identification of a protein based on similarity to homologous proteins. A variety of software packages are available to perform this analysis.
Software packages usually generate a report showing the identity (accession code) of each identified protein, its matching score, and provide a measure of the relative strength of the matching where multiple proteins are identified.
A diagram of the matched peptides on the sequence of the identified protein is often used to show the sequence coverage (% of the protein detected as peptides). Where the POI is thought to be significantly smaller than the matched protein, the diagram may suggest whether the POI is an N- or C-terminal fragment of the identified protein.
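A minimal Python sketch of the digestion and mass-calculation steps in the scheme above, assuming simple cleavage after every lysine or arginine (ignoring the common "not before proline" refinement and missed cleavages); the protein sequence is invented, and the residue masses are standard monoisotopic values for the amino acids that appear in it.

# In-silico tryptic digest and monoisotopic peptide masses.
RESIDUE_MASS = {  # monoisotopic residue masses in Da
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L": 113.08406, "N": 114.04293,
    "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259,
    "M": 131.04049, "F": 147.06841, "R": 156.10111, "Y": 163.06333,
}
WATER = 18.01056  # added once per peptide for the terminal H and OH

def tryptic_peptides(sequence):
    # Cleave on the C-terminal side of K or R.
    peptide = ""
    for residue in sequence:
        peptide += residue
        if residue in ("K", "R"):
            yield peptide
            peptide = ""
    if peptide:
        yield peptide

def peptide_mass(peptide):
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

protein = "MAGKLFSEDRVTNPQYK"  # hypothetical protein of interest
for pep in tryptic_peptides(protein):
    print(f"{pep:>10s}  {peptide_mass(pep):9.4f} Da")

The resulting list of peptide masses is what would be compared against measured masses in peptide mass fingerprinting.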
De novo sequencing
The pattern of fragmentation of a peptide allows for direct determination of its sequence by de novo sequencing. This sequence may be used to match databases of protein sequences or to investigate post-translational or chemical modifications. It may provide additional evidence for protein identifications performed as above.
N- and C-termini
The peptides matched during protein identification do not necessarily include the N- or C-termini predicted for the matched protein. This may result from the N- or C-terminal peptides being difficult to identify by MS (e.g. being either too short or too long), being post-translationally modified (e.g. N-terminal acetylation) or genuinely differing from the prediction. Post-translational modifications or truncated termini may be identified by closer examination of the data (i.e. de novo sequencing). A repeat digest using a protease of different specificity may also be useful.
Post-translational modifications
Whilst detailed comparison of the MS data with predictions based on the known protein sequence may be used to define post-translational modifications, targeted approaches to data acquisition may also be used. For instance, specific enrichment of phosphopeptides may assist in identifying phosphorylation sites in a protein. Alternative methods of peptide fragmentation in the mass spectrometer, such as ETD or ECD, may give complementary sequence information.
Whole-mass determination
The protein’s whole mass is the sum of the masses of its amino-acid residues plus the mass of a water molecule and adjusted for any post-translational modifications. Although proteins ionize less well than the peptides derived from them, a protein in solution may be able to be subjected to ESI-MS and its mass measured to an accuracy of 1 part in 20,000 or better. This is often sufficient to confirm the termini (thus that the protein’s measured mass matches that predicted from its sequence) and infer the presence or absence of many post-translational modifications.
Limitations
Proteolysis does not always yield a set of readily analyzable peptides covering the entire sequence of POI. The fragmentation of peptides in the mass spectrometer often does not yield ions corresponding to cleavage at each peptide bond. Thus, the deduced sequence for each peptide is not necessarily complete. The standard methods of fragmentation do not distinguish between leucine and isoleucine residues since they are isomeric.
Because the Edman degradation proceeds from the N-terminus of the protein, it will not work if the N-terminus has been chemically modified (e.g. by acetylation or formation of Pyroglutamic acid). Edman degradation is generally not useful to determine the positions of disulfide bridges. It also requires peptide amounts of 1 picomole or above for discernible results, making it less sensitive than mass spectrometry.
Predicting from DNA/RNA sequences
In biology, proteins are produced by translation of messenger RNA (mRNA) with the protein sequence deriving from the sequence of codons in the mRNA. The mRNA is itself formed by the transcription of genes and may be further modified. These processes are sufficiently understood to use computer algorithms to automate predictions of protein sequences from DNA sequences, such as from whole-genome DNA-sequencing projects, and have led to the generation of large databases of protein sequences such as UniProt. Predicted protein sequences are an important resource for protein identification by mass spectrometry.
Historically, short protein sequences (10 to 15 residues) determined by Edman degradation were back-translated into DNA sequences that could be used as probes or primers to isolate molecular clones of the corresponding gene or complementary DNA. The sequence of the cloned DNA was then determined and used to deduce the full amino-acid sequence of the protein.
Bioinformatics tools
Bioinformatics tools exist to assist with interpretation of mass spectra (see de novo peptide sequencing), to compare or analyze protein sequences (see sequence analysis), or search databases using peptide or protein sequences (see BLAST).
Applications to cryptography
The difficulty of protein sequencing was recently proposed as a basis for creating k-time programs, programs that run exactly k times before self-destructing. Such a thing is impossible to build purely in software because all software is inherently clonable an unlimited number of times.
See also
Proteomics
DNA sequencing
Klaus Biemann
Donald F. Hunt
Matthias Mann
John R. Yates
References
Further reading
Cell biology
Proteomic sequencing | Protein sequencing | [
"Chemistry",
"Biology"
] | 3,589 | [
"Proteomic sequencing",
"Molecular biology techniques",
"Cell biology"
] |
1,076,205 | https://en.wikipedia.org/wiki/Price%20equation | In the theory of evolution and natural selection, the Price equation (also known as Price's equation or Price's theorem) describes how a trait or allele changes in frequency over time. The equation uses a covariance between a trait and fitness, to give a mathematical description of evolution and natural selection. It provides a way to understand the effects that gene transmission and natural selection have on the frequency of alleles within each new generation of a population. The Price equation was derived by George R. Price, working in London to re-derive W.D. Hamilton's work on kin selection. Examples of the Price equation have been constructed for various evolutionary cases. The Price equation also has applications in economics.
The Price equation is a mathematical relationship between various statistical descriptors of population dynamics, rather than a physical or biological law, and as such is not subject to experimental verification. In simple terms, it is a mathematical statement of the expression "survival of the fittest".
Statement
The Price equation shows that a change in the average amount of a trait in a population from one generation to the next (Δz̄) is determined by the covariance between the amounts z_i of the trait for subpopulation i and the fitnesses w_i of the subpopulations, together with the expected change in the amount of the trait value due to fitness, namely E(w_i Δz_i):

Δz̄ = cov(w_i, z_i)/w̄ + E(w_i Δz_i)/w̄

Here w̄ is the average fitness over the population, and E and cov represent the population mean and covariance respectively. 'Fitness' w̄ is the ratio of the average number of offspring for the whole population per the number of adult individuals in the population, and w_i is that same ratio only for subpopulation i.
If the covariance between fitness (w_i) and trait value (z_i) is positive, the trait value is expected to rise on average across the population. If the covariance is negative, the characteristic is harmful, and its frequency is expected to drop.
The second term, E(w_i Δz_i)/w̄, represents the portion of Δz̄ due to all factors other than direct selection which can affect trait evolution. This term can encompass genetic drift, mutation bias, or meiotic drive. Additionally, this term can encompass the effects of multi-level selection or group selection. Price (1972) referred to this as the "environment change" term, and denoted both terms using partial derivative notation (∂NS and ∂EC). This concept of environment includes interspecies and ecological effects. Price describes this as follows:
Proof
Suppose we are given four equal-length lists of real numbers , , , from which we may define . and will be called the parent population numbers and characteristics associated with each index i. Likewise and will be called the child population numbers and characteristics, and will be called the fitness associated with index i. (Equivalently, we could have been given , , , with .) Define the parent and child population totals:
and the probabilities (or frequencies):
Note that these are of the form of probability mass functions in that and are in fact the probabilities that a random individual drawn from the parent or child population has a characteristic . Define the fitnesses:
The average of any list is given by:
so the average characteristics are defined as:
and the average fitness is:
A simple theorem can be proved:
so that:
and
The covariance of and is defined by:
Defining , the expectation value of is
The sum of the two terms is:
Using the above mentioned simple theorem, the sum becomes
where
.
Derivation of the continuous-time Price equation
Consider a set of groups with that are characterized by a particular trait, denoted by . The number of individuals belonging to group experiences exponential growth: where corresponds to the fitness of the group. We want to derive an equation describing the time-evolution of the expected value of the trait: Based on the chain rule, we may derive an ordinary differential equation: A further application of the chain rule for gives us: Summing up the components gives us that:
which is also known as the replicator equation. Now, note that:
Therefore, putting all of these components together, we arrive at the continuous-time Price equation:
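A hedged numerical check of the continuous-time statement (Python; the fitnesses, trait values, initial sizes and step size are arbitrary choices for the demonstration): when each group grows exponentially at its own fitness and the group trait values stay constant, the rate of change of the population-mean trait should equal the covariance between fitness and trait taken over the current group frequencies, the transmission term being zero here because the traits do not change.

import math

# Groups grow as n_i(t) = n_i(0) * exp(f_i * t) with fixed trait values z_i.
f = [0.1, 0.3, 0.5]        # group fitnesses (growth rates)
z = [1.0, 2.0, 4.0]        # group trait values
n = [100.0, 100.0, 100.0]  # initial group sizes

def mean_trait(sizes):
    total = sum(sizes)
    return sum(ni * zi for ni, zi in zip(sizes, z)) / total

def covariance(sizes):
    total = sum(sizes)
    p = [ni / total for ni in sizes]
    f_bar = sum(pi * fi for pi, fi in zip(p, f))
    z_bar = sum(pi * zi for pi, zi in zip(p, z))
    return sum(pi * (fi - f_bar) * (zi - z_bar) for pi, fi, zi in zip(p, f, z))

dt = 1e-6
n_later = [ni * math.exp(fi * dt) for ni, fi in zip(n, f)]
numerical_rate = (mean_trait(n_later) - mean_trait(n)) / dt

print(f"finite-difference rate of change of the mean trait: {numerical_rate:.6f}")
print(f"cov(f, z) over the current frequencies:             {covariance(n):.6f}")

Both numbers come out equal (0.2 with these values), as the continuous-time equation requires when the traits themselves are constant.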
Simple Price equation
When the characteristic values do not change from the parent to the child generation, the second term in the Price equation becomes zero, resulting in a simplified version of the Price equation:

Δz̄ = cov(w_i, z_i)/w̄

which can be restated as:

Δz̄ = cov(v_i, z_i)

where v_i is the fractional fitness: v_i = w_i/w̄.
This simple Price equation can be proven using the definition in Equation (2) above. It makes this fundamental statement about evolution: "If a certain inheritable characteristic is correlated with an increase in fractional fitness, the average value of that characteristic in the child population will be increased over that in the parent population."
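A quick numerical check of the simple Price equation (Python; the subpopulation sizes, character values and fitnesses are arbitrary example numbers): with unchanged character values, the product of the mean fitness and the change in the mean character equals the covariance of fitness and character over the parent population.

# Numerical check: w_bar * delta_z_bar = cov(w, z) when character values do not change.
n = [10, 20, 30]       # parent subpopulation sizes
z = [1.0, 2.0, 3.0]    # character value of each subpopulation
w = [2.0, 1.0, 0.5]    # fitness: offspring per parent in each subpopulation

n_child = [ni * wi for ni, wi in zip(n, w)]

def average(values, weights):
    total = sum(weights)
    return sum(v * wt for v, wt in zip(values, weights)) / total

z_parent = average(z, n)        # mean character in the parent generation
z_child  = average(z, n_child)  # mean character in the child generation (same z values)
w_bar    = sum(n_child) / sum(n)

q = [ni / sum(n) for ni in n]   # parent population frequencies
cov_wz = sum(qi * (wi - w_bar) * (zi - z_parent) for qi, wi, zi in zip(q, w, z))

print(f"w_bar * delta_z_bar = {w_bar * (z_child - z_parent):.6f}")
print(f"cov(w, z)           = {cov_wz:.6f}")

Both lines print the same value (-0.388889 with these numbers), in agreement with the identity above.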
Applications
The Price equation can describe any system that changes over time, but is most often applied in evolutionary biology. The evolution of sight provides an example of simple directional selection. The evolution of sickle cell anemia shows how a heterozygote advantage can affect trait evolution. The Price equation can also be applied to population-context-dependent traits such as the evolution of sex ratios. Additionally, the Price equation is flexible enough to model second-order traits such as the evolution of mutability. The Price equation also provides an extension to the founder effect, which shows how population traits change in different settlements.
Dynamical sufficiency and the simple Price equation
Sometimes the genetic model being used encodes enough information into the parameters used by the Price equation to allow the calculation of the parameters for all subsequent generations. This property is referred to as dynamical sufficiency. For simplicity, the following looks at dynamical sufficiency for the simple Price equation, but is also valid for the full Price equation.
Referring to the definition in Equation (2), the simple Price equation for the character can be written:
For the second generation:
The simple Price equation for only gives us the value of for the first generation, but does not give us the value of and , which are needed to calculate for the second generation. The variables and can both be thought of as characteristics of the first generation, so the Price equation can be used to calculate them as well:
The five 0-generation variables , , , , and must be known before proceeding to calculate the three first generation variables , , and , which are needed to calculate for the second generation. It can be seen that in general the Price equation cannot be used to propagate forward in time unless there is a way of calculating the higher moments and from the lower moments in a way that is independent of the generation. Dynamical sufficiency means that such equations can be found in the genetic model, allowing the Price equation to be used alone as a propagator of the dynamics of the model forward in time.
Full Price equation
The simple Price equation was based on the assumption that the characters do not change over one generation. If it is assumed that they do change, with z_i' being the value of the character in the child population, then the full Price equation must be used. A change in character can come about in a number of ways. The following two examples illustrate two such possibilities, each of which introduces new insight into the Price equation.
Genotype fitness
We focus on the idea of the fitness of the genotype. The index indicates the genotype and the number of type genotypes in the child population is:
which gives fitness:
Since the individual mutability does not change, the average mutabilities will be:
with these definitions, the simple Price equation now applies.
Lineage fitness
In this case we want to look at the idea that fitness is measured by the number of children an organism has, regardless of their genotype. Note that we now have two methods of grouping, by lineage, and by genotype. It is this complication that will introduce the need for the full Price equation. The number of children an -type organism has is:
which gives fitness:
We now have characters in the child population which are the average character of the -th parent.
with global characters:
with these definitions, the full Price equation now applies.
Criticism
The use of the change in average characteristic (Δz) per generation as a measure of evolutionary progress is not always appropriate. There may be cases where the average remains unchanged (and the covariance between fitness and characteristic is zero) while evolution is nevertheless in progress. For example, if we have z_i = (1, 2, 3), n_i = (1, 1, 1), and w_i = (1, 4, 1), then for the child population n_i' = (1, 4, 1), showing that the peak fitness at z_2 = 2 is in fact fractionally increasing the population of individuals with z_i = 2. However, the average characteristics are z = 2 and z' = 2, so that Δz = 0. The covariance is also zero. The simple Price equation is required here, and it yields 0 = 0. In other words, it yields no information regarding the progress of evolution in this system.
A critical discussion of the use of the Price equation can be found in van Veelen (2005), van Veelen et al. (2012), and van Veelen (2020). Frank (2012) discusses the criticism in van Veelen et al. (2012).
Cultural references
Price's equation features in the plot and title of the 2008 thriller film WΔZ.
The Price equation also features in posters in the computer game BioShock 2, in which a consumer of a "Brain Boost" tonic is seen deriving the Price equation while simultaneously reading a book. The game is set in the 1950s, substantially before Price's work.
See also
The breeder's equation, which is a special case of the Price equation.
References
Further reading
Equations
Evolutionary dynamics
Evolutionary biology
Population genetics | Price equation | [
"Mathematics",
"Biology"
] | 1,968 | [
"Evolutionary biology",
"Mathematical objects",
"Equations"
] |
1,076,615 | https://en.wikipedia.org/wiki/The%20Road%20to%20Reality | The Road to Reality: A Complete Guide to the Laws of the Universe is a book on modern physics by the British mathematical physicist Roger Penrose, published in 2004. It covers the basics of the Standard Model of particle physics, discussing general relativity and quantum mechanics, and discusses the possible unification of these two theories.
Overview
The book discusses the physical world. Many fields that 19th century scientists believed were separate, such as electricity and magnetism, are aspects of more fundamental properties. Some texts, both popular and university level, introduce these topics as separate concepts, and then reveal their combination much later. The Road to Reality reverses this process, first expounding the underlying mathematics of space–time, then showing how electromagnetism and other phenomena fall out fully formed.
The book is just over 1100 pages, of which the first 383 are dedicated to mathematics—Penrose's goal is to acquaint inquisitive readers with the mathematical tools needed to understand the remainder of the book in depth. Physics enters the discussion on page 383 with the topic of spacetime. From there it moves on to fields in spacetime, deriving the classical electrical and magnetic forces from first principles; that is, if one lives in spacetime of a particular sort, these fields develop naturally as a consequence. Energy and conservation laws appear in the discussion of Lagrangians and Hamiltonians, before moving on to a full discussion of quantum physics, particle theory and quantum field theory. A discussion of the measurement problem in quantum mechanics is given a full chapter; superstrings are given a chapter near the end of the book, as are loop gravity and twistor theory. The book ends with an exploration of other theories and possible ways forward.
The final chapters reflect Penrose's personal perspective, which differs in some respects from what he regards as the current fashion among theoretical physicists. He is skeptical about string theory, to which he prefers loop quantum gravity. He is optimistic about his own approach, twistor theory. He also holds some controversial views about the role of consciousness in physics, as laid out in his earlier books (see Shadows of the Mind).
Reception
According to Brian Blank:
According to Nicholas Lezard:
According to Lee Smolin:
According to Frank Wilczek:
Editions
Jonathan Cape (1st edition), 2004, hardcover,
Alfred A. Knopf (publisher), February 2005, hardcover,
Vintage Books, 2005, softcover,
Vintage Books, 2006, softcover,
Vintage Books, 2007, softcover,
References
External links
Site with errata and solutions to some exercises from the first few chapters. Not sponsored by Penrose.
Archive of the Road to Reality internet forum, now defunct.
Solutions for many Road to Reality exercises.
2004 non-fiction books
Alfred A. Knopf books
Cosmology books
Mathematics books
Popular physics books
Quantum mind
String theory books
Works by Roger Penrose | The Road to Reality | [
"Physics"
] | 592 | [
"Quantum mind",
"Quantum mechanics"
] |
1,077,261 | https://en.wikipedia.org/wiki/Spin%20quantum%20number | In physics and chemistry, the spin quantum number is a quantum number (designated ) that describes the intrinsic angular momentum (or spin angular momentum, or simply spin) of an electron or other particle. It has the same value for all particles of the same type, such as = for all electrons. It is an integer for all bosons, such as photons, and a half-odd-integer for all fermions, such as electrons and protons.
The component of the spin along a specified axis is given by the spin magnetic quantum number, conventionally written m_s. The value of m_s is the component of spin angular momentum, in units of the reduced Planck constant ħ, parallel to a given direction (conventionally labelled the z-axis). It can take values ranging from +s to −s in integer increments. For an electron, m_s can be either +1/2 or −1/2.
Nomenclature
The phrase spin quantum number refers to quantized spin angular momentum.
The symbol s is used for the spin quantum number, and m_s is described as the spin magnetic quantum number or as the z-component of spin s_z.
Both the total spin and the z-component of spin are quantized, leading to two quantum numbers: the spin and spin magnetic quantum numbers. The (total) spin quantum number has only one value for every elementary particle. Some introductory chemistry textbooks describe m_s as the spin quantum number, and s is not mentioned since its value is a fixed property of the electron; some even use the variable s in place of m_s.
The two spin quantum numbers s and m_s are the spin angular momentum analogs of the two orbital angular momentum quantum numbers l and m_l.
Spin quantum numbers apply also to systems of coupled spins, such as atoms that may contain more than one electron. Capitalized symbols are used: S for the total electronic spin, and m_S or M_S for the z-axis component. A pair of electrons in a spin singlet state has S = 0, and a pair in the triplet state has S = 1, with M_S = −1, 0, or +1. Nuclear-spin quantum numbers are conventionally written I for spin, and m_I or M_I for the z-axis component.
The name "spin" comes from a geometrical spinning of the electron about an axis, as proposed by Uhlenbeck and Goudsmit. However, this simplistic picture was quickly realized to be physically unrealistic, because it would require the electrons to rotate faster than the speed of light. It was therefore replaced by a more abstract quantum-mechanical description.
History
During the period between 1916 and 1925, much progress was being made concerning the arrangement of electrons in the periodic table. In order to explain the Zeeman effect in the Bohr atom, Sommerfeld proposed that electrons would be based on three 'quantum numbers', n, k, and m, that described the size of the orbit, the shape of the orbit, and the direction in which the orbit was pointing. Irving Langmuir had explained in his 1919 paper regarding electrons in their shells, "Rydberg has pointed out that these numbers are obtained from the series . The factor two suggests a fundamental two-fold symmetry for all stable atoms." This configuration was adopted by Edmund Stoner, in October 1924 in his paper 'The Distribution of Electrons Among Atomic Levels' published in the Philosophical Magazine.
The qualitative success of the Sommerfeld quantum number scheme failed to explain the Zeeman effect in weak magnetic field strengths, the anomalous Zeeman effect. In December 1924,
Wolfgang Pauli showed that the core electron angular momentum was not related to the effect as had previously been assumed. Rather he proposed that only the outer "light" electrons determined the angular momentum and he
hypothesized that this required a fourth quantum number with a two-valuedness. This fourth quantum number became the spin magnetic quantum number.
Electron spin
A spin-1/2 particle is characterized by an angular momentum quantum number for spin s = 1/2. In solutions of the Schrödinger–Pauli equation, angular momentum is quantized according to this number, so that the magnitude of the spin angular momentum is
S = ħ √(s(s + 1)) = (√3 / 2) ħ.
The hydrogen spectrum fine structure is observed as a doublet corresponding to two possibilities for the z-component of the angular momentum, where for any given direction z:
S_z = m_s ħ = ±ħ/2
whose solution has only two possible z-components for the electron. In the electron, the two different spin orientations are sometimes called "spin-up" or "spin-down".
whose solution has only two possible -components for the electron. In the electron, the two different spin orientations are sometimes called "spin-up" or "spin-down".
The spin property of an electron would give rise to magnetic moment, which was a requisite for the fourth quantum number.
The magnetic moment vector of an electron spin is given by:
μ_s = −(g_s e / 2m_e) S
where e is the electron charge, m_e is the electron mass, and g_s is the electron spin g-factor, which is approximately 2.0023.
Its z-axis projection is given by the spin magnetic quantum number according to:
μ_z = −g_s μ_B m_s
where μ_B is the Bohr magneton.
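For concreteness, a short Python sketch evaluates these quantities for an electron. The constants are rounded CODATA-style values, so the printed numbers are approximate, and the sign convention for μ_z follows the formula above:

```python
import math

hbar = 1.054571817e-34      # reduced Planck constant, J*s
mu_B = 9.2740100783e-24     # Bohr magneton, J/T
g_s  = 2.00231930436        # electron spin g-factor (approximate)

s = 0.5                                   # spin quantum number of the electron
S_mag = hbar * math.sqrt(s * (s + 1))     # magnitude of the spin angular momentum
print(f"|S| = {S_mag:.3e} J*s")

for m_s in (+0.5, -0.5):                  # the two allowed z-projections
    S_z  = m_s * hbar                     # spin z-component
    mu_z = -g_s * mu_B * m_s              # magnetic moment z-projection
    print(f"m_s = {m_s:+.1f}: S_z = {S_z:.3e} J*s, mu_z = {mu_z:.3e} J/T")
```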
When atoms have even numbers of electrons the spin of each electron in each orbital has opposing orientation to that of its immediate neighbor(s). However, many atoms have an odd number of electrons or an arrangement of electrons in which there is an unequal number of "spin-up" and "spin-down" orientations. These atoms or electrons are said to have unpaired spins that are detected in electron spin resonance.
Nuclear spin
Atomic nuclei also have spins. The nuclear spin I is a fixed property of each nucleus and may be either an integer or a half-integer. The component of nuclear spin parallel to the z-axis can have (2I + 1) values I, I − 1, ..., −I. For example, a ¹⁴N nucleus has I = 1, so that there are 3 possible orientations relative to the z-axis, corresponding to states m_I = +1, 0 and −1.
The spins of different nuclei are interpreted using the nuclear shell model. Even–even nuclei with even numbers of both protons and neutrons, such as ¹²C and ¹⁶O, have spin zero. Odd mass number nuclei have half-integer spins, such as 3/2 for ⁷Li, 1/2 for ¹³C and 5/2 for ¹⁷O, usually corresponding to the angular momentum of the last nucleon added. Odd–odd nuclei with odd numbers of both protons and neutrons have integer spins, such as 3 for ¹⁰B and 1 for ¹⁴N. Values of nuclear spin for a given isotope are found in the lists of isotopes for each element. (See isotopes of oxygen, isotopes of aluminium, etc.)
Detection of spin
When lines of the hydrogen spectrum are examined at very high resolution, they are found to be closely spaced doublets. This splitting is called fine structure, and was one of the first experimental evidences for electron spin. The direct observation of the electron's intrinsic angular momentum was achieved in the Stern–Gerlach experiment.
Stern–Gerlach experiment
The theory of spatial quantization of the spin moment of the momentum of electrons of atoms situated in the magnetic field needed to be proved experimentally. In 1922 (two years before the theoretical description of the spin was created) Otto Stern and Walter Gerlach observed it in the experiment they conducted.
Silver atoms were evaporated using an electric furnace in a vacuum. Using thin slits, the atoms were guided into a flat beam and the beam sent through an in-homogeneous magnetic field before colliding with a metallic plate. The laws of classical physics predict that the collection of condensed silver atoms on the plate should form a thin solid line in the same shape as the original beam. However, the in-homogeneous magnetic field caused the beam to split in two separate directions, creating two lines on the metallic plate.
The phenomenon can be explained with the spatial quantization of the spin moment of momentum. In atoms the electrons are paired such that one spins upward and one downward, neutralizing the effect of their spin on the action of the atom as a whole. But in the valence shell of silver atoms, there is a single electron whose spin remains unbalanced.
The unbalanced spin creates a spin magnetic moment, making the electron act like a very small magnet. As the atoms pass through the in-homogeneous magnetic field, the force moment in the magnetic field influences the electron's dipole until its position matches the direction of the stronger field. The atom would then be pulled toward or away from the stronger magnetic field a specific amount, depending on the value of the valence electron's spin. Depending on whether that spin is +1/2 or −1/2, the atom moves away from or toward the stronger field. Thus the beam of silver atoms is split while traveling through the in-homogeneous magnetic field, according to the spin of each atom's valence electron.
In 1927 Phipps and Taylor conducted a similar experiment, using atoms of hydrogen with similar results. Later scientists conducted experiments using other atoms that have only one electron in their valence shell: (copper, gold, sodium, potassium). Every time there were two lines formed on the metallic plate.
The atomic nucleus also may have spin, but protons and neutrons are much heavier than electrons (about 1836 times), and the magnetic dipole moment is inversely proportional to the mass. So the nuclear magnetic dipole moment is much smaller than that of the whole atom. This small magnetic dipole was later measured by Stern, Frisch and Estermann.
Electron paramagnetic resonance
For atoms or molecules with an unpaired electron, transitions in a magnetic field can also be observed in which only the spin quantum number changes, without change in the electron orbital or the other quantum numbers. This is the method of electron paramagnetic resonance (EPR) or electron spin resonance (ESR), used to study free radicals. Since only the magnetic interaction of the spin changes, the energy change is much smaller than for transitions between orbitals, and the spectra are observed in the microwave region.
Relation to spin vectors
For a solution of either the nonrelativistic Pauli equation or the relativistic Dirac equation, the quantized angular momentum (see angular momentum quantum number) can be written as:
‖s‖ = ħ √(s(s + 1))
where
s is the quantized spin vector or spinor
‖s‖ is the norm of the spin vector
s is the spin quantum number associated with the spin angular momentum
ħ is the reduced Planck constant.
Given an arbitrary direction z (usually determined by an external magnetic field) the spin z-projection is given by
s_z = m_s ħ
where m_s is the magnetic spin quantum number, ranging from −s to +s in steps of one. This generates 2s + 1 different values of m_s.
The allowed values for s are non-negative integers or half-integers. Fermions have half-integer values, including the electron, proton and neutron, which all have s = 1/2. Bosons (such as the photon and all mesons) have integer spin values.
Algebra
The algebraic theory of spin is a carbon copy of the theory of angular momentum in quantum mechanics.
First of all, spin satisfies the fundamental commutation relation:
[S_i, S_j] = i ħ ε_ijk S_k
where ε_ijk is the (antisymmetric) Levi-Civita symbol. This means that it is impossible to know two coordinates of the spin at the same time because of the restriction of the uncertainty principle.
Next, the eigenvectors of S² and S_z satisfy:
S² |s, m_s⟩ = ħ² s(s + 1) |s, m_s⟩
S_z |s, m_s⟩ = ħ m_s |s, m_s⟩
S_± |s, m_s⟩ = ħ √(s(s + 1) − m_s(m_s ± 1)) |s, m_s ± 1⟩
where S_± = S_x ± i S_y are the ladder (or "raising" and "lowering") operators.
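A small NumPy check for the spin-1/2 case (built from the Pauli matrices, working in units where ħ = 1) verifies the commutation relation and the action of a ladder operator; it is only an illustrative sketch:

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1
# spin-1/2 operators S_i = (hbar/2) * sigma_i built from the Pauli matrices
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

# fundamental commutation relation: [S_x, S_y] = i*hbar*S_z
comm = Sx @ Sy - Sy @ Sx
print(np.allclose(comm, 1j * hbar * Sz))   # True

# ladder operator S_+ = S_x + i*S_y raises the m_s eigenstate of S_z
S_plus = Sx + 1j * Sy
down = np.array([0, 1], dtype=complex)     # m_s = -1/2 eigenstate of S_z
print(S_plus @ down)                       # proportional to the m_s = +1/2 state
```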
Energy levels from the Dirac equation
In 1928, Paul Dirac developed a relativistic wave equation, now termed the Dirac equation, which predicted the spin magnetic moment correctly, and at the same time treated the electron as a point-like particle. Solving the Dirac equation for the energy levels of an electron in the hydrogen atom, all four quantum numbers including occurred naturally and agreed well with experiment.
Total spin of an atom or molecule
For some atoms the spins of several unpaired electrons (s_1, s_2, ...) are coupled to form a total spin quantum number S. This occurs especially in light atoms (or in molecules formed only of light atoms) when spin–orbit coupling is weak compared to the coupling between spins or the coupling between orbital angular momenta, a situation known as LS coupling because L and S are constants of motion. Here L is the total orbital angular momentum quantum number.
For atoms with a well-defined S, the multiplicity of a state is defined as 2S + 1. This is equal to the number of different possible values of the total (orbital plus spin) angular momentum J for a given (L, S) combination, provided that S ≤ L (the typical case). For example, if S = 1, there are three states which form a triplet. The eigenvalues of S_z for these three states are +1ħ, 0, and −1ħ. The term symbol of an atomic state indicates its values of L, S, and J.
As examples, the ground states of both the oxygen atom and the dioxygen molecule have two unpaired electrons and are therefore triplet states. The atomic state is described by the term symbol ³P, and the molecular state by the term symbol ³Σ.
See also
Total angular momentum quantum number
Rotational spectroscopy
Basic quantum mechanics
References
External links
Atomic physics
Rotation in three dimensions
Rotational symmetry
Quantum numbers
Quantum models | Spin quantum number | [
"Physics",
"Chemistry"
] | 2,611 | [
"Quantum chemistry",
"Quantum mechanics",
"Quantum numbers",
"Quantum models",
"Atomic physics",
" molecular",
" and optical physics",
"Atomic",
"Symmetry",
"Rotational symmetry"
] |
1,077,843 | https://en.wikipedia.org/wiki/Yutaka%20Taniyama | was a Japanese mathematician known for the Taniyama–Shimura conjecture.
Life
Taniyama was born on 22 November 1927 in Kisai, a town in Saitama. He was the sixth of eight children born to a doctor's family. He studied at Urawa High School (present-day Saitama University) after graduating from Fudouoka Middle School. He suspended his studies for two years due to ill health, but finally graduated in 1950. During his college years, Taniyama aspired to be a mathematician after reading Teiji Takagi's work.
In 1958, Taniyama became an associate professor at the University of Tokyo, after years there as an assistant. He also obtained his doctorate from the university in May. In October, Taniyama was engaged to be married to Misako Suzuki, while the Institute for Advanced Study in Princeton, New Jersey offered him a position.
On 17 November 1958, Taniyama committed suicide by poisoning himself with gas. He left a note explaining how far he had progressed with his teaching duties, and apologizing to his colleagues for the trouble he was causing them. The first paragraph of his suicide note read (quoted in Shimura, 1989):
Until yesterday I had no definite intention of killing myself. But more than a few must have noticed that lately I have been tired both physically and mentally. As to the cause of my suicide, I don't quite understand it myself, but it is not the result of a particular incident, nor of a specific matter. Merely may I say, I am in the frame of mind that I lost confidence in my future. There may be someone to whom my suicide will be troubling or a blow to a certain degree. I sincerely hope that this incident will cast no dark shadow over the future of that person. At any rate, I cannot deny that this is a kind of betrayal, but please excuse it as my last act in my own way, as I have been doing my own way all my life.
Although his note is mostly enigmatic it does mention tiredness and a loss of confidence in his future. Taniyama's ideas had been criticized as unsubstantiated and his behavior had occasionally been deemed peculiar. Goro Shimura mentioned that he suffered from depression.
About a month later, Suzuki also committed suicide by gas, leaving a note reading: "We promised each other that no matter where we went, we would never be separated. Now that he is gone, I must go too in order to join him."
After Taniyama's death, Goro Shimura stated that:
He was always kind to his colleagues, especially to his juniors, and he genuinely cared about their welfare. He was the moral support of many of those who came into mathematical contact with him, including of course myself. Probably he was never conscious of this role he was playing. But I feel his noble generosity in this respect even more strongly now than when he was alive. And yet nobody was able to give him any support when he desperately needed it. Reflecting on this, I am overwhelmed by the bitterest grief.
Contribution
Taniyama was best known for conjecturing, in modern language, automorphic properties of L-functions of elliptic curves over any number field. A partial and refined case of this conjecture for elliptic curves over rationals is called the Taniyama–Shimura conjecture or the modularity theorem whose statement he subsequently refined in collaboration with Goro Shimura. The names Taniyama, Shimura and Weil have all been attached to this conjecture, but the idea is essentially due to Taniyama.
Taniyama's interests were in algebraic number theory. His work was influenced by André Weil, whom he met during the 1955 symposium on algebraic number theory, at which Taniyama became known for the problems he proposed.
Taniyama's problems proposed in 1955 form the basis of a Taniyama–Shimura conjecture, that "every elliptic curve defined over the rational field is a factor of the Jacobian of a modular function field". In 1986, Ken Ribet proved that if the Taniyama–Shimura conjecture held, then so would Fermat's Last Theorem, which inspired Andrew Wiles to work for a number of years in secrecy on it, and to prove enough of it to prove Fermat's Last Theorem. Owing to the pioneering contribution of Wiles and the efforts of a number of mathematicians, the Taniyama–Shimura conjecture was finally proven in 1999. The original Taniyama conjecture for elliptic curves over arbitrary number fields remains open.
Goro Shimura stated:
Taniyama was not a very careful person as a mathematician. He made a lot of mistakes. But he made mistakes in a good direction and so eventually he got right answers. I tried to imitate him, but I found out that it is very difficult to make good mistakes.
See also
Taniyama group
Taniyama's problems
Notes
Publications
This book is hard to find, but an expanded version was later published as
References
Singh, Simon (hardcover, 1998). Fermat's Enigma. Bantam Books. (previously published under the title Fermat's Last Theorem).
External links
1927 births
1958 suicides
People from Saitama Prefecture
Japanese mathematicians
20th-century Japanese mathematicians
Number theorists
Japanese scientists
University of Tokyo alumni
Suicides in Japan
1958 deaths | Yutaka Taniyama | [
"Mathematics"
] | 1,106 | [
"Number theorists",
"Number theory"
] |
1,078,104 | https://en.wikipedia.org/wiki/Siladium | Siladium is a trademark for a stainless steel alloy used in jewelry, particularly in high school and college class rings.
The trademark was registered in 1973 to John Roberts, Inc., maker of the Artcarved brand of class rings. John Roberts, Inc., and the Siladium trademark were subsequently acquired by CJC Holdings, Inc., and then by Commemorative Brands, Inc.
References
Steel alloys
Jewellery making | Siladium | [
"Chemistry"
] | 84 | [
"Alloys",
"Alloy stubs"
] |
1,078,359 | https://en.wikipedia.org/wiki/Pharmacophore | In medicinal chemistry and molecular biology, a pharmacophore is an abstract description of molecular features that are necessary for molecular recognition of a ligand by a biological macromolecule. IUPAC defines a pharmacophore to be "an ensemble of steric and electronic features that is necessary to ensure the optimal supramolecular interactions with a specific biological target and to trigger (or block) its biological response". A pharmacophore model explains how structurally diverse ligands can bind to a common receptor site. Furthermore, pharmacophore models can be used to identify through de novo design or virtual screening novel ligands that will bind to the same receptor.
Features
Typical pharmacophore features include hydrophobic centroids, aromatic rings, hydrogen bond acceptors or donors, cations, and anions. These pharmacophore points may be located on the ligand itself or may be projected points presumed to be located in the receptor.
The features need to match different chemical groups with similar properties, in order to identify novel ligands. Ligand-receptor interactions are typically "polar positive", "polar negative" or "hydrophobic". A well-defined pharmacophore model includes both hydrophobic volumes and hydrogen bond vectors.
Model development
The process for developing a pharmacophore model generally involves the following steps:
Select a training set of ligands – Choose a structurally diverse set of molecules that will be used for developing the pharmacophore model. As a pharmacophore model should be able to discriminate between molecules with and without bioactivity, the set of molecules should include both active and inactive compounds.
Conformational analysis – Generate a set of low energy conformations that is likely to contain the bioactive conformation for each of the selected molecules.
Molecular superimposition – Superimpose ("fit") all combinations of the low-energy conformations of the molecules. Similar (bioisosteric) functional groups common to all molecules in the set might be fitted (e.g., phenyl rings or carboxylic acid groups). The set of conformations (one conformation from each active molecule) that results in the best fit is presumed to be the active conformation.
Abstraction – Transform the superimposed molecules into an abstract representation. For example, superimposed phenyl rings might be referred to more conceptually as an 'aromatic ring' pharmacophore element. Likewise, hydroxy groups could be designated as a 'hydrogen-bond donor/acceptor' pharmacophore element.
Validation – A pharmacophore model is a hypothesis accounting for the observed biological activities of a set of molecules that bind to a common biological target. The model is only valid insofar as it is able to account for differences in biological activity of a range of molecules.
As the biological activities of new molecules become available, the pharmacophore model can be updated to further refine it.
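As a purely conceptual sketch of what "matching" a pharmacophore means (not using any actual cheminformatics toolkit; the feature types, distance ranges and coordinates below are invented for illustration), a model can be represented as required feature pairs with allowed distance ranges, and a candidate conformation is accepted if all pair distances fall within range:

```python
import math

# A toy pharmacophore model: required feature pairs and distance ranges (angstroms).
# Feature names and distances are invented for illustration only.
model = [
    ("aromatic_ring", "h_bond_donor", (4.0, 6.0)),
    ("aromatic_ring", "hydrophobic", (3.0, 5.5)),
]

# A candidate ligand conformation: feature type -> 3D coordinates (invented).
ligand = {
    "aromatic_ring": (0.0, 0.0, 0.0),
    "h_bond_donor": (4.8, 0.5, 0.0),
    "hydrophobic": (0.0, 4.1, 1.0),
}

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def matches(model, ligand):
    for f1, f2, (lo, hi) in model:
        if f1 not in ligand or f2 not in ligand:
            return False
        if not (lo <= dist(ligand[f1], ligand[f2]) <= hi):
            return False
    return True

print(matches(model, ligand))   # True if all feature-pair distances fall in range
```

Real pharmacophore software additionally handles conformational flexibility, projected receptor points and excluded volumes, but the distance-tolerance matching idea is the same.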
Applications
In modern computational chemistry, pharmacophores are used to define the essential features of one or more molecules with the same biological activity. A database of diverse chemical compounds can then be searched for more molecules which share the same features arranged in the same relative orientation. Pharmacophores are also used as the starting point for developing 3D-QSAR models. Such tools and a related concept of "privileged structures", which are "defined as molecular frameworks which are able of providing useful ligands for more than one type of receptor or enzyme target by judicious structural modifications", aid in drug discovery.
History
Historically, the modern idea of pharmacophore was popularized by Lemont Kier, who mentions the concept in 1967 and uses the term in a publication in 1971. Nevertheless, F. W. Schueler, in a 1960s book, uses the expression "pharmacophoric moiety" that corresponds to the modern concept.
The development of the concept is often erroneously accredited to Paul Ehrlich. However neither the alleged source nor any of his other works mention the term "pharmacophore" or make use of the concept.
See also
Cheminformatics
Molecule mining
Pharmaceutical company
QSAR
in silico
References
Further reading
External links
The following computer software packages enable the user to model the pharmacophore using a variety of computational chemistry methods:
Discovery Studio
LigandScout
Phase
MOE - Pharmacophore Discovery
ICM-Chemist
ZINCPharmer
Pharmit
Medicinal chemistry
Cheminformatics | Pharmacophore | [
"Chemistry",
"Biology"
] | 933 | [
"Computational chemistry",
"Cheminformatics",
"Medicinal chemistry",
"Biochemistry",
"nan"
] |
369,482 | https://en.wikipedia.org/wiki/Addition%20reaction | In organic chemistry, an addition reaction is an organic reaction in which two or more molecules combine to form a larger molecule called the adduct.
An addition reaction is limited to chemical compounds that have multiple bonds. Examples include a molecule with a carbon–carbon double bond (an alkene) or a triple bond (an alkyne). Another example is a compound that has rings (which are also considered points of unsaturation). A molecule that has a carbon–heteroatom double bond, such as a carbonyl group (C=O) or an imine group (C=N), can also undergo addition reactions because of its double bond.
An addition reaction is the reverse of an elimination reaction, in which one molecule divides into two or more molecules. For instance, the hydration of an alkene to an alcohol is reversed by dehydration.
There are two main types of polar addition reactions: electrophilic addition and nucleophilic addition. Two non-polar addition reactions exist as well, called free-radical addition and cycloadditions. Addition reactions are also encountered in polymerizations and called addition polymerization.
Depending on the product structure, it could promptly react further to eject a leaving group to give the addition–elimination reaction sequence.
Addition reactions are useful in analytical chemistry, as they can identify the existence and number of double bonds in a molecule. For example, bromine addition will consume a bromine solution, resulting in a color change:
RR'C=CR''R''' + Br2(orange-brown) ->[{}\atop\ce{CCl4}] RR'CBr-BrCR''R'''(typically\ colorless)
Likewise, hydrogen addition often proceeds on all double bonds of a molecule, and thus gives a count of the number of double and triple bonds through stoichiometry:
{(H2C=CH)2} + 2H2 ->[{}\atop\ce{Pt}/\ce{Pd}] (H3C-CH2)2
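As a back-of-the-envelope illustration of that stoichiometric counting (the amounts below are hypothetical), dividing the moles of H2 consumed by the moles of substrate gives the number of H2 equivalents, i.e. the degree of unsaturation removed by hydrogenation:

```python
# Hypothetical hydrogenation data: moles of H2 taken up per mole of substrate
# indicate how many C=C equivalents (a C#C triple bond counts twice) were saturated.
moles_substrate   = 0.050   # mol of the unknown compound (hypothetical)
moles_h2_consumed = 0.150   # mol of H2 absorbed over the catalyst (hypothetical)

equivalents = moles_h2_consumed / moles_substrate
print(f"H2 equivalents per molecule: {equivalents:.1f}")   # 3.0 -> e.g. three C=C bonds
```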
References
External links
Reaction mechanisms | Addition reaction | [
"Chemistry"
] | 437 | [
"Reaction mechanisms",
"Chemical kinetics",
"Physical organic chemistry"
] |
370,346 | https://en.wikipedia.org/wiki/Frequency%20domain | In mathematics, physics, electronics, control systems engineering, and statistics, the frequency domain refers to the analysis of mathematical functions or signals with respect to frequency (and possibly phase), rather than time, as in time series. Put simply, a time-domain graph shows how a signal changes over time, whereas a frequency-domain graph shows how the signal is distributed within different frequency bands over a range of frequencies. A complex valued frequency-domain representation consists of both the magnitude and the phase of a set of sinusoids (or other basis waveforms) at the frequency components of the signal. Although it is common to refer to the magnitude portion (the real valued frequency-domain) as the frequency response of a signal, the phase portion is required to uniquely define the signal.
A given function or signal can be converted between the time and frequency domains with a pair of mathematical operators called transforms. An example is the Fourier transform, which converts a time function into a complex valued sum or integral of sine waves of different frequencies, with amplitudes and phases, each of which represents a frequency component. The "spectrum" of frequency components is the frequency-domain representation of the signal. The inverse Fourier transform converts the frequency-domain function back to the time-domain function. A spectrum analyzer is a tool commonly used to visualize electronic signals in the frequency domain.
A frequency-domain representation may describe either a static function or a particular time period of a dynamic function (signal or system). The frequency transform of a dynamic function is performed over a finite time period of that function and assumes the function repeats infinitely outside of that time period. Some specialized signal processing techniques for dynamic functions use transforms that result in a joint time–frequency domain, with the instantaneous frequency response being a key link between the time domain and the frequency domain.
Advantages
One of the main reasons for using a frequency-domain representation of a problem is to simplify the mathematical analysis. For mathematical systems governed by linear differential equations, a very important class of systems with many real-world applications, converting the description of the system from the time domain to a frequency domain converts the differential equations to algebraic equations, which are much easier to solve.
In addition, looking at a system from the point of view of frequency can often give an intuitive understanding of the qualitative behavior of the system, and a revealing scientific nomenclature has grown up to describe it, characterizing the behavior of physical systems to time varying inputs using terms such as bandwidth, frequency response, gain, phase shift, resonant frequencies, time constant, resonance width, damping factor, Q factor, harmonics, spectrum, power spectral density, eigenvalues, poles, and zeros.
An example of a field in which frequency-domain analysis gives a better understanding than time domain is music; the theory of operation of musical instruments and the musical notation used to record and discuss pieces of music is implicitly based on the breaking down of complex sounds into their separate component frequencies (musical notes).
Magnitude and phase
In using the Laplace, Z-, or Fourier transforms, a signal is described by a complex function of frequency: the component of the signal at any given frequency is given by a complex number. The modulus of the number is the amplitude of that component, and the argument is the relative phase of the wave. For example, using the Fourier transform, a sound wave, such as human speech, can be broken down into its component tones of different frequencies, each represented by a sine wave of a different amplitude and phase. The response of a system, as a function of frequency, can also be described by a complex function. In many applications, phase information is not important. By discarding the phase information, it is possible to simplify the information in a frequency-domain representation to generate a frequency spectrum or spectral density. A spectrum analyzer is a device that displays the spectrum, while the time-domain signal can be seen on an oscilloscope.
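A minimal NumPy sketch (the sampling rate, tone frequencies, amplitudes and phases are chosen arbitrarily for illustration) shows how a time-domain signal decomposes into a magnitude and a phase at each frequency component:

```python
import numpy as np

fs = 1000                       # sampling rate in Hz (arbitrary choice)
t = np.arange(0, 1, 1 / fs)     # one second of samples
# time-domain signal: 50 Hz and 120 Hz tones with different amplitudes and phases
x = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t + np.pi / 4)

X = np.fft.rfft(x)                       # complex frequency-domain representation
freqs = np.fft.rfftfreq(len(x), 1 / fs)  # frequency axis in Hz
magnitude = np.abs(X) * 2 / len(x)       # amplitude of each sinusoidal component
phase = np.angle(X)                      # relative phase of each component

for f0 in (50, 120):
    k = np.argmin(np.abs(freqs - f0))
    print(f"{freqs[k]:.0f} Hz: amplitude {magnitude[k]:.2f}, phase {phase[k]:+.2f} rad")
```

Discarding the `phase` array and keeping only `magnitude` corresponds to the simplified frequency-spectrum view described above.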
Types
Although "the" frequency domain is spoken of in the singular, there are a number of different mathematical transforms which are used to analyze time-domain functions and are referred to as "frequency domain" methods. These are the most common transforms, and the fields in which they are used:
Fourier series – periodic signals, oscillating systems.
Fourier transform – aperiodic signals, transients.
Laplace transform – electronic circuits and control systems.
Z transform – discrete-time signals, digital signal processing.
Wavelet transform — image analysis, data compression.
More generally, one can speak of the transform domain with respect to any transform. The above transforms can be interpreted as capturing some form of frequency, and hence the transform domain is referred to as a frequency domain.
Discrete frequency domain
A discrete frequency domain is a frequency domain that is discrete rather than continuous.
For example, the discrete Fourier transform maps a function having a discrete time domain into one having a discrete frequency domain. The discrete-time Fourier transform, on the other hand, maps functions with discrete time (discrete-time signals) to functions that have a continuous frequency domain.
A periodic signal has energy only at a base frequency and its harmonics; thus it can be analyzed using a discrete frequency domain. A discrete-time signal gives rise to a periodic frequency spectrum. In a situation where both these conditions occur, a signal which is discrete and periodic results in a frequency spectrum which is also discrete and periodic; this is the usual context for a discrete Fourier transform.
History of term
The use of the terms "frequency domain" and "time domain" arose in communication engineering in the 1950s and early 1960s, with "frequency domain" appearing in 1953. See time domain: origin of term for details.
See also
Bandwidth
Blackman–Tukey transformation
Fourier analysis for computing periodicity in evenly spaced data
Least-squares spectral analysis for computing periodicity in unevenly spaced data
Short-time Fourier transform
Time–frequency representation
Time–frequency analysis
Wavelet
Wavelet transform – digital image processing, signal compression
References
Goldshleger, N., Shamir, O., Basson, U., Zaady, E. (2019). Frequency Domain Electromagnetic Method (FDEM) as tool to study contamination at the sub-soil layer. Geoscience 9 (9), 382.
Further reading
Frequency-domain analysis | Frequency domain | [
"Physics"
] | 1,290 | [
"Frequency-domain analysis",
"Spectrum (physical sciences)"
] |
370,681 | https://en.wikipedia.org/wiki/Separation%20barrier | A separation barrier or separation wall is a barrier, wall or fence, constructed to limit the movement of people across a certain line or border, or to separate peoples or cultures. A separation barrier that runs along an internationally recognized border is known as a border barrier.
David Henley opines in The Guardian that separation barriers are being built at a record-rate around the world along borders and do not only surround dictatorships or pariah states. In 2014, The Washington Post listed notable 14 separation walls as of 2011, indicating that the total concurrent number of walls and barriers which separate countries and territories is 45.
The term "separation barrier" has been applied to structures erected in Belfast, Homs, the West Bank, São Paulo, Cyprus, and along the Greece-Turkey border and the Mexico-United States border. In 2016, Julia Sonnevend listed in her book Stories Without Borders: The Berlin Wall and the Making of a Global Iconic Event the concurrent separation barriers of Sharm el-Sheikh (Egypt), Limbang border (Brunei), the Kazakh-Uzbekistan barrier, Indian border fence with Bangladesh, United States separation barrier with Mexico, Saudi Arabian border fence with Iraq and Hungary's fence with Serbia. Several erected separation barriers are no longer active or in place, including the Berlin Wall, the Maginot Line and some barrier sections in Jerusalem.
Structures described as "separation barriers" or "separation walls"
Central Europe
Communities in the Czech Republic, Romania and Slovakia have long built Roma walls in urban environments when a Roma group is in close proximity to the rest of the population.
Cyprus
Since the Turkish invasion of Cyprus in 1974, Turkey has constructed and maintained what economics professor Rongxing Guo has called a "separation barrier" of along the 1974 Green Line (or ceasefire line) dividing the island of Cyprus into two parts, with a United Nations buffer zone between them.
Egypt
Egypt-Gaza barrier
The Egypt–Gaza barrier is often referred as "separation barrier" in the media or as a "separating wall". In December 2009, Egypt started the construction of the Egypt–Gaza barrier along the border with Gaza, consisting of a steel wall. Egypt's foreign minister said that the wall, being built along the country's border with the Gaza Strip will defend it "against threats to national security". Though the construction paused a number of times, the wall is nearly complete.
Sharm el-Sheikh barrier
According to Julia Sonnevend, the anti-terrorist barrier around the Sharm el-Sheikh resort in Egypt is in fact a separation barrier.
India
The Line of Control (LoC) refers to the military control line between the Indian and Pakistani controlled parts of the former princely state of Kashmir and Jammu—a line which, to this day, does not constitute a legally recognized international boundary, but is the de facto border. Originally known as the Cease-fire Line, it was redesignated as the "Line of Control" following the Simla Agreement, which was signed on 3 July 1972. The part of the former princely state that is under Indian control was known as the state of Jammu and Kashmir, which was split into two separate Union Territories. The two parts of the former princely state that are under Pakistani control are known as Gilgit–Baltistan and Azad Kashmir (AJK). Its northernmost point is known as the NJ9842. This territorial division, which to this day still exists, severed many villages and separated family members from each other.
A separation fence construction between Indian and Pakistani controlled areas, based on 1972 cease-fire line, was initiated by India in 2003.
In December 2013, it was revealed that India plans a construction of a separation wall in the Himalayan area in Kashmir. The wall is aimed to cover 179 km.
The other sections of India's borders also have a fence or wall.
Israel
Israel began building the Israeli West Bank barrier in 2002, in order to protect civilians from Palestinian terrorism such as suicide bombing attacks, which increased significantly during the Second Intifada. Barrier opponents claim it seeks to annex Palestinian land under the guise of security and undermines peace negotiations by unilaterally establishing new borders. When completed it will be a 700-kilometre-long network of high walls, electronic fences, gates and trenches. It is a controversial barrier because much of it is built outside the 1949 Armistice Line (Green Line), de facto annexing potentially 10 percent of Palestinian land, according to the United Nations Office for the Coordination of Humanitarian Affairs. It cuts far into the West Bank and encompasses Israel's largest settlement blocs containing hundreds of thousands of settlers.
In June 2004, the Israeli Supreme Court held that building the wall on West Bank Palestinian land is in itself legal, but it ordered some changes to the original route, which separated 35,000 Palestinian farmers from their lands and crops. The Israeli finance minister (Benjamin Netanyahu) replied that it was disputed land, not Palestinian, and its final status would be resolved in political negotiation. In July 2004, the International Court of Justice at The Hague in an advisory opinion declared the barriers illegal under international law and called on Israel to dismantle the walls, return confiscated land and make reparations for damages. In spite of all this, the number of Arab terrorist suicide bombings continued to decrease with the gradual completion of segments of the Security Barrier as was initially stated it would by the Israeli authorities.
Israel refers to land between the 1949 lines and the separation barrier as the Seam Zone, including all of East Jerusalem. In 2003, the military declared that only Israeli citizens and Palestinians with permits are allowed to be inside it; Palestinians have found it increasingly difficult to get permits unless they own land in the zone. The separation barrier cuts off east Jerusalem and some settlement blocs from the West Bank, even as Israelis and Arabs build structures and communities in eastern Jerusalem. Palestinians in the West Bank, including East Jerusalem, have continued to protest the separation barrier.
The existing barrier cuts off access to the Jordan River for Palestinian farmers in the West Bank. Due to international condemnation after the International Court ruling, Israel did not build an even stronger barrier, instead instituting permit-based access control. It has been opined that this change was to allow land to be annexed. Israeli settlement councils already have de facto control of 86 percent of the Jordan Valley and the Dead Sea as the settler population steadily grows there.
Kuwait
Writer Damon DiMarco has described as a "separation barrier" the Kuwait-Iraq barricade constructed by the United Nations in 1991 after the Iraqi invasion of Kuwait was repelled. With electrified fencing and concertina wire, it includes a 5-meter-wide trench and a high berm. It runs 180 kilometers along the border between the two nations.
Lebanon
A 2016 separation wall around the Ain al-Hilweh camp in Lebanon is intended to separate the local Palestinian-Lebanese population and Syrian refugee Palestinians from the surrounding society.
Malaysia
Renee Pirrong of The Heritage Foundation described the Malaysia–Thailand border barrier as a "separation barrier". Its purpose is to cut down on smuggling, drug trafficking, illegal immigration, crime and insurgency.
Saudi Arabia
In 2004 Saudi Arabia began construction of a Saudi-Yemen barrier between its territory and Yemen to prevent the unauthorized movement of people and goods into and out of the Kingdom. Some have labeled it a "separation barrier". In February 2004 The Guardian reported that Yemeni opposition newspapers likened the barrier to the Israeli West Bank barrier, while The Independent wrote "Saudi Arabia, one of the most vocal critics in the Arab world of Israel's 'security fence' in the West Bank, is quietly emulating the Israeli example by erecting a barrier along its porous border with Yemen". Saudi officials rejected the comparison saying it was built to prevent infiltration and smuggling.
Saudi Arabia has also built a wall on the Saudi Iraqi border.
Turkey
The Syria–Turkey barrier is a wall and fence under construction along the Syria–Turkey border aimed at preventing illegal crossings and smuggling from Syria into Turkey. In 2017, The Syrian government accused Turkey of building a separation wall, referring to the barrier.
From 2017, Turkey began construction a barrier along the Iran–Turkey border aimed at preventing illegal immigration and smuggling.
United Kingdom
Over 21 miles of high walling or fencing separate Catholic and Protestant communities in Northern Ireland, with most concentrated in Belfast and Derry. The wall was built in 1969 in order to separate the Catholic and Protestant areas in Belfast. An Army Major, overseeing the construction of the wall at the time, said: 'This is a temporary measure ... we do not want to see another Berlin wall situation in Western Europe ... it will be gone by Christmas'. In 2013, that wall still remains and almost 100 additional walls and barriers now complement the original. Technically known as 'peace walls', there are moves to remove all of them by 2023 by mutual consent.
United States
The United States constructed a barrier on the border with Mexico to prevent unauthorized immigration into the United States and to deter smuggling of contraband. US President Trump stated that he would replace the wall with an updated Mexico–United States border wall; some parts of the old wall have been replaced.
The Detroit Wall was erected to enforce redlining as part of the policies of racial segregation in the United States.
Western Sahara
Morocco has constructed a 2,700 km (1,700 mi) long sand wall cutting through the length of Western Sahara. Minefields and watchtowers serve to separate the Moroccan-controlled zone from the sparsely populated Free Zone.
Past separation barriers
Germany
The Berlin Wall was a barrier that divided Berlin from 1961 to 1989, constructed by the German Democratic Republic (GDR, East Germany) starting on 13 August 1961, that completely cut off (by land) West Berlin from surrounding East Germany and from East Berlin until it was opened in November 1989. Its demolition officially began on 13 June 1990 and was completed in 1992. The barrier included guard towers placed along large concrete walls, which circumscribed a wide area (later known as the "death strip") that contained anti-vehicle trenches, "fakir beds" and other defenses. The Eastern Bloc claimed that the wall was erected to protect its population from fascist elements conspiring to prevent the "will of the people" in building a socialist state in East Germany. In practice, the Wall served to prevent the massive emigration and defection that marked East Germany and the communist Eastern Bloc during the post-World War II period.
The Berlin Wall was officially referred to as the "Anti-Fascist Protection Rampart" () by GDR authorities, implying that the NATO countries and West Germany in particular were "fascists". The West Berlin city government sometimes referred to it as the "Wall of Shame"—a term coined by mayor Willy Brandt—while condemning the Wall's restriction on freedom of movement. Along with the separate and much longer Inner German border (IGB), which demarcated the border between East and West Germany, it came to symbolize the "Iron Curtain" that separated Western Europe and the Eastern Bloc during the Cold War.
Before the Wall's erection, 3.5 million East Germans circumvented Eastern Bloc emigration restrictions and defected from the GDR, many by crossing over the border from East Berlin into West Berlin, from where they could then travel to West Germany and other Western European countries. Between 1961 and 1989, the wall prevented almost all such emigration. During this period, around 5,000 people attempted to escape over the wall, with an estimated death toll of from 136 to more than 200 in and around Berlin.
In 1989, a series of radical political changes occurred in the Eastern Bloc, associated with the liberalization of the Eastern Bloc's authoritarian systems and the erosion of political power in the pro-Soviet governments in nearby Poland and Hungary. After several weeks of civil unrest, the East German government announced on 9 November 1989 that all GDR citizens could visit West Germany and West Berlin. Crowds of East Germans crossed and climbed onto the wall, joined by West Germans on the other side in a celebratory atmosphere. Over the next few weeks, euphoric people and souvenir hunters chipped away parts of the wall; the governments later used industrial equipment to remove most of what was left. Contrary to popular belief the wall's actual demolition did not begin until the summer of 1990 and was not completed until 1992. The fall of the Berlin Wall paved the way for German reunification, which was formally concluded on 3 October 1990.
Map of separation barriers worldwide
Excluding historical ones
See also
Border barrier
Defensive walls
List of fortifications
List of walls
List of cities with defensive walls
Pest-exclusion fence
Buffer zone
References
External links
Security fences around the world
Security fences in The Atlantic Monthly
Article about city walls on Erasmuspc
"Obama's Border Fence", NOW on PBS, July 3, 2009
Fences
Fortifications by type
Types of wall
Borders
Divided cities
Physical security
Human migration
Political geography | Separation barrier | [
"Physics",
"Engineering"
] | 2,691 | [
"Structural engineering",
"Separation barriers",
"Types of wall",
"Space",
"Spacetime",
"Borders"
] |
371,462 | https://en.wikipedia.org/wiki/Fraunhofer%20lines | The Fraunhofer lines are a set of spectral absorption lines. They are dark absorption lines, seen in the optical spectrum of the Sun, and are formed when atoms in the solar atmosphere absorb light being emitted by the solar photosphere. The lines are named after German physicist Joseph von Fraunhofer, who observed them in 1814.
Discovery
In 1802, English chemist William Hyde Wollaston was the first person to note the appearance of a number of dark features in the solar spectrum. In 1814, Joseph von Fraunhofer independently rediscovered the lines and began to systematically study and measure their wavelengths. He mapped over 570 lines, designating the most prominent with the letters A through K and weaker lines with other letters. Modern observations of sunlight can detect many thousands of lines.
About 45 years later, Gustav Kirchhoff and Robert Bunsen noticed that several Fraunhofer lines coincide with characteristic emission lines identified in the spectra of heated chemical elements. They inferred that dark lines in the solar spectrum are caused by absorption by chemical elements in the solar atmosphere. Some of the other observed features were instead identified as telluric lines originating from absorption by oxygen molecules in the Earth's atmosphere.
Sources
The Fraunhofer lines are typical spectral absorption lines. Absorption lines are narrow regions of decreased intensity in a spectrum, which are the result of photons being absorbed as light passes from the source to the detector. In the Sun, Fraunhofer lines are a result of gas in the Sun's atmosphere and outer photosphere. These regions have lower temperatures than gas in the inner photosphere, and absorbs some of the light emitted by it.
Naming
The major Fraunhofer lines, and the elements they are associated with, are shown in the following table:
The Fraunhofer C, F, G′, and h lines correspond to the alpha, beta, gamma, and delta lines of the Balmer series of emission lines of the hydrogen atom. The Fraunhofer letters are now rarely used for those lines.
The D1 and D2 lines form a pair known as the "sodium doublet", the centre wavelength of which (589.29 nm) is given the designation letter "D". This historical designation for this line has stuck and is given to all the transitions between the ground state and the first excited state of the other alkali atoms as well. The D1 and D2 lines correspond to the fine-structure splitting of the excited states.
The Fraunhofer H and K letters are also still used for the calcium doublet in the violet part of the spectrum, important in astronomical spectroscopy.
There is disagreement in the literature for some line designations; for example, the Fraunhofer d line may refer to the cyan iron line at 466.814 nm, or alternatively to the yellow helium line (also labeled D3) at 587.5618 nm. Similarly, there is ambiguity regarding the e line, since it can refer to the spectral lines of both iron (Fe) and mercury (Hg). In order to resolve ambiguities that arise in usage, ambiguous Fraunhofer line designations are preceded by the element with which they are associated (e.g., Mercury e line and Helium d line).
Because of their well-defined wavelengths, Fraunhofer lines are often used to specify standard wavelengths for characterising the refractive index and dispersion properties of optical materials.
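As a minimal illustration of this use (the numerical indices below are assumed example values roughly matching a common borosilicate crown glass, not data from this article), the Abbe number of a glass can be computed from its refractive indices measured at three Fraunhofer lines:

```python
# Abbe number from refractive indices at the d (587.56 nm, He), F (486.13 nm, H)
# and C (656.28 nm, H) Fraunhofer lines.
def abbe_number(n_d, n_F, n_C):
    """V_d = (n_d - 1) / (n_F - n_C); a larger V_d means lower dispersion."""
    return (n_d - 1.0) / (n_F - n_C)

# Assumed example indices, roughly those of a borosilicate crown glass.
print(abbe_number(n_d=1.5168, n_F=1.5224, n_C=1.5143))  # about 64
```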
See also
Abbe number, measure of glass dispersion defined using Fraunhofer lines
Timeline of solar astronomy
Spectrum analysis
References
Further reading
External links
Atomic physics
Absorption spectroscopy
Astrochemistry | Fraunhofer lines | [
"Physics",
"Chemistry",
"Astronomy"
] | 747 | [
"Spectrum (physical sciences)",
"Quantum mechanics",
"Absorption spectroscopy",
"Astrochemistry",
"Atomic physics",
"Atomic, molecular, and optical physics",
"Spectroscopy",
"Astronomical sub-disciplines"
] |
371,968 | https://en.wikipedia.org/wiki/Debye%20model | In thermodynamics and solid-state physics, the Debye model is a method developed by Peter Debye in 1912 to estimate the phonon contribution to the specific heat (heat capacity) of a solid. It treats the vibrations of the atomic lattice (heat) as phonons in a box, in contrast to the Einstein solid model, which treats the solid as many individual, non-interacting quantum harmonic oscillators. The Debye model correctly predicts the low-temperature dependence of the heat capacity of solids, which is proportional to T^3 – the Debye T^3 law. Similarly to the Einstein model, it recovers the Dulong–Petit law at high temperatures. Due to simplifying assumptions, its accuracy suffers at intermediate temperatures.
Derivation
The Debye model treats atomic vibrations as phonons confined in the solid's volume. It is analogous to Planck's law of black body radiation, which treats electromagnetic radiation as a photon gas confined in a vacuum space. Most of the calculation steps are identical, as both are examples of a massless Bose gas with a linear dispersion relation.
For a cube of side-length , the resonating modes of the sonic disturbances (considering for now only those aligned with one axis), treated as particles in a box, have wavelengths given as
where is an integer. The energy of a phonon is given as
where is the Planck constant and is the frequency of the phonon. Making the approximation that the frequency is inversely proportional to the wavelength,
in which is the speed of sound inside the solid. In three dimensions, energy can be generalized to
in which is the magnitude of the three-dimensional momentum of the phonon, and , , and are the components of the resonating mode along each of the three axes.
The approximation that the frequency is inversely proportional to the wavelength (giving a constant speed of sound) is good for low-energy phonons but not for high-energy phonons, which is a limitation of the Debye model. This approximation leads to incorrect results at intermediate temperatures, whereas the results are exact at the low and high temperature limits.
The total energy in the box, , is given by
where is the number of phonons in the box with energy ; the total energy is equal to the sum of energies over all energy levels, and the energy at a given level is found by multiplying its energy by the number of phonons with that energy. In three dimensions, each combination of modes in each of the three axes corresponds to an energy level, giving the total energy as:
The Debye model and Planck's law of black body radiation differ here with respect to this sum. Unlike electromagnetic photon radiation in a box, there are a finite number of phonon energy states because a phonon cannot have an arbitrarily high frequency. Its frequency is bounded by its propagation medium—the atomic lattice of the solid. The following illustration describes transverse phonons in a cubic solid at varying frequencies:
It is reasonable to assume that the minimum wavelength of a phonon is twice the atomic separation, as shown in the lowest example. With atoms in a cubic solid, each axis of the cube measures as being atoms long. Atomic separation is then given by , and the minimum wavelength is
making the maximum mode number :
This contrasts with photons, for which the maximum mode number is infinite. This number bounds the upper limit of the triple energy sum
If is a function that is slowly varying with respect to , the sums can be approximated with integrals:
To evaluate this integral, the function , the number of phonons with energy must also be known. Phonons obey Bose–Einstein statistics, and their distribution is given by the Bose–Einstein statistics formula:
Because a phonon has three possible polarization states (one longitudinal, and two transverse, which approximately do not affect its energy) the formula above must be multiplied by 3,
Considering all three polarization states together also means that an effective sonic velocity must be determined and used as the value of the standard sonic velocity. The Debye temperature defined below is proportional to this effective velocity; more precisely, the longitudinal and transversal sound-wave velocities are averaged, weighted by the number of polarization states. The Debye temperature or the effective sonic velocity is a measure of the hardness of the crystal.
Substituting into the energy integral yields
These integrals are evaluated for photons easily because their frequency, at least semi-classically, is unbound. The same is not true for phonons, so in order to approximate this triple integral, Peter Debye used spherical coordinates,
and approximated the cube with an eighth of a sphere,
where is the radius of this sphere. As the energy function does not depend on either of the angles, the equation can be simplified to
The number of particles in the original cube and in the eighth of a sphere should be equivalent. The volume of the cube is unit cell volumes,
such that the radius must be
The substitution of integration over a sphere for the correct integral over a cube introduces another source of inaccuracy into the resulting model.
After making the spherical substitution and substituting in the function , the energy integral becomes
.
Changing the integration variable to ,
To simplify the appearance of this expression, define the Debye temperature
where is the volume of the cubic box of side-length .
Some authors describe the Debye temperature as shorthand for some constants and material-dependent variables. However, is roughly equal to the phonon energy of the minimum wavelength mode, and so we can interpret the Debye temperature as the temperature at which the highest-frequency mode is excited. Additionally, since all other modes are of a lower energy than the highest-frequency mode, all modes are excited at this temperature.
From the total energy, the specific internal energy can be calculated:
where is the third Debye function. Differentiating this function with respect to produces the dimensionless heat capacity:
These formulae treat the Debye model at all temperatures. The more elementary formulae given further down give the asymptotic behavior in the limit of low and high temperatures. The essential reason for the exactness at low and high energies is, respectively, that the Debye model gives the exact dispersion relation at low frequencies, and corresponds to the exact density of states at high temperatures, concerning the number of vibrations per frequency interval.
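These results can be evaluated numerically. The sketch below (not taken from the references) uses the standard form of the molar Debye heat capacity, C_V = 9R(T/T_D)^3 ∫ x^4 e^x/(e^x − 1)^2 dx integrated from 0 to T_D/T; the Debye temperature used in the check is an assumed example value only.

```python
# Molar Debye heat capacity C_V(T) by numerical quadrature of the Debye integral.
import math

R = 8.314462618  # gas constant J/(mol*K), i.e. N_A * k_B

def debye_cv(T, theta_D, n=2000):
    """C_V per mole: 9*R*(T/theta_D)**3 * integral_0^{theta_D/T} x^4 e^x/(e^x-1)^2 dx."""
    if T <= 0:
        return 0.0
    upper = theta_D / T
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h                      # midpoint rule avoids x = 0
        total += x**4 * math.exp(x) / math.expm1(x)**2
    return 9.0 * R * (T / theta_D)**3 * total * h

theta = 428.0                                  # assumed example Debye temperature, K
print(debye_cv(10.0, theta))                   # low T: close to (12*pi**4/5)*R*(10/theta)**3
print(debye_cv(2000.0, theta))                 # high T: approaches the Dulong-Petit value 3R, about 24.9
```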
Debye's derivation
Debye derived his equation differently and more simply. Using continuum mechanics, he found that the number of vibrational states with a frequency less than a particular value was asymptotic to
in which is the volume and is a factor that he calculated from elasticity coefficients and density. Combining this formula with the expected energy of a harmonic oscillator at temperature (already used by Einstein in his model) would give an energy of
if the vibrational frequencies continued to infinity. This form gives the behaviour which is correct at low temperatures. But Debye realized that there could not be more than vibrational states for N atoms. He made the assumption that in an atomic solid, the spectrum of frequencies of the vibrational states would continue to follow the above rule, up to a maximum frequency chosen so that the total number of states is
Debye knew that this assumption was not really correct (the higher frequencies are more closely spaced than assumed), but it guarantees the proper behaviour at high temperature (the Dulong–Petit law). The energy is then given by
Substituting for ,
where is the function later given the name of third-order Debye function.
Another derivation
First the vibrational frequency distribution is derived from Appendix VI of Terrell L. Hill's An Introduction to Statistical Mechanics. Consider a three-dimensional isotropic elastic solid with N atoms in the shape of a rectangular parallelepiped with side-lengths . The elastic wave will obey the wave equation and will be plane waves; consider the wave vector and define , such that
Solutions to the wave equation are
and with the boundary conditions at ,
where are positive integers. Substituting () into () and also using the dispersion relation ,
The above equation, for fixed frequency , describes an eighth of an ellipse in "mode space" (an eighth because are positive). The number of modes with frequency less than is thus the number of integral points inside the ellipse, which, in the limit of (i.e. for a very large parallelepiped) can be approximated to the volume of the ellipse. Hence, the number of modes with frequency in the range is
where is the volume of the parallelepiped. The wave speed in the longitudinal direction is different from that in the transverse direction; the waves can be polarised one way in the longitudinal direction and two ways in the transverse direction, and an effective speed can be defined accordingly.
Following the derivation from A First Course in Thermodynamics, an upper limit to the frequency of vibration is defined ; since there are atoms in the solid, there are quantum harmonic oscillators (3 for each x-, y-, z- direction) oscillating over the range of frequencies . can be determined using
By defining , where k is the Boltzmann constant and h is the Planck constant, and substituting () into (),
this definition is more standard; the energy contribution for all oscillators oscillating at a given frequency can then be found. Quantum harmonic oscillators can take a discrete ladder of energies, and, using Maxwell-Boltzmann statistics, the number of particles with a given energy is
The energy contribution for oscillators with frequency is then
By noting that (because there are modes oscillating with frequency ),
From above, we can get an expression for 1/A; substituting it into (),
Integrating with respect to ν yields
Temperature limits
The temperature of a Debye solid is said to be low if , leading to
This definite integral can be evaluated exactly:
In the low-temperature limit, the limitations of the Debye model mentioned above do not apply, and it gives a correct relationship between (phononic) heat capacity, temperature, the elastic coefficients, and the volume per atom (the latter quantities being contained in the Debye temperature).
The temperature of a Debye solid is said to be high if . Using if leads to
which upon integration gives
This is the Dulong–Petit law, and is fairly accurate although it does not take into account anharmonicity, which causes the heat capacity to rise further. The total heat capacity of the solid, if it is a conductor or semiconductor, may also contain a non-negligible contribution from the electrons.
Debye versus Einstein
The Debye and Einstein models correspond closely to experimental data, but the Debye model is correct at low temperatures whereas the Einstein model is not. To visualize the difference between the models, one would naturally plot the two on the same set of axes, but this is not immediately possible as both the Einstein model and the Debye model provide a functional form for the heat capacity. As models, they require scales to relate them to their real-world counterparts. One can see that the scale of the Einstein model is given by :
The scale of the Debye model is , the Debye temperature. Both are usually found by fitting the models to the experimental data. (The Debye temperature can theoretically be calculated from the speed of sound and crystal dimensions.) Because the two methods approach the problem from different directions and different geometries, Einstein and Debye scales are not the same, that is to say
which means that plotting them on the same set of axes makes no sense. They are two models of the same thing, but of different scales. If one defines the Einstein condensation temperature as
then one can say
and, to relate the two, the ratio is used.
The Einstein solid is composed of single-frequency quantum harmonic oscillators, . That frequency, if it indeed existed, would be related to the speed of sound in the solid. If one imagines the propagation of sound as a sequence of atoms hitting one another, then the frequency of oscillation must correspond to the minimum wavelength sustainable by the atomic lattice, , where
,
which makes the Einstein temperature and the sought ratio is therefore
Using the ratio, both models can be plotted on the same graph. It is the cube root of the ratio of the volume of one octant of a three-dimensional sphere to the volume of the cube that contains it, which is just the correction factor used by Debye when approximating the energy integral above. Alternatively, the ratio of the two temperatures can be seen to be the ratio of Einstein's single frequency at which all oscillators oscillate and Debye's maximum frequency. Einstein's single frequency can then be seen to be a mean of the frequencies available to the Debye model.
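A quick numerical check of that geometric description (identifying this number with the ratio of the Einstein and Debye temperature scales follows the discussion above):

```python
# Cube root of (volume of one octant of a sphere of radius r) / (volume of the cube of side r):
# (1/8 * 4/3 * pi * r**3) / r**3 = pi/6, so the ratio of the two temperature scales is (pi/6)**(1/3).
import math

print((math.pi / 6.0) ** (1.0 / 3.0))   # about 0.806
```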
Debye temperature table
Even though the Debye model is not completely correct, it gives a good approximation for the low temperature heat capacity of insulating, crystalline solids where other contributions (such as highly mobile conduction electrons) are negligible. For metals, the electron contribution to the heat is proportional to , which at low temperatures dominates the Debye result for lattice vibrations. In this case, the Debye model can only be said to approximate the lattice contribution to the specific heat. The following table lists Debye temperatures for several pure elements and sapphire:
The Debye model's fit to experimental data is often phenomenologically improved by allowing the Debye temperature to become temperature dependent; for example, the value for ice increases from about 222 K to 300 K as the temperature goes from absolute zero to about 100 K.
Extension to other quasi-particles
For other bosonic quasi-particles, e.g., magnons (quantized spin waves) in ferromagnets instead of the phonons (quantized sound waves), one can derive analogous results. In this case at low frequencies one has different dispersion relations of momentum and energy, e.g., in the case of magnons, instead of for phonons (with ). One also has different density of states (e.g., ). As a consequence, in ferromagnets one gets a magnon contribution to the heat capacity, , which dominates at sufficiently low temperatures the phonon contribution, . In metals, in contrast, the main low-temperature contribution to the heat capacity, , comes from the electrons. It is fermionic, and is calculated by different methods going back to Sommerfeld's free electron model.
Extension to liquids
It was long thought that phonon theory is not able to explain the heat capacity of liquids, since liquids only sustain longitudinal, but not transverse phonons, which in solids are responsible for 2/3 of the heat capacity. However, Brillouin scattering experiments with neutrons and with X-rays, confirming an intuition of Yakov Frenkel, have shown that transverse phonons do exist in liquids, albeit restricted to frequencies above a threshold called the Frenkel frequency. Since most energy is contained in these high-frequency modes, a simple modification of the Debye model is sufficient to yield a good approximation to experimental heat capacities of simple liquids. More recently, it has been shown that instantaneous normal modes associated with relaxations from saddle points in the liquid energy landscape, which dominate the frequency spectrum of liquids at low frequencies, may determine the specific heat of liquids as a function of temperature over a broad range.
Debye frequency
The Debye frequency (Symbol: or ) is a parameter in the Debye model that refers to a cut-off angular frequency for waves of a harmonic chain of masses, used to describe the movement of ions in a crystal lattice and more specifically, to correctly predict that the heat capacity in such crystals is constant at high temperatures (Dulong–Petit law). The concept was first introduced by Peter Debye in 1912.
Throughout this section, periodic boundary conditions are assumed.
Definition
Assuming the dispersion relation is
with the speed of sound in the crystal and k the wave vector, the value of the Debye frequency is as follows:
For a one-dimensional monatomic chain, the Debye frequency is equal to
with as the distance between two neighbouring atoms in the chain when the system is in its ground state of energy, here being that none of the atoms are moving with respect to one another; the total number of atoms in the chain; the size of the system, which is the length of the chain; and the linear number density. For , , and , the relation holds.
For a two-dimensional monatomic square lattice, the Debye frequency is equal to
with is the size (area) of the surface, and the surface number density.
For a three-dimensional monatomic primitive cubic crystal, the Debye frequency is equal to
with the size of the system, and the volume number density.
The general formula for the Debye frequency as a function of , the number of dimensions for a (hyper)cubic lattice is
with being the gamma function.
The speed of sound in the crystal depends on the mass of the atoms, the strength of their interaction, the pressure on the system, and the polarisation of the spin wave (longitudinal or transverse), among others. For the following, the speed of sound is assumed to be the same for any polarisation, although this limits the applicability of the result.
The assumed dispersion relation is easily proven inaccurate for a one-dimensional chain of masses, but in Debye's model, this does not prove to be problematic.
Relation to Debye's temperature
The Debye temperature, another parameter in the Debye model, is related to the Debye frequency by the relation θD = ħωD/kB, where ħ is the reduced Planck constant and kB is the Boltzmann constant.
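A minimal sketch tying these quantities together numerically; the sound speed and number density are assumed example values, and the cut-off wave numbers used are the standard results for a linear dispersion ω = v_s k (taken here to match the formulas quoted above).

```python
# Debye frequency and Debye temperature for monatomic lattices in 1, 2 and 3 dimensions,
# using k_D = pi*n (1D), sqrt(4*pi*sigma) (2D) and (6*pi**2*rho)**(1/3) (3D),
# with omega_D = v_s * k_D and theta_D = hbar * omega_D / k_B.
import math

HBAR = 1.054571817e-34   # J*s
KB   = 1.380649e-23      # J/K

def debye_frequency(v_s, number_density, dim):
    """omega_D (rad/s) for a linear dispersion omega = v_s * k."""
    if dim == 1:
        k_D = math.pi * number_density                        # number_density = N/L
    elif dim == 2:
        k_D = math.sqrt(4.0 * math.pi * number_density)       # number_density = N/A
    elif dim == 3:
        k_D = (6.0 * math.pi**2 * number_density) ** (1.0 / 3.0)  # number_density = N/V
    else:
        raise ValueError("dim must be 1, 2 or 3")
    return v_s * k_D

# Assumed example values: sound speed 5000 m/s, 3D number density 6e28 atoms per m^3.
omega_D = debye_frequency(5000.0, 6.0e28, dim=3)
print(omega_D, HBAR * omega_D / KB)   # Debye frequency (rad/s) and Debye temperature (K)
```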
Debye's derivation
Three-dimensional crystal
In Debye's derivation of the heat capacity, he sums over all possible modes of the system, accounting for different directions and polarisations. He assumed the total number of modes per polarization to be , the amount of masses in the system, and the total to be
with three polarizations per mode. The sum runs over all modes without differentiating between different polarizations, and then counts the total number of polarization-mode combinations. Debye made this assumption based on an assumption from classical mechanics that the number of modes per polarization in a chain of masses should always be equal to the number of masses in the chain.
The left hand side can be made explicit to show how it depends on the Debye frequency, introduced first as a cut-off frequency beyond which no frequencies exist. By relating the cut-off frequency to the maximum number of modes, an expression for the cut-off frequency can be derived.
First of all, by assuming to be very large ( ≫ 1, with the size of the system in any of the three directions) the smallest wave vector in any direction could be approximated by: , with . Smaller wave vectors cannot exist because of the periodic boundary conditions. Thus the summation would become
where ; is the size of the system; and the integral is (as the summation) over all possible modes, which is assumed to be a finite region (bounded by the cut-off frequency).
The triple integral could be rewritten as a single integral over all possible values of the absolute value of (see Jacobian for spherical coordinates). The result is
with the absolute value of the wave vector corresponding with the Debye frequency, so .
Since the dispersion relation is , it can be written as an integral over all possible :
After solving the integral it is again equated to to find
It can be rearranged into
One-dimensional chain in 3D space
The same derivation could be done for a one-dimensional chain of atoms. The number of modes remains unchanged, because there are still three polarizations, so
The rest of the derivation is analogous to the previous, so the left hand side is rewritten with respect to the Debye frequency:
The last step is multiplied by two because the integrand in the first integral is even and the bounds of integration are symmetric about the origin, so the integral can be rewritten as one from 0 to the upper limit after scaling by a factor of 2. This is also equivalent to the statement that the volume of a one-dimensional ball is twice its radius. Applying a substitution, our bounds are now 0 to the cut-off value, which gives us our rightmost integral. We continue:
Conclusion:
Two-dimensional crystal
The same derivation could be done for a two-dimensional crystal. The number of modes remains unchanged, because there are still three polarizations. The derivation is analogous to the previous two. We start with the same equation,
And then the left hand side is rewritten and equated to
where is the size of the system.
It can be rewritten as
Polarization dependence
In reality, longitudinal waves often have a different wave velocity from that of transverse waves. Making the assumption that the velocities are equal simplified the final result, but reintroducing the distinction improves the accuracy of the final result.
The dispersion relation becomes , with , each corresponding to one of the three polarizations. The cut-off frequency , however, does not depend on . We can write the total number of modes as , which is again equal to . Here the summation over the modes is now dependent on .
One-dimensional chain in 3D space
The summation over the modes is rewritten
The result is
Thus the Debye frequency is found
The calculated effective velocity is the harmonic mean of the velocities for each polarization. By assuming the two transverse polarizations to have the same phase speed and frequency,
Setting recovers the expression previously derived under the assumption that velocity is the same for all polarization modes.
Two-dimensional crystal
The same derivation can be done for a two-dimensional crystal to find
The calculated effective velocity is the square root of the harmonic mean of the squares of velocities. By assuming the two transverse polarizations to be the same,
Setting recovers the expression previously derived under the assumption that velocity is the same for all polarization modes.
Three-dimensional crystal
The same derivation can be done for a three-dimensional crystal to find (the derivation is analogous to previous derivations)
The calculated effective velocity is the cube root of the harmonic mean of the cubes of velocities. By assuming the two transverse polarizations to be the same,
Setting recovers the expression previously derived under the assumption that velocity is the same for all polarization modes.
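A minimal numeric sketch of this averaging; the velocities below are assumed example values, not data from this article.

```python
# Effective sound velocity for the 3D Debye frequency when longitudinal and transverse
# velocities differ: the cube root of the harmonic mean of the cubed velocities.
def effective_velocity_3d(v_long, v_trans):
    return (3.0 / (1.0 / v_long**3 + 2.0 / v_trans**3)) ** (1.0 / 3.0)

# Assumed example values (m/s); with v_long == v_trans the single-velocity result is recovered.
print(effective_velocity_3d(6000.0, 3000.0))   # dominated by the slower transverse branches
print(effective_velocity_3d(5000.0, 5000.0))   # -> 5000.0
```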
Derivation with the actual dispersion relation
This problem could be made more applicable by relaxing the assumption of linearity of the dispersion relation. Instead of using the dispersion relation , a more accurate dispersion relation can be used. In classical mechanics, it is known that for an equidistant chain of masses which interact harmonically with each other, the dispersion relation is
with being the mass of each atom, the spring constant for the harmonic oscillator, and still being the spacing between atoms in the ground state. After plotting this relation, Debye's estimation of the cut-off wavelength based on the linear assumption remains accurate, because for every wavenumber bigger than the cut-off value (that is, for every wavelength smaller than the cut-off wavelength), a wavenumber smaller than the cut-off could be found with the same angular frequency. This means the resulting physical manifestation for the mode with the larger wavenumber is indistinguishable from the one with the smaller wavenumber. Therefore, the study of the dispersion relation can be limited to the first Brillouin zone without any loss of accuracy or information. This is possible because the system consists of discretized points. Dividing the dispersion relation by and inserting for , we find the speed of a wave with to be
By simply inserting in the original dispersion relation we find
Combining these results the same result is once again found
However, for any chain with greater complexity, including diatomic chains, the associated cut-off frequency and wavelength are not very accurate, since the cut-off wavelength is twice as big and the dispersion relation consists of additional branches, two total for a diatomic chain. It is also not certain from this result whether for higher-dimensional systems the cut-off frequency was accurately predicted by Debye when taking into account the more accurate dispersion relation.
Alternative derivation
For a one-dimensional chain, the formula for the Debye frequency can also be reproduced using a theorem for describing aliasing. The Nyquist–Shannon sampling theorem is used for this derivation, the main difference being that in the case of a one-dimensional chain, the discretization is not in time, but in space.
The cut-off frequency can be determined from the cut-off wavelength. From the sampling theorem, we know that for wavelengths smaller than , or twice the sampling distance, every mode is a repeat of a mode with wavelength larger than , so the cut-off wavelength should be at . This results again in , rendering
It does not matter which dispersion relation is used, as the same cut-off frequency would be calculated.
See also
Bose gas
Gas in a box
Grüneisen parameter
Bloch–Grüneisen temperature
Electrical resistivity and conductivity#Temperature dependence
References
Further reading
CRC Handbook of Chemistry and Physics, 56th Edition (1975–1976)
Schroeder, Daniel V. An Introduction to Thermal Physics. Addison-Wesley, San Francisco (2000). Section 7.5.
External links
Experimental determination of specific heat, thermal and heat conductivity of quartz using a cryostat.
Simon, Steven H. (2014) The Oxford Solid State Basics (most relevant ones: 1, 2 and 6)
Condensed matter physics
Thermodynamic models
American inventions | Debye model | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 5,380 | [
"Thermodynamic models",
"Phases of matter",
"Materials science",
"Thermodynamics",
"Condensed matter physics",
"Matter"
] |
372,198 | https://en.wikipedia.org/wiki/Weierstrass%20preparation%20theorem | In mathematics, the Weierstrass preparation theorem is a tool for dealing with analytic functions of several complex variables, at a given point P. It states that such a function is, up to multiplication by a function not zero at P, a polynomial in one fixed variable z, which is monic, and whose coefficients of lower degree terms are analytic functions in the remaining variables and zero at P.
There are also a number of variants of the theorem that extend the idea of factorization in some ring R as u·w, where u is a unit and w is some sort of distinguished Weierstrass polynomial. Carl Siegel has disputed the attribution of the theorem to Weierstrass, saying that it occurred under the current name in some late nineteenth-century Traités d'analyse without justification.
Complex analytic functions
For one variable, the local form of an analytic function f(z) near 0 is z^k h(z), where h(0) is not 0 and k is the order of the zero of f at 0. This is the result that the preparation theorem generalises.
We pick out one variable z, which we may assume is first, and write our complex variables as (z, z2, ..., zn). A Weierstrass polynomial W(z) is
z^k + gk−1 z^(k−1) + ... + g0
where gi(z2, ..., zn) is analytic and gi(0, ..., 0) = 0.
Then the theorem states that for analytic functions f, if
f(0, ...,0) = 0,
and
f(z, z2, ..., zn)
as a power series has some term only involving z, we can write (locally near (0, ..., 0))
f(z, z2, ..., zn) = W(z)h(z, z2, ..., zn)
with h analytic and h(0, ..., 0) not 0, and W a Weierstrass polynomial.
This has the immediate consequence that the set of zeros of f, near (0, ..., 0), can be found by fixing any small values of z2, ..., zn and then solving the equation W(z)=0. The corresponding values of z form a number of continuously-varying branches, in number equal to the degree of W in z. In particular f cannot have an isolated zero.
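A simple illustrative example (constructed here, not taken from the references): take n = 2, write w = z2, and consider near the origin

```latex
\[
f(z,w) \;=\; e^{z}\,\bigl(z^{2}-w\bigr) \;=\; h(z,w)\,W(z), \qquad
h(z,w) = e^{z},\quad h(0,0) = 1 \neq 0, \qquad
W(z) = z^{2} + g_{1}z + g_{0}, \quad g_{1} = 0,\ \ g_{0}(w) = -w,\ \ g_{0}(0) = 0 .
\]
```

For each small fixed w, solving W(z) = 0 gives z = ±√w, the two continuously varying branches predicted by the degree of W in z, in line with the remark above that f cannot have an isolated zero.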
Division theorem
A related result is the Weierstrass division theorem, which states that if f and g are analytic functions, and g is a Weierstrass polynomial of degree N, then there exists a unique pair h and j such that f = gh + j, where j is a polynomial of degree less than N. In fact, many authors prove the Weierstrass preparation as a corollary of the division theorem. It is also possible to prove the division theorem from the preparation theorem so that the two theorems are actually equivalent.
Applications
The Weierstrass preparation theorem can be used to show that the ring of germs of analytic functions in n variables is a Noetherian ring, which is also referred to as the Rückert basis theorem.
Smooth functions
There is a deeper preparation theorem for smooth functions, due to Bernard Malgrange, called the Malgrange preparation theorem. It also has an associated division theorem, named after John Mather.
Formal power series in complete local rings
There is an analogous result, also referred to as the Weierstrass preparation theorem, for the ring of formal power series over complete local rings A: for any power series such that not all are in the maximal ideal of A, there is a unique unit u in and a polynomial F of the form with (a so-called distinguished polynomial) such that
Since is again a complete local ring, the result can be iterated and therefore gives similar factorization results for formal power series in several variables.
For example, this applies to the ring of integers in a p-adic field. In this case the theorem says that a power series f(z) can always be uniquely factored as π^n·u(z)·p(z), where u(z) is a unit in the ring of power series, p(z) is a distinguished polynomial (monic, with the coefficients of the non-leading terms each in the maximal ideal), and π is a fixed uniformizer.
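As a concrete illustration (constructed here, not taken from the sources cited), over the p-adic integers Zp one has

```latex
\[
f(z) \;=\; z^{3} + z^{2} - p\,z - p \;=\; \underbrace{(1+z)}_{u(z)}\;\underbrace{(z^{2}-p)}_{p(z)} ,
\]
```

with n = 0: the constant term of u(z) = 1 + z is a unit of Zp, so u(z) is a unit of the power series ring, while z^2 − p is monic with its non-leading coefficients (0 and −p) in the maximal ideal, hence a distinguished polynomial.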
An application of the Weierstrass preparation and division theorem for the ring (also called Iwasawa algebra) occurs in Iwasawa theory in the description of finitely generated modules over this ring.
There exists a non-commutative version of Weierstrass division and preparation, with A being a not necessarily commutative ring, and with formal skew power series in place of formal power series.
Tate algebras
There is also a Weierstrass preparation theorem for Tate algebras
over a complete non-archimedean field k.
These algebras are the basic building blocks of rigid geometry. One application of this form of the Weierstrass preparation theorem is the fact that the rings are Noetherian.
See also
Oka coherence theorem
References
, reprinted in
reprinted by Johnson, New York, 1967.
External links
Several complex variables
Commutative algebra
Theorems in complex analysis | Weierstrass preparation theorem | [
"Mathematics"
] | 1,124 | [
"Theorems in mathematical analysis",
"Functions and mappings",
"Several complex variables",
"Theorems in complex analysis",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Commutative algebra"
] |
372,470 | https://en.wikipedia.org/wiki/Grothendieck%27s%20Galois%20theory | In mathematics, Grothendieck's Galois theory is an abstract approach to the Galois theory of fields, developed around 1960 to provide a way to study the fundamental group of algebraic topology in the setting of algebraic geometry. It provides, in the classical setting of field theory, an alternative perspective to that of Emil Artin based on linear algebra, which became standard from about the 1930s.
The approach of Alexander Grothendieck is concerned with the category-theoretic properties that characterise the categories of finite G-sets for a fixed profinite group G. For example, G might be the group denoted Ẑ (see profinite integer), which is the inverse limit of the cyclic additive groups Z/nZ — or equivalently the completion of the infinite cyclic group Z for the topology of subgroups of finite index. A finite G-set is then a finite set X on which G acts through a quotient finite cyclic group, so that it is specified by giving some permutation of X.
In the above example, a connection with classical Galois theory can be seen by regarding Ẑ as the profinite Galois group Gal(F̄/F) of the algebraic closure F̄ of any finite field F, over F. That is, the automorphisms of F̄ fixing F are described by the inverse limit, as we take larger and larger finite splitting fields over F. The connection with geometry can be seen when we look at covering spaces of the unit disk in the complex plane with the origin removed: the finite covering realised by the z ↦ z^n map of the disk, thought of by means of a complex number variable z, corresponds to the subgroup n·Z of the fundamental group of the punctured disk.
The theory of Grothendieck, published in SGA1, shows how to reconstruct the category of G-sets from a fibre functor Φ, which in the geometric setting takes the fibre of a covering above a fixed base point (as a set). In fact there is an isomorphism proved of the type
G ≅ Aut(Φ),
the latter being the group of automorphisms (self-natural equivalences) of Φ. An abstract classification of categories with a functor to the category of sets is given, by means of which one can recognise categories of G-sets for G profinite.
To see how this applies to the case of fields, one has to study the tensor product of fields. In topos theory this is a part of the study of atomic toposes.
See also
Tannakian formalism
Fiber functor
Anabelian geometry
References
(This book introduces the reader to the Galois theory of Grothendieck, and some generalisations, leading to Galois groupoids.)
Galois theory
Algebraic geometry
Category theory | Grothendieck's Galois theory | [
"Mathematics"
] | 572 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory",
"Algebraic geometry"
] |
372,619 | https://en.wikipedia.org/wiki/Optical%20engineering | Optical engineering is the field of engineering encompassing the physical phenomena and technologies associated with the generation, transmission, manipulation, detection, and utilization of light. Optical engineers use the science of optics to solve problems and to design and build devices that make light do something useful. They design and operate optical equipment that uses the properties of light using physics and chemistry, such as lenses, microscopes, telescopes, lasers, sensors, fiber-optic communication systems and optical disc systems (e.g. CD, DVD).
Optical engineering metrology uses optical methods to measure either micro-vibrations with instruments like the laser speckle interferometer, or properties of masses with instruments that measure refraction.
Nano-measuring and nano-positioning machines are devices designed by optical engineers. These machines, for example microphotolithographic steppers, have nanometer precision, and consequently are used in the fabrication of goods at this scale.
See also
Optical lens design
Optical physics
Optician
Photonics
References
Further reading
Driggers, Ronald G. (ed.) (2003). Encyclopedia of Optical Engineering. New York: Marcel Dekker. 3 vols.
Bruce H. Walker, Historical Review, SPIE Press, Bellingham, WA.
FTS Yu & Xiangyang Yang (1997) Introduction to Optical Engineering, Cambridge University Press, .
Optical Engineering (ISSN 0091-3286)
Engineering
Engineering disciplines | Optical engineering | [
"Physics",
"Chemistry",
"Engineering"
] | 280 | [
"Applied and interdisciplinary physics",
"Optics",
"Atomic, molecular, and optical physics"
] |
17,705,066 | https://en.wikipedia.org/wiki/Pirimiphos-methyl | Pirimiphos-methyl, marketed as Actellic and Sybol, is a phosphorothioate used as an insecticide. It was originally developed by Imperial Chemical Industries Ltd., now Syngenta, at their Jealott's Hill site and first marketed in 1977, ten years after its discovery.
This is one of several compounds used for vector control of Triatoma. These insects are implicated in the transmission of Chagas disease in the Americas. Pirimiphos-methyl can be applied as an interior surface paint additive, in order to achieve a residual pesticide effect.
Synthesis
Pirimiphos methyl is manufactured in a two-step process in which N,N-diethylguanidine is reacted with ethyl acetoacetate to form a pyrimidine ring and its hydroxy group is combined with dimethyl chlorothiophosphate to form the insecticide.
Pirimiphos-ethyl is a related insecticide in which the methoxy groups are replaced with ethoxy groups.
References
External links
Acetylcholinesterase inhibitors
Organothiophosphate esters
Pesticides
Aminopyrimidines
Diethylamino compounds
Methoxy compounds | Pirimiphos-methyl | [
"Biology",
"Environmental_science"
] | 254 | [
"Biocides",
"Toxicology",
"Pesticides"
] |
17,705,397 | https://en.wikipedia.org/wiki/Procymidone | Procymidone is a pesticide. It is often used for killing unwanted ferns and nettles, and as a dicarboximide fungicide for killing fungi, for example as seed dressing, pre-harvest spray or post-harvest dip of lupins, grapes, stone fruit, strawberries. It is a known endocrine disruptor (androgen receptor antagonist) which interferes with the sexual differentiation of male rats. It is considered to be a poison.
See also
Phenothrin
Prochloraz
Vinclozolin
References
External links
Chlorobenzene derivatives
Endocrine disruptors
Imides
Nonsteroidal antiandrogens
Pesticides
Heterocyclic compounds with 2 rings
Lactams | Procymidone | [
"Chemistry",
"Biology",
"Environmental_science"
] | 152 | [
"Pesticides",
"Toxicology",
"Endocrine disruptors",
"Functional groups",
"Organic compounds",
"Imides",
"Biocides",
"Organic compound stubs",
"Organic chemistry stubs"
] |
17,710,184 | https://en.wikipedia.org/wiki/Engineering%20diffraction | Engineering diffraction refers to a sub-field of neutron scattering which investigates microstructural features that influence the mechanical properties of materials. These include:
lattice strain, a measure of distortion in crystals
texture, a measure of grain orientations
dislocation density, a measure of the microstructure
grain morphology
References
Neutron-related techniques
Scattering | Engineering diffraction | [
"Physics",
"Chemistry",
"Materials_science"
] | 72 | [
"Scattering stubs",
"Scattering",
"Condensed matter physics",
"Particle physics",
"Nuclear physics"
] |
3,172,558 | https://en.wikipedia.org/wiki/Selectable%20marker | A selectable marker is a gene introduced into cells, especially bacteria or cells in culture, which confers one or more traits suitable for artificial selection. They are a type of reporter gene used in laboratory microbiology, molecular biology, and genetic engineering to indicate the success of a transfection or transformation or other procedure meant to introduce foreign DNA into a cell. Selectable markers are often antibiotic resistance genes: bacteria subjected to a procedure by which exogenous DNA containing an antibiotic resistance gene (usually alongside other genes of interest) has been introduced are grown on a medium containing an antibiotic, such that only those bacterial cells which have successfully taken up and expressed the introduced genetic material, including the gene which confers antibiotic resistance, can survive and produce colonies. The genes encoding resistance to antibiotics such as ampicillin, chloramphenicol, tetracycline, kanamycin, etc., are all widely used as selectable markers for molecular cloning and other genetic engineering techniques in E. coli.
Modus operandi
Selectable markers allow scientists to separate non-recombinant organisms (those which do not contain the selectable marker) from recombinant organisms (those which do); that is, a recombinant DNA molecule such as a plasmid expression vector is introduced into bacterial cells, and some bacteria are successfully transformed while some remain non-transformed. Antibiotics such as ampicillin, at sufficient concentrations, are toxic to most bacteria, which ordinarily lack resistance to them; when cultured on a nutrient medium containing ampicillin, bacteria lacking ampicillin resistance fail to divide and eventually die. The positions of the surviving colonies can be noted, for example on nitrocellulose paper, so that they can be picked and transferred to a fresh nutrient medium for mass production of the required product. An alternative to a selectable marker is a screenable marker, another type of reporter gene which allows the researcher to distinguish between wanted and unwanted cells or colonies, such as between blue and white colonies in blue–white screening. The unwanted cells in this case are simply non-transformed cells that were unable to take up the screenable gene during the experiment.
Positive and negative markers
For molecular biology research, different types of markers may be used based on the selection sought. These include:
Positive or selection markers are selectable markers that confer selective advantage to the host organism. An example would be antibiotic resistance, which allows the host organism to survive antibiotic selection.
Negative or counterselectable markers are selectable markers that eliminate or inhibit growth of the host organism upon selection. An example would be thymidine kinase, which makes the host sensitive to ganciclovir selection.
Selectable markers may serve as both positive and negative markers by conferring an advantage to the host under one condition, but inhibiting growth under a different condition. An example would be an enzyme that can complement an auxotrophy (positive selection) and be able to convert a chemical to a toxic compound (negative selection).
Common examples
Examples of selectable markers include:
Beta-lactamase, which confers ampicillin resistance to bacterial hosts.
Neo gene from Tn5, which confers resistance to kanamycin in bacteria and geneticin in eukaryotic cells.
Mutant FabI gene (mFabI) from the E. coli genome, which confers triclosan resistance to the host.
URA3, an orotidine-5' phosphate decarboxylase from yeast, is a positive and negative selectable marker. It is required for uracil biosynthesis and can complement URA3 mutants that are auxotrophic for uracil (positive selection). The enzyme URA3 also converts 5-fluoroorotic acid (5FOA) into the toxic compound 5-fluorouracil, so any cells carrying the URA3 gene will be killed in the presence of 5FOA (negative selection).
Future developments
In the future, alternative marker technologies will need to be used more often to, at the least, assuage concerns about their persistence into the final product. It is also possible that markers will be replaced entirely by future techniques which use removable markers, and others which do not use markers at all, instead relying on co-transformation, homologous recombination, and recombinase-mediated excision.
See also
Genetic marker
Marker gene
Biomarker
References
Genetics techniques
Molecular biology
Antimicrobial resistance | Selectable marker | [
"Chemistry",
"Engineering",
"Biology"
] | 916 | [
"Genetics techniques",
"Biochemistry",
"Genetic engineering",
"Molecular biology"
] |
3,173,180 | https://en.wikipedia.org/wiki/Sonochemistry | In chemistry, the study of sonochemistry is concerned with understanding the effect of ultrasound in forming acoustic cavitation in liquids, resulting in the initiation or enhancement of the chemical activity in the solution. Therefore, the chemical effects of ultrasound do not come from a direct interaction of the ultrasonic sound wave with the molecules in the solution.
History
The influence of sonic waves travelling through liquids was first reported by Robert Williams Wood (1868–1955) and Alfred Lee Loomis (1887–1975) in 1927. Their experiments examined the frequency and energy required for sonic waves to "penetrate" the barrier of the water surface. They concluded that sound does travel faster in water, but that, because of the density of water compared with that of Earth's atmosphere, it was very hard to couple the energy of the sonic waves into the water. Due to the sudden density change, much of the energy is lost, much as when a flashlight is shone at a piece of glass: some of the light is transmitted into the glass, but much of it is lost to reflection. Similarly, at an air-water interface almost all of the sound is reflected off the water rather than transmitted into it. After much research they decided that the best way to disperse sound into the water was to create bubbles at the same time as the sound. Another issue was the ratio of the time it took for the lower-frequency waves to penetrate the bubble walls and access the water around the bubble, compared with the time to travel from that point to the other end of the body of water. Despite its revolutionary ideas, the article went mostly unnoticed. Sonochemistry experienced a renaissance in the 1980s with the advent of inexpensive and reliable generators of high-intensity ultrasound, most based around piezoelectric elements.
Physical principles
Sound waves propagating through a liquid at ultrasonic frequencies have wavelengths many times longer than the molecular dimensions or the bond length between atoms in the molecule. Therefore, the sound wave cannot directly affect the vibrational energy of the bond, and can therefore not directly increase the internal energy of a molecule. Instead, sonochemistry arises from acoustic cavitation: the formation, growth, and implosive collapse of bubbles in a liquid. The collapse of these bubbles is an almost adiabatic process, thereby resulting in the massive build-up of energy inside the bubble, resulting in extremely high temperatures and pressures in a microscopic region of the sonicated liquid. The high temperatures and pressures result in the chemical excitation of any matter within or very near the bubble as it rapidly implodes. A broad variety of outcomes can result from acoustic cavitation including sonoluminescence, increased chemical activity in the solution due to the formation of primary and secondary radical reactions, and increased chemical activity through the formation of new, relatively stable chemical species that can diffuse further into the solution to create chemical effects (for example, the formation of hydrogen peroxide from the combination of two hydroxyl radicals following the dissociation of water vapor within collapsing bubbles when water is exposed to ultrasound).
Upon irradiation with high intensity sound or ultrasound, acoustic cavitation usually occurs. Cavitation – the formation, growth, and implosive collapse of bubbles irradiated with sound – is the impetus for sonochemistry and sonoluminescence. Bubble collapse in liquids produces enormous amounts of energy from the conversion of kinetic energy of the liquid motion into heating the contents of the bubble. The compression of the bubbles during cavitation is more rapid than thermal transport, which generates a short-lived localized hot-spot. Experimental results have shown that these bubbles have temperatures around 5000 K, pressures of roughly 1000 atm, and heating and cooling rates above 10^10 K/s. These cavitations can create extreme physical and chemical conditions in otherwise cold liquids.
With liquids containing solids, similar phenomena may occur with exposure to ultrasound. Once cavitation occurs near an extended solid surface, cavity collapse is nonspherical and drives high-speed jets of liquid to the surface. These jets and associated shock waves can damage the now highly heated surface. Liquid-powder suspensions produce high velocity interparticle collisions. These collisions can change the surface morphology, composition, and reactivity.
Sonochemical reactions
Three classes of sonochemical reactions exist: homogeneous sonochemistry of liquids, heterogeneous sonochemistry of liquid-liquid or solid–liquid systems, and, overlapping with the aforementioned, sonocatalysis (the catalysis or increasing the rate of a chemical reaction with ultrasound). Sonoluminescence is a consequence of the same cavitation phenomena that are responsible for homogeneous sonochemistry. The chemical enhancement of reactions by ultrasound has been explored and has beneficial applications in mixed phase synthesis, materials chemistry, and biomedical uses. Because cavitation can only occur in liquids, chemical reactions are not seen in the ultrasonic irradiation of solids or solid–gas systems.
For example, in chemical kinetics, it has been observed that ultrasound can greatly enhance chemical reactivity in a number of systems by as much as a million-fold; effectively acting to activate heterogeneous catalysts. In addition, in reactions at liquid-solid interfaces, ultrasound breaks up the solid pieces and exposes active clean surfaces through microjet pitting from cavitation near the surfaces and from fragmentation of solids by cavitation collapse nearby. This gives the solid reactant a larger surface area of active surfaces for the reaction to proceed over, increasing the observed rate of reaction.,
While the application of ultrasound often generates mixtures of products, a paper published in 2007 in the journal Nature described the use of ultrasound to selectively affect a certain cyclobutane ring-opening reaction. Atul Kumar has reported a multicomponent Hantzsch ester synthesis in aqueous micelles using ultrasound.
Some water pollutants, especially chlorinated organic compounds, can be destroyed sonochemically.
Sonochemistry can be performed by using a bath (usually used for ultrasonic cleaning) or with a high power probe, called an ultrasonic horn, which funnels and couples a piezoelectric element's energy into the water, concentrated at one (typically small) point.
Sonochemistry can also be used to weld metals which are not normally feasible to join, or form novel alloys on a metal surface. This is distantly related to the method of calibrating ultrasonic cleaners using a sheet of aluminium foil and counting the holes. The holes formed are a result of microjet pitting resulting from cavitation near the surface, as mentioned previously. Due to the aluminium foil's thinness and weakness, the cavitation quickly results in fragmentation and destruction of the foil.
A new generation of sonochemistry is harnessing the advantages of functional, ferroelectric materials, to further enhance chemistry in a sonochemical reactor in an emerging process called piezocatalysis.
See also
Ultrasound
Sonication
Ultrasonics
ultrasonic homogenizer
homogenizer
Homogenization (chemistry)
Sonoelectrochemistry
Kenneth S. Suslick
References
External links
The Chemical and Physical Effects of Ultrasound by Prof. K. S. Suslick
Sonochemistry – Short Review and Recent Literature
Sonochemistry: New Opportunities for Green Chemistry by Gregory Chatel (Université Savoie Mont Blanc, France)
Physical phenomena
Chemistry
Quantum chemistry
Fluid dynamics
Ultrasound
Acoustics | Sonochemistry | [
"Physics",
"Chemistry",
"Engineering"
] | 1,539 | [
"Physical phenomena",
"Quantum chemistry",
"Chemical engineering",
"Quantum mechanics",
"Classical mechanics",
"Acoustics",
"Piping",
"Theoretical chemistry",
"Atomic, molecular, and optical physics",
"Fluid dynamics"
] |
3,173,663 | https://en.wikipedia.org/wiki/Plug%20flow%20reactor%20model | The plug flow reactor model (PFR, sometimes called continuous tubular reactor, CTR, or piston flow reactors) is a model used to describe chemical reactions in continuous, flowing systems of cylindrical geometry. The PFR model is used to predict the behavior of chemical reactors of such design, so that key reactor variables, such as the dimensions of the reactor, can be estimated.
Fluid going through a PFR may be modeled as flowing through the reactor as a series of infinitely thin coherent "plugs", each with a uniform composition, traveling in the axial direction of the reactor, with each plug having a different composition from the ones before and after it. The key assumption is that as a plug flows through a PFR, the fluid is perfectly mixed in the radial direction but not in the axial direction (forwards or backwards). Each plug of differential volume is considered as a separate entity, effectively an infinitesimally small continuous stirred tank reactor, limiting to zero volume. As it flows down the tubular PFR, the residence time () of the plug is a function of its position in the reactor. In the ideal PFR, the residence time distribution is therefore a Dirac delta function with a value equal to .
PFR modeling
The stationary PFR is governed by ordinary differential equations, the solution for which can be calculated providing that appropriate boundary conditions are known.
The PFR model works well for many fluids: liquids, gases, and slurries. Although turbulent flow and axial diffusion cause a degree of mixing in the axial direction in real reactors, the PFR model is appropriate when these effects are sufficiently small that they can be ignored.
In the simplest case of a PFR model, several key assumptions must be made in order to simplify the problem, some of which are outlined below. Note that not all of these assumptions are necessary, however the removal of these assumptions does increase the complexity of the problem. The PFR model can be used to model multiple reactions as well as reactions involving changing temperatures, pressures and densities of the flow. Although these complications are ignored in what follows, they are often relevant to industrial processes.
Assumptions:
Plug flow
Steady state
Constant density (reasonable for some liquids but a 20% error for polymerizations; valid for gases only if there is no pressure drop, no net change in the number of moles, nor any large temperature change)
Single reaction occurring in the bulk of the fluid (homogeneously).
A material balance on the differential volume of a fluid element, or plug, on species i of axial length dx between x and x + dx gives:
[accumulation] = [in] - [out] + [generation] - [consumption]
Accumulation is 0 under steady state; therefore, the above mass balance can be re-written as follows:
1. Fi(x) - Fi(x + dx) + At dx νi r = 0 .
where:
x is the reactor tube axial position, m
dx the differential thickness of fluid plug
the index i refers to the species i
Fi(x) is the molar flow rate of species i at the position x, mol/s
D is the tube diameter, m
At is the tube transverse cross sectional area, m2
ν is the stoichiometric coefficient, dimensionless
r is the volumetric source/sink term (the reaction rate), mol/m3s.
The flow linear velocity, u (m/s) and the concentration of species i, Ci (mol/m3) can be introduced as:
u = Q / At    and    Ci = Fi / (At u) = Fi / Q
where Q is the volumetric flow rate.
On application of the above to Equation 1, the mass balance on i becomes:
2. At u Ci(x) - At u Ci(x + dx) + At dx νi r = 0 .
When like terms are cancelled and the limit dx → 0 is applied to Equation 2 the mass balance on species i becomes
3. u dCi/dx = νi r ,
The temperature dependence of the reaction rate, r, can be estimated using the Arrhenius equation. Generally, as the temperature increases so does the rate at which the reaction occurs. Residence time, , is the average amount of time a discrete quantity of reagent spends inside the tank.
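A minimal sketch of the Arrhenius dependence mentioned here; the pre-exponential factor and activation energy below are assumed example values, not data from this article.

```python
# Arrhenius estimate of a first-order rate constant k(T) = A * exp(-Ea / (R*T)).
import math

R = 8.314  # J/(mol*K)

def arrhenius(A, Ea, T):
    return A * math.exp(-Ea / (R * T))

# Assumed example values: A = 1e7 1/s, Ea = 60 kJ/mol.
for T in (300.0, 350.0, 400.0):
    print(T, arrhenius(1.0e7, 60.0e3, T))   # the rate constant grows rapidly with temperature
```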
Assume:
isothermal conditions, or constant temperature (k is constant)
single, irreversible reaction (νA = -1)
first-order reaction (r = k CA)
After integration of Equation 3 using the above assumptions, solving for CA(x) we get an explicit equation for the concentration of species A as a function of position:
4. CA(x) = CA0 exp(-k x / u) ,
where CA0 is the concentration of species A at the inlet to the reactor, appearing from the integration boundary condition.
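Equation 4 can be checked numerically. The sketch below compares a simple Euler integration of the steady-state balance u dCA/dx = -k CA with the exponential solution; the rate constant, velocity, length and inlet concentration are assumed example values.

```python
# First-order reaction A -> products in an ideal, isothermal PFR:
# analytic profile C_A(x) = C_A0 * exp(-k*x/u) versus Euler integration of u * dC_A/dx = -k * C_A.
import math

k, u, L, C_A0 = 0.5, 1.0, 10.0, 2.0   # 1/s, m/s, m, mol/m^3 (assumed example values)
n = 1000
dx = L / n

C = C_A0
for i in range(n):
    C += dx * (-k * C / u)            # Euler step of dC/dx = -k*C/u

analytic = C_A0 * math.exp(-k * L / u)
print(C, analytic)                    # the two values agree closely
print(1.0 - analytic / C_A0)          # fractional conversion of A at the outlet, about 0.993
```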
Operation and uses
PFRs are used to model the chemical transformation of compounds as they are transported in systems resembling "pipes". The "pipe" can represent a variety of engineered or natural conduits through which liquids or gases flow. (e.g. rivers, pipelines, regions between two mountains, etc.)
An ideal plug flow reactor has a fixed residence time: any fluid (plug) that enters the reactor at time t will exit the reactor at time t + τ, where τ is the residence time of the reactor. The residence time distribution function is therefore a Dirac delta function at τ. A real plug flow reactor has a residence time distribution that is a narrow pulse around the mean residence time.
A typical plug flow reactor could be a tube packed with some solid material (frequently a catalyst). Typically these types of reactors are called packed bed reactors or PBR's. Sometimes the tube will be a tube in a shell and tube heat exchanger.
When a plug flow model can not be applied, the dispersion model is usually employed.
Residence-time distribution
The residence-time distribution (RTD) of a reactor is a characteristic of the mixing that occurs in the chemical reactor. There is no axial mixing in a plug-flow reactor, and this omission is reflected in the RTD which is exhibited by this class of reactors.
Real plug flow reactors do not satisfy the idealized flow patterns of back-mix flow or plug flow. Deviation from ideal behavior can be due to channeling of fluid through the vessel, recycling of fluid within the vessel, or the presence of a stagnant region or dead zone of fluid in the vessel. Real plug flow reactors with non-ideal behavior have also been modelled. To predict the exact behavior of a vessel as a chemical reactor, the RTD or stimulus-response technique is used. The tracer technique, the most widely used method for the study of axial dispersion, is usually used in the form of:
Pulse input
Step input
Cyclic input
Random input
The RTD is determined experimentally by injecting an inert chemical, molecule, or atom, called a tracer, into the reactor at some time t = 0 and then measuring the tracer concentration, C, in the effluent stream as a function of time.
The RTD curve of fluid leaving a vessel is called the E-Curve. This curve is normalized in such a way that the area under it is unity:
∫0∞ E(t) dt = 1     (1)
The mean age of the exit stream or mean residence time is:
τ = ∫0∞ t E(t) dt     (2)
When a tracer is injected into a reactor at a location more than two or three particle diameters downstream from the entrance and measured some distance upstream from the exit, the system can be described by the dispersion model with combinations of open or close boundary conditions. For such a system where there is no discontinuity in type of flow at the point of tracer injection or at the point of tracer measurement, the variance for open-open system is:
σθ² = 2/Pe + 8/Pe²     (3)
Where,
Pe = uL/D     (4)
which represents the ratio of rate of transport by convection to rate of transport by diffusion or dispersion.
L = characteristic length (m)
D = effective dispersion coefficient (m2/s)
u = superficial velocity (m/s) based on empty cross-section
Vessel dispersion number is defined as D/uL.
The variance of a continuous distribution measured at a finite number of equidistant locations is given by:
σ² = (Σ ti² Ci / Σ Ci) − τ²     (5)
Where mean residence time τ is given by:
τ = Σ ti Ci / Σ Ci     (6)
σθ² = σ² / τ²     (7)
Thus (σθ)² can be evaluated from the experimental data on C vs. t and, for known values of u and L, the dispersion number can be obtained from eq. (3) as:
D/uL = [√(1 + 8σθ²) − 1] / 8     (8)
Thus axial dispersion coefficient DL can be estimated (L = packed height)
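As an illustration of eqs. (5)–(8) as reconstructed above, the following Python sketch computes the mean residence time, the dimensionless variance and a dispersion-number estimate from discrete tracer data; the concentration values, velocity and packed height are made up, and the inversion of eq. (3) should be treated as a sketch rather than a definitive recipe.

import math

# hypothetical tracer response: equidistant sample times (s) and outlet concentrations
t = [0, 1, 2, 3, 4, 5, 6, 7, 8]
C = [0, 1, 5, 8, 10, 8, 6, 4, 2]

tau = sum(ti * ci for ti, ci in zip(t, C)) / sum(C)              # eq. (6)
var = sum(ti**2 * ci for ti, ci in zip(t, C)) / sum(C) - tau**2  # eq. (5)
sigma_theta2 = var / tau**2                                      # eq. (7)

# invert sigma_theta^2 = 2/Pe + 8/Pe^2 for the dispersion number D/uL, eq. (8)
dispersion_number = (math.sqrt(1 + 8 * sigma_theta2) - 1) / 8

u, L = 0.05, 1.2                  # assumed superficial velocity (m/s) and packed height (m)
D_L = dispersion_number * u * L   # axial dispersion coefficient (m2/s)
print(tau, sigma_theta2, dispersion_number, D_L)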
As mentioned before, there are also other boundary conditions that can be applied to the dispersion model giving different relationships for the dispersion number.
Advantages
From the safety technical point of view the PFR has the advantages that
It operates in a steady state
It is well controllable
Large heat transfer areas can be installed
Concerns
The main problems lie in the difficult and sometimes critical start-up and shut-down operations.
Applications
Plug flow reactors are used for some of the following applications:
Large-scale production
Fast reactions
Homogeneous or heterogeneous reactions
Continuous production
High-temperature reactions
See also
Flow chemistry
Continuous stirred-tank reactor
Laminar flow reactor
Microreactor
Oscillatory baffled reactor
References and sources
Chemical reactors | Plug flow reactor model | [
"Chemistry",
"Engineering"
] | 1,826 | [
"Chemical reactors",
"Chemical reaction engineering",
"Chemical equipment"
] |
3,173,705 | https://en.wikipedia.org/wiki/Packed%20bed | In chemical processing, a packed bed is a hollow tube, pipe, or other vessel that is filled with a packing material. The packed bed can be randomly filled with small objects like Raschig rings or else it can be a specifically designed structured packing. Packed beds may also contain catalyst particles or adsorbents such as zeolite pellets, granular activated carbon, etc.
The purpose of a packed bed is typically to improve contact between two phases in a chemical or similar process. Packed beds can be used in a chemical reactor, a distillation process, or a scrubber, but packed beds have also been used to store heat in chemical plants. In this case, hot gases are allowed to escape through a vessel that is packed with a refractory material until the packing is hot. Air or other cool gas is then fed back to the plant through the hot bed, thereby pre-heating the air or gas feed.
Applications
A packed bed used to perform separation processes, such as absorption, stripping, and distillation is known as a packed column. Columns used in certain types of chromatography consisting of a tube filled with packing material can also be called packed columns and their structure has similarities to packed beds.
The column bed can be filled with randomly dumped packing material (creating a random packed bed) or with structured packing sections, which are arranged in a way that force fluids to take complicated paths through the bed (creating a structured packed bed). In the column, liquids tend to wet the surface of the packing material and the vapors pass across this wetted surface, where mass transfer takes place. Packing materials can be used instead of trays to improve separation in distillation columns. Packing offers the advantage of a lower pressure drop across the column (when compared to plates or trays), which is beneficial while operating under vacuum. Differently shaped packing materials have different surface areas and void space between the packing. Both of these factors affect packing performance.
Another factor in performance, in addition to the packing shape and surface area, is the liquid and vapor distribution that enters the packed bed. The number of theoretical stages required to make a given separation is calculated using a specific vapor to liquid ratio. If the liquid and vapor are not evenly distributed across the superficial tower area as it enters the packed bed, the liquid to vapor ratio will not be correct and the required separation will not be achieved. The packing will appear to not be working properly. The height equivalent to a theoretical plate (HETP) will be greater than expected. The problem is not the packing itself but the mal-distribution of the fluids entering the packed bed. These columns can contain liquid distributors and redistributors which help to distribute the liquid evenly over a section of packing, increasing the efficiency of the mass transfer. The design of the liquid distributors used to introduce the feed and reflux to a packed bed is critical to making the packing perform at maximum efficiency.
Packed columns have a continuous vapor-equilibrium curve, unlike conventional tray distillation in which every tray represents a separate point of vapor-liquid equilibrium. However, when modeling packed columns, it is useful to compute a number of theoretical plates to denote the separation efficiency of the packed column with respect to more traditional trays. In design, the number of necessary theoretical equilibrium stages is first determined and then the packing height equivalent to a theoretical equilibrium stage, known as the height equivalent to a theoretical plate (HETP), is also determined. The total packing height required is the number of theoretical stages multiplied by the HETP.
Packed Bed Reactors (PBRs)
Packed bed reactors are reactor vessels containing a fixed bed of catalytic material. They are widely used in the chemical process industry and find primary use in heterogeneous, gas-phase, catalytic reactions. The advantages of using a packed bed reactor include the high conversion of reactants per unit mass of catalyst, relatively low operating costs, and continuous operation. Disadvantages include the presence of thermal gradients throughout the bed, poor temperature control, and difficult servicing of the reactor.
Theory
The Ergun equation can be used to predict the pressure drop along the length of a packed bed given the fluid velocity, the packing size, and the viscosity and density of the fluid.
The Ergun equation, while reliable for systems on the surface of the earth, is unreliable for predicting the behavior of systems in microgravity. Experiments are currently underway aboard the International Space Station to collect data and develop reliable models for in-orbit packed-bed reactors.
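As a rough illustration (not taken from the cited sources), a short Python sketch of the Ergun equation for the pressure drop per unit bed length follows; the fluid and packing properties are assumed values.

def ergun_dp_per_length(u, d_p, eps, mu, rho):
    # Ergun equation: viscous (Blake-Kozeny) term plus inertial (Burke-Plummer) term
    # u: superficial velocity (m/s), d_p: particle diameter (m), eps: void fraction,
    # mu: fluid viscosity (Pa s), rho: fluid density (kg/m3); returns Pa per metre of bed
    viscous = 150.0 * mu * (1 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * (1 - eps) * rho * u ** 2 / (eps ** 3 * d_p)
    return viscous + inertial

# hypothetical air flow through a bed of 5 mm spheres with 40% voidage
print(ergun_dp_per_length(u=0.5, d_p=0.005, eps=0.4, mu=1.8e-5, rho=1.2))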
Monitoring
The performance of a packed bed is highly dependent on the flow of material through it, which in turn is dependent on the packing and how the flow is managed. Tomographic techniques such as near-infrared, x-ray, gamma ray, electrical capacitance, electrical resistance tomography are used to quantify liquid distribution patterns in packed columns; choice of tomographic technique depends on the primary measurement of interest, randomness of packing, safety requirements, desired data acquisition rate, and budget.
See also
Bibliography
References
Distillation
Chemical equipment | Packed bed | [
"Chemistry",
"Engineering"
] | 1,026 | [
"Chemical equipment",
"Distillation",
"nan",
"Separation processes"
] |
3,175,932 | https://en.wikipedia.org/wiki/Phosphorene | Phosphorene is a two-dimensional material consisting of phosphorus. It consists of a single layer of black phosphorus, the most stable allotrope of phosphorus. Phosphorene is analogous to graphene (single layer graphite). Among two-dimensional materials, phosphorene is a competitor to graphene because it has a nonzero fundamental band gap that can be modulated by strain and the number of layers in a stack. Phosphorene was first isolated in 2014 by mechanical exfoliation. Liquid exfoliation is a promising method for scalable phosphorene production.
History
In 1914 black phosphorus, a layered, semiconducting allotrope of phosphorus, was synthesized. This allotrope exhibits high carrier mobility. In 2014, several groups isolated single-layer phosphorene, a monolayer of black phosphorus. It attracted renewed attention because of its potential in optoelectronics and electronics due to its band gap, which can be tuned via modifying its thickness, anisotropic photoelectronic properties and carrier mobility. Phosphorene was initially prepared using mechanical cleavage, a commonly used technique in graphene production.
In 2023, alloys of arsenic-phosphorene displayed higher hole mobility than pure phosphorene and were also magnetic.
Synthesis
Synthesis of phosphorene is a significant challenge. Currently, there are two main ways of phosphorene production: scotch-tape-based microcleavage and liquid exfoliation, while several other methods are being developed as well. Phosphorene production from plasma etching has also been reported.
In scotch-tape-based microcleavage, phosphorene is mechanically exfoliated from a bulk of black phosphorus crystal using scotch-tape. Phosphorene is then transferred on a Si/SiO2 substrate, where it is then cleaned with acetone, isopropyl alcohol and methanol to remove any scotch tape residue. The sample is then heated to 180 °C to remove solvent residue.
In the liquid exfoliation method, first reported by Brent et al. in 2014 and modified by others, bulk black phosphorus is first ground in a mortar and pestle and then sonicated in deoxygenated, anhydrous organic liquids such as NMP under an inert atmosphere using low-power bath sonication. Suspensions are then centrifuged for 30 minutes to filter out the unexfoliated black phosphorus. The resulting 2D monolayer and few-layer phosphorene have an unoxidized, crystalline structure, while exposure to air oxidizes the phosphorene and produces acid.
Another variation of liquid exfoliation is "basic N-methyl-2-pyrrolidone (NMP) liquid exfoliation". Bulk black phosphorus is added to a saturated NaOH/NMP solution, which is further sonicated for 4 hours to conduct liquid exfoliation. The solution is then centrifuged twice, first for 10 minutes to remove any unexfoliated black phosphorus and then for 20 minutes at a higher speed to separate thick layers of phosphorene (5–12 layers) from NMP. The supernatant is then centrifuged again at higher speed for another 20 minutes to separate thinner layers of phosphorene (1–7 layers). The precipitate from centrifugation is then redispersed in water and washed several times with deionized water. The phosphorene/water solution is dropped onto silicon with a 280-nm SiO2 surface, where it is further dried under vacuum. The NMP liquid exfoliation method was shown to yield phosphorene with controllable size and layer number, excellent water stability, and high yield.
The disadvantage of the current methods includes long sonication time, high boiling point solvents, and low efficiency. Therefore, other physical methods for liquid exfoliation are still under development. A laser-assisted method developed by Zheng and co-workers showed a promising yield of up to 90% within 5 minutes. The laser photon interacts with the surface of bulk black phosphorus crystal, causing a plasma and solvent bubbles to weaken the interlayer interaction. Depending on the laser energy, solvent (ethanol, methanol, hexane, etc.) and irradiation time, the layer number and lateral size of the phosphorene were controlled.
The high yield production of phosphorene has been demonstrated by many groups in solvents, but to realize the potential applications of this material, it is crucial to deposit these free-standing nanosheets in solvents systematically on substrates. H. Kaur et al. demonstrated the synthesis, interface-driven alignment and subsequent functional properties of few-layer semiconducting phosphorene using Langmuir-Blodgett assembly. This is the first study that provides a straightforward and versatile solution to the challenge of assembling nanosheets of phosphorene onto various supports and subsequently using these sheets in an electronic device. Therefore, wet assembly techniques like Langmuir-Blodgett serve as a very valuable new entry point for the exploration of the electronic as well as opto-electronic properties of phosphorene as well as other 2D layered inorganic materials.
It is still a challenge to directly grow 2D phosphorene epitaxially because the stability of black phosphorene is highly sensitive to the substrate, as has been shown by theoretical simulations.
Properties
Structure
Phosphorene 2D materials are composed of individual layers held together by van der Waals forces in lieu of covalent or ionic bonds that are found in most materials. There are three electrons within the 3p orbitals of the phosphorus atom, thus, giving rise to sp3 hybridization of each phosphorus atom within the phosphorene structure. Monolayered phosphorene exhibits the structure of a quadrangular pyramid because three electrons of P atom bond with three other P atoms covalently at 2.18 Å leaving one lone pair. Two of the phosphorus atoms are in the plane of the layer at 99° from one another, and the third phosphorus is between the layers at 103°, yielding an average angle of 102°.
According to density functional theory (DFT) calculations, phosphorene forms in a honeycomb lattice structure with notable nonplanarity in the shape of structural ridges. It is predicted that crystal structure of black phosphorus can be discriminated under high pressure. This is mostly due to the anisotropic compressibility of black phosphorus because of the asymmetrical crystal structures. Subsequently, the van der Waals bond can be greatly compressed in the z-direction. However, there is a great variation in compressibility across the orthogonal x-y plane.
It is reported that controlling the centrifugal speed of production may aid in regulating the thickness of a material. For example, centrifuging at 18,000 rpm during synthesis produced phosphorene with an average diameter of 210 nm and a thickness of 2.8 ± 1.5 nm (2–7 layers).
Band gap and conductivity
Phosphorene has a thickness dependent direct band gap that changes to 1.88 eV in a monolayer from 0.3 eV in the bulk. Increase in band gap value in single-layer phosphorene is predicted to be caused by the absence of interlayer hybridization near the top of the valence and bottom of the conduction band. A pronounced peak centered at around 1.45 eV suggests the band gap structure in few- or single-layer phosphorene difference from bulk crystals.
In vacuum or on weak substrate, an interesting reconstruction with nanotubed termination of phosphorene edge is very easy to happen, transforming phosphorene edge from metallic to semiconducting.
Air stability
One major disadvantage of phosphorene is its limited air stability. Composed of hygroscopic phosphorus and having an extremely high surface-to-volume ratio, phosphorene reacts with water vapor and oxygen, assisted by visible light, and degrades within hours. Through the degradation process, phosphorene (solid) reacts with oxygen/water to develop liquid-phase acid 'bubbles' on the surface, and finally evaporates (vapor) to fully vanish (S-B-V degradation), severely reducing overall quality.
Applications
Transistor
Researchers have fabricated transistors of phosphorene to examine its performance in actual devices. A phosphorene-based transistor consists of a 1.0 μm channel and uses few-layer phosphorene with a thickness varying from 2.1 to over 20 nm. A reduction of the total resistance with decreasing gate voltage is observed, indicating the p-type characteristic of phosphorene. The linear I-V relationship of the transistor at low drain bias suggests good contact properties at the phosphorene/metal interface. Good current saturation at high drain bias values was observed. However, it was seen that the mobility is reduced in few-layer phosphorene when compared to bulk black phosphorus. The field-effect mobility of the phosphorene-based transistor shows a strong thickness dependence, peaking at around 5 nm and decreasing steadily with further increase of crystal thickness.
An atomic layer deposition (ALD) dielectric layer and/or a hydrophobic polymer is used as an encapsulation layer in order to prevent device degradation and failure. Phosphorene devices are reported to maintain their function for weeks with an encapsulation layer, whereas they experience device failure within a week when exposed to ambient conditions.
Battery electrode
Phosphorene is considered a promising anode material for rechargeable batteries, such as lithium-ion batteries. The interlayer space allows lithium storage and transfer. The layer number and lateral size of phosphorene affect the stability and capacity of the anode.
Inverter
Researchers have also constructed the CMOS inverter (logic circuit) by combining a phosphorene PMOS transistor with a MoS2 NMOS transistor, achieving high heterogeneous integration of semiconducting phosphorene crystals as a new channel material for potential electronic applications. In the inverter, the power supply voltage is set to be 1 V. The output voltage shows a clear transition from VDD to 0 within the input voltage range from −10 to −2 V. A maximum gain of ~1.4 is attained.
Solar-cell donor material (optoelectronics)
The potential application of mixed bilayer phosphorene as a solar-cell donor material has been examined as well.
Flexible circuits
Phosphorene is a promising candidate for flexible nanosystems due to its ultra-thin nature with ideal electrostatic control and superior mechanical flexibility. Researchers have demonstrated flexible transistors, circuits and an AM demodulator based on few-layer phosphorus, showing enhanced ambipolar transport with room-temperature carrier mobility as high as ~310 cm2/Vs and strong current saturation. Fundamental circuit units including a digital inverter, voltage amplifier and frequency doubler have been realized. Radio frequency (RF) transistors with a highest intrinsic cutoff frequency of 20 GHz have been realized for potential applications in high-frequency flexible smart nanosystems.
See also
Borophene
Germanene
Graphene
Silicene
Stanene
References
Phosphorus
Semiconductor materials
Monolayers | Phosphorene | [
"Physics",
"Chemistry"
] | 2,408 | [
"Monolayers",
"Semiconductor materials",
"Atoms",
"Matter"
] |
3,176,865 | https://en.wikipedia.org/wiki/Bredt%27s%20rule | In organic chemistry, an anti-Bredt molecule is a bridged molecule with a double bond at the bridgehead. Bredt's rule is the empirical observation that such molecules only form in large ring systems. For example, two of the following norbornene isomers violate Bredt's rule, and are too unstable to prepare:
The rule is named after Julius Bredt, who first discussed it in 1902 and codified it in 1924. There are a few instances where the anti-Bredt phenomenon is mentioned, but the isolation of these molecules is difficult, so they are typically trapped in situ. Authors such as Mehta (2002) and Khan (2015) discovered the existence of anti-Bredt olefins. Later, in 2024, Neil Garg and his team demonstrated that the formation of anti-Bredt molecules is possible, even if only as short-lived intermediates.
Bredt's rule results from geometric strain: a double bond at a bridgehead atom necessarily must be trans in at least one ring. For small rings (fewer than eight atoms), a trans alkene cannot be achieved without substantial ring and angle strain (the p orbitals are improperly aligned for a π bond). Bredt's rule also applies to carbocations and, to a lesser degree, free radicals, because these intermediates also prefer a planar geometry with 120° angles and sp2 hybridization. It generally does not apply to hypervalent heteroatoms, although they are commonly written with a formal double bond.
There has been an active research program to seek anti-Bredt molecules, with success quantified in S, the non-bridgehead atom count. The above norbornene system has S = 5, and Fawcett originally postulated that stability required S ≥ 9 in bicyclic systems and S ≥ 11 in tricyclic systems. For bicyclic systems examples now indicate a limit of S ≥ 7, with several such compounds having been prepared. Bridgehead double bonds can be found in some natural products.
Bredt's rule can predict the viability of competing elimination reactions in a bridged system. For example, the metal alkyl complexes usually decompose quickly via beta elimination, but Bredt strain prevents tetranorbornyl complexes from doing so. Bicyclo[5.3.1]undecane-11-one-1-carboxylic acid undergoes decarboxylation on heating to 132 °C, but the similar compound bicyclo[2.2.1]heptan-7-one-1-carboxylic acid remains stable beyond 500 °C, because the decarboxylation proceeds through an anti-Bredt enol.
Bredt's rule may also prevent a molecule from resonating with certain valence bond isomers. 2-Quinuclidonium does not exhibit the usual reactivity of an amide, because the iminoether tautomer would violate the rule.
Although exceptions to the rule have long been known, in 2024 chemists from the University of California, Los Angeles demonstrated a general method to access anti-Bredt olefins with S ≤ 7.
See also
Double bond rule — another geometric-strain constraint on alkenes
trans-Cyclooctene — smallest unstrained trans cycloalkene
References
Eponymous chemical rules
Physical organic chemistry
Stereochemistry | Bredt's rule | [
"Physics",
"Chemistry"
] | 708 | [
"Stereochemistry",
"Space",
"nan",
"Physical organic chemistry",
"Spacetime"
] |
3,177,013 | https://en.wikipedia.org/wiki/Diaper%20Genie | Diaper Genie is a baby diaper disposal system. It consists of a large plastic container with a plastic lid. The system seals diapers individually in a scented film to protect against germs and odors. By opening the lid on the top of the canister, a soiled diaper may be inserted into the "mouth" of the container. After inserting the diaper, the lid is replaced and twisted three full rotations to seal the diaper inside. When the container is filled with dirty diapers, it can be emptied by unlatching the bottom of the canister, where the diapers fall out still individually sealed. The resulting string of sealed diapers is colloquially known as a "diaper sausage."
The product was initially a creation of British inventors (currently marketed under the name "Sangenic" by Mayborn in the UK), and was brought to prominence in the US in the mid-1990s.
Diaper Genie is a brand of Playtex Products, Inc., which bought the Diaper Genie business from Mondial Industries L.P. in 1999.
References
External links
Diapers
Waste containers | Diaper Genie | [
"Biology"
] | 234 | [
"Diapers",
"Excretion"
] |
3,177,451 | https://en.wikipedia.org/wiki/Conjugate%20variables%20%28thermodynamics%29 | In thermodynamics, the internal energy of a system is expressed in terms of pairs of conjugate variables such as temperature and entropy, pressure and volume, or chemical potential and particle number. In fact, all thermodynamic potentials are expressed in terms of conjugate pairs. The product of two quantities that are conjugate has units of energy or sometimes power.
For a mechanical system, a small increment of energy is the product of a force times a small displacement. A similar situation exists in thermodynamics. An increment in the energy of a thermodynamic system can be expressed as the sum of the products of certain generalized "forces" that, when unbalanced, cause certain generalized "displacements", and the product of the two is the energy transferred as a result. These forces and their associated displacements are called conjugate variables. The thermodynamic force is always an intensive variable and the displacement is always an extensive variable, yielding an extensive energy transfer. The intensive (force) variable is the derivative of the internal energy with respect to the extensive (displacement) variable, while all other extensive variables are held constant.
The thermodynamic square can be used as a tool to recall and derive some of the thermodynamic potentials based on conjugate variables.
In the above description, the product of two conjugate variables yields an energy. In other words, the conjugate pairs are conjugate with respect to energy. In general, conjugate pairs can be defined with respect to any thermodynamic state function. Conjugate pairs with respect to entropy are often used, in which the product of the conjugate pairs yields an entropy. Such conjugate pairs are particularly useful in the analysis of irreversible processes, as exemplified in the derivation of the Onsager reciprocal relations.
Overview
Just as a small increment of energy in a mechanical system is the product of a force times a small displacement, so an increment in the energy of a thermodynamic system can be expressed as the sum of the products of certain generalized "forces" which, when unbalanced, cause certain generalized "displacements" to occur, with their product being the energy transferred as a result. These forces and their associated displacements are called conjugate variables. For example, consider the conjugate pair. The pressure acts as a generalized force: Pressure differences force a change in volume , and their product is the energy lost by the system due to work. Here, pressure is the driving force, volume is the associated displacement, and the two form a pair of conjugate variables. In a similar way, temperature differences drive changes in entropy, and their product is the energy transferred by heat transfer. The thermodynamic force is always an intensive variable and the displacement is always an extensive variable, yielding an extensive energy. The intensive (force) variable is the derivative of the (extensive) internal energy with respect to the extensive (displacement) variable, with all other extensive variables held constant.
The theory of thermodynamic potentials is not complete until one considers the number of particles in a system as a variable on par with the other extensive quantities such as volume and entropy. The number of particles is, like volume and entropy, the displacement variable in a conjugate pair. The generalized force component of this pair is the chemical potential. The chemical potential may be thought of as a force which, when imbalanced, pushes an exchange of particles, either with the surroundings, or between phases inside the system. In cases where there are a mixture of chemicals and phases, this is a useful concept. For example, if a container holds liquid water and water vapor, there will be a chemical potential (which is negative) for the liquid which pushes the water molecules into the vapor (evaporation) and a chemical potential for the vapor, pushing vapor molecules into the liquid (condensation). Only when these "forces" equilibrate, and the chemical potential of each phase is equal, is equilibrium obtained.
The most commonly considered conjugate thermodynamic variables are (with corresponding SI units):
Thermal parameters:
Temperature: T (K)
Entropy: S (J K−1)
Mechanical parameters:
Pressure: P (Pa = J m−3)
Volume: V (m3 = J Pa−1)
or, more generally,
Stress: σij (Pa = J m−3)
Volume × Strain: V × εij (m3 = J Pa−1)
Material parameters:
chemical potential: μ (J)
particle number: N (particles or mole)
For a system with n different types of particles, a small change in the internal energy is given by:
dU = T dS − P dV + Σi μi dNi
where U is internal energy, T is temperature, S is entropy, P is pressure, V is volume, μi is the chemical potential of the i-th particle type, and Ni is the number of i-type particles in the system.
Here, the temperature, pressure, and chemical potential are the generalized forces, which drive the generalized changes in entropy, volume, and particle number respectively. These parameters all affect the internal energy of a thermodynamic system. A small change in the internal energy of the system is given by the sum of the flow of energy across the boundaries of the system due to the corresponding conjugate pair. These concepts will be expanded upon in the following sections.
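To illustrate how the intensive variables arise as derivatives of the internal energy, here is a small Python/SymPy sketch; the energy function is a monatomic-ideal-gas-like form with constants absorbed into c and Boltzmann's constant set to 1, used here purely for illustration.

import sympy as sp

S, V, N, c = sp.symbols('S V N c', positive=True)

# toy fundamental relation U(S, V, N); constants absorbed into c, k_B = 1
U = c * N**sp.Rational(5, 3) * V**sp.Rational(-2, 3) * sp.exp(sp.Rational(2, 3) * S / N)

T = sp.diff(U, S)     # temperature:         T  =  dU/dS at constant V, N
P = -sp.diff(U, V)    # pressure:            P  = -dU/dV at constant S, N
mu = sp.diff(U, N)    # chemical potential:  mu =  dU/dN at constant S, V

print(sp.simplify(T - 2*U/(3*N)))  # 0, i.e. U = (3/2) N T for this toy model
print(sp.simplify(P*V - N*T))      # 0, i.e. the ideal-gas law P V = N T is recovered
print(sp.simplify(mu))             # chemical potential of the toy model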
While dealing with processes in which systems exchange matter or energy, classical thermodynamics is not concerned with the rate at which such processes take place, termed kinetics. For this reason, the term thermodynamics is usually used synonymously with equilibrium thermodynamics. A central notion for this connection is that of quasistatic processes, namely idealized, "infinitely slow" processes. Time-dependent thermodynamic processes far away from equilibrium are studied by non-equilibrium thermodynamics. This can be done through linear or non-linear analysis of irreversible processes, allowing systems near and far away from equilibrium to be studied, respectively.
Pressure/volume and stress/strain pairs
As an example, consider the conjugate pair. The pressure acts as a generalized force – pressure differences force a change in volume, and their product is the energy lost by the system due to mechanical work. Pressure is the driving force, volume is the associated displacement, and the two form a pair of conjugate variables.
The above holds true only for non-viscous fluids. In the case of viscous fluids, plastic and elastic solids, the pressure force is generalized to the stress tensor, and changes in volume are generalized to the volume multiplied by the strain tensor. These then form a conjugate pair. If σij is the ij component of the stress tensor, and εij is the ij component of the strain tensor, then the mechanical work done as the result of a stress-induced infinitesimal strain dεij is:
δw = V Σij σij dεij
or, using Einstein notation for the tensors, in which repeated indices are assumed to be summed:
δw = V σij dεij
In the case of pure compression (i.e. no shearing forces), the stress tensor is simply the negative of the pressure times the unit tensor, σij = −P δij, so that
δw = −P V dεii
The trace of the strain tensor (εii) is the fractional change in volume, so that the above reduces to δw = −P dV, as it should.
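A small numerical check of this reduction, sketched in Python with NumPy and arbitrary numbers: for a pure-compression stress tensor, the tensorial work V σij dεij indeed equals −P dV with dV = V tr(dε).

import numpy as np

P, V = 2.0e5, 1.0e-3                    # hypothetical pressure (Pa) and volume (m3)
sigma = -P * np.eye(3)                  # pure compression: sigma_ij = -P * delta_ij
d_eps = np.diag([1e-5, 2e-5, -0.5e-5])  # arbitrary small (diagonal) strain increment

work_tensorial = V * np.sum(sigma * d_eps)  # V * sigma_ij * d eps_ij, summed over i and j
dV = V * np.trace(d_eps)                    # volume change from the strain trace
print(work_tensorial, -P * dV)              # the two values agree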
Temperature/entropy pair
In a similar way, temperature differences drive changes in entropy, and their product is the energy transferred by heating. Temperature is the driving force, entropy is the associated displacement, and the two form a pair of conjugate variables. The temperature/entropy pair of conjugate variables is the only heat term; the other terms are essentially all various forms of work.
Chemical potential/particle number pair
The chemical potential is like a force which pushes an increase in particle number. In cases where there are a mixture of chemicals and phases, this is a useful concept. For example, if a container holds water and water vapor, there will be a chemical potential (which is negative) for the liquid, pushing water molecules into the vapor (evaporation) and a chemical potential for the vapor, pushing vapor molecules into the liquid (condensation). Only when these "forces" equilibrate is equilibrium obtained.
See also
Generalized coordinate and generalized force: analogous conjugate variable pairs found in classical mechanics.
Intensive and extensive properties
Bond graph
References
Further reading
Thermodynamic properties | Conjugate variables (thermodynamics) | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,720 | [
"Thermodynamic properties",
"Quantity",
"Thermodynamics",
"Physical quantities"
] |
3,178,199 | https://en.wikipedia.org/wiki/Tethering | Tethering or phone-as-modem (PAM) is the sharing of a mobile device's Internet connection with other connected computers. Connection of a mobile device with other devices can be done over wireless LAN (Wi-Fi), over Bluetooth or by physical connection using a cable, for example through USB.
If tethering is done over WLAN, the feature may be branded as a personal hotspot or mobile hotspot, which allows the device to serve as a portable router. Mobile hotspots may be protected by a PIN or password. The Internet-connected mobile device can act as a portable wireless access point and router for devices connected to it.
Mobile devices' OS support
Many mobile devices are equipped with software to offer tethered Internet access. Windows Mobile 6.5, Windows Phone 7, Android (starting from version 2.2), and iOS 3.0 (or later) offer tethering over a Bluetooth PAN or a USB connection. Tethering over Wi-Fi, also known as Personal Hotspot, is available on iOS starting with iOS 4.2.5 (or later) on iPhone 4 or iPad (3rd gen), certain Windows Mobile 6.5 devices like the HTC HD2, Windows Phone 7, 8 and 8.1 devices (varies by manufacturer and model), and certain Android phones (varies widely depending on carrier, manufacturer, and software version).
For IPv4 networks, the tethering normally works via NAT on the handset's existing data connection, so from the network point of view, there is just one device with a single IPv4 network address, though it is technically possible to attempt to identify multiple machines.
On some mobile network operators, this feature is contractually unavailable by default, and may be activated only by paying to add a tethering package to a data plan or choosing a data plan that includes tethering. This is done primarily because with a computer sharing the network connection, there is typically substantially more network traffic.
Some network-provided devices have carrier-specific software that may deny the inbuilt tethering ability normally available on the device, or enable it only if the subscriber pays an additional fee. Some operators have asked Google or any mobile device producer using Android to completely remove tethering capability from the operating system on certain devices. Handsets purchased SIM-free, without a network provider subsidy, are often unhindered with regard to tethering.
There are, however, several ways to enable tethering on restricted devices without paying the carrier for it, including third-party USB tethering apps such as PDAnet, rooting Android devices or jailbreaking iOS devices and installing a tethering application on the device. Tethering is also available as a downloadable third-party application on most Symbian mobile phones as well as on the MeeGo platform and on WebOS mobiles phones.
In carriers' contracts
Depending on the wireless carrier, a user's cellular device may have restricted functionality. While tethering may be allowed at no extra cost, some carriers impose a one-time charge to enable tethering and others forbid tethering or impose added data charges. Contracts that advertise "unlimited" data usage often have limits detailed in a fair usage policy.
United Kingdom
Since 2014, all pay-monthly plans from the Three network in the UK include a "personal hotspot" feature.
Earlier, two tethering-permitted mobile plans offered unlimited data: The Full Monty on T-Mobile, and The One Plan on Three. Three offered tethering as a standard feature until early 2012, retaining it on selected plans. T-Mobile dropped tethering on its unlimited data plans in late 2012.
United States
As cited in Sprint Nextel's "Terms of Service":
"Except with Phone-as-Modem plans, you may not use a phone (including a Bluetooth phone) as a modem in connection with a computer, PDA, or similar device. We reserve the right to deny or terminate service without notice for any misuse or any use that adversely affects network performance."
T-Mobile US has a similar clause in its "Terms & Conditions":
"Unless explicitly permitted by your Data Plan, other uses, including for example, using your Device as a modem or tethering your Device to a personal computer or other hardware, are not permitted."
T-Mobile's Simple Family or Simple Business plans offer "Hotspot" from devices that offer that function (such as Apple iPhone) to up to five devices. Since March 27, 2014, 1000 MB per month is free in the US with cellular service. The host device has unlimited slow internet for the rest of the month, and all month while roaming in 100 countries, but with no tethering. For US$10 or $20 per month more per host device, the amount of data available for tethering can be increased markedly. The host device cellular services can be canceled, added, or changed at any time, pro-rated, data tethering levels can be changed month-to-month, and T-Mobile no longer requires any long-term service contracts, allowing users to bring their own devices or buy devices from them, independent of whether they continue service with them.
Verizon Wireless and AT&T Mobility offer wired tethering to their plans for a fee, while Sprint Nextel offers a Wi-Fi connected "mobile hotspot" tethering feature at an added charge. However, actions by the Federal Communications Commission (FCC) and a small claims court in California may make it easier for consumers to tether. On July 31, 2012, the FCC released an unofficial announcement of Commission action, decreeing Verizon Wireless must pay $1.25 million to resolve the investigation regarding compliance of the C Block Spectrum (see US Wireless Spectrum Auction of 2008). The announcement also stated that "(Verizon) recently revised its service offerings such that consumers on usage-based pricing plans may tether, using any application, without paying an additional fee." After that judgement, Verizon released "Share Everything" plans that enable tethering, however users must drop old plans they were grandfathered under (such as the Unlimited Data plans) and switch, or pay a tethering fee.
In another instance, Judge Russell Nadel of the Ventura Superior Court awarded AT&T customer Matt Spaccarelli $850, despite the fact that Spaccarelli had violated his terms of service by jailbreaking his iPhone in order to fully utilize his iPhone's hardware. Spaccarelli demonstrated that AT&T had unfairly throttled his data connection. His data shows that AT&T had been throttling his connection after approximately 2 GB of data was used. Spaccarelli responded by creating a personal web page in order to provide information that allows others to file a similar lawsuit, commenting:
"Hopefully with all this concrete data and the courts on our side, AT&T will be forced to change something. Let's just hope it chooses to go the way of Sprint, not T-Mobile."
While T-Mobile did eventually allow tethering, on August 31, 2015, the company announced it will punish users who abuse its unlimited data by violating T-Mobile's rules on tethering (which unlike standard data does carry a 7 GB cap before throttling takes effect) by permanently kicking them off the unlimited plans and making users sign up for tiered data plans. T-Mobile mentioned that it was only a small handful of users who abused the tethering rules by using an Android app that masks T-Mobile's tethering monitoring and uses as much as 2 TBs per month, causing speed issues for most customers who do not abuse the rules.
Germany
Germany has three major cellular providers. The biggest provider, Deutsche Telekom, only states that "[...] cellular services are only provided when used together with a mobile cellular device". Moreover, under point 11.5 of the cellular price list, it is prohibited to make a private cellular connection commercially or publicly available. However, the price list of cellular contracts specifically states that using one's own device as a modem or personal hotspot for personal and private use is permitted.
The next biggest cellular provider, Vodafone, also states in its mobile price list that it does not allow making the personal connection publicly available. A personal hotspot, and tethering in particular, is allowed on all of the contracts mentioned. For example, plans from the "Vodafone Red 2016 S" with 2 GB up to the "Vodafone Young 2020 XL" with unlimited data encourage users to share their data with another personal device.
The third-largest provider, Telefonica O2, generally sells cheaper contracts than the larger providers. With their "o2 free unlimited" contract, they explicitly stated that stationary, non-battery-operated WiFi access points were not allowed to be used with the contract. Therefore, the German consumer-rights organisation sued Telefonica O2. This clause conflicts with net neutrality, which was confirmed by the European Court of Justice. Germany's highest court also confirmed the illegality of contract clauses that would forbid WiFi hotspots, tethering and, in this case, cellular routers.
Wi-Fi sharing
"Wi-Fi sharing" or "Wi-Fi repeating" is a form of tethering through wireless LAN but with a separate use case similar to a wireless repeater/extender. It allows a compatible device to tether its active Wi-Fi connection, without the involvement of cellular networks. It can be useful for example when travelling with multiple devices and not needing to register every device on a public network. Samsung and LG have released smartphones with this ability starting with the Galaxy S7 and V20. It is called Wi-Fi sharing on Samsung Galaxy and One UI. Google have also added this feature for the first time on the Pixel 3.
Microsoft Windows computers also allow the sharing of an active Wi-Fi (or Ethernet) connection through tethering. See also Internet Connection Sharing (ICS).
See also
Internet Connection Sharing
Mobile broadband
Mobile Internet device (MID)
Mobile modems and routers
Open Garden
Smartbook
Smartphone
References
Wireless networking
Mobile telecommunications
Net neutrality | Tethering | [
"Technology",
"Engineering"
] | 2,132 | [
"Mobile telecommunications",
"Wireless networking",
"Net neutrality",
"Computer networks engineering"
] |
14,854,086 | https://en.wikipedia.org/wiki/Dune%20%28mathematics%20software%29 | DUNE (Distributed and Unified Numerics Environment) is a modular C++ library for the solution of partial differential equations using grid-based methods.
The DUNE library is divided into modules. In version 2.9 the core modules are:
general classes and infrastructure: dune-common,
geometry classes: dune-geometry,
grid interface: dune-grid,
linear algebra classes: dune-istl,
local ansatz functions: dune-localfunctions.
In addition, there are several further modules, including some which have been developed by third parties.
History
The development of DUNE started in 2002 on the initiative of Prof. Bastian (Heidelberg University), Dr. Ohlberger (during his habilitation at the University of Freiburg), and Prof. Rumpf (then University of Duisburg-Essen). The aim was a development model which was not attached to a single university, in order to make the project attractive for a wide audience. For the same reason a license was chosen which allows using DUNE together with proprietary libraries. While most of the developers still have a university background, others are providing commercial support for DUNE.
Goals
What sets DUNE apart from other finite element programs is that right from the start the main design goal of DUNE was to allow the coupling of new and legacy codes efficiently. DUNE is primarily a set of abstract interfaces, which embody concepts from scientific computing. These are mainly intended to be used in finite element and finite volume applications, but also finite difference methods are possible.
The central interface is the grid interface. It describes structured and unstructured grids of arbitrary dimension, both with manifold and non-manifold structure. Seven different implementations of the grid interface exist. Four of these are encapsulations of existing grid managers. It is hence possible to directly compare different grid implementations. Functionality for parallel programming is described too.
Implementation
Various C++ techniques such as template programming, generic programming, C++ template metaprogramming, and static polymorphism are used. These are well-known in other areas of software development and are slowly making their way into scientific computing. They allow the compiler to eliminate most of the overhead introduced by the extra layer of abstraction. A high level of standard conformance is required for this from the compiler.
References
External links
DUNE webpage.
Scientific publications about DUNE.
Bibliography
Numerical software
Numerical linear algebra
Scientific simulation software
C++ libraries
Finite element software for Linux
Free software programmed in C++ | Dune (mathematics software) | [
"Mathematics"
] | 496 | [
"Numerical software",
"Mathematical software"
] |
14,855,633 | https://en.wikipedia.org/wiki/Composition%20ring | In mathematics, a composition ring, introduced in , is a commutative ring (R, 0, +, −, ·), possibly without an identity 1 (see non-unital ring), together with an operation
∘ : R × R → R
such that, for any three elements f, g, h ∈ R one has
(f + g) ∘ h = (f ∘ h) + (g ∘ h)
(f · g) ∘ h = (f ∘ h) · (g ∘ h)
(f ∘ g) ∘ h = f ∘ (g ∘ h)
It is not generally the case that f ∘ g = g ∘ f, nor is it generally the case that f ∘ (g + h) (or f ∘ (g · h)) has any algebraic relationship to f ∘ g and f ∘ h.
Examples
There are a few ways to make a commutative ring R into a composition ring without introducing anything new.
Composition may be defined by f ∘ g = 0 for all f, g. The resulting composition ring is rather uninteresting.
Composition may be defined by f ∘ g = f for all f, g. This is the composition rule for constant functions.
If R is a boolean ring, then multiplication may double as composition: f ∘ g = f · g for all f, g.
More interesting examples can be formed by defining a composition on another ring constructed from R.
The polynomial ring R[X] is a composition ring where (f ∘ g)(X) = f(g(X)) for all f, g in R[X].
The formal power series ring R also has a substitution operation, but it is only defined if the series g being substituted has zero constant term (if not, the constant term of the result would be given by an infinite series with arbitrary coefficients). Therefore, the subset of R formed by power series with zero constant coefficient can be made into a composition ring with composition given by the same substitution rule as for polynomials. Since nonzero constant series are absent, this composition ring does not have a multiplicative unit.
If R is an integral domain, the field R(X) of rational functions also has a substitution operation derived from that of polynomials: substituting a fraction g1/g2 for X into a polynomial of degree n gives a rational function with denominator g2ⁿ, and substituting into a fraction is given by (f1/f2) ∘ g = (f1 ∘ g)/(f2 ∘ g).
However, as for formal power series, the composition cannot always be defined when the right operand g is a constant: in the formula given the denominator should not be identically zero. One must therefore restrict to a subring of R(X) to have a well-defined composition operation; a suitable subring is given by the rational functions of which the numerator has zero constant term, but the denominator has nonzero constant term. Again this composition ring has no multiplicative unit; if R is a field, it is in fact a subring of the formal power series example.
The set of all functions from R to R under pointwise addition and multiplication, and with f ∘ g given by composition of functions, is a composition ring. There are numerous variations of this idea, such as the ring of continuous, smooth, holomorphic, or polynomial functions from a ring to itself, when these concepts make sense.
For a concrete example take the ring Z[x], considered as the ring of polynomial maps from the integers to itself. A ring endomorphism
F: Z[x] → Z[x]
of Z[x] is determined by the image under F of the variable x, which we denote by
f = F(x)
and this image f can be any element of Z[x]. Therefore, one may consider the elements f of Z[x] as endomorphisms and assign f ∘ g := f(g), accordingly. One easily verifies that Z[x] satisfies the above axioms. For example, one has
(x² + 3x + 5) ∘ (x − 2) = (x − 2)² + 3(x − 2) + 5 = x² − x + 3.
This example is isomorphic to the given example for R[X] with R equal to Z, and also to the subring of all functions Z → Z formed by the polynomial functions.
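A quick sanity check of the axioms and of the worked example, sketched in Python with SymPy; composition here is ordinary polynomial substitution, as in the R[X] example above, and the three test polynomials are arbitrary.

import sympy as sp

x = sp.symbols('x')

def comp(f, g):
    # composition f ∘ g := f(g), i.e. substitute g for x in f
    return sp.expand(f.subs(x, g))

f, g, h = x**2 + 1, 2*x + 3, x - 5  # arbitrary integer polynomials

# the three composition-ring axioms, checked for polynomial substitution
assert sp.expand(comp(f + g, h) - comp(f, h) - comp(g, h)) == 0
assert sp.expand(comp(f * g, h) - comp(f, h) * comp(g, h)) == 0
assert sp.expand(comp(comp(f, g), h) - comp(f, comp(g, h))) == 0

# the worked example from the text
print(comp(x**2 + 3*x + 5, x - 2))  # x**2 - x + 3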
See also
Composition operator
Polynomial decomposition
Carleman matrix
References
Algebraic structures
Ring theory | Composition ring | [
"Mathematics"
] | 692 | [
"Mathematical structures",
"Mathematical objects",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
14,858,275 | https://en.wikipedia.org/wiki/Interposer | An interposer is an electrical interface routing between one socket or connection to another. The purpose of an interposer is to spread a connection to a wider pitch or to reroute a connection to a different connection.
An interposer can be made of either silicon or organic (printed circuit board-like) material.
Interposer comes from the Latin word "interpōnere", meaning "to put between". They are often used in BGA packages, multi-chip modules and high bandwidth memory.
A common example of an interposer is an integrated circuit die to BGA, such as in the Pentium II. This is done through various substrates, both rigid and flexible, most commonly FR4 for rigid, and polyimide for flexible. Silicon and glass are also evaluated as an integration method. Interposer stacks are also a widely accepted, cost-effective alternative to 3D ICs. There are already several products with interposer technology in the market, notably the AMD Fiji/Fury GPU, and the Xilinx Virtex-7 FPGA. In 2016, CEA Leti demonstrated their second generation 3D-NoC technology which combines small dies ("chiplets"), fabricated at the FDSOI 28 nm node, on a 65 nm CMOS interposer.
Another example of an interposer is the adapter used to plug a SATA drive into a SAS backplane with redundant ports. While SAS drives have two ports that can be used to connect to redundant paths or storage controllers, SATA drives only have a single port. Directly, they can only connect to a single controller or path. SATA drives can be connected to nearly all SAS backplanes without adapters, but using an interposer with a port switching logic allows providing path redundancy.
See also
Die preparation
Integrated circuit
Semiconductor fabrication
References
Integrated circuits | Interposer | [
"Technology",
"Engineering"
] | 386 | [
"Computer engineering",
"Integrated circuits"
] |
14,861,214 | https://en.wikipedia.org/wiki/Central%20Interstate%20Low-Level%20Radioactive%20Waste%20Compact | The Central Interstate Low Level Radioactive Waste Compact is made up of the states of Louisiana, Arkansas, Oklahoma, and Kansas. The compact was established by the "Compact Law" and the "Low-Level Radioactive Waste Policy Amendments of 1985."
The Central Interstate Low Level Radioactive Waste Compact and US Ecology purchased land 2 miles west of Butte, Nebraska in the early 1990s with the intention of placing a dump site there. There was extensive controversy and the dump site was eventually removed from consideration.
Citizens and factions throughout Boyd County, where Butte is located, fought for over 15 years over the placement of a disposal site in this area. Nebraska governors Kay Orr and Ben Nelson were heavily involved on different sides of the issue.
Nebraska was officially removed from the compact after a series of long court battles that ended in 2004. The state of Nebraska had to pay a settlement and there have been attempts made to sell the compact's land just outside Butte.
References
Radioactive waste
United States interstate compacts | Central Interstate Low-Level Radioactive Waste Compact | [
"Physics",
"Chemistry",
"Technology"
] | 197 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Environmental impact of nuclear power",
"Radioactivity",
"Nuclear physics",
"Hazardous waste",
"Radioactive waste"
] |
12,168,753 | https://en.wikipedia.org/wiki/Null%20coalescing%20operator | The null coalescing operator is a binary operator that is part of the syntax for a basic conditional expression in several programming languages, such as (in alphabetical order): C# since version 2.0, Dart since version 1.12.0, PHP since version 7.0.0, Perl since version 5.10 as logical defined-or, PowerShell since 7.0.0, and Swift as nil-coalescing operator.
While its behavior differs between implementations, the null coalescing operator generally returns the result of its left-most operand if it exists and is not null, and otherwise returns the right-most operand. This behavior allows a default value to be defined for cases where a more specific value is not available.
In contrast to the ternary conditional if operator used as x ? x : y, but like the binary Elvis operator used as x ?: y, the null coalescing operator is a binary operator and thus evaluates its operands at most once, which is significant if the evaluation of x has side-effects.
Examples by languages
ATS
As with most languages in the ML family, ATS uses algebraic data types to represent the absence of a value instead of null. A linearly-typed optional dataviewtype could be defined as
dataviewtype option_vt (a: t@ype, bool) =
| None_vt(a, false)
| Some_vt(a, true) of a
viewtypedef Option_vt(a: t@ype) = [b:bool] option_vt(a, b)
A function with a provided default value can then pattern match on the optional value to coalesce:
fn {a:t@ype} value (default: a, opt: Option_vt a): a =
case+ opt of
| ~None_vt() => default
| ~Some_vt(x) => x
Which can for example be used like:
value<int>(42, None_vt) // returns 42
value<int>(42, Some_vt(7)) // returns 7
If one wanted to then define an infix operator:
fn {a:t@ype} flipped_value (opt: Option_vt a, default: a): a =
value(default, opt)
infixr 0 ??
#define ?? flipped_value
Which can for example be used like:
None_vt{int} ?? 42 // returns 42
Some_vt{int}(7) ?? 42 // returns 7
Bourne-like shells
In Bourne shell (and derivatives), "If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted":
#supplied_title='supplied title' # Uncomment this line to use the supplied title
title=${supplied_title:-'Default title'}
echo "$title" # prints: Default title
C#
In C#, the null coalescing operator is ??.
It is most often used to simplify expressions as follows:
possiblyNullValue ?? valueIfNull
For example, if one wishes to implement some C# code to give a page a default title if none is present, one may use the following statement:
string pageTitle = suppliedTitle ?? "Default Title";
instead of the more verbose
string pageTitle = (suppliedTitle != null) ? suppliedTitle : "Default Title";
or
string pageTitle;
if (suppliedTitle != null)
{
pageTitle = suppliedTitle;
}
else
{
pageTitle = "Default Title";
}
The three forms result in the same value being stored into the variable named pageTitle.
suppliedTitle is referenced only once when using the ?? operator, and twice in the other two code examples.
The operator can also be used multiple times in the same expression:
return some_Value ?? some_Value2 ?? some_Value3;
Once a non-null operand is encountered, or the evaluation reaches the final operand (which may or may not be null), the expression is completed.
If, for example, a variable should be changed to another value if its value evaluates to null, since C# 8.0 the ??= null coalescing assignment operator can be used:
some_Value ??= some_Value2;
Which is a more concise version of:
some_Value = some_Value ?? some_Value2;
In combination with the null-conditional operator ?. or the null-conditional element access operator ?[] the null coalescing operator can be used to provide a default value if an object or an object's member is null. For example, the following will return the default title if either the page object is null or page is not null but its Title property is:
string pageTitle = page?.Title ?? "Default Title";
CFML
As of ColdFusion 11 and Railo 4.1, CFML supports the null coalescing operator as a variation of the ternary operator, ?:. It is functionally and syntactically equivalent to its C# counterpart, above. Example:
possiblyNullValue ?: valueIfNull
F#
The null value is not normally used in F# for values or variables. However null values can appear for example when F# code is called from C#.
F# does not have a built-in null coalescing operator but one can be defined as required as a custom operator:
let (|?) lhs rhs = (if lhs = null then rhs else lhs)
This custom operator can then be applied as per C#'s built-in null coalescing operator:
let pageTitle = suppliedTitle |? "Default Title"
Freemarker
Missing values in Apache FreeMarker will normally cause exceptions. However, both missing and null values can be handled, with an optional default value:
${missingVariable!"defaultValue"}
or, to leave the output blank:
${missingVariable!}
Haskell
Types in Haskell can in general not be null. Representation of computations that may or may not return a meaningful result is represented by the generic Maybe type, defined in the standard library as
data Maybe a = Nothing | Just a
The null coalescing operator replaces null pointers with a default value. The Haskell equivalent is a way of extracting a value from a Maybe by supplying a default value. This is the function fromMaybe.
fromMaybe :: a -> Maybe a -> a
fromMaybe defaultValue x =
case x of
Nothing -> defaultValue
Just value -> value
Some example usage follows.
fromMaybe 0 (Just 3) -- returns 3
fromMaybe "" Nothing -- returns ""
JavaScript
JavaScript's nearest operator is ??, the "nullish coalescing operator", which was added to the standard in ECMAScript's 11th edition. In earlier versions, it could be used via a Babel plugin, and in TypeScript. It evaluates its left-hand operand and, if the result value is not "nullish" (null or undefined), takes that value as its result; otherwise, it evaluates the right-hand operand and takes the resulting value as its result.
In the following example, a will be assigned the value of b if the value of b is not null or undefined, otherwise it will be assigned 3.
const a = b ?? 3;
Before the nullish coalescing operator, programmers would use the logical OR operator (||). But where ?? looks specifically for null or undefined, the || operator looks for any falsy value: null, undefined, "", 0, NaN, and of course, false.
In the following example, a will be assigned the value of b if the value of b is truthy, otherwise it will be assigned 3.
const a = b || 3;
Kotlin
Kotlin uses the ?: operator. This is an unusual choice of symbol, given that ?: is typically used for the Elvis operator rather than null coalescing, but the choice was inspired by the Groovy programming language, in which null is considered false.
val title = suppliedTitle ?: "Default title"
Objective-C
In Obj-C, the nil coalescing operator is ?:. It can be used to provide a default for nil references:
id value = valueThatMightBeNil ?: valueIfNil;
This is the same as writing
id value = valueThatMightBeNil ? valueThatMightBeNil : valueIfNil;
Perl
In Perl (starting with version 5.10), the operator is // and the equivalent Perl code is:
$possibly_null_value // $value_if_null
The possibly_null_value is evaluated as null or not-null (in Perl terminology, undefined or defined). On the basis of the evaluation, the expression returns either value_if_null when possibly_null_value is null, or possibly_null_value otherwise. In the absence of side-effects this is similar to the way ternary operators (?: statements) work in languages that support them. The above Perl code is equivalent to the use of the ternary operator below:
defined($possibly_null_value) ? $possibly_null_value : $value_if_null
This operator's most common usage is to minimize the amount of code used for a simple null check.
Perl additionally has a //= assignment operator, where $a //= $b is largely equivalent to: $a = $a // $b
This operator differs from Perl's older || and ||= operators in that it considers definedness, not truth. Thus they behave differently on values that are false but defined, such as 0 or "" (a zero-length string):
$a = 0;
$b = 1;
$c = $a // $b; # $c = 0
$c = $a || $b; # $c = 1
PHP
PHP 7.0 introduced a null-coalescing operator with the ?? syntax. This checks strictly for NULL or a non-existent variable/array index/property. In this respect, it acts similarly to PHP's isset() pseudo-function:
$name = $request->input['name'] ?? $request->query['name'] ?? 'default name';
/* Equivalent to */
if (isset($request->input['name'])) {
$name = $request->input['name'];
} elseif (isset($request->query['name'])) {
$name = $request->query['name'];
} else {
$name = 'default name';
}

$user = $this->getUser() ?? $this->createGuestUser();
/* Equivalent to */
$user = $this->getUser();
if ($user === null) {
$user = $this->createGuestUser();
}

$pageTitle = $title ?? 'Default Title';
/* Equivalent to */
$pageTitle = isset($title) ? $title : 'Default Title';
Version 7.4 of PHP added the null coalescing assignment operator with the ??= syntax:
// The following lines are doing the same
$this->request->data['comments']['user_id'] = $this->request->data['comments']['user_id'] ?? 'value';
// Instead of repeating variables with long names, the equal coalesce operator is used
$this->request->data['comments']['user_id'] ??= 'value';
Python
Python does not have a null coalescing operator. Its functionality can be mimicked using a conditional expression:
now() if time is None else time
There was a proposal to add null-coalescing-type operators in Python 3.8, but that proposal has been deferred.
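A generic null-coalescing helper can nevertheless be written as an ordinary function. The following sketch is illustrative only (the name coalesce is not part of the standard library); it returns the first argument that is not None:
def coalesce(*values):
    # Return the first argument that is not None; return None if all are None.
    for value in values:
        if value is not None:
            return value
    return None

coalesce(None, None, 3)  # returns 3
coalesce(0, 42)          # returns 0, whereas "0 or 42" would return 42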
Related functionality
Python's or operator provides a related, but different behavior. The difference is that or also returns the right-hand term if the first term is defined but has a value that evaluates to false in a boolean context:
42 or "something" # returns 42
0 or "something" # returns "something"
False or "something" # returns "something"
"" or "something" # returns "something"
[] or "something" # returns "something"
dict() or "something" # returns "something"
None or "something" # returns "something"
A true null coalescing operator would only return "something" in the very last case (where the left operand is None), and would return the left-hand values (42, 0, False, "", [], dict()) in the other examples.
PowerShell
Since PowerShell 7, the ?? null coalescing operator provides this functionality.
$myVar = $null
$x = $myVar ?? "something" # assigns "something"
R
Since R version 4.4.0, the %||% operator is included in base R (previously it was a feature of some packages such as rlang).
Ruby
Ruby does not have a dedicated null-coalescing operator, but its || and ||= operators behave like one except when the left operand is the Boolean false. Ruby conditionals have only two false-like values: false and nil (and false is not the same as 0). Ruby's || operator evaluates to its first operand when that operand is true-like. By comparison, Perl and JavaScript, whose || works the same way, have additional false-like values (such as 0 and the empty string), which make || differ from a null-coalescing operator in many more cases, numbers and strings being two of the most frequently used data types. This is what led Perl and JavaScript to add a separate operator, while Ruby has not.
Rust
While there's no null in Rust, tagged unions are used for the same purpose. For example, Result<T, E> or Option<T>.
Any type implementing the Try trait can be unwrapped.
unwrap_or() serves a similar purpose as the null coalescing operator in other languages. Alternatively, unwrap_or_else() can be used to use the result of a function as a default value.
// Option
// An Option can be either Some(value) or None
Some(1).unwrap_or(0); // evaluates to 1
None.unwrap_or(0); // evaluates to 0
None.unwrap_or_else(get_default); // evaluates to the result of calling the function get_default
// Result
// A Result can be either Ok(value) or Err(error)
Ok(1).unwrap_or(0); // evaluates to 1
Err("oh no").unwrap_or(1); // evaluates to 1
SQL
In Oracle's PL/SQL, the NVL() function provides the same outcome:
NVL(possibly_null_value, 'value if null');
In SQL Server/Transact-SQL there is the ISNULL function that follows the same prototype pattern:
ISNULL(possibly_null_value, 'value if null');
Attention should be taken to not confuse ISNULL with IS NULL – the latter serves to evaluate whether some contents are defined to be NULL or not.
The ANSI SQL-92 standard includes the COALESCE function implemented in Oracle, SQL Server, PostgreSQL, SQLite and MySQL. The COALESCE function returns the first argument that is not null. If all terms are null, returns null.
COALESCE(possibly_null_value[, possibly_null_value, ...]);
The difference between ISNULL and COALESCE is that the type returned by ISNULL is the type of the leftmost value while COALESCE returns the type of the first non-null value.
Swift
In Swift, the nil coalescing operator is ??. It is used to provide a default when unwrapping an optional type:
optionalValue ?? valueIfNil
For example, if one wishes to implement some Swift code to give a page a default title if none is present, one may use the following statement:
var suppliedTitle: String? = ...
var pageTitle: String = suppliedTitle ?? "Default Title"
instead of the more verbose
var pageTitle: String = (suppliedTitle != nil) ? suppliedTitle! : "Default Title";
VB.NET
In VB.NET the If operator/keyword achieves the null coalescing operator effect.
Dim pageTitle = If(suppliedTitle, "Default Title")
which is a more concise way of using its variation
Dim pageTitle = If(suppliedTitle <> Nothing, suppliedTitle, "Default Title")
See also
?: (conditional)
Elvis operator (binary ?:)
Null-conditional operator
Operator (computer programming)
References
Conditional constructs
Operators (programming)
Binary operations | Null coalescing operator | [
"Mathematics"
] | 3,709 | [
"Binary operations",
"Mathematical relations",
"Binary relations"
] |
12,169,570 | https://en.wikipedia.org/wiki/Magnetization%20transfer | Magnetization transfer (MT), in NMR and MRI, refers to the transfer of nuclear spin polarization and/or spin coherence from one population of nuclei to another population of nuclei, and to techniques that make use of these phenomena. There is some ambiguity regarding the precise definition of magnetization transfer, however the general definition given above encompasses all more specific notions. NMR active nuclei, those with non-zero spin, can be energetically coupled to one another under certain conditions. The mechanisms of nuclear-spin energy-coupling have been extensively characterized and are described in the following articles: Angular momentum coupling, Magnetic dipole–dipole interaction, J-coupling, Residual dipolar coupling, Nuclear Overhauser effect, Spin–spin relaxation, and Spin saturation transfer. Alternatively, some nuclei in a chemical system are labile and exchange between non-equivalent environments. A more specific example of this case is presented in the section Chemical Exchange Magnetization transfer.
In either case, magnetization transfer techniques probe the dynamic relationship between two or more distinguishable nuclei populations, in so far as energy exchange between the populations can be induced and measured in an idealized NMR experiment.
Chemical Exchange Magnetization transfer
In magnetic resonance imaging or NMR of macromolecular samples, such as protein solutions, at least two types of water molecules, free (bulk) and bound (hydration), are present. Bulk water molecules have many mechanical degrees of freedom, and motion of such molecules thus exhibits statistically averaged behavior. Because of this uniformity, most free water protons have resonance frequencies very near the average Larmor frequency of all such protons. On a properly acquired NMR spectrum this is seen as a narrow Lorentzian line (at 4.8 ppm, 20 C). Bulk water molecules are also relatively far from magnetic field perturbing macromolecules, such that free water protons experience a more homogeneous magnetic field, which results in slower transverse magnetization dephasing and a longer T2*. Conversely, hydration water molecules are mechanically constrained by extensive interactions with the local macromolecules and hence magnetic field inhomogeneities are not averaged out, which leads to broader resonance lines. This results in faster dephasing of the magnetization that produces the NMR signal and much shorter T2 values (<200 μs). Because the T2 values are so short, the NMR signal from the protons of bound water is not typically observed in MRI.
However, using an off-resonance saturation pulse to irradiate protons in the bound (hydration) population can have a detectable effect on the NMR signal of the mobile (free) proton pool. When a population of spins is saturated, such that the magnitude of the macroscopic magnetization vector approaches zero, there is no remaining spin polarization with which to produce an NMR signal. Longitudinal relaxation refers to the return of longitudinal spin polarization, which occurs at a rate described by T1. While the number of hydration water molecules may be insufficient to produce an observable signal, exchange of water molecules between the hydration and bulk population allows characterization of the hydration population, and measurement of the rate at which molecules are exchanging between bulk and bound sites. Such experiments are often termed saturation transfer or chemical exchange saturation transfer (CEST), because the signal of the bulk water is observed to decrease when the hydration population is saturated. Considering these techniques from the opposite perspective, that magnetization (i.e. spin polarization) is being transferred from the bulk water to the spin-saturated hydration population, allows one to conceptually unify chemical exchange methods with other techniques that transfer magnetization between nuclei populations. Since the extent of signal decay depends on the exchange rate between free and hydration water, MT can be used to provide an alternative contrast method in addition to T1,T2, and proton density differences.
MT is believed to be a nonspecific indicator of the structural integrity of the tissue being imaged.
An extension of MT, the magnetization transfer ratio (MTR) has been used in neuroradiology to highlight abnormalities in brain structures. (The MTR is (Mo-Mt)/Mo.)
A systematic modulation of the precise frequency offset for the saturation pulse can be plotted against the free-water signal to form a "Z-spectrum". This technique is often referred to as "Z-spectroscopy".
See also
Magnetic resonance imaging
Magnetic resonance spectroscopy
References
External links
The Role of Nonconventional MRI Techniques in Demyelinating Disorders
Magnetic Resonance Findings in Amyotropic Lateral Sclerosis Using a Spin Echo Magnetization Transfer Sequence
Wolff SD & Balaban RS. Magnetization transfer contrast (MTC) and tissue water proton relaxation in vivo. Magnetic Resonance in Medicine. 1989;10(1):135-144.
Mehta RC, Pike GB, Enzmann DR. Magnetization transfer magnetic resonance imaging: a clinical review. Topics in Magnetic Resonance Imaging. 1996;8(4):214-30.
Tanabe JL, Ezekiel F, Jagust WJ, et al. Magnetization Transfer Ratio of White Matter Hyperintensities in Subcortical Ischemic Vascular Dementia. AJNR Am J Neuroradiol. 1999;20(5):839–844.
Symms M, Jäger HR, Schmierer K, Yousry TA. A review of structural magnetic resonance neuroimaging. J Neurol Neurosurg Psychiatry. 2004 Sep;75(9):1235-44. Review.
Lepage M, McMahon K, Galloway GJ, De Deene Y, Back SÅJ, Baldock C, 2002. Magnetization transfer imaging for polymer gel dosimetry. Phys. Med. Biol. 47 1881–1890.
Magnetic resonance imaging
Nuclear magnetic resonance | Magnetization transfer | [
"Physics",
"Chemistry"
] | 1,213 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging",
"Nuclear physics"
] |
12,170,265 | https://en.wikipedia.org/wiki/Spark%20ionization | Spark ionization (also known as spark source ionization) is a method used to produce gas phase ions from a solid sample. The prepared solid sample is vaporized and partially ionized by an intermittent discharge or spark. This technique is primarily used in the field of mass spectrometry. When incorporated with a mass spectrometer the complete instrument is referred to as a spark ionization mass spectrometer or as a spark source mass spectrometer (SSMS).
History
The use of spark ionization for analysis of impurities in solids was indicated by Dempster's work in 1935. Metals were a class of material that could not be previously ionized by thermal ionization (the method formerly used for ionizing solid sample). Spark ion sources were not commercially produced until after 1954 when Hannay demonstrated its capability for analysis of trace impurities (sub-part per million detection sensitivity) in semiconducting materials. The prototype spark source instrument was the MS7 mass spectrometer produced by Metropolitan-Vickers Electrical Company, Ltd. in 1959. Commercial production of spark source instruments continued throughout the 50s, 60s, and 70s, but they were phased out when other trace element detection techniques with improved resolution and accuracy were invented (circa 1960s). Successors of the spark ion source for trace element analysis are the laser ion source, glow discharge ion source, and inductively coupled plasma ion source. Today, very few laboratories use spark ionization worldwide.
How it works
The spark ion source consists of a vacuum chamber containing the electrodes, which is called the spark housing. The tips of the electrodes are composed of or containing the sample and are electrically connected to the power supply. Extraction electrodes create an electric field that accelerate the generated ions through the exit slit.
Ion sources
For spark ionization, there exist two ion sources: the low-voltage direct-current (DC) arc source and the high-voltage radio-frequency (rf) spark source. The arc source has better reproducibility and the ions produced have a narrower energy spread compared to the spark source; however, the spark source has the ability to ionize both conducting and non-conducting samples while the arc source can only ionize conducting samples.
In the low-voltage DC arc source, a high voltage is applied to the two conducting electrodes to initiate the spark, followed by application of a low-voltage direct current to maintain an arc across the spark gap. The duration of the arc is usually only a few hundred microseconds to prevent overheating of the electrodes, and it is repeated 50–100 times per second. This method can only be used to ionize conducting samples, e.g. metals.
The high-voltage rf spark source is the one that was used in commercial SSMS instruments due to its ability to ionize both conducting and non-conducting materials. Typically, samples are physically incorporated into two conductive electrodes between which an intermittent (1 MHz) high-voltage (50-100 kV using a Tesla transformer) electric spark is produced, ionizing the material at the tips of the pin-shaped electrodes. When the pulsed current is applied to the electrodes under ultra-high vacuum, a spark discharge plasma occurs in the spark gap in which ions are generated via electron impact. Within the discharge plasma, the sample evaporates, atomizes, and ionizes via electron impact. The total ion current may be optimized by adjusting the distance between the electrodes. This mode of ionization can be used to ionize conducting, semi-conducting, and non-conducting samples.
Sample preparation
Conducting and semi-conducting samples may be directly analyzed after being formed into electrodes. Non-conductive samples are first powdered, mixed with a conducting powder (usually high purity graphite or silver), homogenized, and then formed into electrodes. Even liquids can be analyzed if they are frozen or after impregnating a conducting powder. Sample homogeneity is important for reproducibility.
Spark source mass spectrometry (SSMS)
The rf spark source creates ions with a wide energy spread (2-3 kV), which necessitates a double focusing mass analyzer. Mass analyzers are typically Mattauch-Herzog geometry, which achieve velocity and directional focusing onto a plane with either photosensitive plates for ion detection or linear channeltron detector arrays. SSMS has several unique features that make it a useful technique for various applications. Merits of SSMS include high sensitivity with detection limits in the ppb range, simultaneous detection of all elements in a sample, and simple sample preparation. However, the rf spark ion current is discontinuous and erratic, which results in fair resolution and accuracy when standards are not implemented. Other drawbacks include expensive equipment, long analysis time, and the need for highly trained personnel to analyze the spectrum.
Applications of SSMS
Spark source mass spectrometry has been used for trace analysis and multielement analysis applications for highly conducting, semiconducting, and nonconducting materials. Some examples of SSMS applications are the trace element analysis of high-purity materials, multielement analysis of elements in technical alloys, geochemical and cosmochemical samples, biological samples, industrial stream samples, and radioactive material.
References
Ion source | Spark ionization | [
"Physics"
] | 1,084 | [
"Ion source",
"Mass spectrometry",
"Spectrum (physical sciences)"
] |
12,170,296 | https://en.wikipedia.org/wiki/Glaser%20coupling | The Glaser coupling is a type of coupling reaction. It is one of the oldest coupling reactions and is based on copper compounds such as copper(I) chloride or copper(I) bromide and an additional oxidant such as air. The base used in the original research paper is ammonia and the solvent is water or an alcohol.
The reaction was first reported by Carl Glaser in 1869. He suggested the following process on his way to diphenylbutadiyne:
CuCl + PhC2H + NH3 → PhC2Cu + NH4Cl
4PhC2Cu + O2 → 2PhC2C2Ph + 2Cu2O
Modifications
Eglinton reaction
In the related Eglinton reaction two terminal alkynes are coupled by a copper(II) salt such as cupric acetate.
2 RC≡CH → RC≡C−C≡CR (Cu(OAc)2, pyridine)
The oxidative coupling of alkynes has been used to synthesize a number of natural products. The stoichiometry is represented by this highly simplified scheme:
Such reactions proceed via copper(I)-alkyne complexes.
This methodology was used in the synthesis of cyclooctadecanonaene. Another example is the synthesis of diphenylbutadiyne from phenylacetylene.
Hay coupling
The Hay coupling is a variant of the Glaser coupling. It relies on the TMEDA complex of copper(I) chloride to activate the terminal alkyne. Oxygen (air) is used in the Hay variant to oxidize catalytic amounts of Cu(I) to Cu(II) throughout the reaction, as opposed to the stoichiometric amount of Cu(II) used in the Eglinton variant. The Hay coupling of trimethylsilylacetylene gives the butadiyne derivative.
Scope
In 1882 Adolf von Baeyer used the method to prepare 1,4-bis(2-nitrophenyl)butadiyne, en route to indigo dye.
Shortly afterwards, Baeyer reported a different route to indigo, now known as the Baeyer–Drewson indigo synthesis.
See also
Cadiot–Chodkiewicz coupling - Another alkyne coupling reaction catalysed by copper(I).
Sonogashira coupling - Pd/Cu catalysed coupling of an alkyne and an aryl or vinyl halide
Castro–Stephens coupling - A cross-coupling reaction between a copper(I) acetylide and an aryl halide
Fritsch–Buttenberg–Wiechell rearrangement - can also form diynes
References
Carbon-carbon bond forming reactions
Name reactions | Glaser coupling | [
"Chemistry"
] | 593 | [
"Coupling reactions",
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
12,171,241 | https://en.wikipedia.org/wiki/Saturation%20vapor%20density | The saturation vapor density (SVD) is the maximum density of water vapor in air at a given temperature. The concept is related to saturation vapor pressure (SVP). It can be used to calculate the exact quantity of water vapor in the air from the relative humidity (RH), which is the ratio of the measured water-vapor content of the air to the maximum possible, expressed as a percentage. Given an RH percentage, the density of water vapor in the air is RH × SVD / 100. Alternatively, RH can be found from RH = 100 × (actual vapor density) / SVD. As relative humidity is a dimensionless quantity (often expressed in terms of a percentage), vapor density can be stated in units of grams or kilograms per cubic meter.
For low temperatures (below approximately 400 K), SVD can be approximated from the SVP using the ideal gas law PV = nRT, where P is the SVP, V is the volume, n is the number of moles, R is the gas constant and T is the temperature in kelvins. The number of moles is related to the mass by n = m/M, where m is the mass of water present and M is the molar mass of water (18.01528 grams/mole). Thus the density is m/V = PM/(RT).
The values shown in the HyperPhysics sources indicate that the saturated vapor density is 4.85 g/m3 at 273 K, at which the saturated vapor pressure is 4.58 mm of Hg or about 610.6 Pa (760 mm of Hg ≈ 1 atm = 1.01325 × 10^5 Pa).
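As an illustrative numerical check of the figures quoted above (the variable names and rounding are this sketch's own):
# Saturation vapor density from saturation vapor pressure via the ideal gas law
R = 8.314         # gas constant, J/(mol K)
M = 0.01801528    # molar mass of water, kg/mol
T = 273.0         # temperature, K
P = 610.6         # saturation vapor pressure at 273 K, Pa

svd = P * M / (R * T)         # about 0.00485 kg/m^3, i.e. ~4.85 g/m^3
rh = 100 * 3.0 / (svd * 1e3)  # relative humidity for an actual density of 3.0 g/m^3, ~62%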
References
Atmospheric thermodynamics
Thermodynamic properties | Saturation vapor density | [
"Physics",
"Chemistry",
"Mathematics"
] | 311 | [
"Thermodynamic properties",
"Quantity",
"Thermodynamics",
"Physical quantities"
] |
9,023,027 | https://en.wikipedia.org/wiki/Exponential%20polynomial | In mathematics, exponential polynomials are functions on fields, rings, or abelian groups that take the form of polynomials in a variable and an exponential function.
Definition
In fields
An exponential polynomial generally has both a variable x and some kind of exponential function E(x). In the complex numbers there is already a canonical exponential function, the function that maps x to ex. In this setting the term exponential polynomial is often used to mean polynomials of the form P(x, ex) where P ∈ C[x, y] is a polynomial in two variables.
There is nothing particularly special about C here; exponential polynomials may also refer to such a polynomial on any exponential field or exponential ring with its exponential function taking the place of ex above. Similarly, there is no reason to have one variable, and an exponential polynomial in n variables would be of the form P(x1, ..., xn, ex1, ..., exn), where P is a polynomial in 2n variables.
For formal exponential polynomials over a field K we proceed as follows. Let W be a finitely generated Z-submodule of K and consider finite sums of the form
f1(X) exp(w1X) + f2(X) exp(w2X) + ... + fn(X) exp(wnX),
where the fi are polynomials in K[X] and the exp(wi X) are formal symbols indexed by wi in W subject to exp(u + v) = exp(u) exp(v).
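As a concrete illustration (the particular coefficients are chosen here only as an example), taking P(x, y) = x^2 y^2 − 3xy + 5 in the one-variable complex case gives the exponential polynomial
P(x, e^x) = x^2 e^(2x) − 3x e^x + 5,
which also has the formal shape above, with f1(X) = X^2, w1 = 2; f2(X) = −3X, w2 = 1; and f3(X) = 5, w3 = 0.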
In abelian groups
A more general framework where the term 'exponential polynomial' may be found is that of exponential functions on abelian groups. Similarly to how exponential functions on exponential fields are defined, given a topological abelian group G a homomorphism from G to the additive group of the complex numbers is called an additive function, and a homomorphism to the multiplicative group of nonzero complex numbers is called an exponential function, or simply an exponential. A product of additive functions and exponentials is called an exponential monomial, and a linear combination of these is then an exponential polynomial on G.
Properties
Ritt's theorem states that the analogues of unique factorization and the factor theorem hold for the ring of exponential polynomials.
Applications
Exponential polynomials on R and C often appear in transcendental number theory, where they appear as auxiliary functions in proofs involving the exponential function. They also act as a link between model theory and analytic geometry. If one defines an exponential variety to be the set of points in Rn where some finite collection of exponential polynomials vanish, then results like Khovanskiǐ's theorem in differential geometry and Wilkie's theorem in model theory show that these varieties are well-behaved in the sense that the collection of such varieties is stable under the various set-theoretic operations as long as one allows the inclusion of the image under projections of higher-dimensional exponential varieties. Indeed, the two aforementioned theorems imply that the set of all exponential varieties forms an o-minimal structure over R.
Exponential polynomials also appear in the characteristic equation associated with linear delay differential equations.
Notes
See also
Quasi-polynomial
Polynomials | Exponential polynomial | [
"Mathematics"
] | 618 | [
"Polynomials",
"Algebra"
] |
9,023,486 | https://en.wikipedia.org/wiki/Aircraft%20principal%20axes | An aircraft in flight is free to rotate in three dimensions: yaw, nose left or right about an axis running up and down; pitch, nose up or down about an axis running from wing to wing; and roll, rotation about an axis running from nose to tail. The axes are alternatively designated as vertical, lateral (or transverse), and longitudinal respectively. These axes move with the vehicle and rotate relative to the Earth along with the craft. These definitions were analogously applied to spacecraft when the first crewed spacecraft were designed in the late 1950s.
These rotations are produced by torques (or moments) about the principal axes. On an aircraft, these are intentionally produced by means of moving control surfaces, which vary the distribution of the net aerodynamic force about the vehicle's center of gravity. Elevators (moving flaps on the horizontal tail) produce pitch, a rudder on the vertical tail produces yaw, and ailerons (flaps on the wings that move in opposing directions) produce roll. On a spacecraft, the movements are usually produced by a reaction control system consisting of small rocket thrusters used to apply asymmetrical thrust on the vehicle.
Principal axes
Normal axis, or yaw axis — an axis drawn from top to bottom, and perpendicular to the other two axes, parallel to the fuselage or frame station.
Transverse axis, lateral axis, or pitch axis — an axis running from the pilot's left to right in piloted aircraft, and parallel to the wings of a winged aircraft, parallel to the buttock line.
Longitudinal axis, or roll axis — an axis drawn through the body of the vehicle from tail to nose in the normal direction of flight, or the direction the pilot faces, similar to a ship's waterline.
Normally, these axes are represented by the letters X, Y and Z in order to compare them with some reference frame, usually named x, y, z. Normally, this is done in such a way that X is used for the longitudinal axis, but other assignments are possible.
Vertical axis (yaw)
The yaw axis has its origin at the center of gravity and is directed towards the bottom of the aircraft, perpendicular to the wings and to the fuselage reference line. Motion about this axis is called yaw. A positive yawing motion moves the nose of the aircraft to the right. The rudder is the primary control of yaw.
The term yaw was originally applied in sailing, and referred to the motion of an unsteady ship rotating about its vertical axis. Its etymology is uncertain.
Lateral axis (pitch)
The pitch axis (also called transverse or lateral axis), passes through an aircraft from wingtip to wingtip. Rotation about this axis is called pitch. Pitch changes the vertical direction that the aircraft's nose is pointing (a positive pitching motion raises the nose of the aircraft and lowers the tail). The elevators are the primary control surfaces for pitch.
Longitudinal axis (roll)
The roll axis (or longitudinal axis) has its origin at the center of gravity and is directed forward, parallel to the fuselage reference line. Motion about this axis is called roll. An angular displacement about this axis is called bank. A positive rolling motion lifts the left wing and lowers the right wing. The pilot rolls by increasing the lift on one wing and decreasing it on the other. This changes the bank angle. The ailerons are the primary control of bank. The rudder also has a secondary effect on bank.
Reference planes
The principal axes of rotation imply three reference planes, each perpendicular to an axis:
Normal plane, or yaw plane
Transverse plane, lateral plane, or pitch plane
Longitudinal plane, or roll plane
The three planes all intersect at the aircraft's center of gravity.
Relationship with other systems of axes
These axes are related to the principal axes of inertia, but are not the same. They are geometrical symmetry axes, regardless of the mass distribution of the aircraft.
In aeronautical and aerospace engineering intrinsic rotations around these axes are often called Euler angles, but this conflicts with existing usage elsewhere. The calculus behind them is similar to the Frenet–Serret formulas. Performing a rotation in an intrinsic reference frame is equivalent to right-multiplying its characteristic matrix (the matrix that has the vectors of the reference frame as columns) by the matrix of the rotation.
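As a minimal numerical sketch of the right-multiplication statement above (the axis conventions are the usual yaw about z, pitch about y and roll about x; the function names are this sketch's own and not tied to any flight-dynamics library):
import numpy as np

def yaw(a):    # rotation about the vertical (z) axis
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def pitch(b):  # rotation about the lateral (y) axis
    return np.array([[ np.cos(b), 0.0, np.sin(b)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(b), 0.0, np.cos(b)]])

def roll(c):   # rotation about the longitudinal (x) axis
    return np.array([[1.0, 0.0,        0.0       ],
                     [0.0, np.cos(c), -np.sin(c)],
                     [0.0, np.sin(c),  np.cos(c)]])

# Intrinsic z-y'-x'' sequence (yaw, then pitch, then roll): each successive
# intrinsic rotation right-multiplies the attitude matrix built so far.
R = yaw(0.1) @ pitch(0.2) @ roll(0.3)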
History
The first aircraft to demonstrate active control about all three axes was the Wright brothers' 1902 glider.
See also
Aerodynamics
Aircraft flight control system
Euler angles
Fixed-wing aircraft
Flight control surfaces
Flight dynamics
Moving frame
Panning (camera)
Six degrees of freedom
Screw theory
Triad method
References
External links
Pitch, Roll, Yaw
Yaw Axis Control as a Means of Improving V/STOL Aircraft Performance.
3D fast walking simulation of biped robot by yaw axis moment compensation
Flight control system for a hybrid aircraft in the yaw axis
Motion Imagery Standards Board (MISB)
Aerodynamics
Attitude control
Line (geometry) | Aircraft principal axes | [
"Chemistry",
"Mathematics",
"Engineering"
] | 1,001 | [
"Attitude control",
"Aerodynamics",
"Aerospace engineering",
"Line (geometry)",
"Fluid dynamics"
] |
9,023,795 | https://en.wikipedia.org/wiki/General%20Relativity%20%28book%29 | General Relativity is a graduate textbook and reference on Albert Einstein's general theory of relativity written by the gravitational physicist Robert Wald.
Overview
First published by the University of Chicago Press in 1984, the book, a tome of almost 500 pages, covers many aspects of the general theory of relativity. It is divided into two parts. Part I covers the fundamentals of the subject and Part II the more advanced topics such as causal structure, and quantum effects. The book uses the abstract index notation for tensors. It treats spinors, the variational-principle formulation, the initial-value formulation, (exact) gravitational waves, singularities, Penrose diagrams, Hawking radiation, and black-hole thermodynamics.
It is aimed at beginning graduate students and researchers. To this end, most of the material in Part I is geared towards an introductory course on the subject, while Part II covers a wide range of advanced topics for a second term or further study. The essential mathematical methods for the formulation of general relativity are presented in Chapters 2 and 3, while more advanced techniques are discussed in Appendices A to C. Wald believes that this is the best arrangement because putting all the mathematical techniques at the beginning of the book would be a major obstruction for students, while developing the mathematical tools only as they are used would leave them too scattered to be useful. While the Hamiltonian formalism is often presented in conjunction with the initial-value formulation, Wald's coverage of the latter is independent of the former, which is thus relegated to the appendix, alongside the Lagrangian formalism.
This book uses the − + + + metric sign convention for reasons of technical convenience. However, there is one important exception: in Chapter 13 – and only in Chapter 13 – the sign convention is switched to + − − − because it is easier to treat spinors this way; moreover, this is the most common sign convention in the spinor literature.
Most of the book uses geometrized units, meaning the fundamental natural constants G (Newton's gravitational constant) and c (the speed of light in vacuum) are set equal to one, except when predictions that can be tested are made.
Table of Contents
Part I: Fundamentals
Chapter 1: Introduction
Chapter 2: Manifolds and Tensor Fields
Chapter 3: Curvature
Chapter 4: Einstein's Equation
Chapter 5: Homogeneous, Isotropic Cosmology
Chapter 6: The Schwarzschild Solution
Part II: Advanced Topics
Chapter 7: Methods for Solving Einstein's Equation
Chapter 8: Causal Structure
Chapter 9: Singularities
Chapter 10: Initial Value Formulation
Chapter 11: Asymptotic Flatness
Chapter 12: Black Holes
Chapter 13: Spinors
Chapter 14: Quantum Effects in Strong Gravitational Fields
Appendices
A. Topological Spaces
B. Differential Forms, Integration, and Frobenius's Theorem
C. Maps of Manifolds, Lie Derivatives, and Killing Fields
D. Conformal Transformations
E. Lagrangian and Hamiltonian Formulations of Einstein's Equation
F. Units and Dimensions.
References
Index
Versions
Wald, Robert M. General Relativity. University of Chicago Press, 1984. (paperback). (hardcover).
Wald, Robert M. General Relativity. University of Chicago Press, 2010. . Reprint.
Assessment
According to Daniel Finley, a professor at the University of New Mexico, this textbook offers good physics intuition. However, the author did not use the most modern mathematical methods available, and his treatment of cosmology is now outdated. Finley believes that the abstract index notation is difficult to learn, though convenient for those who have mastered it.
Theoretical physicist James W. York wrote that General Relativity is a sophisticated yet concise book on the subject that should be appealing to the mathematically inclined, as a high level of rigor is maintained throughout the book. However, he believed the material on linearized gravity is too short, and recommended Gravitation by Charles Misner, Kip Thorne, and John Archibald Wheeler, and Gravitation and Cosmology by Steven Weinberg as supplements.
Hans C. Ohanian, who taught and researched gravitation at the Rensselaer Polytechnic Institute, opined that General Relativity provides a modern introduction to the subject with emphasis on tensor and topological methods and offers some "sharp insights." However, its quality is very variable. Topics such as geodetic motion in the Schwarzschild metric, the Krushkal extension, and energy extraction from black holes, are handled well while empirical tests of Einstein's theory are barely scratched and the treatment of advanced topics, including cosmology, is just too brief to be useful to students. Due to its heavy use of higher mathematics, it may not be suitable for an introductory course.
Lee Smolin argued that General Relativity bridges the gap between the presentation of the material in older textbooks and the literature. For example, while the early pioneers of the subject, including Einstein himself, employed coordinate-based methods, researchers since the mid-1960s have switched to coordinate-free formulations, of which Wald's text is entirely based. Its style is uniformly clear and economic, if too brief at times. Topics that deserve more attention include gravitational radiation and cosmology. However, this book can be supplemented by those by Misner, Thorne, and Wheeler, and by Weinberg. Smolin was teaching a course on general relativity to undergraduates as well as graduate students at Yale University using this book and felt satisfied with the results. He also found it useful as a reference to refresh his memory.
See also
List of books on general relativity
Gravitation (textbook)
Einstein Gravity in a Nutshell (textbook)
Classical Mechanics (textbook)
Classical Electrodynamics (textbook)
References
Further reading
Carroll, Sean M (2004). Spacetime and Geometry: An Introduction to General Relativity. Addison Wesley. .
Poisson, Eric (2004). A Relativist's Toolkit, The Mathematics of Black-Hole Mechanics. Cambridge University Press. .
External links
Official University of Chicago Press website
General relativity
Physics textbooks
Relativity articles
1984 non-fiction books | General Relativity (book) | [
"Physics"
] | 1,226 | [
"General relativity",
"Theory of relativity"
] |
9,024,777 | https://en.wikipedia.org/wiki/List%20of%20UN%20numbers%201901%20to%202000 | UN numbers from UN1901 to UN2000 as assigned by the United Nations Committee of Experts on the Transport of Dangerous Goods are as follows:
UN 1901 to UN 2000
n.o.s. = not otherwise specified, meaning a collective entry to which substances, mixtures, solutions or articles may be assigned if a) they are not mentioned by name in the 3.2 Dangerous Goods List, and b) they exhibit chemical, physical and/or dangerous properties corresponding to the class, classification code, packing group and the name and description of the n.o.s. entry
See also
Lists of UN numbers
References
External links
ADR Dangerous Goods, cited on 26 April 2015.
UN Dangerous Goods List from 2015, cited on 26 April 2015.
UN Dangerous Goods List from 2013, cited on 26 April 2015.
Lists of UN numbers | List of UN numbers 1901 to 2000 | [
"Chemistry",
"Technology"
] | 166 | [
"Lists of UN numbers"
] |
9,025,001 | https://en.wikipedia.org/wiki/List%20of%20UN%20numbers%202201%20to%202300 | UN numbers from UN2201 to UN2300 as assigned by the United Nations Committee of Experts on the Transport of Dangerous Goods are as follows:
UN 2201 to UN 2300
See also
Lists of UN numbers
References
External links
ADR Dangerous Goods, cited on 7 May 2015.
UN Dangerous Goods List from 2015, cited on 7 May 2015.
UN Dangerous Goods List from 2013, cited on 7 May 2015.
Lists of UN numbers | List of UN numbers 2201 to 2300 | [
"Chemistry",
"Technology"
] | 88 | [
"Lists of UN numbers"
] |
9,025,771 | https://en.wikipedia.org/wiki/Anytime%20algorithm | In computer science, an anytime algorithm is an algorithm that can return a valid solution to a problem even if it is interrupted before it ends. The algorithm is expected to find better and better solutions the longer it keeps running.
Most algorithms run to completion: they provide a single answer after performing some fixed amount of computation. In some cases, however, the user may wish to terminate the algorithm prior to completion. The amount of computation required may be substantial, for example, and computational resources might need to be reallocated. Most algorithms either run to completion or they provide no useful solution information. Anytime algorithms, however, are able to return a partial answer, whose quality depends on the amount of computation they were able to perform. The answer generated by anytime algorithms is an approximation of the correct answer.
Names
An anytime algorithm may also be called an "interruptible algorithm". Anytime algorithms differ from contract algorithms, which must be given their computation time in advance; an anytime algorithm can instead be stopped at any moment and still return its best result so far.
Goals
The goal of anytime algorithms is to give intelligent systems the ability to trade result quality against turnaround time. They are also supposed to be flexible in time and resources. They are important because artificial intelligence (AI) algorithms can take a long time to produce results, and an anytime algorithm is designed to return a usable result in a shorter amount of time. Such algorithms also reflect the fact that a system is dependent on, and restricted by, its agents and how they work cooperatively. An example is the Newton–Raphson iteration applied to finding the square root of a number. Another example is a trajectory problem when aiming at a target: the object is moving through space while the algorithm runs, so even an approximate answer delivered early can significantly improve accuracy.
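A minimal sketch of the Newton–Raphson square-root example as an interruptible computation (the generator-based structure and names are illustrative, not a standard recipe):
def anytime_sqrt(n, x0=1.0):
    # Yield successively better approximations to sqrt(n); the caller may stop
    # consuming values at any time and keep the best estimate so far.
    x = x0
    while True:
        x = 0.5 * (x + n / x)  # Newton-Raphson update for f(x) = x^2 - n
        yield x

best = None
for i, best in enumerate(anytime_sqrt(2.0)):
    if i == 4:        # pretend the algorithm is interrupted after 5 iterations
        break
# best is already close to 1.41421356...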
What makes anytime algorithms unique is their ability to return many possible outcomes for any given input. An anytime algorithm uses many well defined quality measures to monitor progress in problem solving and distributed computing resources. It keeps searching for the best possible answer with the amount of time that it is given. It may not run until completion and may improve the answer if it is allowed to run longer.
Anytime algorithms are often used for problems with large decision sets, where a conventional algorithm would generally provide no useful information unless it is allowed to finish. While this may sound similar to dynamic programming, the difference is that an anytime algorithm is fine-tuned through random adjustments rather than sequential ones.
Anytime algorithms are designed so that it can be told to stop at any time and would return the best result it has found so far. This is why it is called an interruptible algorithm. Certain anytime algorithms also maintain the last result, so that if they are given more time, they can continue from where they left off to obtain an even better result.
Decision trees
When the decider has to act, there must be some ambiguity as well as some idea about how to resolve that ambiguity, and this idea must be translatable into a state-to-action diagram.
Performance profile
The performance profile estimates the quality of the results based on the input and the amount of time that is allotted to the algorithm. The better the estimate, the sooner the result would be found. Some systems have a larger database that gives the probability that the output is the expected output. It is important to note that one algorithm can have several performance profiles. Most of the time performance profiles are constructed using mathematical statistics using representative cases. For example, in the traveling salesman problem, the performance profile was generated using a user-defined special program to generate the necessary statistics. In this example, the performance profile is the mapping of time to the expected results. This quality can be measured in several ways:
certainty: where probability of correctness determines quality
accuracy: where error bound determines quality
specificity: where the amount of particulars determine quality
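A toy performance profile can be represented as a table mapping allotted run time to expected result quality; the numbers below are invented purely for illustration:
# Expected quality (0..1) as a function of allotted run time in seconds,
# built offline from representative runs of the algorithm.
profile = [(0.1, 0.40), (0.5, 0.70), (1.0, 0.85), (5.0, 0.97)]

def expected_quality(time_budget):
    # Return the expected quality for the largest profiled time within budget.
    quality = 0.0
    for t, q in profile:
        if t <= time_budget:
            quality = q
    return quality

expected_quality(0.7)  # 0.70: the 0.5-second entry is the best within budget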
Algorithm prerequisites
Initial behavior: While some algorithms start with immediate guesses, others take a more calculated approach and have a start up period before making any guesses.
Growth direction: How the quality of the program's "output" or result, varies as a function of the amount of time ("run time")
Growth rate: Amount of increase with each step. Does quality increase at a constant rate with each step, as in a bubble sort, or does it change unpredictably?
End condition: The amount of runtime needed
References
Further reading
Artificial intelligence engineering
Search algorithms | Anytime algorithm | [
"Engineering"
] | 889 | [
"Artificial intelligence engineering",
"Software engineering"
] |