Dataset schema (column: type, observed range):
id: int64, 39 to 79M
url: string, lengths 32 to 168
text: string, lengths 7 to 145k
source: string, lengths 2 to 105
categories: list, lengths 1 to 6
token_count: int64, 3 to 32.2k
subcategories: list, lengths 0 to 27
29,271,473
https://en.wikipedia.org/wiki/RTV%20silicone
RTV silicone (room-temperature-vulcanizing silicone) is a type of silicone rubber that cures at room temperature. It is available as a one-component product, or mixed from two components (a base and a curative). Manufacturers provide it in a range of hardnesses from very soft to medium, usually from 15 to 40 Shore A. RTV silicones can be cured with a catalyst consisting of either platinum or a tin compound such as dibutyltin dilaurate. Applications include low-temperature over-molding, making molds for reproduction, and lens applications for some optically clear grades. It is also used widely in the automotive industry as an adhesive and sealant, for example to create gaskets in place. Chemistry RTV silicones are made from a mixture of silicone polymers, fillers, and organoreactive silane catalysts. Silicones are built on the Si–O bond, but can have a wide variety of side chains. The silicone polymers are often made by reacting dimethyl dichlorosilane with water. Acetoxy cure chemistries, which release acetic acid, can provide a fast cure time, while fillers such as oxides and nitrides can provide better thermal conductivity. Tack-free times are typically on the order of minutes, with cure times on the order of hours. One-component silicone One-part silicones make use of moisture in the atmosphere to cure from the outside towards the center. The time to cure decreases with increasing temperature, humidity, and surface-area-to-volume ratio. Two-component silicone Two-part silicones use moisture in the second component as well as a cross-linker such as an active alkoxy to cure the silicone in a process called condensation curing. Two-part silicones can also be platinum-catalyzed in an "addition" reaction. Other reactive species used to facilitate cross-linking include acetoxy, amine, octoate, and ketoxime. Applications To produce the material, a user mixes silicone rubber with the curing agent or vulcanizing agent. Usually, the mixing ratio is a few percent. For RTV silicone to reproduce surface textures, the original must be clean. Vacuum de-airing removes entrained air bubbles from the mixed silicone and catalyst to ensure optimal tensile strength, which affects reproduction times. In casting and mold-making, RTV silicone rubber reproduces fine details and is suitable for a variety of industrial and art-related applications including prototypes, furniture, sculpture, and architectural elements. RTV silicone rubber can be used to cast materials including wax, gypsum, low-melt alloys/metals, and urethane, epoxy, or polyester resins (without using a release agent). A more recent innovation is the ability to 3D print RTV silicones. RTV silicones' industrial applications include aviation, aerospace, consumer electronics, and microelectronics. Some aviation and aerospace product applications are cockpit instruments, engine electronics potting, and engine gasketing. RTV silicones are used for their ability to withstand mechanical and thermal stress. Features Ease of handling; low viscosity and good flowability; low shrinkage; favorable tension; no deformation; favorable hardness; resistance to high temperatures, acids and alkalis, and ageing. Advantages and disadvantages RTV silicone rubber has excellent release properties compared to other mold rubbers, which is especially an advantage when doing production casting of resins (polyurethane, polyester, and epoxy). No release agent is required, obviating post-production cleanup.
Silicones also exhibit good chemical resistance and high-temperature resistance (205 °C, 400 °F and higher). For this reason, silicone molds are suitable for casting low-melt metals and alloys (e.g. zinc, tin, pewter, and Wood's metal). RTV silicone rubbers are, however, generally expensive – especially platinum-cure. They are also sensitive to substances (sulfur-containing modelling clay such as Plastilina, for example) that may prevent the silicone from curing (referred to as cure inhibition). Silicones are usually very thick (high viscosity), and must be vacuum degassed prior to pouring, to minimize bubble entrapment. If making a brush-on rubber mold, the curing time factor between coats is long (longer than urethanes or polysulfides, shorter than latex). Silicone components (A+B) must be mixed accurately by weight (scale required) or else they do not work. Tin-catalyst silicone shrinks somewhat and does not have a long shelf life. Acetoxysilane-based RTV releases acetic acid during the curing process. The locally released acetic acid can attack solder joints, detaching solder from copper wire. The locally released acetic acid can discolor the plating on mirror backs years after installation, making this type of RTV unsuitable for use as a mirror adhesive. References Elastomers Sculpture materials Silicones
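The two-component products described above are mixed by weight, with the curative amounting to a few percent of the base. As a purely illustrative sketch in Python (the 3% figure, function name, and example masses are assumptions for illustration, not values from the article), the arithmetic looks like this:

# Minimal sketch (not from the article): component masses for a two-part
# RTV silicone mixed by weight, assuming a hypothetical 3% curative ratio.
def rtv_mix_by_weight(base_mass_g: float, curative_percent: float = 3.0) -> dict:
    """Return the curative mass and total mix mass for a given base mass."""
    curative_mass_g = base_mass_g * curative_percent / 100.0
    return {
        "base_g": base_mass_g,
        "curative_g": curative_mass_g,
        "total_g": base_mass_g + curative_mass_g,
    }

if __name__ == "__main__":
    # e.g. 500 g of base at 3% curative -> 15 g curative, 515 g total
    print(rtv_mix_by_weight(500.0))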
RTV silicone
[ "Chemistry" ]
1,064
[ "Synthetic materials", "Elastomers" ]
38,792,002
https://en.wikipedia.org/wiki/Liquid%20contact%20indicator
A liquid contact indicator (LCI) is a small indicator that turns from white into another color, typically red, after contact with water. These indicators are small adhesives that are placed on several points within electronic devices such as laptops and smartphones. In the case of a defective device, service personnel can check whether the device might have suffered from contact with water, to protect against warranty fraud. Liquid contact indicators are also known by other names such as water damage tape, water damage sticker, water contact indicator tape, or liquid submersion indicator. Purpose The main purpose of the liquid contact indicator is to provide a lead to the cause of a defect in electronic devices. The manufacturer will not conduct a repair under warranty for a device with an activated LCI. Still, there can be reasons for doubt. Longer, but not extreme, exposure of a device to a humid environment can trigger an LCI. In theory, water can reach the LCI(s) without touching the electronics underneath, for instance when a small drop of rain falls into the headphone connector. A user should be able to use a device in normal circumstances. For instance, a smartphone is normally used while travelling, quite often outside, and it can rain or start to rain. A device should not break down immediately under such conditions, yet such circumstances could trigger the LCI. So a liquid contact indicator can be triggered without liquid having caused a defect. Forgery In the simplest form, liquid contact indicators are good for a first lead to the cause of defects. LCIs can be replaced; they are readily available in online electronics stores. But the other interest, use in warranty claims, makes them prone to potential misuse. Therefore, manufacturers introduced LCIs that are harder to reproduce, even with small holographic details. Placement Liquid contact indicators are placed in several places in electronic devices, for example underneath the keyboard of a notebook and at several points on its mainboard. Sometimes the liquid contact indicators are placed in such a way that they can be inspected from the outside. For instance, there is an LCI in the SIM-card slot of Apple iPhones and in the dock connectors and headphone jacks of iPods made since 2006. In a Samsung Galaxy smartphone there is an LCI underneath the battery cover near the battery contacts. References Explanation of a Water Damage Indicator Sensors Mobile phones Portable electronics
Liquid contact indicator
[ "Technology", "Engineering" ]
490
[ "Sensors", "Measuring instruments" ]
38,793,054
https://en.wikipedia.org/wiki/Spectrum%20commons%20theory
The Spectrum Commons theory states that the telecommunication radio spectrum should be directly managed by its users rather than regulated by governmental or private institutions. Spectrum management is the process of regulating the use of radio frequencies to promote efficient use and gain a net social benefit. The theory of Spectrum Commons argues that there are new methods and strategies that will allow almost completely open access to this currently regulated commons, with an unlimited number of persons sharing it without causing interference. This would eliminate the need for both a centralized, governmental management of the spectrum and the allocation of specific portions of the spectrum to private actors. The Spectrum debate The Spectrum Commons theory was developed to open up the spectrum to everyone. Users can share a spectrum as a commons without prior authorization from a higher governance or regime. Proponents of spectrum commons theory believe government allocation of the spectrum is inefficient, and that to be a true commons the spectrum must be opened up to its users, with both government and private control minimized. The promise of the commons approach is, as one technologist, George Gilder, once put it: "You can use the spectrum as much as you want as long as you don't collide with anyone else or pollute it with high-powered noise or other nuisances." The most basic characteristic of spectrum commons theory is unlimited access to spectrum resources, but as most modern theorists point out, there is a need for some constraint on those resources. A commons by definition is a resource that is owned or controlled jointly by a group of individuals. In order for a commons to be viable, someone must control the resource and set orderly sharing rules to govern its use. The radio spectrum is a shared resource that perhaps most strikingly affects the well-being of society. Its use is governed by a set of rules and narrow restrictions, designed to limit interference, whose origins go back nearly a century. While in recent years some of those rules have been replaced by more flexible, market-like arrangements, the fundamental approach of this institution remains essentially unchanged. The early days of radio communication had no regulations, and everyone could use the spectrum without limitation. When a particular band of spectrum was filled up or overused, it created harmful interference. In order to manage the spectrum and prevent harmful interference, the national regulatory authority (NRA) began to regulate the use of the spectrum. The period without regulation lasted only a few years, but this concept guided Spectrum Commons Theory. In the 1950s, economist Ronald Coase pointed out that the radio spectrum was no scarcer than wood or wheat, yet government did not routinely ration those items. Coase instead proposed the private ownership of, and a market in, spectrum, which would lead to a better allocation of the resource and avoid rent-seeking behavior by would-be users of the spectrum. In the late 1990s, it seemed like the property-rights view might carry the day as Congress finally allowed the FCC to auction licenses to use spectrum. Radio spectrum is doled out to users by what the Federal Communications Commission calls a "command-and-control" process. The [FCC] first carves out a block of spectrum and decides to what use it will be put (e.g., television, mobile telephony). Then, the agency gives away, at no charge, the right to use the spectrum to applicants it deems appropriate.
The FCC makes its choices based largely on a public record generated by a regulatory proceeding. The rationale for such a system has been that the radio spectrum is a scarce resource, that there are more people who would like to use it than there is space available, and thus that the government must apportion it lest there be chaos. Types of Commons Complete Open Commons Although Spectrum Commons Theory conceptually aims to function as a completely free and open environment, the facts point to this idea as flawed. A complete open commons is a regime under which anyone has access to an unowned resource without limitation; no one controls access to the resource under open access. As previously mentioned, however, in order for a commons to be viable, someone must control the resource and set orderly sharing rules to govern its use. While it is true that access to a commons can be open, this does not mean there is no central rule-setting authority. A complete open commons is not a feasible regime for spectrum because, as a scarce resource, it will be subject to tragedy. Even given new spectrum-sharing technologies, a controller is still needed because these technologies require standards setting and enforcement in order to function. Market Based Commons Economists, who have long been skeptical about the ability of government agencies to allocate resources efficiently by "picking winners," have preponderantly favored a market approach to the allocation of resources generally, and to the allocation of the spectrum in particular. As early as 1959, Ronald Coase wrote that spectrum was a fixed factor of production, like land or labor, and should be treated in the same way, with its use determined by the pricing system and awarded to the highest bidder. Coase concluded that government allocation of spectrum-use rights was not necessary to prevent interference and that, in fact, by preempting market allocation of spectrum, regulation was the source of extreme inefficiency. Economists since Coase have favored a market-based approach: if there is profit to be made from the charge of an entrance fee to such a park, then private enterprise and the profit motive can be relied upon to lead firms to carry out the necessary arrangements. And if entry into the commons is sufficiently beneficial to the entrants, there will indeed be profits to be made by giving them the opportunity to do so. Supercommons Another way to expand on the Spectrum Commons Theory is to look at it as a supercommons. As Werbach points out, a supercommons can operate alongside the property and commons regimes, which are just different configurations of usage rights associated with spectrum. In other words, the commons would be the baseline, with property encompassed within it, rather than the reverse. Bandwidth would not need to be infinite to justify a fundamental reconceptualization of the spectrum debate. Even with real-world scarcity and transaction-cost constraints, a default rule allowing unfettered wireless communication would most effectively balance interests to maximize capacity. The initial legal rule for this spectrum should be universal access. Anyone would be permitted to transmit anywhere, at any time, in any manner, so long as they did not impose an excessive burden on others. Modern Examples Propagate Network's Swarm Logic Software This software enables different devices to communicate with one another and to choose nonconflicting frequencies or access points that will adjust their power levels to eliminate overlap.
If this technology were able to reach a critical mass of adoption, even in localized areas, it could conceivably minimize those transaction costs necessary to adapt to neighboring uses of commons access spectrum. For neighboring buildings with scores of Wi-Fi transmitters, such technologies could prove very important, ensuring that different signals did not overlap and interfere with each other – thereby slowing data transmission and possibly triggering the destructive cycle of behavior noted above. Moreover, a logical extension of the swarm logic software is a function that could enable neighbors to identify those who deviated from accepted social norms in using commons access spectrum and, concomitantly, lower enforcement costs. Indeed, collective efforts – such as the Broadband Access Network Coordination ("BANC") – have already taken root to facilitate joint and controlled efforts to limit interference. References Radio spectrum Wireless networking Radio resource management
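The coordination behavior described above, devices picking frequencies that do not conflict with their neighbors, can be illustrated with a toy greedy assignment. The sketch below is only an illustration of that idea in Python; the graph, the channel numbers, and the function name are assumptions, and this is not the actual Propagate Networks algorithm.

# Illustrative sketch: a toy greedy "pick a nonconflicting frequency" routine
# in the spirit of the coordination software described above.
def assign_channels(neighbors: dict, channels: list) -> dict:
    """Greedily give each transmitter a channel unused by its neighbors."""
    assignment = {}
    for node in sorted(neighbors):
        taken = {assignment[n] for n in neighbors[node] if n in assignment}
        free = [c for c in channels if c not in taken]
        # Fall back to the least-used channel if every channel is taken.
        assignment[node] = free[0] if free else min(
            channels, key=lambda c: list(assignment.values()).count(c)
        )
    return assignment

if __name__ == "__main__":
    # Three Wi-Fi access points in adjacent buildings sharing three channels.
    graph = {"AP1": ["AP2"], "AP2": ["AP1", "AP3"], "AP3": ["AP2"]}
    print(assign_channels(graph, [1, 6, 11]))  # {'AP1': 1, 'AP2': 6, 'AP3': 1}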
Spectrum commons theory
[ "Physics", "Technology", "Engineering" ]
1,505
[ "Radio spectrum", "Spectrum (physical sciences)", "Wireless networking", "Computer networks engineering", "Electromagnetic spectrum" ]
1,471,697
https://en.wikipedia.org/wiki/Non-linear%20sigma%20model
In quantum field theory, a nonlinear σ model describes a field Σ that takes on values in a nonlinear manifold called the target manifold T. The non-linear σ-model was introduced by Gell-Mann and Lévy (1960), who named it after a field corresponding to a spinless meson called σ in their model. This article deals primarily with the quantization of the non-linear sigma model; please refer to the base article on the sigma model for general definitions and classical (non-quantum) formulations and results. Description The target manifold T is equipped with a Riemannian metric g. Σ is a differentiable map from Minkowski space M (or some other space) to T. The Lagrangian density in contemporary chiral form is given by L = (1/2) g(∂^μΣ, ∂_μΣ) − V(Σ), where we have used a + − − − metric signature, the partial derivative ∂Σ is given by a section of the jet bundle of T×M, and V is the potential. In coordinate notation, with the coordinates Σ^a, a = 1, ..., n, where n is the dimension of T, the Lagrangian density becomes L = (1/2) g_ab(Σ) ∂^μΣ^a ∂_μΣ^b − V(Σ). In more than two dimensions, nonlinear σ models contain a dimensionful coupling constant and are thus not perturbatively renormalizable. Nevertheless, they exhibit a non-trivial ultraviolet fixed point of the renormalization group both in the lattice formulation and in the double expansion originally proposed by Kenneth G. Wilson. In both approaches, the non-trivial renormalization-group fixed point found for the O(n)-symmetric model is seen to simply describe, in dimensions greater than two, the critical point separating the ordered from the disordered phase. In addition, the improved lattice or quantum field theory predictions can then be compared to laboratory experiments on critical phenomena, since the O(n) model describes physical Heisenberg ferromagnets and related systems. The above results point therefore to a failure of naive perturbation theory in describing correctly the physical behavior of the O(n)-symmetric model above two dimensions, and to the need for more sophisticated non-perturbative methods such as the lattice formulation. This means they can only arise as effective field theories. New physics is needed at around the distance scale where the two-point connected correlation function is of the same order as the curvature of the target manifold. This is called the UV completion of the theory. There is a special class of nonlinear σ models with an internal symmetry group G. If G is a Lie group and H is a Lie subgroup, then the quotient space G/H is a manifold (subject to certain technical restrictions like H being a closed subset) and is also a homogeneous space of G or, in other words, a nonlinear realization of G. In many cases, G/H can be equipped with a Riemannian metric which is G-invariant. This is always the case, for example, if G is compact. A nonlinear σ model with G/H as the target manifold with a G-invariant Riemannian metric and a zero potential is called a quotient space (or coset space) nonlinear σ model. When computing path integrals, the functional measure needs to be "weighted" by the square root of the determinant of g. Renormalization This model proved to be relevant in string theory, where the two-dimensional manifold is named the worldsheet. Appreciation of its generalized renormalizability was provided by Daniel Friedan. He showed that, at the leading order of perturbation theory, the theory admits a renormalization group equation in which the beta function of the target-space metric is proportional to the Ricci tensor R_μν of the target manifold. This represents a Ricci flow; its fixed points obey the Einstein field equations for the target manifold.
The existence of such a fixed point is relevant, as it grants, at this order of perturbation theory, that conformal invariance is not lost due to quantum corrections, so that the quantum field theory of this model is sensible (renormalizable). Further adding nonlinear interactions representing flavor-chiral anomalies results in the Wess–Zumino–Witten model, which augments the geometry of the flow to include torsion, preserving renormalizability and leading to an infrared fixed point as well, on account of teleparallelism ("geometrostasis"). O(3) non-linear sigma model A celebrated example, of particular interest due to its topological properties, is the O(3) nonlinear σ-model in 1 + 1 dimensions, with the Lagrangian density L = (1/2) ∂^μn̂ ⋅ ∂_μn̂, where n̂ = (n1, n2, n3) with the constraint n̂⋅n̂ = 1 and μ = 1, 2. This model allows for topological finite-action solutions, as at infinite space-time the Lagrangian density must vanish, meaning n̂ = constant at infinity. Therefore, in the class of finite-action solutions, one may identify the points at infinity as a single point, i.e. space-time can be identified with a Riemann sphere. Since the n̂-field lives on a sphere as well, the mapping S² → S² is in evidence, the solutions of which are classified by the second homotopy group of a 2-sphere: π₂(S²) = ℤ. These solutions are called the O(3) instantons. This model can also be considered in 1+2 dimensions, where the topology now comes only from the spatial slices. These are modelled as R² with a point at infinity, and hence have the same topology as the O(3) instantons in 1+1 dimensions. They are called sigma model lumps. See also Sigma model Chiral model Little Higgs Skyrmion, a soliton in non-linear sigma models Polyakov action WZW model Fubini–Study metric, a metric often used with non-linear sigma models Ricci flow Scale invariance References External links Quantum field theory Mathematical physics
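As an aside not drawn from the article text, the integer classifying these finite-action solutions can be written explicitly as the standard winding-number (topological charge) integral, given here in LaTeX form; the accompanying action bound assumes the (1/2) ∂n̂⋅∂n̂ normalization of the Lagrangian used above:

Q = \frac{1}{8\pi}\int d^2x\,\epsilon^{\mu\nu}\,\hat{n}\cdot\left(\partial_\mu \hat{n}\times\partial_\nu \hat{n}\right)\in\mathbb{Z}, \qquad S \ge 4\pi\,|Q|.

Configurations saturating the bound S = 4π|Q| are the instanton (or anti-instanton) solutions referred to in the text.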
Non-linear sigma model
[ "Physics", "Mathematics" ]
1,189
[ "Quantum field theory", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Mathematical physics" ]
1,471,806
https://en.wikipedia.org/wiki/Composite%20field
In quantum field theory, a composite field is a field defined in terms of other more "elementary" fields. It might describe a composite particle (bound state) or it might not. It might be local, or it might be nonlocal. Noether fields are often composite fields and they are local. In the generalized LSZ formalism, composite fields, which are usually nonlocal, are used to model asymptotic bound states. See also Fermionic field Bosonic field Auxiliary field References Quantum field theory
Composite field
[ "Physics" ]
109
[ "Quantum field theory", "Quantum mechanics", "Quantum physics stubs" ]
1,472,404
https://en.wikipedia.org/wiki/Metabolic%20ecology
Metabolic ecology is a field of ecology aiming to understand constraints on metabolic organization as important for understanding almost all life processes. The main focus is on the metabolism of individuals, the intra- and inter-specific patterns that emerge from it, and the evolutionary perspective. Two main metabolic theories that have been applied in ecology are Kooijman's Dynamic energy budget (DEB) theory and the West, Brown, and Enquist (WBE) metabolic scaling theory. Both theories have an individual-based metabolic underpinning but make fundamentally different assumptions. Metabolic scaling theory is based more on first principles and makes several simplifying assumptions to better reveal the generalities of the role of metabolism in shaping organismal form and function and its impact on ecology and evolution. In many ways, DEB is a more parameterized, species-level version of the WBE theory. Models of an individual's metabolism follow energy uptake and allocation, and can focus on mechanisms and constraints of energy transport (transport models) or on the dynamic use of stored metabolites (energy budget models). References Ecology Metabolism
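The WBE metabolic scaling theory mentioned above predicts quarter-power allometric scaling of whole-organism metabolic rate with body mass, B = B0 M^(3/4). As a small illustrative sketch in Python (the normalization constant B0 and the masses below are made-up numbers chosen only to show the shape of the relationship, not fitted values):

# Illustrative sketch of the 3/4-power (quarter-power) metabolic scaling
# predicted by WBE theory: B = B0 * M**(3/4).
def metabolic_rate(mass_kg: float, b0: float = 3.4) -> float:
    """Whole-organism metabolic rate (arbitrary units) under B = B0 * M^(3/4)."""
    return b0 * mass_kg ** 0.75

if __name__ == "__main__":
    for m in [0.02, 1.0, 70.0, 4000.0]:  # mouse-ish to elephant-ish masses
        b = metabolic_rate(m)
        print(f"M = {m:8.2f} kg  B = {b:10.2f}  B/M = {b / m:8.2f}")
    # Mass-specific rate B/M falls as M**(-1/4): larger animals burn less per kg.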
Metabolic ecology
[ "Chemistry", "Biology" ]
216
[ "Biochemistry", "Ecology", "Metabolism", "Cellular processes" ]
1,473,011
https://en.wikipedia.org/wiki/Bloomery
A bloomery is a type of metallurgical furnace once used widely for smelting iron from its oxides. The bloomery was the earliest form of smelter capable of smelting iron. Bloomeries produce a porous mass of iron and slag called a bloom. The mix of slag and iron in the bloom, termed sponge iron, is usually consolidated and further forged into wrought iron. Blast furnaces, which produce pig iron, have largely superseded bloomeries. Process A bloomery consists of a pit or chimney with heat-resistant walls made of earth, clay, or stone. Near the bottom, one or more pipes (made of clay or metal) enter through the side walls. These pipes, called tuyeres, allow air to enter the furnace, either by natural draught or forced with bellows or a trompe. An opening at the bottom of the bloomery may be used to remove the bloom, or the bloomery can be tipped over and the bloom removed from the top. The first step taken before the bloomery can be used is the preparation of the charcoal and the iron ore. Charcoal is nearly pure carbon, which, when burned, both produces the high temperature needed for the smelting process and provides the carbon monoxide needed for reduction of the metal. The ore is broken into small pieces and usually roasted in a fire, to make rock-based ores easier to break up, bake out some impurities, and (to a lesser extent) remove any moisture in the ore. Any large impurities (such as silica) in the ore can be removed as it is crushed. The desired particle size depends primarily on which of several ore types may be available, which will also have a relationship to the layout and operation of the furnace, of which a number of regional, historic/traditional forms exist. Natural iron ores can vary considerably in oxide form (e.g. FeO, Fe2O3, Fe3O4), and importantly in relative iron content. Since slag from previous blooms may have a high iron content, it can also be broken up and may be recycled into the bloomery with the new ore. In operation, after the bloomery is heated, typically with a wood fire shifting to burning sized charcoal, iron ore and additional charcoal are introduced through the top. Again, traditional methods vary, but normally smaller charges of ore are added at the start of the main smelting sequence, increasing to larger amounts as the smelt progresses. Overall, total charcoal and ore are typically added in a roughly one-to-one ratio. Inside the furnace, carbon monoxide from the incomplete combustion of the charcoal reduces the iron oxides in the ore to metallic iron without melting the ore; this allows the bloomery to operate at lower temperatures than the melting temperature of the ore. As the desired product of a bloomery is iron that is easily forgeable, it requires a low carbon content. The temperature and ratio of charcoal to iron ore must be carefully controlled to keep the iron from absorbing too much carbon and thus becoming unforgeable. Cast iron forms when the iron absorbs 2% to 4% carbon. Because the bloomery is self-fluxing, the addition of limestone is not required to form a slag. The small particles of iron produced in this way fall to the bottom of the furnace, where they combine with molten slag, often consisting of fayalite, a compound of silicon, oxygen, and iron mixed with other impurities from the ore. The hot liquid slag, running to the bottom of the furnace, cools against the base and lower side walls of the furnace, effectively forming a bowl still containing fluid slag.
As the individual iron particles form, they fall into this bowl and sinter together under their own weight, forming a spongy mass referred to as the bloom. Because the bloom is typically porous, and its open spaces can be full of slag, the extracted mass must be beaten with heavy hammers to both compress voids and drive out any molten slag remaining. This process may require several additional heating and compaction cycles, working at high 'welding' temperatures. Iron treated this way is said to be wrought (worked), and the resulting iron, with reduced amounts of slag, is called wrought iron or bar iron. Because of the creation process, individual blooms can often have differing carbon contents between the original top and bottom surfaces, differences that will also be somewhat blended together through the flattening, folding, and hammer-welding sequences. Intentionally producing blooms that are coated in steel (i.e. iron with a higher carbon content) by manipulating the charge of and air flow to the bloomery is also possible. As the era of modern commercial steelmaking began, the word "bloom" was extended to another sense referring to an intermediate-stage piece of steel, of a size comparable to many traditional iron blooms, that was ready to be further worked into billet. History The onset of the Iron Age in most parts of the world coincides with the first widespread use of the bloomery. While earlier examples of iron are found, their high nickel content indicates that this is meteoric iron. Other early samples of iron may have been produced by accidental introduction of iron ore in copper-smelting operations. Iron appears to have been smelted in the Middle East as early as 3000 BC, but coppersmiths, not being familiar with iron, did not put it to use until much later. In the West, iron began to be used around 1200 BC. East Asia China has long been considered the exception to the general use of bloomeries. The Chinese are thought to have skipped the bloomery process completely, starting with the blast furnace and the finery forge to produce wrought iron; by the fifth century BC, metalworkers in the southern state of Wu had invented the blast furnace and the means to both cast iron and to decarburize the carbon-rich pig iron produced in a blast furnace to a low-carbon, wrought iron-like material. Recent evidence, however, shows that bloomeries were used earlier in ancient China, migrating in from the west as early as 800 BC, before being supplanted by the locally developed blast furnace. Supporting this theory was the discovery of "more than ten" iron-digging implements found in the tomb of Duke Jing of Qin (d. 537 BCE), whose tomb is located in Fengxiang County, Shaanxi (a museum exists on the site today). Sub-Saharan Africa The earliest records of bloomery-type furnaces in East Africa are discoveries of smelted iron and carbon in Nubia in ancient Sudan dated at least to the seventh to the sixth century BC. The ancient bloomeries that produced metal tools for the Nubians and Kushites produced a surplus for sale. All traditional sub-Saharan African iron-smelting processes are variants of the bloomery process. There is considerable discussion about the origins of iron metallurgy in Africa. Smelting in bloomery type furnaces in West Africa and forging of tools appeared in the Nok culture of central Nigeria by at least 550 BC and possibly several centuries earlier. 
Also, evidence indicates iron smelting with bloomery-style furnaces dated to 750 BC in Opi (Augustin Holl 2009) and Lejja dated to 2,000 BC (Pamela Eze-Uzomaka 2009), both sites in the Nsukka region of southeast Nigeria in what is now Igboland. The site of Gbabiri, in the Central African Republic, has also yielded evidence of iron metallurgy, from a reduction furnace and blacksmith workshop, with earliest dates of 896–773 and 907–796 BC, respectively. South Asia During a hydroelectric plant project at Samanalawewa, in the southern foothills of the Central Highlands of Sri Lanka, a wind-driven furnace was found at an excavation site. Such furnaces were powered by the monsoon winds and have been dated to 300 BC using radiocarbon-dating techniques. These ancient Lankan furnaces might have produced the best-quality steel for legendary Damascus swords, as referred to in earlier Syrian records. Field trials using replica furnaces confirmed that this furnace type uses a wind-based air-supply principle that is distinct from either forced or natural draught, and showed also that they are capable of producing high-carbon steel. Wrought iron was used in the construction of monuments such as the iron pillar of Delhi, built in the third century AD during the Gupta Empire. The pillar was built using a towering series of disc-shaped iron blooms. As in China, high-carbon steel was eventually used in India, although cast iron was not used for architecture until modern times. Early to Medieval Europe Early European bloomeries were relatively small, primarily due to the mechanical limits of human-powered bellows and the amount of force possible to apply with hand-driven sledge hammers. Those known archaeologically from the pre-Roman Iron Age tend to be in the 2 kg range, produced in low shaft furnaces. Roman-era production often used furnaces tall enough to create a natural draft effect (into the range of 200 cm tall), increasing bloom sizes into the range of 10–15 kg. Contemporary experimenters have routinely made blooms in the 5–10 kg range using Northern European-derived "short-shaft" furnaces with blown air supplies. The use of waterwheels, spreading around the turn of the first millennium and used to power more massive bellows, allowed the bloomery to become larger and hotter, with associated trip hammers allowing the consolidation forging of the larger blooms created. Progressively larger bloomeries were constructed in the late 14th century, with a capacity of about 15 kg on average, though exceptions did exist. European average bloom sizes quickly rose to 300 kg, where they levelled off until the demise of the bloomery. As a bloomery's size is increased, the iron ore is exposed to burning charcoal for a longer time. When combined with the strong air blast required to penetrate the large ore and charcoal stack, this may cause part of the iron to melt and become saturated with carbon in the process, producing unforgeable pig iron, which requires oxidation to be reduced into cast iron, steel, and wrought iron. This pig iron was considered a waste product detracting from the largest bloomeries' yield, and early blast furnaces, identical in construction but dedicated to the production of molten iron, were not built until the 14th century. Bloomery-type furnaces typically produced a range of iron products from very low-carbon iron to steel containing around 0.2–1.5% carbon. The master smith had to select pieces of low-carbon iron, carburize them, and pattern-weld them together to make steel sheets.
Even when applied to a noncarburized bloom, this pound, fold, and weld process resulted in a more homogeneous product and removed much of the slag. The process had to be repeated up to 15 times when high-quality steel was needed, as for a sword. The alternative was to carburize the surface of a finished product. Each welding's heat oxidises some carbon, so the master smith had to make sure enough carbon was in the starting mixture. In England and Wales, despite the arrival of the blast furnace in the Weald in about 1491, bloomery forges, probably using waterpower for the hammer and the bellows, were operating in the West Midlands region beyond 1580. In Furness and Cumberland, they operated into the early 17th century, and the last one in England (near Garstang) did not close until about 1770. One of the oldest-known blast furnaces in Europe has been found in Lapphyttan in Sweden, carbon-14 dated to the 12th century. The oldest bloomery in Sweden, found in the same area, has been carbon-14 dated to 700 BCE. Bloomeries survived in Spain and southern France as Catalan forges into the mid-19th century, and in Austria until 1775. The Americas Iron smelting was unknown in pre-Columbian America. Excavations at L'Anse aux Meadows, Newfoundland, have found considerable evidence for the processing of bog iron and the production of iron in a bloomery by the Norse. The cluster of Viking Age structures (–1022 AD) at L'Anse aux Meadows is situated on a raised marine terrace, between a sedge peat bog and the ocean. Estimates from the relatively small amount of slag recovered archaeologically suggest 15 kg of slag was produced during what appears to have been a single smelting attempt. By comparing the iron content of the primary bog iron ore found in the purpose-built 'furnace hut' with the iron remaining in that slag, an estimated 3 kg iron bloom was produced. At a yield of at best 20% from what is a good, iron-rich ore, this suggests the workers processing the ore had not been particularly skilled. This supports the idea that iron-processing knowledge was widespread and not restricted to major centers of trade and commerce. Archaeologists also found 98 nail and, importantly, ship rivet fragments at the site, as well as considerable evidence for woodworking – which points to boat or possibly ship repairs being undertaken at the site. (An important consideration remains that a potential 3 kg raw bloom most certainly does not make enough refined bar to manufacture the 3 kg of recovered nails and rivets.) In the Spanish colonization of the Americas, bloomeries or "Catalan forges" were part of "self-sufficiency" at some of the missions. As part of the Franciscan Spanish missions in Alta California, the "Catalan forges" at Mission San Juan Capistrano from the 1790s are the oldest existing facilities of their kind in the present-day state of California. The bloomeries' sign proclaims the site as being "part of Orange County's first industrial complex". The archaeology at Jamestown, Virginia (circa 1610–1615) has recovered the remains of a simple short-shaft bloomery furnace, likely intended as yet another "resource test" like the one in Vinland much earlier. The English settlers of the Thirteen Colonies were prevented by law from manufacturing; for a time, the British sought to situate most of the skilled artisanry at domestic locations. In fact, this was one of the problems that led to the revolution. The Falling Creek Ironworks was the first in the United States.
The Neabsco Iron Works is an example of the early Virginian effort to form a workable American industry. The earliest iron forge in colonial Pennsylvania was Thomas Rutter's bloomery near Pottstown, founded in 1716. In the Adirondacks, New York, new bloomeries using the hot blast technique were built in the 19th century. See also Double hammer Tatara (furnace) Direct reduction Direct reduction (blast furnace) References External links Technology and archaeology of the earliest iron smelting and smithing Experimental Iron Smelting (at the Wareham Forge) Viking-Era Norse techniques by DARC WIRG experimental bloomery Precursors of the blast furnace Roger Smith's article on bloomery construction How Stuff Works Early use of iron in China The Catalan process for the direct production of malleable iron and its spread to Europe and the Americas PDF by Estanislau Tomàs (retrieved 23 March 2010) A Practical Treatise on the Smelting and Smithing of Bloomery Iron https://hmsjournal.org/index.php/home/article/view/268/257 An Update on "A Practical Treatise" https://www.researchgate.net/publication/285737243_An_American_bloomery_in_Sussex#fullTextFileContent Industrial furnaces Steelmaking Iron Archaeometallurgy Smelting Iron Age Europe
Bloomery
[ "Chemistry", "Materials_science" ]
3,250
[ "Smelting", "Metallurgical processes", "Metallurgy", "Steelmaking", "Archaeometallurgy", "Industrial furnaces" ]
1,473,331
https://en.wikipedia.org/wiki/Artificial%20womb
An artificial womb or artificial uterus is a device that would allow for extracorporeal pregnancy, by growing a fetus outside the body of an organism that would normally carry the fetus to term. An artificial uterus, as a replacement organ, would have many applications. It could be used to assist male or female couples in the development of a fetus. This can potentially be performed as a switch from a natural uterus to an artificial uterus, thereby moving the threshold of fetal viability to a much earlier stage of pregnancy. In this sense, it can be regarded as a neonatal incubator with very extended functions. It could also be used for the initiation of fetal development. An artificial uterus could also help make fetal surgery procedures at an early stage an option instead of having to postpone them until term of pregnancy. In 2016, scientists published two studies regarding human embryos developing for thirteen days within an ecto-uterine environment. In 2017, fetal researchers at the Children's Hospital of Philadelphia published a study showing they had grown premature lamb fetuses for four weeks in an extra-uterine life support system. A 14-day rule prevents human embryos from being kept in artificial wombs longer than 14 days; this rule has been codified into law in twelve countries. In 2021, The Washington Post reported that "the International Society for Stem Cell Research relaxed a historical '14-day rule' that said researchers could grow natural embryos for only 14 days in the laboratory, allowing researchers to seek approval for longer studies"; but the article nonetheless specified that: "[h]uman embryo models are banned from being implanted into a uterus." Components An artificial uterus, sometimes referred to as an "exowomb", would have to provide nutrients and oxygen to nurture a fetus, as well as dispose of waste material. The scope of an artificial uterus, or "artificial uterus system" to emphasize a broader scope, may also include the interface serving the function otherwise provided by the placenta, an amniotic tank functioning as the amniotic sac, as well as an umbilical cord. Nutrition, oxygen supply and waste disposal A woman may still supply nutrients and dispose of waste products if the artificial uterus is connected to her. She may also provide immune protection against diseases by passing of IgG antibodies to the embryo or fetus. Artificial supply and disposal have the potential advantage of allowing the fetus to develop in an environment that is not influenced by the presence of disease, environmental pollutants, alcohol, or drugs which a human may have in the circulatory system. There is no risk of an immune reaction towards the embryo or fetus that could otherwise arise from insufficient gestational immune tolerance. Some individual functions of an artificial supplier and disposer include: Waste disposal may be performed through dialysis. For oxygenation of the embryo or fetus, and removal of carbon dioxide, extracorporeal membrane oxygenation (ECMO) is a functioning technique, having successfully kept goat fetuses alive for up to 237 hours in amniotic tanks. ECMO is currently a technique used in selected neonatal intensive care units to treat term infants with selected medical problems that result in the infant's inability to survive through gas exchange using the lungs. 
However, the cerebral vasculature and germinal matrix are poorly developed in fetuses, and subsequently, there is an unacceptably high risk for intraventricular hemorrhage (IVH) if administering ECMO at a gestational age less than 32 weeks. Liquid ventilation has been suggested as an alternative method of oxygenation, or at least providing an intermediate stage between the womb and breathing in open air. For artificial nutrition, current techniques are problematic. Total parenteral nutrition, as studied on infants with severe short bowel syndrome, has a 5-year survival of approximately 20%. Issues related to hormonal stability also remain to be addressed. Theoretically, animal suppliers and disposers may be used, but when involving an animal's uterus the technique may rather be in the scope of interspecific pregnancy. Uterine wall In a normal uterus, the myometrium of the uterine wall functions to expel the fetus at the end of a pregnancy, and the endometrium plays a role in forming the placenta. An artificial uterus may include components of equivalent function. Methods have been considered to connect an artificial placenta and other "inner" components directly to an external circulation. Interface (artificial placenta) An interface between the supplier and the embryo or fetus may be entirely artificial, e.g. by using one or more semipermeable membranes such as is used in extracorporeal membrane oxygenation (ECMO). There is also potential to grow a placenta using human endometrial cells. In 2002, it was announced that tissue samples from cultured endometrial cells removed from a human donor had successfully grown. The tissue sample was then engineered to form the shape of a natural uterus, and human embryos were then implanted into the tissue. The embryos correctly implanted into the artificial uterus' lining and started to grow. However, the experiments were halted after six days to stay within the permitted legal limits of in vitro fertilisation (IVF) legislation in the United States. A human placenta may theoretically be transplanted inside an artificial uterus, but the passage of nutrients across this artificial uterus remains an unsolved issue. Amniotic tank (artificial amniotic sac) The main function of an amniotic tank would be to fill the function of the amniotic sac in physically protecting the embryo or fetus, optimally allowing it to move freely. It should also be able to maintain an optimal temperature. Lactated Ringer's solution can be used as a substitute for amniotic fluid. Umbilical cord Theoretically, in case of premature removal of the fetus from the natural uterus, the natural umbilical cord could be used, kept open either by medical inhibition of physiological occlusion, by anti-coagulation as well as by stenting or creating a bypass for sustaining blood flow between the mother and fetus. Research and development The use of artificial wombs was first termed ectogenesis by British-Indian pioneer JBS Haldane in 1923. Specifying related terminology, one paper determines:"A potential, though remotely possible, application of the technology is ‘complete ectogenesis’ – complete gestation outside the human body. This will greatly affect the substantiated human involvement during gestation, making it an extracorporeal event and thus completely transforming the conventional notion of pregnancy." Emanuel M. Greenberg (USA) Emanuel M. Greenberg wrote various papers on the topic of the artificial womb and its potential use in the future. On 22 July 1954 Emanuel M. 
Greenberg filed a patent on the design for an artificial womb. The patent included two images of the design for an artificial womb. The design itself included a tank, filled with amniotic fluid, in which to place the fetus, a machine connecting to the umbilical cord, blood pumps, an artificial kidney, and a water heater. He was granted the patent on 15 November 1955. Juntendo University (Japan) In 1996, Juntendo University in Tokyo developed extra-uterine fetal incubation (EUFI). The project was led by Yoshinori Kuwabara, who was interested in the development of immature newborns. The system was developed using fourteen goat fetuses that were placed into artificial amniotic fluid under the same conditions as in a mother goat. Kuwabara and his team succeeded in keeping the goat fetuses in the system for three weeks. The system, however, ran into several problems and was not ready for human testing. Kuwabara remained hopeful that the system would be improved and would later be used on human fetuses. Children's Hospital of Philadelphia In 2017, researchers at the Children's Hospital of Philadelphia were able to further develop the extra-uterine system. The study uses fetal lambs, which are placed in a plastic bag filled with artificial amniotic fluid. The system consists of three main components: a pumpless arteriovenous circuit, a closed sterile fluid environment, and umbilical vascular access. In the pumpless arteriovenous circuit, the blood flow is driven exclusively by the fetal heart, combined with a very low resistance oxygenator, to most closely mimic the normal fetal/placental circulation. The closed fluid environment is important to ensure sterility. Scientists developed a technique for umbilical cord vessel cannulation that maintains a length of native umbilical cord (5–10 cm) between the cannula tips and the abdominal wall, to minimize decannulation events and the risk of mechanical obstruction. The umbilical cords of the lambs are attached to a machine outside of the bag designed to act like a placenta, providing oxygen and nutrients and also removing any waste. The researchers kept the machine "in a dark, warm room where researchers can play the sounds of the mother's heart for the lamb fetus." The system succeeded in helping the premature lamb fetuses develop normally for a month. Indeed, the scientists have run 8 lambs with maintenance of stable levels of circuit flow equivalent to the normal flow to the placenta. Specifically, they have run 5 fetuses from 105 to 108 days of gestation for 25–28 days, and 3 fetuses from 115 to 120 days of gestation for 20–28 days. The longest runs were terminated at 28 days due to animal protocol limitations rather than any instability, suggesting that support of these early gestational animals could be maintained beyond 4 weeks. Alan Flake, a fetal surgeon at the Children's Hospital of Philadelphia, hopes to move testing to premature human fetuses, but this could take anywhere from three to five years to become a reality. Flake, who led the study, calls the possibility of their technology recreating a full pregnancy a "pipe dream at this point" and does not personally intend to create the technology to do so. Colossal Biosciences In 2021, Colossal Biosciences, a start-up company founded by Ben Lamm and George Church, began research and development of artificial animal wombs to further de-extinction and conservation efforts for species such as the woolly mammoth and northern white rhinoceros.
An artificial womb would also be necessary for reviving a species with no suitable living surrogate species, such as Steller's sea cow or the elephant bird. Colossal, in collaboration with the University of Melbourne, has also developed artificial marsupial pouches as part of its thylacine de-extinction project. Eindhoven University of Technology (NL) Since 2016, researchers at TU/e and partners have aimed to develop an artificial womb that is an adequate substitute for the protective environment of the maternal womb in case of premature birth, preventing health complications. The artificial womb and placenta will provide a natural environment for the baby with the goal of easing the transition to newborn life. The perinatal life support (PLS) system will be developed using breakthrough technology: a manikin will mimic the infant during testing and training, and advanced monitoring and computational modeling will provide clinical guidance. The consortium of three European universities working on the project consists of Aachen, Milan, and Eindhoven. In 2019 this consortium was granted a subsidy of 3 million euros, and a second grant of 10 million is in progress. Together, the PLS partners provide joint medical, engineering, and mathematical expertise to develop and validate the Perinatal Life Support system using breakthrough simulation technologies. The interdisciplinary consortium will push the development of these technologies forward and combine them to establish the first ex vivo fetal maturation system for clinical use. This project, coordinated by the Eindhoven University of Technology, brings together world-leading experts in obstetrics, neonatology, industrial design, mathematical modelling, ex vivo organ support, and non-invasive fetal monitoring. The consortium is led by Professor Frans van de Vosse and Professor Guid Oei. In 2020 the spin-off Juno Perinatal Healthcare was set up by engineers Jasmijn Kok and Lyla Kok, ensuring valorisation of the research. Weizmann Institute of Science (Israel) In 2021, the Weizmann Institute of Science in Israel built a mechanical uterus and grew mouse embryos outside the uterus for several days. This device was also used in 2022 to nurture mouse stem cells for over a week and to grow synthetic embryos from stem cells. Philosophical considerations Bioethics The development of artificial uteri and ectogenesis raises bioethical and legal considerations, and also has important implications for reproductive rights and the abortion debate. Implementing artificial wombs would require advanced technology and significant costs, potentially limiting access for people in developing countries or with fewer resources. Artificial uteri may expand the range of fetal viability, raising questions about the role that fetal viability plays within abortion law. Within severance theory, for example, abortion rights only include the right to remove the fetus, and do not always extend to the termination of the fetus. If transferring the fetus from a woman's womb to an artificial uterus is possible, the choice to terminate a pregnancy in this way could provide an alternative to aborting the fetus. A 2007 essay theorizes that children who develop in an artificial uterus may lack "some essential bond with their mothers that other children have".
Gender equality In the 1970 book The Dialectic of Sex, feminist Shulamith Firestone wrote that differences in biological reproductive roles are a source of gender inequality. Firestone singled out pregnancy and childbirth, making the argument that an artificial womb would free "women from the tyranny of their reproductive biology." Arathi Prasad argues in her column on The Guardian in her article "How artificial wombs will change our ideas of gender, family and equality" that "[i]t will ... give men an essential tool to have a child entirely without a woman, should they choose. It will ask us to question concepts of gender and parenthood." She furthermore argues for the benefits for same-sex couples, saying: "It might also mean that the divide between mother and father can be dispensed with: a womb outside a woman’s body would serve women, trans women and male same-sex couples equally without prejudice." It could even be a solution for women with absolute uterine infertility. See also References Further reading Bulletti FM, Sciorio R, Palagiano A, Bulletti C. (2023). "The artificial uterus: on the way to ectogenesis." Zygote. 31(5):457-467. doi:10.1017/S0967199423000175 Johnson Dahlke, Laura. Outer Origin: A Discourse on Ectogenesis and the Value of Human Experience. Eugene, Oregon: Pickwick Publications, 2024, Hypothetical technology Obstetrics Uterus Womb
Artificial womb
[ "Physics", "Biology" ]
3,155
[ "Materials", "Artificial organs", "Matter", "Conservation and restoration materials" ]
1,473,481
https://en.wikipedia.org/wiki/Reference%20electrode
A reference electrode is an electrode that has a stable and well-known electrode potential. The overall chemical reaction taking place in a cell is made up of two independent half-reactions, which describe chemical changes at the two electrodes. To focus on the reaction at the working electrode, the reference electrode is standardized with constant (buffered or saturated) concentrations of each participant of the redox reaction. There are many ways reference electrodes are used. The simplest is when the reference electrode is used as a half-cell to build an electrochemical cell. This allows the potential of the other half-cell to be determined. An accurate and practical method to measure an electrode's potential in isolation (absolute electrode potential) has yet to be developed. Aqueous reference electrodes Common reference electrodes and their potentials with respect to the standard hydrogen electrode (SHE):
Standard hydrogen electrode (SHE) (E = 0.000 V), activity of H+ = 1 molar
Normal hydrogen electrode (NHE) (E ≈ 0.000 V), concentration of H+ = 1 molar
Reversible hydrogen electrode (RHE) (E = 0.000 V − 0.0591 × pH) at 25 °C
Saturated calomel electrode (SCE) (E = +0.241 V saturated)
Copper–copper(II) sulfate electrode (CSE) (E = +0.314 V)
Silver chloride electrode (E = +0.197 V in saturated KCl)
Silver chloride electrode (E = +0.210 V in 3.0 mol KCl/kg)
Silver chloride electrode (E = +0.22249 V in 3.0 mol KCl/L)
pH electrode (in the case of pH-buffered solutions, see buffer solution)
Palladium-hydrogen electrode
Dynamic hydrogen electrode (DHE)
Mercury-mercurous sulfate electrode (MSE) (E = +0.64 V in sat'd K2SO4, E = +0.68 V in 0.5 M H2SO4)
Nonaqueous reference electrodes While it is convenient to compare systems across solvents qualitatively, such comparisons are not quantitatively meaningful. Much as pKa values are related between solvents but not the same, so is the case with E°. While the SHE might seem to be a reasonable reference for nonaqueous work, it turns out that the platinum is rapidly poisoned by many solvents, including acetonitrile, causing uncontrolled drifts in potential. Both the SCE and saturated Ag/AgCl are aqueous electrodes based around a saturated aqueous solution. While for short periods it may be possible to use such aqueous electrodes as references with nonaqueous solutions, the long-term results are not trustworthy. Using aqueous electrodes introduces undefined, variable, and unmeasurable junction potentials to the cell, in the form of a liquid-liquid junction as well as a different ionic composition between the reference compartment and the rest of the cell. The best argument against using aqueous reference electrodes with nonaqueous systems, as mentioned earlier, is that potentials measured in different solvents are not directly comparable. For instance, the potential for the Fc0/+ couple is sensitive to solvent. A quasi-reference electrode (QRE) avoids the issues mentioned above. A QRE with ferrocene or another internal standard, such as cobaltocene or decamethylferrocene, referenced back to ferrocene, is ideal for nonaqueous work. Since the early 1960s ferrocene has been gaining acceptance as the standard reference for nonaqueous work for a number of reasons, and in 1984, IUPAC recommended ferrocene (0/1+) as a standard redox couple. The preparation of the QRE electrode is simple, allowing for a fresh reference to be prepared with each set of experiments. Since QREs are made fresh, there is also no concern with improper storage or maintenance of the electrode.
QREs are also more affordable than other reference electrodes. To make a quasi-reference electrode (QRE): Insert a piece of silver wire into concentrated HCl then allow the wire to dry on a lint-free cleaning cloth. This forms an insoluble layer of AgCl on the surface of the electrode and gives you an Ag/AgCl wire. Repeat dipping every few months or if the QRE starts to drift. Obtain a Vycor glass frit (4 mm diameter) and glass tubing of similar diameter. Attach Vycor glass frit to the glass tubing with heat shrink Teflon tubing. Rinse then fill the clean glass tube with supporting electrolyte solution and insert Ag/AgCl wire. The ferrocene (0/1+) couple should lie around 400 mV versus this Ag/AgCl QRE in an acetonitrile solution. This potential will vary up to 200 mV with specific undefined conditions, thus adding an internal standard such as ferrocene at some point during the experiment is always necessary. Pseudo reference electrodes A pseudo reference electrode is a term that is not well defined and borders on having multiple meanings since pseudo and quasi are often used interchangeably. They are a class of electrodes named pseudo-reference electrodes because they do not maintain a constant potential but vary predictably with conditions. If the conditions are known, the potential can be calculated and the electrode can be used as a reference. Most electrodes work over a limited range of conditions, such as pH or temperature, outside of this range the electrodes behavior becomes unpredictable. The advantage of a pseudo-reference electrode is that the resulting variation is factored into the system allowing researchers to accurately study systems over a wide range of conditions. Yttria-stabilized zirconia (YSZ) membrane electrodes were developed with a variety of redox couples, e.g., Ni/NiO. Their potential depends on pH. When the pH value is known, these electrodes can be employed as a reference with notable applications at elevated temperatures. See also Auxiliary electrode Cyclic voltammetry Table of standard electrode potentials Working electrode References Further reading . Electrodes
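The tabulated potentials earlier in this article lend themselves to a simple re-referencing calculation: a potential measured against one reference electrode can be reported against another by adding the difference of their potentials versus SHE. Below is a minimal Python sketch of that bookkeeping; the E_VS_SHE table simply restates values quoted above, and the example potential of +0.500 V is a made-up illustration rather than a measured datum.

```python
# Convert an electrode potential measured against one aqueous reference
# electrode to the scale of another, using tabulated potentials vs. SHE.
# Values (in volts) restate those quoted in this article; treat them as
# illustrative, since the exact figure depends on the filling solution.

E_VS_SHE = {
    "SHE": 0.000,                     # standard hydrogen electrode
    "SCE": 0.241,                     # saturated calomel electrode
    "Ag/AgCl (sat. KCl)": 0.197,
    "Cu/CuSO4": 0.314,
    "Hg/Hg2SO4 (sat. K2SO4)": 0.64,
}

def convert(potential: float, measured_vs: str, report_vs: str) -> float:
    """Re-reference a potential: E(vs B) = E(vs A) + E_A(vs SHE) - E_B(vs SHE)."""
    return potential + E_VS_SHE[measured_vs] - E_VS_SHE[report_vs]

if __name__ == "__main__":
    # A couple measured at +0.500 V vs. SCE, reported against Ag/AgCl (sat. KCl):
    print(round(convert(0.500, "SCE", "Ag/AgCl (sat. KCl)"), 3))  # 0.544
    # The same couple expressed on the SHE scale:
    print(round(convert(0.500, "SCE", "SHE"), 3))                 # 0.741
```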
Reference electrode
[ "Chemistry" ]
1,278
[ "Electrochemistry", "Electrodes" ]
1,473,750
https://en.wikipedia.org/wiki/Limiting%20factor
A limiting factor is a variable of a system that causes a noticeable change in output or another measure of a type of system. The limiting factor is in a pyramid shape of organisms going up from the producers to consumers and so on. A factor not limiting over a certain domain of starting conditions may yet be limiting over another domain of starting conditions, including that of the factor. Overview The identification of a factor as limiting is possible only in distinction to one or more other factors that are non-limiting. Disciplines differ in their use of the term as to whether they allow the simultaneous existence of more than one limiting factor (which may then be called "co-limiting"), but they all require the existence of at least one non-limiting factor when the terms are used. There are several different possible scenarios of limitation when more than one factor is present. The first scenario, called single limitation occurs when only one factor, the one with maximum demand, limits the System. Serial co-limitation is when one factor has no direct limiting effects on the system, but must be present to increase the limitation of a second factor. A third scenario, independent limitation, occurs when two factors both have limiting effects on the system but work through different mechanisms. Another scenario, synergistic limitation, occurs when both factors contribute to the same limitation mechanism, but in different ways. In 1905 Frederick Blackman articulated the role of limiting factors as follows: "When a process is conditioned as to its rapidity by several separate factors the rate of the process is limited by the pace of the slowest factor." In terms of the magnitude of a function, he wrote, "When the magnitude of a function is limited by one of a set of possible factors, increase of that factor, and of that one alone, will be found to bring about an increase of the magnitude of the function." Ecology In population ecology, a regulating factor, also known as a limiting factor, is something that keeps a population at equilibrium (neither increasing nor decreasing in size over time). Common limiting factor resources are environmental features that limit the growth, abundance, or distribution of an organism or a population of organisms in an ecosystem. The concept of limiting factors is based on Liebig's Law of the Minimum, which states that growth is controlled not by the total amount of resources available, but by the scarcest resource. In other words, a factor is limiting if a change in the factor produces increased growth, abundance, or distribution of an organism when other factors necessary to the organism's life do not. Limiting factors may be physical or biological. Limiting factors are not limited to the condition of the species. Some factors may be increased or reduced based on circumstances. An example of a limiting factor is sunlight in the rain forest, where growth is limited to all plants on the forest floor unless more light becomes available. This decreases the number of potential factors that could influence a biological process, but only one is in effect at any one place and time. This recognition that there is always a single limiting factor is vital in ecology, and the concept has parallels in numerous other processes. The limiting factor also causes competition between individuals of a species population. For example, space is a limiting factor. Many predators and prey need a certain amount of space for survival: food, water, and other biological needs. 
If the population of a species is too high, they start competing for those needs. Thus the limiting factors hold down population in an area by causing some individuals to seek better prospects elsewhere and others to stay and starve. Some other limiting factors in biology include temperature and other weather related factors. Species can also be limited by the availability of macro- and micronutrients. There has even been evidence of co-limitation in prairie ecosystems. A study published in 2017 showed that sodium (a micronutrient) had no effect on its own, but when in combination with nitrogen and phosphorus (macronutrients), it did show positive effects, which is evidence of serial co-limitation. Oceanography In oceanography, a prime example of a limiting factor is a limiting nutrient. Nutrient availability in freshwater and marine environments plays a critical role in determining what organisms survive and thrive. Nutrients are the building blocks of all living organisms, as they support biological activity. They are required to make proteins, DNA, membranes, organelles, and exoskeletons. The major elements that constitute >95% of organic matter mass are carbon, hydrogen, nitrogen, oxygen, sulfur, and phosphorus. Minor elements are iron, manganese, cobalt, zinc and copper. These minor elements are often only present in trace amounts but they are key as co-limiting factors as parts of enzymes, transporters, vitamins and amino acids. Within aquatic environments, nitrogen and phosphorus are leading contenders for most limiting nutrients. Discovery of the Redfield ratio was a major insight that helped understand the relationship between nutrient availability in seawater and their relative abundance in organisms. Redfield was able to notice elemental consistencies between carbon, nitrogen and phosphorus when looking at larger organisms living in the ocean (C:N:P = 106:16:1). He also observed consistencies in nutrients within the water column; nitrate to phosphate ratio was 16:1. The overarching idea was that the environment fundamentally influences the organisms that grow in it and the growing organisms fundamentally influence the environment. Redfield's opening statement in his 1934 paper explains "It is now well recognized that the growth of plankton in the surface layers of the sea is limited in part by the quantities of phosphate and nitrate available for their use and that the changes in the relative quantities of certain substances in seawater are determined in their relative proportions by biological activity". Deviations from Redfield can be used to infer elemental limitations. Limiting nutrients can be discussed in terms of dissolved nutrients, suspended particles and sinking particles, among others. When discussing dissolved nutrient stoichiometry, large deviations from the original Redfield ratio can determine if an environment is phosphorus limited or nitrogen limited. When discussing suspended particle stoichiometry, higher N:P ratios are noted in oligotrophic waters (environments dominated by cyanobacteria; low latitudes/equator) and lower N:P ratios are noted in nutrient rich ecosystems (environments dominated by diatoms; high latitudes/poles). Many areas are severely nitrogen limited, but phosphorus limitation has also been observed. In many instances trace metals or co-limitation occur. Co-limitations refer to where two or more nutrients simultaneously limit a process. 
Pinpointing a single limiting factor can be challenging, as nutrient demand varies between organisms, life cycles, and environmental conditions (e.g. thermal stress can increase demand on nutrients for biological repairs). Business and technology AllBusiness.com defines a limiting (constraining) factor as an "item that restricts or limits production or sale of a given product". The examples provided include: "limited machine hours and labor-hours and shortage of materials and skilled labor. Other limiting factors may be cubic feet of display or warehouse space, or working capital." The term is also frequently used in technology literature. The analysis of limiting business factors is part of the program evaluation and review technique, critical path analysis, and theory of constraints as presented in The Goal. Chemistry In stoichiometry of a chemical reaction to produce a chemical product, it may be observed or predicted that with amounts supplied in specified proportions, one of the reactants will be consumed by the reaction before the others. The supply of this reagent thus limits the amount of product. This limiting reagent determines the theoretical yield of the reaction. The other reactants are said to be non-limiting or in excess. This distinction makes sense only when the chemical equilibrium so favors the products to cause the complete consumption of one of the reactants. In studies of reaction kinetics, the rate of progress of the reaction may be limited by the concentration of one of the reactants or catalyst. In multi-step reactions, a step may be rate-limiting in terms of the production of the final product. In vivo, in an organism or an ecologic system, such factors as those may be rate-limiting, or in the overall analysis of a multi-step process including biologic, geologic, hydrologic, or atmospheric transport and chemical reactions, transport of a reactant may be limiting. See also Abiotic component Bateman's principle Biotic component Bottleneck (software) Chemical kinetics Competition (biology) Competitive exclusion principle Ecology Population ecology Resource (biology) Shelter References Further reading Raghothama, K. G. & Karthikeyan, A. S. (2005). "Phosphate acquisition", Plant and Soil 274: 37-49. Taylor, W. A. (1934). "Significance of extreme or intermittent conditions in distribution of species and management of natural resources, with a restatement of Liebig's Law of the minimum", Ecology 15: 374-379. Shelford, V. E. (1952). Paired factors and master factors in environmental relations. Illinois Acad. Sci. Trans., 45: 155-160 Sundareshwar, P. V., Morris, J. T., Koepfler, E. K., and Fornwalt, B. (2003). "Phosphorus limitation of coastal ecosystem processes", Science 299: 563-565. Chemical kinetics Ecological theories Natural resources Production economics Systems ecology
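The chemistry sense of a limiting factor can be made concrete with a short calculation in the spirit of Liebig's law of the minimum: divide the moles supplied of each reactant by its stoichiometric coefficient, and the smallest quotient identifies the limiting reagent and fixes the theoretical yield. The Python sketch below illustrates this; the reaction (methane combustion) and the quantities are hypothetical examples, not taken from the article.

```python
# Identify the limiting reagent and theoretical yield for a reaction
# a A + b B + ... -> p P, given the moles supplied of each reactant.
# The reaction and quantities below are hypothetical, for illustration only.

def limiting_reagent(coeffs: dict, supplied: dict, product_coeff: float):
    """coeffs: stoichiometric coefficient per reactant; supplied: moles on hand.
    Returns (limiting reactant, theoretical moles of product)."""
    # Extent of reaction each reactant alone would allow:
    extents = {r: supplied[r] / coeffs[r] for r in coeffs}
    limiting = min(extents, key=extents.get)   # the scarcest resource governs
    return limiting, extents[limiting] * product_coeff

if __name__ == "__main__":
    # Hypothetical combustion: CH4 + 2 O2 -> CO2 + 2 H2O, asking about H2O.
    reactant_coeffs = {"CH4": 1, "O2": 2}
    moles_supplied  = {"CH4": 3.0, "O2": 4.0}
    limiting, h2o_yield = limiting_reagent(reactant_coeffs, moles_supplied, product_coeff=2)
    print(limiting, h2o_yield)   # O2 4.0 -> O2 is limiting; at most 4 mol H2O, CH4 is in excess
```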
Limiting factor
[ "Chemistry", "Environmental_science" ]
1,942
[ "Chemical kinetics", "Environmental social science", "Chemical reaction engineering", "Systems ecology" ]
1,473,870
https://en.wikipedia.org/wiki/Trace%20gas
Trace gases are gases that are present in small amounts within an environment such as a planet's atmosphere. Trace gases in Earth's atmosphere are gases other than nitrogen (78.1%), oxygen (20.9%), and argon (0.934%) which, in combination, make up 99.934% of its atmosphere (not including water vapor). Abundance, sources and sinks The abundance of a trace gas can range from a few parts per trillion (ppt) by volume to several hundred parts per million by volume (ppmv). When a trace gas is added into the atmosphere, that process is called a source. There are two possible types of sources - natural or anthropogenic. Natural sources are caused by processes that occur in nature. In contrast, anthropogenic sources are caused by human activity. Some sources of a trace gas are biogenic processes, outgassing from solid Earth, ocean emissions, industrial emissions, and in situ formation. A few examples of biogenic sources include photosynthesis, animal excrements, termites, rice paddies, and wetlands. Volcanoes are the main source for trace gases from solid earth. The global ocean is also a source of several trace gases, in particular sulfur-containing gases. In situ trace gas formation occurs through chemical reactions in the gas-phase. Anthropogenic sources are caused by human related activities such as fossil fuel combustion (e.g. in transportation), fossil fuel mining, biomass burning, and industrial activity. In contrast, a sink is when a trace gas is removed from the atmosphere. Some of the sinks of trace gases are chemical reactions in the atmosphere, mainly with the OH radical, gas-to-particle conversion forming aerosols, wet deposition and dry deposition. Other sinks include microbiological activity in soils. Below is a chart of several trace gases including their abundances, atmospheric lifetimes, sources, and sinks.   Trace gases – taken at pressure 1 atm The Intergovernmental Panel on Climate Change (IPCC) states that "no single atmospheric lifetime can be given" for CO2. This is mostly due to the high rate of growth and large cumulative magnitude of the disturbances to Earth's carbon cycle by the geologic extraction and burning of fossil carbon. As of year 2014, fossil CO2 emitted as a theoretical 10 to 100 GtC pulse on top of the existing atmospheric concentration was expected to be 50% removed by land vegetation and ocean sinks in less than about a century. A substantial fraction (20-35%) was also projected to remain in the atmosphere for centuries to millennia, where fractional persistence increases with pulse size. Thus CO2 lifetime effectively increases as more fossil carbon is extracted by humans. Mixing and lifetime The overall abundance of man-made trace gases in Earth's atmosphere is growing. Most originate from industrial activity in the more populated northern hemisphere. Time-series data from measurement stations around the world indicate that it typically takes 1–2 years for their concentrations to become well-mixed throughout the troposphere. The residence time of a trace gas depends on the abundance and rate of removal. The Junge (empirical) relationship describes the relationship between concentration fluctuations and residence time of a gas in the atmosphere. It can expressed as fc = b/τr, where fc is the coefficient of variation, τr is the residence time in years, and b is an empirical constant, which Junge originally gave as 0.14 years. As residence time increases, the concentration variability decreases. 
This implies that the most reactive gases have the most concentration variability because of their shorter lifetimes. In contrast, more inert gases show little concentration variability and have longer lifetimes. When measured far from their sources and sinks, the relationship can be used to estimate tropospheric residence times of gases. Trace greenhouse gases A few examples of the major greenhouse gases are water, carbon dioxide, methane, nitrous oxide, ozone, and CFCs. These gases can absorb infrared radiation from the Earth's surface as it passes through the atmosphere. The most influential greenhouse gas is water vapor. It frequently occurs in high concentrations, may transition to and from an aerosol (clouds), and is thus not generally classified as a trace gas. Regionally, water vapor can trap up to 80 percent of outgoing IR radiation. Globally, water vapor is responsible for about half of Earth's total greenhouse effect. The second most important greenhouse gas, and the most important trace gas affected by man-made sources, is carbon dioxide. It contributes about 20% of Earth's total greenhouse effect. The reason that greenhouse gases can absorb infrared radiation is their molecular structure. For example, carbon dioxide has two basic modes of vibration that create a strong dipole moment, which causes its strong absorption of infrared radiation. In contrast, the most abundant gases in the atmosphere (N2, O2, and Ar) are not greenhouse gases. This is because they cannot absorb infrared radiation, as they do not have vibrations with a dipole moment. For instance, the triple bond of atmospheric dinitrogen makes for a symmetric molecule with vibrational energy states that are almost totally unaffected at infrared frequencies. Below is a table of some of the major trace greenhouse gases, their man-made sources, and an estimate of the relative contribution of those sources to the enhanced greenhouse effect that influences global warming. Key Greenhouse Gases and Sources References External links A description of atmospheric trace gases On trace gases and their role Gases Microscale meteorology
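The Junge relationship quoted above can be inverted to estimate a residence time from observed concentration variability: τr = b/fc, with b ≈ 0.14 yr. A minimal Python sketch follows; the example coefficients of variation are invented for illustration.

```python
# Estimate tropospheric residence time from the Junge relationship fc = b / tau_r,
# i.e. tau_r = b / fc, with b = 0.14 yr (Junge's original empirical constant,
# as quoted in the text). The example coefficients of variation are made up.

B_YEARS = 0.14  # empirical constant from the article

def residence_time_years(coefficient_of_variation: float) -> float:
    """Residence time (years) implied by an observed relative concentration variability."""
    return B_YEARS / coefficient_of_variation

if __name__ == "__main__":
    # A gas whose concentration varies by ~2% (fc = 0.02) far from sources and sinks:
    print(residence_time_years(0.02))  # 7.0 years  -> long-lived, well mixed
    # A reactive gas varying by ~50% (fc = 0.5):
    print(residence_time_years(0.5))   # 0.28 years -> short-lived, highly variable
```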
Trace gas
[ "Physics", "Chemistry" ]
1,118
[ "Statistical mechanics", "Gases", "Phases of matter", "Matter" ]
1,474,305
https://en.wikipedia.org/wiki/Synapsin
The synapsins are a family of proteins that have long been implicated in the regulation of neurotransmitter release at synapses. Specifically, they are thought to be involved in regulating the number of synaptic vesicles available for release via exocytosis at any one time. Synapsins are present in invertebrates and vertebrates and are strongly conserved across all species. They are expressed in highest concentration in the nervous system, although they also express in other body systems such as the reproductive organs, including both eggs and spermatozoa. Synapsin function also increases as the organism matures, reaching its peak at sexual maturity. Current studies suggest the following hypothesis for the role of synapsin: synapsins bind synaptic vesicles to components of the cytoskeleton which prevents them from migrating to the presynaptic membrane and releasing neurotransmitter. During an action potential, synapsins are phosphorylated by PKA (cAMP dependent protein kinase), releasing the synaptic vesicles and allowing them to move to the membrane and release their neurotransmitter. Gene knockout studies in mice (where the mouse is unable to produce synapsin) have had some surprising results. Consistently, knockout studies have shown that mice lacking one or more synapsins have defects in synaptic transmission induced by high‐frequency stimulation, suggesting that the synapsins may be one of the factors boosting release probability in synapses at high firing rates, such as by aiding the recruitment of vesicles from the reserve pool. Furthermore, mice lacking all three synapsins are prone to seizures, and experience learning defects. These results suggest that while synapsins are not essential for synaptic function, they do serve an important modulatory role. Lastly, observed effects seemed to vary between inhibitory and excitatory synapses, suggesting synapsins may play a slightly different role in each type. Family members Humans and most other vertebrates possess three genes encoding three different synapsin proteins. Each gene in turn is alternatively spliced to produce at least two different protein isoforms for a total of six isoforms: Different neuron terminals will express varying amounts of each of these synapsin proteins and collectively these synapsins will comprise 1% of the total expressed protein at any one time. Synapsin Ia has been implicated in bipolar disorder and schizophrenia. References Molecular neuroscience Protein families Peripheral membrane proteins
Synapsin
[ "Chemistry", "Biology" ]
523
[ "Protein families", "Molecular neuroscience", "Protein classification", "Molecular biology" ]
1,474,467
https://en.wikipedia.org/wiki/Compton%20wavelength
The Compton wavelength is a quantum mechanical property of a particle, defined as the wavelength of a photon whose energy is the same as the rest energy of that particle (see mass–energy equivalence). It was introduced by Arthur Compton in 1923 in his explanation of the scattering of photons by electrons (a process known as Compton scattering). The standard Compton wavelength λ of a particle of mass m is given by λ = h/(m c), where h is the Planck constant and c is the speed of light. The corresponding frequency is given by f = m c²/h, and the angular frequency is given by ω = m c²/ħ. The CODATA 2022 value for the Compton wavelength of the electron is approximately 2.42631 × 10⁻¹² m. Other particles have different Compton wavelengths. Reduced Compton wavelength The reduced Compton wavelength (barred lambda, denoted below by ƛ) is defined as the Compton wavelength divided by 2π: ƛ = λ/(2π) = ħ/(m c), where ħ is the reduced Planck constant. Role in equations for massive particles The inverse reduced Compton wavelength is a natural representation for mass on the quantum scale, and as such, it appears in many of the fundamental equations of quantum mechanics. The reduced Compton wavelength appears in the relativistic Klein–Gordon equation for a free particle: ∇²ψ − (1/c²) ∂²ψ/∂t² = (m c/ħ)² ψ. It appears in the Dirac equation (the following is an explicitly covariant form employing the Einstein summation convention): i γ^μ ∂_μ ψ − (m c/ħ) ψ = 0. The reduced Compton wavelength is also present in Schrödinger's equation, although this is not readily apparent in traditional representations of the equation. The following is the traditional representation of Schrödinger's equation for an electron in a hydrogen-like atom: iħ ∂ψ/∂t = −(ħ²/(2m)) ∇²ψ − (Z e²/(4π ε₀ r)) ψ. Dividing through by ħc and rewriting in terms of the fine-structure constant α = e²/(4π ε₀ ħ c), one obtains: (i/c) ∂ψ/∂t = −(ƛ/2) ∇²ψ − (Z α/r) ψ. Distinction between reduced and non-reduced The reduced Compton wavelength is a natural representation of mass on the quantum scale and is used in equations that pertain to inertial mass, such as the Klein–Gordon and Schrödinger equations. Equations that pertain to the wavelengths of photons interacting with mass use the non-reduced Compton wavelength. A particle of mass m has a rest energy of E = m c². The Compton wavelength for this particle is the wavelength of a photon of the same energy. For photons of frequency f, energy is given by E = h f = h c/λ, which yields the Compton wavelength formula λ = h/(m c) if solved for λ. Limitation on measurement The Compton wavelength expresses a fundamental limitation on measuring the position of a particle, taking into account quantum mechanics and special relativity. This limitation depends on the mass m of the particle. To see how, note that we can measure the position of a particle by bouncing light off it – but measuring the position accurately requires light of short wavelength. Light with a short wavelength consists of photons of high energy. If the energy of these photons exceeds the rest energy m c², when one hits the particle whose position is being measured the collision may yield enough energy to create a new particle of the same type. This renders moot the question of the original particle's location. This argument also shows that the reduced Compton wavelength is the cutoff below which quantum field theory – which can describe particle creation and annihilation – becomes important. The above argument can be made a bit more precise as follows. Suppose we wish to measure the position of a particle to within an accuracy Δx. 
Then the uncertainty relation for position and momentum says that Δx Δp ≥ ħ/2, so the uncertainty in the particle's momentum satisfies Δp ≥ ħ/(2 Δx). Using the relativistic relation between momentum and energy E² = (p c)² + (m c²)², when Δp exceeds m c then the uncertainty in energy is greater than m c², which is enough energy to create another particle of the same type. But we must exclude this greater energy uncertainty. Physically, this is excluded by the creation of one or more additional particles to keep the momentum uncertainty of each particle at or below m c. In particular the minimum uncertainty is when the scattered photon has limit energy equal to the incident observing energy. It follows that there is a fundamental minimum for Δx: Δx ≥ (1/2)(ħ/(m c)). Thus the uncertainty in position must be greater than half of the reduced Compton wavelength ħ/(m c). Relationship to other constants Typical atomic lengths, wave numbers, and areas in physics can be related to the reduced Compton wavelength for the electron, ƛe = ħ/(me c), and the electromagnetic fine-structure constant α. The Bohr radius is related to the Compton wavelength by: a0 = ƛe/α. The classical electron radius is about 3 times larger than the proton radius, and is written: re = α ƛe. The Rydberg constant, having dimensions of linear wavenumber, is written: R∞ = α²/(2 λe). This yields the sequence: re = α ƛe = α² a0 = α³/(4π R∞). For fermions, the reduced Compton wavelength sets the cross-section of interactions. For example, the cross-section for Thomson scattering of a photon from an electron is equal to (8π/3) α² ƛe², which is roughly the same as the cross-sectional area of an iron-56 nucleus. For gauge bosons, the Compton wavelength sets the effective range of the Yukawa interaction: since the photon has no mass, electromagnetism has infinite range. The Planck mass is the order of mass for which the Compton wavelength and the Schwarzschild radius are the same, when their value is close to the Planck length (lP). The Schwarzschild radius is proportional to the mass, whereas the Compton wavelength is proportional to the inverse of the mass. The Planck mass and length are defined by: mP = √(ħ c/G) and lP = √(ħ G/c³). Geometrical interpretation A geometrical origin of the Compton wavelength has been demonstrated using semiclassical equations describing the motion of a wavepacket. In this case, the Compton wavelength is equal to the square root of the quantum metric, a metric describing the quantum space. See also de Broglie wavelength Planck–Einstein relation References External links Length Scales in Physics: the Compton Wavelength Atomic physics Foundational quantum physics de:Compton-Effekt#Compton-Wellenlänge
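The length scales discussed above are easy to check numerically. The Python sketch below computes the electron's Compton wavelength, its reduced form, and the related Bohr radius and classical electron radius; the constants are typed in by hand (CODATA-style values), so the trailing digits should be treated as approximate.

```python
# Compute the electron's Compton wavelength, its reduced form, and the
# related length scales mentioned above. Constants are entered by hand
# (CODATA-style values); treat the last digits as approximate.
import math

h     = 6.62607015e-34      # Planck constant, J*s (exact by definition)
c     = 299792458.0         # speed of light, m/s (exact by definition)
m_e   = 9.1093837015e-31    # electron mass, kg
alpha = 7.2973525693e-3     # fine-structure constant

lambda_c     = h / (m_e * c)               # Compton wavelength, ~2.426e-12 m
lambda_c_bar = lambda_c / (2 * math.pi)    # reduced Compton wavelength, ~3.862e-13 m

bohr_radius        = lambda_c_bar / alpha  # a0 = reduced wavelength / alpha, ~5.29e-11 m
classical_e_radius = alpha * lambda_c_bar  # re = alpha * reduced wavelength, ~2.82e-15 m

print(f"Compton wavelength         = {lambda_c:.6e} m")
print(f"Reduced Compton wavelength = {lambda_c_bar:.6e} m")
print(f"Bohr radius                = {bohr_radius:.6e} m")
print(f"Classical electron radius  = {classical_e_radius:.6e} m")
```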
Compton wavelength
[ "Physics", "Chemistry" ]
1,101
[ "Foundational quantum physics", "Quantum mechanics", "Atomic physics", "Atomic, molecular, and optical physics" ]
1,474,542
https://en.wikipedia.org/wiki/Animal%20embryonic%20development
In developmental biology, animal embryonic development, also known as animal embryogenesis, is the developmental stage of an animal embryo. Embryonic development starts with the fertilization of an egg cell (ovum) by a sperm cell (spermatozoon). Once fertilized, the ovum becomes a single diploid cell known as a zygote. The zygote undergoes mitotic divisions with no significant growth (a process known as cleavage) and cellular differentiation, leading to development of a multicellular embryo after passing through an organizational checkpoint during mid-embryogenesis. In mammals, the term refers chiefly to the early stages of prenatal development, whereas the terms fetus and fetal development describe later stages. The main stages of animal embryonic development are as follows: The zygote undergoes a series of cell divisions (called cleavage) to form a structure called a morula. The morula develops into a structure called a blastula through a process called blastulation. The blastula develops into a structure called a gastrula through a process called gastrulation. The gastrula then undergoes further development, including the formation of organs (organogenesis). The embryo then transforms into the next stage of development, the nature of which varies among different animal species (examples of possible next stages include a fetus and a larva). Fertilization and the zygote The egg cell is generally asymmetric, having an animal pole (future ectoderm). It is covered with protective envelopes, with different layers. The first envelope – the one in contact with the membrane of the egg – is made of glycoproteins and is known as the vitelline membrane (zona pellucida in mammals). Different taxa show different cellular and acellular envelopes englobing the vitelline membrane. Fertilization is the fusion of gametes to produce a new organism. In animals, the process involves a sperm fusing with an ovum, which eventually leads to the development of an embryo. Depending on the animal species, the process can occur within the body of the female in internal fertilization, or outside in the case of external fertilization. The fertilized egg cell is known as the zygote. To prevent more than one sperm fertilizing the egg (polyspermy), fast block and slow block to polyspermy are used. Fast block, the membrane potential rapidly depolarizing and then returning to normal, happens immediately after an egg is fertilized by a single sperm. Slow block begins in the first few seconds after fertilization and is when the release of calcium causes the cortical reaction, in which various enzymes are released from cortical granules in the eggs plasma membrane, causing the expansion and hardening of the outside membrane, preventing more sperm from entering. Cleavage and morula Cell division with no significant growth, producing a cluster of cells that is the same size as the original zygote, is called cleavage. At least four initial cell divisions occur, resulting in a dense ball of at least sixteen cells called the morula. In the early mouse embryo, the sister cells of each division remain connected during interphase by microtubule bridges. The different cells derived from cleavage, up to the blastula stage, are called blastomeres. Depending mostly on the amount of yolk in the egg, the cleavage can be holoblastic (total) or meroblastic (partial). 
Holoblastic cleavage occurs in animals with little yolk in their eggs, such as humans and other mammals who receive nourishment as embryos from the mother, via the placenta or milk, such as might be secreted from a marsupium. Meroblastic cleavage occurs in animals whose eggs have more yolk (i.e. birds and reptiles). Because cleavage is impeded in the vegetal pole, there is an uneven distribution and size of cells, being more numerous and smaller at the animal pole of the zygote. In holoblastic eggs, the first cleavage always occurs along the vegetal-animal axis of the egg, and the second cleavage is perpendicular to the first. From here the spatial arrangement of blastomeres can follow various patterns, due to different planes of cleavage, in various organisms: The end of cleavage is known as midblastula transition and coincides with the onset of zygotic transcription. In amniotes, the cells of the morula are at first closely aggregated, but soon they become arranged into an outer or peripheral layer, the trophoblast, which does not contribute to the formation of the embryo proper, and an inner cell mass, from which the embryo is developed. Fluid collects between the trophoblast and the greater part of the inner cell-mass, and thus the morula is converted into a vesicle, called the blastodermic vesicle. The inner cell mass remains in contact, however, with the trophoblast at one pole of the ovum; this is named the embryonic pole, since it indicates the location where the future embryo will develop. Formation of the blastula After the seventh cleavage has produced 128 cells, the morula becomes a blastula. The blastula is usually a spherical layer of cells (the blastoderm) surrounding a fluid-filled or yolk-filled cavity the blastocoel. Mammals at this stage form a structure called the blastocyst, characterized by an inner cell mass that is distinct from the surrounding blastula. The blastocyst is similar in structure to the blastula but their cells have different fates. In the mouse, primordial germ cells arise from the inner cell mass (the epiblast) as a result of extensive genome-wide reprogramming. Reprogramming involves global DNA demethylation facilitated by the DNA base excision repair pathway as well as chromatin reorganization, and results in cellular totipotency. Before gastrulation, the cells of the trophoblast become differentiated into two layers: The outer layer forms a syncytium (i.e., a layer of protoplasm studded with nuclei, but showing no evidence of subdivision into cells), termed the syncytiotrophoblast, while the inner layer, the cytotrophoblast, consists of well-defined cells. As already stated, the cells of the trophoblast do not contribute to the formation of the embryo proper; they form the ectoderm of the chorion and play an important part in the development of the placenta. On the deep surface of the inner cell mass, a layer of flattened cells, called the endoderm, is differentiated and quickly assumes the form of a small sac, called the yolk sac. Spaces appear between the remaining cells of the mass and, by the enlargement and coalescence of these spaces, a cavity called the amniotic cavity is gradually developed. The floor of this cavity is formed by the embryonic disk, which is composed of a layer of prismatic cells – the embryonic ectoderm, derived from the inner cell mass and lying in apposition with the endoderm. Formation of the germ layers The embryonic disc becomes oval and then pear-shaped, the wider end being directed forward. 
Towards the narrow, posterior end, an opaque primitive streak, is formed and extends along the middle of the disc for about half of its length; at the anterior end of the streak there is a knob-like thickening termed the primitive node or knot, (known as Hensen's knot in birds). A shallow groove, the primitive groove, appears on the surface of the streak, and the anterior end of this groove communicates by means of an aperture, the blastopore, with the yolk sac. The primitive streak is produced by a thickening of the axial part of the ectoderm, the cells of which multiply, grow downward, and blend with those of the subjacent endoderm. From the sides of the primitive streak a third layer of cells, the mesoderm, extends laterally between the ectoderm and endoderm; the caudal end of the primitive streak forms the cloacal membrane. The blastoderm now consists of three layers, an outer ectoderm, a middle mesoderm, and an inner endoderm; each has distinctive characteristics and gives rise to certain tissues of the body. For many mammals, it is sometime during formation of the germ layers that implantation of the embryo in the uterus of the mother occurs. Formation of the gastrula During gastrulation cells migrate to the interior of the blastula, subsequently forming two (in diploblastic animals) or three (triploblastic) germ layers. The embryo during this process is called a gastrula. The germ layers are referred to as the ectoderm, mesoderm and endoderm. In diploblastic animals only the ectoderm and the endoderm are present.* Among different animals, different combinations of the following processes occur to place the cells in the interior of the embryo: Epiboly – expansion of one cell sheet over other cells Ingression – migration of individual cells into the embryo (cells move with pseudopods) Invagination – infolding of cell sheet into embryo, forming the mouth, anus, and archenteron. Delamination – splitting or migration of one sheet into two sheets Involution – inturning of cell sheet over the basal surface of an outer layer Polar proliferation – Cells at the polar ends of the blastula/gastrula proliferate, mostly at the animal pole. Other major changes during gastrulation: Heavy RNA transcription using embryonic genes; up to this point the RNAs used were maternal (stored in the unfertilized egg). Cells start major differentiation processes, losing their totipotentiality. In most animals, a blastopore is formed at the point where cells are migrating inward. Two major groups of animals can be distinguished according to the blastopore's fate. In deuterostomes the anus forms from the blastopore, while in protostomes it develops into the mouth. Formation of the early nervous system – neural groove, tube and notochord In front of the primitive streak, two longitudinal ridges, caused by a folding up of the ectoderm, make their appearance, one on either side of the middle line formed by the streak. These are named the neural folds; they commence some little distance behind the anterior end of the embryonic disk, where they are continuous with each other, and from there gradually extend backward, one on either side of the anterior end of the primitive streak. Between these folds is a shallow median groove, the neural groove. The groove gradually deepens as the neural folds become elevated, and ultimately the folds meet and coalesce in the middle line and convert the groove into a closed tube, the neural tube or canal, the ectodermal wall of which forms the rudiment of the nervous system. 
After the coalescence of the neural folds over the anterior end of the primitive streak, the blastopore no longer opens on the surface but into the closed canal of the neural tube, and thus a transitory communication, the neurenteric canal, is established between the neural tube and the primitive digestive tube. The coalescence of the neural folds occurs first in the region of the hind brain, and from there extends forward and backward; toward the end of the third week, the front opening (anterior neuropore) of the tube finally closes at the anterior end of the future brain, and forms a recess that is in contact, for a time, with the overlying ectoderm; the hinder part of the neural groove presents for a time a rhomboidal shape, and to this expanded portion the term sinus rhomboidalis has been applied. Before the neural groove is closed, a ridge of ectodermal cells appears along the prominent margin of each neural fold; this is termed the neural crest or ganglion ridge, and from it the spinal and cranial nerve ganglia and the ganglia of the sympathetic nervous system are developed. By the upward growth of the mesoderm, the neural tube is ultimately separated from the overlying ectoderm. The cephalic end of the neural groove exhibits several dilatations that, when the tube is closed, assume the form of the three primary brain vesicles, and correspond, respectively, to the future forebrain (prosencephalon), midbrain (mesencephalon), and hindbrain (rhombencephalon) (Fig. 18). The walls of the vesicles are developed into the nervous tissue and neuroglia of the brain, and their cavities are modified to form its ventricles. The remainder of the tube forms the spinal cord (medulla spinalis); from its ectodermal wall the nervous and neuroglial elements of the spinal cord are developed, while the cavity persists as the central canal. Formation of the early septum The extension of the mesoderm takes place throughout the whole of the embryonic and extra-embryonic areas of the ovum, except in certain regions. One of these is seen immediately in front of the neural tube. Here the mesoderm extends forward in the form of two crescentic masses, which meet in the middle line so as to enclose behind them an area that is devoid of mesoderm. Over this area, the ectoderm and endoderm come into direct contact with each other and constitute a thin membrane, the buccopharyngeal membrane, which forms a septum between the primitive mouth and pharynx. Early formation of the heart and other primitive structures In front of the buccopharyngeal area, where the lateral crescents of mesoderm fuse in the middle line, the pericardium is afterward developed, and this region is therefore designated the pericardial area. A second region where the mesoderm is absent, at least for a time, is that immediately in front of the pericardial area. This is termed the proamniotic area, and is the region where the proamnion is developed; in humans, however, it appears that a proamnion is never formed. A third region is at the hind end of the embryo, where the ectoderm and endoderm come into apposition and form the cloacal membrane. Somitogenesis Somitogenesis is the process by which somites (primitive segments) are produced. These segmented tissue blocks differentiate into skeletal muscle, vertebrae, and dermis of all vertebrates. Somitogenesis begins with the formation of somitomeres (whorls of concentric mesoderm) marking the future somites in the presomitic mesoderm (unsegmented paraxial). 
The presomitic mesoderm gives rise to successive pairs of somites, identical in appearance that differentiate into the same cell types but the structures formed by the cells vary depending upon the anteroposterior (e.g., the thoracic vertebrae have ribs, the lumbar vertebrae do not). Somites have unique positional values along this axis and it is thought that these are specified by the Hox homeotic genes. Toward the end of the second week after fertilization, transverse segmentation of the paraxial mesoderm begins, and it is converted into a series of well-defined, more or less cubical masses, also known as the somites, which occupy the entire length of the trunk on either side of the middle line from the occipital region of the head. Each segment contains a central cavity (known as a [myocoel), which, however, is soon filled with angular and spindle-shape cells. The somites lie immediately under the ectoderm on the lateral aspect of the neural tube and notochord, and are connected to the lateral mesoderm by the intermediate cell mass. Those of the trunk may be arranged in the following groups, viz.: cervical 8, thoracic 12, lumbar 5, sacral 5, and coccygeal from 5 to 8. Those of the occipital region of the head are usually described as being four in number. In mammals, somites of the head can be recognized only in the occipital region, but a study of the lower vertebrates leads to the belief that they are present also in the anterior part of the head and that, altogether, nine segments are represented in the cephalic region. Organogenesis At some point after the different germ layers are defined, organogenesis begins. The first stage in vertebrates is called neurulation, where the neural plate folds forming the neural tube (see above). Other common organs or structures that arise at this time include the heart and somites (also above), but from now on embryogenesis follows no common pattern among the different taxa of the animalia. In most animals organogenesis, along with morphogenesis, results in a larva. The hatching of the larva, which must then undergo metamorphosis, marks the end of embryonic development. See also Cdx2 gene Collective cell migration Drosophila embryogenesis Enterocoely Homeobox genes Human embryogenesis Leech embryogenesis Parthenogenesis Plant embryogenesis Schizocoely References External links Cellular Darwinism Embryogenesis & MMPs, PMAP The Proteolysis Map-animation Development of the embryo (retrieved November 20, 2007) Video of embryogenesis of the frog Xenopus laevis from shortly after fertilization until the hatching of the tadpole; acquired by MRI (DOI of paper) Embryology Developmental biology
Animal embryonic development
[ "Biology" ]
3,776
[ "Behavior", "Developmental biology", "Reproduction" ]
1,475,064
https://en.wikipedia.org/wiki/Zygote%20intrafallopian%20transfer
Zygote intra fallopian transfer (ZIFT) is an infertility treatment used when a blockage in the fallopian tubes prevents the normal binding of sperm to the egg. Egg cells are removed from a woman's ovaries, and in vitro fertilised. The resulting zygote is placed into the fallopian tube by the use of laparoscopy. The procedure is a spin-off of the gamete intrafallopian transfer (GIFT) procedure. The pregnancy and implantation rates in ZIFT cycles are 52.3 and 23.2% which were higher than what was observed in IVF cycles which were 17.5 and 9.7%. Procedure The average ZIFT cycle takes five weeks-six weeks to complete. First, the female must take a fertility medication clomiphene to stimulate egg production in the ovaries. The doctor will monitor the growth of the ovarian follicles, and once they are mature, the woman will receive an injection containing human chorionic gonadotropins (HCG or hCG). The eggs will be harvested approximately 36 hours later, usually by transvaginal ovum retrieval. After fertilization in the laboratory, the resulting early embryos or zygotes are placed into the woman's fallopian tubes using a laparoscope. Indications ZIFT has been used in infertility situations where at least one of the fallopian tubes is normal and other treatments have failed; however, the need for two interventions and the fact that IVF results are equal or better (as of 2004), leaves few indications for this procedure. Accordingly, the number of ZIFTs performed has been declining. References Assisted reproductive technology Fertility medicine Fertility Female genital procedures
Zygote intrafallopian transfer
[ "Biology" ]
374
[ "Assisted reproductive technology", "Medical technology" ]
1,475,409
https://en.wikipedia.org/wiki/Trendelenburg%20position
In the Trendelenburg position, the body is lain supine, or flat on the back on a 15–30 degree incline with the feet elevated above the head. The reverse Trendelenburg position, similarly, places the body supine on an incline but with the head now being elevated. The Trendelenburg position is used in surgery, especially of the abdomen and genitourinary system. It allows better access to the pelvic organs as gravity pulls the intra-abdominal organs away from the pelvis. Evidence does not support its use in hypovolaemic shock, with concerns for negative effects on the lungs and brain. The position was named for the German surgeon Friedrich Trendelenburg (1844–1924). Current uses The Trendelenburg position can be used to treat a venous air embolism by placing the right ventricular outflow tract inferior to the right ventricular cavity, causing the air to migrate superiorly into a position within the right ventricle from which air is less likely to embolise. Most recently, the reverse Trendelenburg position has been used in minimally invasive glaucoma surgery, also known as MIGS. This position is commonly used for a superior sitting surgeon that uses a combination of downward patient tilt, of approximately 30 to 35 degrees, microscope tilt towards themselves at the same angle and an intraoperative goniolens or prisms that allows them to visualise the inferior trabecular meshwork. Some joysticking of the globe may be required with an appropriate goniolens to bring the meshwork into view. The Trendelenburg position along with the Valsalva maneuver, termed as modified-Valsalva maneuver, can also be used for the cardioversion of supraventricular tachycardia. The Trendelenburg position is helpful in surgical reduction of an abdominal hernia. The Trendelenburg position is also used when placing a central venous catheter in the internal jugular or subclavian vein. The Trendelenburg position uses gravity to assist in the filling and distension of the upper central veins, as well as the external jugular vein. It plays no role in the placement of a femoral central venous catheter. The Trendelenburg position can also be used in respiratory patients to create better perfusion. The Trendelenburg position has occasionally been used to produce symptomatic relief from septum posticum cysts of the subarachnoid space in the spinal cord, but does not bring about any long-term benefits. The Trendelenburg position may be used for drainage images during endoscopic retrograde cholangiopancreatography. The Trendelenburg position is reasonable in those with a cord prolapse who are unable to achieve a knee-to-chest position. It is a temporary measure until a cesarean section can be performed. If a patient in a Fowler's position or semi-Fowlers position has sunk too far down into the bed, they may temporarily be put in a Trendelenburg position while staff reposition them. This does not have a direct therapeutic action but rather provides a mechanical advantage Controversial uses People with hypotension (low blood pressure) have historically been placed in the Trendelenburg position in hopes of increasing blood flow to the brain. A 2005 review found the "Literature on the position was scarce, lacked strength, and seemed to be guided by 'expert opinion.'" A 2008 meta-analysis found adverse consequences to the use of the Trendelenburg position and recommended it be avoided. However, the passive leg raising test is a useful clinical guide to fluid resuscitation and can be used for effective autotransfusion. 
The Trendelenburg position used to be the standard first aid position for shock. The Trendelenburg position can also be used in the treatment of scuba divers with decompression sickness or arterial gas embolism. Many experienced divers still believe this position is appropriate, but current scuba first aid professionals no longer advocate elevating the feet higher than the head. The Trendelenburg position in this case increases regurgitation and airway problems, causes the brain to swell, increases breathing difficulty, and has not been proven to be of any value. "Supine is fine" is a good, general rule for victims of submersion injuries unless they have fluid in the airway or are breathing, in which case they should be positioned in the recovery position. See also Fowler's position High Fowler's position Recovery position Semi-Fowler's position Trendelenburg gait Trendelenburg's sign References External links Illustration of position in comparison to other positions Human Kinetics' Victim Positioning According to The Canadian Journal of Anesthesia Surgery Gynaecology Human positions
Trendelenburg position
[ "Biology" ]
982
[ "Behavior", "Human positions", "Human behavior" ]
1,475,625
https://en.wikipedia.org/wiki/Nonylphenol
Nonylphenols are a family of closely related organic compounds composed of phenol bearing a 9 carbon-tail. Nonylphenols can come in numerous structures, all of which may be considered alkylphenols. They are used in manufacturing antioxidants, lubricating oil additives, laundry and dish detergents, emulsifiers, and solubilizers. They are used extensively in epoxy formulation in North America but its use has been phased out in Europe. These compounds are also precursors to the commercially important non-ionic surfactants alkylphenol ethoxylates and nonylphenol ethoxylates, which are used in detergents, paints, pesticides, personal care products, and plastics. Nonylphenol has attracted attention due to its prevalence in the environment and its potential role as an endocrine disruptor and xenoestrogen, due to its ability to act with estrogen-like activity. The estrogenicity and biodegradation heavily depends on the branching of the nonyl sidechain. Nonylphenol has been found to act as an agonist of the GPER (GPR30). Structure and basic properties Nonylphenols fall into the general chemical category of alkylphenols. The structure of NPs may vary. The nonyl group can be attached to the phenol ring at various locations, usually the 4- and, to lesser extent, the 2-positions, and can be either branched or linear. A branched nonylphenol, 4-nonylphenol, is the most widely produced and marketed nonylphenol. The mixture of nonylphenol isomers is a pale yellow liquid, although the pure compounds are colorless. The nonylphenols are moderately soluble in water but soluble in alcohol. Nonylphenol arises from the environmental degradation of nonylphenol ethoxylates, which are the metabolites of commercial detergents called alkylphenol ethoxylates. NPEs are a clear to light orange color liquid. Nonylphenol ethoxylates are nonionic in water, which means that they have no charge. Because of this property they are used as detergents, cleaners, emulsifiers, and a variety of other applications. They are amphipathic, meaning they have both hydrophilic and hydrophobic properties, which allows them to surround non-polar substances like oil and grease, isolating them from water. Production Nonylphenol can be produced industrially, naturally, and by the environmental degradation of alkylphenol ethoxylates. Industrially, nonylphenols are produced by the acid-catalyzed alkylation of phenol with a mixture of nonenes. This synthesis leads to a very complex mixture with diverse nonylphenols. Theoretically there are 211 constitutional isomers and this number rise to 550 isomers if we take the enantiomers into account. To make NPEs, manufacturers treat NP with ethylene oxide under basic conditions. Since its discovery in 1940, nonylphenol production has increased exponentially, and between 100 and 500 million pounds of nonylphenol are produced globally every year, meeting the definition of High Production Volume Chemicals. Nonylphenols are also produced naturally in the environment. One organism, the velvet worm, produces nonylphenol as a component of its defensive slime. The nonylphenol coats the ejection channel of the slime, stopping it from sticking to the organism when it is secreted. It also prolongs the drying process long enough for the slime to reach its target. Another surfactant called nonoxynol, which was once used as intravaginal spermicide and condom lubricant, was found to metabolize into free nonylphenol when administered to lab animals. 
Applications Nonylphenol is used in manufacturing antioxidants, lubricating oil additives, laundry and dish detergents, emulsifiers, and solubilizers. It can also be used to produce tris(4-nonyl-phenyl) phosphite (TNPP), which is an antioxidant used to protect polymers, such as rubber, Vinyl polymers, polyolefins, and polystyrenics in addition to being a stabilizer in plastic food packaging. Barium and calcium salts of nonylphenol are also used as heat stabilizers for polyvinyl chloride (PVC). Nonylphenol is also often used an intermediate in the manufacture of the non-ionic surfactants nonylphenol ethoxylates, which are used in detergents, paints, pesticides, personal care products, and plastics. Nonylphenol and nonylphenol ethoxylates are only used as components of household detergents outside of Europe. Nonyl Phenol, is used in many epoxy formulations mainly in North America. Prevalence in the environment Nonylphenol persists in aquatic environments and is moderately bioaccumulative. It is not readily biodegradable, and it can take months or longer to degrade in surface waters, soils, and sediments. Nonbiological degradation is negligible. Nonylphenol is partially removed during municipal wastewater treatment due to sorption to suspended solids and biotransformation. Many products that contain nonylphenol have "down-the-drain" applications, such as laundry and dish soap, so the contaminants are frequently introduced into the water supply. In sewage treatment plants, nonylphenol ethoxylate degrades into nonylphenol, which is found in river water and sediments as well as soil and groundwater. Nonylphenol photodegrades in sunlight, but its half-life in sediment is estimated to be more than 60 years. Although the concentration of nonylphenol in the environment is decreasing, it is still found at concentrations of 4.1 μg/L in river waters and 1 mg/kg in sediments. A major concern is that contaminated sewage sludge is frequently recycled onto agricultural land. The degradation of nonylphenol in soil depends on oxygen availability and other components in the soil. Mobility of nonylphenol in soil is low. Bioaccumulation is significant in water-dwelling organisms and birds, and nonylphenol has been found in internal organs of certain animals at concentrations of 10 to 1,000 times greater than the surrounding environment. Due to this bioaccumulation and persistence of nonylphenol, it has been suggested that nonylphenol could be transported over long distances and have a global reach that stretches far from the site of contamination. Nonylphenol is not persistent in air, as it is rapidly degraded by hydroxyl radicals. Environmental hazards Nonylphenol is considered to be an endocrine disruptor due to its ability to mimic estrogen and in turn disrupt the natural balance of hormones in affected organisms. The effect is weak because nonylphenols are not very close structural mimics of estradiol, but the levels of nonylphenol can be sufficiently high to compensate. The effects of nonylphenol in the environment are most applicable to aquatic species. Nonylphenol can cause endocrine disruption in fish by interacting with estrogen receptors and androgen receptors. Studies report that nonylphenol competitively displaces estrogen from its receptor site in rainbow trout. It has much less affinity for the estrogen receptor than estrogen in trout (5 x 10−5 relative binding affinity compared to estradiol) making it 100,000 times less potent than estradiol. 
Nonylphenol causes the feminization of aquatic organisms, decreases male fertility, and decreases survival in young fish. Studies show that male fish exposed to nonylphenol have lower testicular weight. Nonylphenol can disrupt steroidogenesis in the liver. One function of endogenous estrogen in fish is to stimulate the liver to make vitellogenin, which is a phospholipoprotein. Vitellogenin is released by the maturing female and sequestered by developing oocytes to produce the egg yolk. Males do not normally produce vitellogenin, but when exposed to nonylphenol they produce similar levels of vitellogenin to females. The concentration of NP in water needed to induce vitellogenin production in fish is 10 μg/L. Nonylphenol can also interfere with the level of FSH (follicle-stimulating hormone) being released from the pituitary gland. Concentrations of NP that inhibit reproductive development and function in fish also damage kidneys, decrease body weight, and induce stressed behavior. Human health hazards Alkylphenols like nonylphenol and bisphenol A have estrogenic effects in the body. They are known as xenoestrogens. Estrogenic substances and other endocrine disruptors are compounds that have hormone-like effects in both wildlife and humans. Xenoestrogens usually function by binding to estrogen receptors and acting competitively against natural estrogens. Nonylphenol has been shown to mimic the natural hormone 17β-estradiol, and it competes with the endogenous hormone for binding with the estrogen receptors ERα and ERβ. Nonylphenol was discovered to have hormone-like effects by accident, when it contaminated experiments in laboratories that were studying natural estrogens using polystyrene tubes. Effects in pregnant women Subcutaneous injections of nonylphenol in late pregnancy cause the expression of certain placental and uterine proteins, namely CaBP-9k, which suggests it can be transferred through the placenta to the fetus. It has also been shown to have a higher potency on the first-trimester placenta than the endogenous estrogen 17β-estradiol. In addition, early prenatal exposure to low doses of nonylphenol causes an increase in apoptosis (programmed cell death) in placental cells. These “low doses” ranged from 10⁻¹³ to 10⁻⁹ M, which is lower than what is generally found in the environment. Nonylphenol has also been shown to affect cytokine signaling molecule secretions in the human placenta. In vitro cell cultures of human placenta from the first trimester were treated with nonylphenol, which increased the secretion of cytokines including interferon gamma, interleukin 4, and interleukin 10, and reduced the secretion of tumor necrosis factor alpha. An unbalanced cytokine profile at this stage of pregnancy has been documented to result in implantation failure, pregnancy loss, and other complications. Effects on metabolism Nonylphenol has been shown to act as an obesity-enhancing chemical, or obesogen, though it has paradoxically also been shown to have anti-obesity properties. Growing embryos and newborns are particularly vulnerable when exposed to nonylphenol because low doses can disrupt sensitive processes that occur during these important developmental periods. Prenatal and perinatal exposure to nonylphenol has been linked with developmental abnormalities in adipose tissue and therefore in metabolic hormone synthesis and release (Merrill 2011).
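For comparison with the aqueous concentrations quoted elsewhere in the article, the molar "low dose" range above can be converted to a mass concentration. This is a simple unit-conversion sketch; the molar mass of nonylphenol (about 220.35 g/mol for C15H24O) is supplied as background knowledge rather than taken from the text.

```python
# Convert the molar "low dose" range (1e-13 to 1e-9 M) into micrograms per litre,
# so it can be compared with the ~4.1 ug/L reported for river water.
MOLAR_MASS_NP = 220.35  # g/mol for nonylphenol (C15H24O); background value, not from the text

def molar_to_ug_per_litre(concentration_mol_per_litre: float) -> float:
    grams_per_litre = concentration_mol_per_litre * MOLAR_MASS_NP
    return grams_per_litre * 1e6  # g/L -> ug/L

for c in (1e-13, 1e-9):
    print(f"{c:.0e} M  ->  {molar_to_ug_per_litre(c):.2e} ug/L")
# 1e-13 M  ->  2.20e-05 ug/L
# 1e-09 M  ->  2.20e-01 ug/L  (still below the ~4.1 ug/L measured in river water)
```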
Specifically, by acting as an estrogen mimic, nonylphenol has generally been shown to interfere with hypothalamic appetite control. The hypothalamus responds to the hormone leptin, which signals the feeling of fullness after eating, and nonylphenol has been shown to both increase and decrease eating behavior by interfering with leptin signaling in the midbrain. Nonylphenol has been shown to mimic the action of leptin on neuropeptide Y and anorectic POMC neurons, which has an anti-obesity effect by decreasing eating behavior. This was seen when estrogen or estrogen mimics were injected into the ventromedial hypothalamus. On the other hand, nonylphenol has been shown to increase food intake and have obesity-enhancing properties by lowering the expression of these anorexigenic neurons in the brain. Additionally, nonylphenol affects the expression of ghrelin, a hormone produced by the stomach that stimulates appetite. Ghrelin expression is positively regulated by estrogen signaling in the stomach, and it is also important in guiding the differentiation of stem cells into adipocytes (fat cells). Thus, by acting as an estrogen mimic, prenatal and perinatal exposure to nonylphenol has been shown to increase appetite and encourage the body to store fat later in life. Finally, long-term exposure to nonylphenol has been shown to affect insulin signaling in the liver of adult male rats. Cancer Nonylphenol exposure has also been associated with breast cancer. It has been shown to promote the proliferation of breast cancer cells, due to its agonistic activity on ERα (estrogen receptor α) in estrogen-dependent and estrogen-independent breast cancer cells. Some argue that nonylphenol's suggested estrogenic effect, coupled with its widespread human exposure, could potentially influence hormone-dependent breast cancer. Human exposure and breakdown Exposure Diet appears to be the most significant source of human exposure to nonylphenol. For example, food samples in a diet survey in Germany were found with concentrations ranging from 0.1 to 19.4 μg/kg, and the daily intake for an adult was calculated to be 7.5 μg/day (an illustrative calculation follows below). Another study calculated a daily intake for the more exposed group of infants in the range of 0.23–0.65 μg per kg of body weight per day. In Taiwan, nonylphenol concentrations in food ranged from 5.8 to 235.8 μg/kg. Seafood in particular was found to have a high concentration of nonylphenol. One study conducted in Italian women showed that nonylphenol was one of the highest contaminants in breast milk, at a concentration of 32 ng/mL, when compared to other alkylphenols such as octylphenol, nonylphenol monoethoxylate, and two octylphenol ethoxylates. The study also found a positive correlation between fish consumption and the concentration of nonylphenol in breast milk. This is a particular concern because breast milk is the main source of nourishment for newborns, who are in early stages of development where hormones are very influential. Elevated levels of endocrine disruptors in breast milk have been associated with negative effects on neurological development, growth, and memory function. Drinking water does not represent a significant source of exposure in comparison to other sources such as food packaging materials, cleaning products, and various skin care products. Concentrations of nonylphenol in treated drinking water varied from 85 ng/L in Spain to 15 ng/L in Germany. Microgram amounts of nonylphenol have also been found in the saliva of patients with dental sealants.
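The dietary intake estimates above are obtained by multiplying measured food concentrations by the amount of each food eaten and, for infants, normalising by body weight. The sketch below illustrates that arithmetic with made-up consumption figures; only the 0.1–19.4 μg/kg concentration range and the general method are drawn from the text, so the output is not a reproduction of the cited surveys.

```python
# Illustrative daily-intake estimate: concentration (ug per kg of food) multiplied by
# daily consumption (kg of food), summed over food groups, then normalised by body weight.
# The food groups, consumption amounts and per-food concentrations are hypothetical
# placeholders chosen within the 0.1-19.4 ug/kg range reported for German foods.

daily_diet = {
    # food group: (nonylphenol concentration in ug/kg, amount eaten in kg/day)
    "vegetables": (2.0, 0.30),
    "dairy":      (5.0, 0.25),
    "fish":       (19.4, 0.10),
    "cereals":    (0.5, 0.20),
}

intake_ug_per_day = sum(conc * amount for conc, amount in daily_diet.values())
body_weight_kg = 70.0  # assumed adult body weight

print(f"estimated intake: {intake_ug_per_day:.2f} ug/day")          # ~3.89 ug/day here
print(f"per body weight:  {intake_ug_per_day / body_weight_kg:.3f} ug/kg bw/day")
# Same order of magnitude as the 7.5 ug/day adult estimate quoted in the text.
```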
Breakdown When humans orally ingest nonylphenol, it is rapidly absorbed in the gastrointestinal tract. The metabolic pathways involved in its degradation are thought to involve glucuronide and sulfate conjugation, and the metabolites are then concentrated in fat. There is inconsistent data on bioaccumulation in humans, but nonylphenol has been shown to bioaccumulate in water-dwelling animals and birds. Nonylphenol is excreted in feces and in urine. Analytics There are standard GC-MS and HPLC protocols for the detection of nonylphenols within environmental sample matrices such as foodstuffs, drinking water and biological tissue. Industrially produced nonylphenol (the source most likely to be found in the environment) contains a mixture of structural isomers, and while these protocols are able to detect this mixture, they are typically unable to resolve the individual nonylphenol isomers within it. However, a methodological study has indicated that better isomeric resolution can be achieved in bulk nonylphenol samples using a GC-MS/MS (tandem mass-analyzer) system, suggesting that this technique could also improve the resolution of nonylphenol isomers in environmental sample analyses; further improvements in the resolution of nonylphenol isomers have been achieved through the use of two-dimensional GC at the separation stage, as part of a GC x GC-TOF-MS system. In contrast to environmental sample analyses, synthetic studies of nonylphenols have more control over sample state, concentration and preparation, simplifying the use of powerful structural identification techniques like NMR - capable of identifying the individual nonylphenol isomers. In a preliminary investigation of the relationship between nonylphenol sidechain branching patterns and estrogenic potential, the authors identified 211 possible structural isomers of p-nonylphenol alone, which expanded to 550 possible p-nonylphenol compounds when taking chiral C-atoms into consideration. Because stereochemical factors are thought to contribute to the biological activity of nonylphenols, analytical techniques sensitive to chirality, such as enantioselective HPLC and certain NMR protocols, are desirable in order to further study these relationships. Regulation The production and use of nonylphenol and nonylphenol ethoxylates is prohibited for certain situations in the European Union due to its effects on health and the environment. In Europe, due to environmental concerns, they also have been replaced by more expensive alcohol ethoxylates, which are less problematic for the environment due to their ability to degrade more quickly than nonylphenols. The European Union has also included NP on the list of priority hazardous substances for surface water in the Water Framework Directive. They are now implementing a drastic reduction policy of NP's in surface waterways. The Environmental quality standard for NP was proposed to be 0.3 ug/L. In 2013 nonylphenols were registered on the REACH candidate list. In the US, the EPA set criteria which recommends that nonylphenol concentration should not exceed 6.6 ug/L in fresh water and 1.7 ug/L in saltwater. In order to do so, the EPA is supporting and encouraging a voluntary phase-out of nonylphenol in industrial laundry detergents. Similarly, the EPA is documenting proposals for a "significant new use" rule, which would require companies to contact the EPA if they decided to add nonylphenol to any new cleaning and detergent products. 
The EPA also plans to carry out more risk assessments to ascertain the effects of nonylphenol on human health and the environment. In other Asian and South American countries, nonylphenol is still widely available in commercial detergents, and there is little regulation. References Endocrine disruptors Alkylphenols GPER agonists Xenoestrogens
Nonylphenol
[ "Chemistry" ]
4,028
[ "Endocrine disruptors" ]
5,330,368
https://en.wikipedia.org/wiki/Microbial%20metabolism
Microbial metabolism is the means by which a microbe obtains the energy and nutrients (e.g. carbon) it needs to live and reproduce. Microbes use many different types of metabolic strategies, and species can often be differentiated from each other based on metabolic characteristics. The specific metabolic properties of a microbe are the major factors in determining that microbe's ecological niche, and often allow that microbe to be useful in industrial processes or responsible for biogeochemical cycles. Types All microbial metabolisms can be arranged according to three principles: 1. How the organism obtains carbon for synthesizing cell mass: autotrophic – carbon is obtained from carbon dioxide (CO2) heterotrophic – carbon is obtained from organic compounds mixotrophic – carbon is obtained from both organic compounds and by fixing carbon dioxide 2. How the organism obtains reducing equivalents (hydrogen atoms or electrons) used either in energy conservation or in biosynthetic reactions: lithotrophic – reducing equivalents are obtained from inorganic compounds organotrophic – reducing equivalents are obtained from organic compounds 3. How the organism obtains energy for living and growing: phototrophic – energy is obtained from light chemotrophic – energy is obtained from external chemical compounds In practice, these terms are almost freely combined (a short sketch of the naming scheme follows below). Typical examples are as follows: chemolithoautotrophs obtain energy from the oxidation of inorganic compounds and carbon from the fixation of carbon dioxide. Examples: nitrifying bacteria, sulfur-oxidizing bacteria, iron-oxidizing bacteria, Knallgas-bacteria photolithoautotrophs obtain energy from light and carbon from the fixation of carbon dioxide, using reducing equivalents from inorganic compounds. Examples: Cyanobacteria (water (H2O) as reducing equivalent = hydrogen donor), Chlorobiaceae, Chromatiaceae (hydrogen sulfide (H2S) as hydrogen donor), Chloroflexus (hydrogen (H2) as reducing equivalent donor) chemolithoheterotrophs obtain energy from the oxidation of inorganic compounds, but cannot fix carbon dioxide (CO2). Examples: some Thiobacillus, some Beggiatoa, some Nitrobacter spp., Wolinella (with hydrogen (H2) as reducing equivalent donor), some Knallgas-bacteria, some sulfate-reducing bacteria chemoorganoheterotrophs obtain energy, carbon, and hydrogen for biosynthetic reactions from organic compounds. Examples: most bacteria, e.g. Escherichia coli, Bacillus spp., Actinomycetota photoorganoheterotrophs obtain energy from light, and carbon and reducing equivalents for biosynthetic reactions from organic compounds. Some species are strictly heterotrophic; many others can also fix carbon dioxide and are mixotrophic. Examples: Rhodobacter, Rhodopseudomonas, Rhodospirillum, Rhodomicrobium, Rhodocyclus, Heliobacterium, Chloroflexus (alternatively to photolithoautotrophy with hydrogen) Heterotrophic microbial metabolism Some microbes are heterotrophic (more precisely chemoorganoheterotrophic), using organic compounds as both carbon and energy sources. Heterotrophic microbes live off of nutrients that they scavenge from living hosts (as commensals or parasites) or find in dead organic matter of all kinds (saprophages). Microbial metabolism is the main contributor to the bodily decay of all organisms after death.
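The three axes above combine mechanically into names such as "chemolithoautotroph" or "photoorganoheterotroph". The short sketch below simply concatenates the corresponding prefixes; it illustrates the naming convention described in the text and is not code from any source.

```python
# Compose a metabolic label from the three classification axes described above:
# energy source + source of reducing equivalents + carbon source + "troph".
ENERGY = {"light": "photo", "chemical compounds": "chemo"}
ELECTRONS = {"inorganic compounds": "litho", "organic compounds": "organo"}
CARBON = {"carbon dioxide": "auto", "organic compounds": "hetero", "both": "mixo"}

def metabolic_label(energy: str, electrons: str, carbon: str) -> str:
    return ENERGY[energy] + ELECTRONS[electrons] + CARBON[carbon] + "troph"

print(metabolic_label("chemical compounds", "inorganic compounds", "carbon dioxide"))
# chemolithoautotroph, e.g. nitrifying or sulfur-oxidizing bacteria
print(metabolic_label("light", "organic compounds", "organic compounds"))
# photoorganoheterotroph, e.g. Rhodobacter
```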
Many eukaryotic microorganisms are heterotrophic by predation or parasitism, properties also found in some bacteria such as Bdellovibrio (an intracellular parasite of other bacteria, causing death of its victims) and Myxobacteria such as Myxococcus (predators of other bacteria which are killed and used by cooperating swarms of many single cells of Myxobacteria). Most pathogenic bacteria can be viewed as heterotrophic parasites of humans or the other eukaryotic species they affect. Heterotrophic microbes are extremely abundant in nature and are responsible for the breakdown of large organic polymers such as cellulose, chitin or lignin which are generally indigestible to larger animals. Generally, the oxidative breakdown of large polymers to carbon dioxide (mineralization) requires several different organisms, with one breaking down the polymer into its constituent monomers, one able to use the monomers and excreting simpler waste compounds as by-products, and one able to use the excreted wastes. There are many variations on this theme, as different organisms are able to degrade different polymers and secrete different waste products. Some organisms are even able to degrade more recalcitrant compounds such as petroleum compounds or pesticides, making them useful in bioremediation. Biochemically, prokaryotic heterotrophic metabolism is much more versatile than that of eukaryotic organisms, although many prokaryotes share the most basic metabolic models with eukaryotes, e.g. using glycolysis (also called the EMP pathway) for sugar metabolism and the citric acid cycle to degrade acetate, producing energy in the form of ATP and reducing power in the form of NADH or quinols. These basic pathways are well conserved because they are also involved in biosynthesis of many conserved building blocks needed for cell growth (sometimes in reverse direction). However, many bacteria and archaea utilize alternative metabolic pathways other than glycolysis and the citric acid cycle. A well-studied example is sugar metabolism via the keto-deoxy-phosphogluconate pathway (also called the ED pathway) in Pseudomonas. Moreover, there is a third alternative sugar-catabolic pathway used by some bacteria, the pentose phosphate pathway. The metabolic diversity and ability of prokaryotes to use a large variety of organic compounds arises from the much deeper evolutionary history and diversity of prokaryotes, as compared to eukaryotes. It is also noteworthy that the mitochondrion, the small membrane-bound intracellular organelle that is the site of eukaryotic oxygen-using energy metabolism, arose from the endosymbiosis of a bacterium related to obligate intracellular Rickettsia, and also to plant-associated Rhizobium or Agrobacterium. Therefore, it is not surprising that all mitochondriate eukaryotes share metabolic properties with these Pseudomonadota. Most microbes respire (use an electron transport chain), although oxygen is not the only terminal electron acceptor that may be used. As discussed below, the use of terminal electron acceptors other than oxygen has important biogeochemical consequences. Fermentation Fermentation is a specific type of heterotrophic metabolism that uses organic carbon instead of oxygen as a terminal electron acceptor. This means that these organisms do not use an electron transport chain to oxidize NADH to NAD+, and therefore must have an alternative method of using this reducing power and maintaining a supply of NAD+ for the proper functioning of normal metabolic pathways (e.g. glycolysis).
As oxygen is not required, fermentative organisms are anaerobic. Many organisms can use fermentation under anaerobic conditions and aerobic respiration when oxygen is present. These organisms are facultative anaerobes. To avoid the overproduction of NADH, obligately fermentative organisms usually do not have a complete citric acid cycle. Instead of using an ATP synthase as in respiration, ATP in fermentative organisms is produced by substrate-level phosphorylation where a phosphate group is transferred from a high-energy organic compound to ADP to form ATP. As a result of the need to produce high energy phosphate-containing organic compounds (generally in the form of Coenzyme A-esters) fermentative organisms use NADH and other cofactors to produce many different reduced metabolic by-products, often including hydrogen gas (). These reduced organic compounds are generally small organic acids and alcohols derived from pyruvate, the end product of glycolysis. Examples include ethanol, acetate, lactate, and butyrate. Fermentative organisms are very important industrially and are used to make many different types of food products. The different metabolic end products produced by each specific bacterial species are responsible for the different tastes and properties of each food. Not all fermentative organisms use substrate-level phosphorylation. Instead, some organisms are able to couple the oxidation of low-energy organic compounds directly to the formation of a proton motive force or sodium-motive force and therefore ATP synthesis. Examples of these unusual forms of fermentation include succinate fermentation by Propionigenium modestum and oxalate fermentation by Oxalobacter formigenes. These reactions are extremely low-energy yielding. Humans and other higher animals also use fermentation to produce lactate from excess NADH, although this is not the major form of metabolism as it is in fermentative microorganisms. Special metabolic properties Methylotrophy Methylotrophy refers to the ability of an organism to use C1-compounds as energy sources. These compounds include methanol, methyl amines, formaldehyde, and formate. Several other less common substrates may also be used for metabolism, all of which lack carbon-carbon bonds. Examples of methylotrophs include the bacteria Methylomonas and Methylobacter. Methanotrophs are a specific type of methylotroph that are also able to use methane () as a carbon source by oxidizing it sequentially to methanol (), formaldehyde (), formate (), and carbon dioxide initially using the enzyme methane monooxygenase. As oxygen is required for this process, all (conventional) methanotrophs are obligate aerobes. Reducing power in the form of quinones and NADH is produced during these oxidations to produce a proton motive force and therefore ATP generation. Methylotrophs and methanotrophs are not considered as autotrophic, because they are able to incorporate some of the oxidized methane (or other metabolites) into cellular carbon before it is completely oxidized to (at the level of formaldehyde), using either the serine pathway (Methylosinus, Methylocystis) or the ribulose monophosphate pathway (Methylococcus), depending on the species of methylotroph. In addition to aerobic methylotrophy, methane can also be oxidized anaerobically. This occurs by a consortium of sulfate-reducing bacteria and relatives of methanogenic Archaea working syntrophically (see below). Little is currently known about the biochemistry and ecology of this process. 
Methanogenesis is the biological production of methane. It is carried out by methanogens, strictly anaerobic Archaea such as Methanococcus, Methanocaldococcus, Methanobacterium, Methanothermus, Methanosarcina, Methanosaeta and Methanopyrus. The biochemistry of methanogenesis is unique in nature in its use of a number of unusual cofactors, such as coenzyme M and methanofuran, to sequentially reduce methanogenic substrates to methane. These cofactors are responsible (among other things) for the establishment of a proton gradient across the outer membrane, thereby driving ATP synthesis. Several types of methanogenesis occur, differing in the starting compounds oxidized. Some methanogens reduce carbon dioxide (CO2) to methane (CH4) using electrons (most often) from hydrogen gas (H2) chemolithoautotrophically. These methanogens can often be found in environments containing fermentative organisms. The tight association of methanogens and fermentative bacteria can be considered to be syntrophic (see below) because the methanogens, which rely on the fermentors for hydrogen, relieve feedback inhibition of the fermentors by the build-up of excess hydrogen that would otherwise inhibit their growth. This type of syntrophic relationship is specifically known as interspecies hydrogen transfer. A second group of methanogens use methanol (CH3OH) as a substrate for methanogenesis. These are chemoorganotrophic, but still autotrophic in using CO2 as their only carbon source. The biochemistry of this process is quite different from that of the carbon dioxide-reducing methanogens. Lastly, a third group of methanogens produce both methane and carbon dioxide from acetate (CH3COO−), with the acetate being split between the two carbons. These acetate-cleaving organisms are the only chemoorganoheterotrophic methanogens. All autotrophic methanogens use a variation of the reductive acetyl-CoA pathway to fix CO2 and obtain cellular carbon. Syntrophy Syntrophy, in the context of microbial metabolism, refers to the pairing of multiple species to achieve a chemical reaction that, on its own, would be energetically unfavorable. The best-studied example of this process is the oxidation of fermentative end products (such as acetate, ethanol and butyrate) by organisms such as Syntrophomonas. Alone, the oxidation of butyrate to acetate and hydrogen gas is energetically unfavorable. However, when a hydrogenotrophic (hydrogen-using) methanogen is present, the use of the hydrogen gas will significantly lower the concentration of hydrogen (down to 10⁻⁵ atm) and thereby shift the equilibrium of the butyrate oxidation reaction from standard conditions (ΔGº') to non-standard conditions (ΔG'). Because the concentration of one product is lowered, the reaction is "pulled" towards the products and shifted towards net energetically favorable conditions (for butyrate oxidation: ΔGº' = +48.2 kJ/mol, but ΔG' = −8.9 kJ/mol at 10⁻⁵ atm hydrogen, and even lower if the initially produced acetate is also further metabolized by methanogens). Conversely, the available free energy from methanogenesis is lowered from ΔGº' = −131 kJ/mol under standard conditions to ΔG' = −17 kJ/mol at 10⁻⁵ atm hydrogen. This is an example of interspecies hydrogen transfer. In this way, low-energy-yielding carbon sources can be used by a consortium of organisms to achieve further degradation and eventual mineralization of these compounds. These reactions help prevent the excess sequestration of carbon over geologic time scales, releasing it back to the biosphere in usable forms such as methane and CO2.
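The ΔG values quoted above follow from the standard concentration correction ΔG' = ΔG°' + RT·ln(Q). The sketch below reproduces that arithmetic for the two reactions, holding every species except hydrogen at standard conditions; the stoichiometries used (2 H2 produced per butyrate oxidized, 4 H2 consumed per CO2 reduced to methane) are textbook values supplied as background, not taken from the text.

```python
import math

R = 8.314e-3   # gas constant in kJ/(mol*K)
T = 298.15     # temperature in K

def delta_g_prime(delta_g_standard_kj: float, h2_atm: float, h2_stoichiometry: int) -> float:
    """Correct a standard free energy for a non-standard H2 partial pressure.

    Only H2 is allowed to deviate from standard conditions, so the reaction
    quotient reduces to (p_H2)**n, with n > 0 for H2 produced and n < 0 for
    H2 consumed.
    """
    return delta_g_standard_kj + R * T * math.log(h2_atm ** h2_stoichiometry)

# Butyrate oxidation to acetate releases 2 H2 (dG°' = +48.2 kJ/mol):
print(round(delta_g_prime(+48.2, 1e-5, +2), 1))   # about -8.9 kJ/mol: favorable at 1e-5 atm H2

# Hydrogenotrophic methanogenesis consumes 4 H2 (dG°' = -131 kJ/mol):
print(round(delta_g_prime(-131.0, 1e-5, -4), 1))  # about -16.8 kJ/mol: close to the -17 kJ/mol quoted
```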
Aerobic respiration Aerobic metabolism occurs in Bacteria, Archaea and Eucarya. Although most bacterial species are anaerobic, many are facultative or obligate aerobes. The majority of archaeal species live in extreme environments that are often highly anaerobic. There are, however, several cases of aerobic archaea such as Halobacterium, Thermoplasma, Sulfolobus and Pyrobaculum. Most of the known eukaryotes carry out aerobic metabolism within their mitochondria, an organelle with a symbiogenetic origin from prokaryotes. All aerobic organisms contain oxidases of the cytochrome oxidase superfamily, but some members of the Pseudomonadota (E. coli and Acetobacter) can also use an unrelated cytochrome bd complex as a respiratory terminal oxidase. Anaerobic respiration While aerobic organisms use oxygen as a terminal electron acceptor during respiration, anaerobic organisms use other electron acceptors. These inorganic compounds release less energy in cellular respiration, which leads to slower growth rates than in aerobes. Many facultative anaerobes can use either oxygen or alternative terminal electron acceptors for respiration depending on the environmental conditions. Most respiring anaerobes are heterotrophs, although some do live autotrophically. All of the processes described below are dissimilative, meaning that they are used during energy production and not to provide nutrients for the cell (assimilative). Assimilative pathways for many forms of anaerobic respiration are also known. Denitrification – nitrate as electron acceptor Denitrification is the utilization of nitrate (NO3−) as a terminal electron acceptor. It is a widespread process that is used by many members of the Pseudomonadota. Many facultative anaerobes use denitrification because nitrate, like oxygen, has a high reduction potential. Many denitrifying bacteria can also use ferric iron (Fe3+) and some organic electron acceptors. Denitrification involves the stepwise reduction of nitrate to nitrite (NO2−), nitric oxide (NO), nitrous oxide (N2O), and dinitrogen (N2) by the enzymes nitrate reductase, nitrite reductase, nitric oxide reductase, and nitrous oxide reductase, respectively. Protons are transported across the membrane by the initial NADH reductase, quinones, and nitrous oxide reductase to produce the electrochemical gradient critical for respiration. Some organisms (e.g. E. coli) only produce nitrate reductase and therefore can accomplish only the first reduction, leading to the accumulation of nitrite. Others (e.g. Paracoccus denitrificans or Pseudomonas stutzeri) reduce nitrate completely. Complete denitrification is an environmentally significant process because some intermediates of denitrification (nitric oxide and nitrous oxide) are important greenhouse gases that react with sunlight and ozone to produce nitric acid, a component of acid rain. Denitrification is also important in biological wastewater treatment, where it is used to reduce the amount of nitrogen released into the environment, thereby reducing eutrophication. Denitrification can be determined via a nitrate reductase test. Sulfate reduction – sulfate as electron acceptor Dissimilatory sulfate reduction is a relatively energetically poor process used by many Gram-negative bacteria found within the Thermodesulfobacteriota, Gram-positive organisms relating to Desulfotomaculum, or the archaeon Archaeoglobus. Hydrogen sulfide (H2S) is produced as a metabolic end product. For sulfate reduction, electron donors and energy are needed.
Electron donors Many sulfate reducers are organotrophic, using carbon compounds such as lactate and pyruvate (among many others) as electron donors, while others are lithotrophic, using hydrogen gas () as an electron donor. Some unusual autotrophic sulfate-reducing bacteria (e.g. Desulfotignum phosphitoxidans) can use phosphite () as an electron donor whereas others (e.g. Desulfovibrio sulfodismutans, Desulfocapsa thiozymogenes, Desulfocapsa sulfoexigens) are capable of sulfur disproportionation (splitting one compound into two different compounds, in this case an electron donor and an electron acceptor) using elemental sulfur (S0), sulfite (), and thiosulfate () to produce both hydrogen sulfide () and sulfate (). Energy for reduction All sulfate-reducing organisms are strict anaerobes. Because sulfate is energetically stable, before it can be metabolized it must first be activated by adenylation to form APS (adenosine 5'-phosphosulfate) thereby consuming ATP. The APS is then reduced by the enzyme APS reductase to form sulfite () and AMP. In organisms that use carbon compounds as electron donors, the ATP consumed is accounted for by fermentation of the carbon substrate. The hydrogen produced during fermentation is actually what drives respiration during sulfate reduction. Acetogenesis – carbon dioxide as electron acceptor Acetogenesis is a type of microbial metabolism that uses hydrogen () as an electron donor and carbon dioxide () as an electron acceptor to produce acetate, the same electron donors and acceptors used in methanogenesis (see above). Bacteria that can autotrophically synthesize acetate are called homoacetogens. Carbon dioxide reduction in all homoacetogens occurs by the acetyl-CoA pathway. This pathway is also used for carbon fixation by autotrophic sulfate-reducing bacteria and hydrogenotrophic methanogens. Often homoacetogens can also be fermentative, using the hydrogen and carbon dioxide produced as a result of fermentation to produce acetate, which is secreted as an end product. Other inorganic electron acceptors Ferric iron () is a widespread anaerobic terminal electron acceptor both for autotrophic and heterotrophic organisms. Electron flow in these organisms is similar to those in electron transport, ending in oxygen or nitrate, except that in ferric iron-reducing organisms the final enzyme in this system is a ferric iron reductase. Model organisms include Shewanella putrefaciens and Geobacter metallireducens. Since some ferric iron-reducing bacteria (e.g. G. metallireducens) can use toxic hydrocarbons such as toluene as a carbon source, there is significant interest in using these organisms as bioremediation agents in ferric iron-rich contaminated aquifers. Although ferric iron is the most prevalent inorganic electron acceptor, a number of organisms (including the iron-reducing bacteria mentioned above) can use other inorganic ions in anaerobic respiration. While these processes may often be less significant ecologically, they are of considerable interest for bioremediation, especially when heavy metals or radionuclides are used as electron acceptors. 
Examples include: Manganic ion () reduction to manganous ion () Selenate () reduction to selenite () and selenite reduction to inorganic selenium (Se0) Arsenate () reduction to arsenite () Uranyl ion () reduction to uranium dioxide () Organic terminal electron acceptors A number of organisms, instead of using inorganic compounds as terminal electron acceptors, are able to use organic compounds to accept electrons from respiration. Examples include: Fumarate reduction to succinate Trimethylamine N-oxide (TMAO) reduction to trimethylamine (TMA) Dimethyl sulfoxide (DMSO) reduction to dimethyl sulfide (DMS) Reductive dechlorination TMAO is a chemical commonly produced by fish, and when reduced to TMA produces a strong odor. DMSO is a common marine and freshwater chemical which is also odiferous when reduced to DMS. Reductive dechlorination is the process by which chlorinated organic compounds are reduced to form their non-chlorinated endproducts. As chlorinated organic compounds are often important (and difficult to degrade) environmental pollutants, reductive dechlorination is an important process in bioremediation. Chemolithotrophy Chemolithotrophy is a type of metabolism where energy is obtained from the oxidation of inorganic compounds. Most chemolithotrophic organisms are also autotrophic. There are two major objectives to chemolithotrophy: the generation of energy (ATP) and the generation of reducing power (NADH). Hydrogen oxidation Many organisms are capable of using hydrogen () as a source of energy. While several mechanisms of anaerobic hydrogen oxidation have been mentioned previously (e.g. sulfate reducing- and acetogenic bacteria), the chemical energy of hydrogen can be used in the aerobic Knallgas reaction: 2 H2 + O2 → 2 H2O + energy In these organisms, hydrogen is oxidized by a membrane-bound hydrogenase causing proton pumping via electron transfer to various quinones and cytochromes. In many organisms, a second cytoplasmic hydrogenase is used to generate reducing power in the form of NADH, which is subsequently used to fix carbon dioxide via the Calvin cycle. Hydrogen-oxidizing organisms, such as Cupriavidus necator (formerly Ralstonia eutropha), often inhabit oxic-anoxic interfaces in nature to take advantage of the hydrogen produced by anaerobic fermentative organisms while still maintaining a supply of oxygen. Sulfur oxidation Sulfur oxidation involves the oxidation of reduced sulfur compounds (such as sulfide ), inorganic sulfur (S), and thiosulfate () to form sulfuric acid (). A classic example of a sulfur-oxidizing bacterium is Beggiatoa, a microbe originally described by Sergei Winogradsky, one of the founders of environmental microbiology. Another example is Paracoccus. Generally, the oxidation of sulfide occurs in stages, with inorganic sulfur being stored either inside or outside of the cell until needed. This two step process occurs because energetically sulfide is a better electron donor than inorganic sulfur or thiosulfate, allowing for a greater number of protons to be translocated across the membrane. Sulfur-oxidizing organisms generate reducing power for carbon dioxide fixation via the Calvin cycle using reverse electron flow, an energy-requiring process that pushes the electrons against their thermodynamic gradient to produce NADH. Biochemically, reduced sulfur compounds are converted to sulfite () and subsequently converted to sulfate () by the enzyme sulfite oxidase. 
Some organisms, however, accomplish the same oxidation using a reversal of the APS reductase system used by sulfate-reducing bacteria (see above). In all cases the energy liberated is transferred to the electron transport chain for ATP and NADH production. In addition to aerobic sulfur oxidation, some organisms (e.g. Thiobacillus denitrificans) use nitrate () as a terminal electron acceptor and therefore grow anaerobically. Ferrous iron () oxidation Ferrous iron is a soluble form of iron that is stable at extremely low pHs or under anaerobic conditions. Under aerobic, moderate pH conditions ferrous iron is oxidized spontaneously to the ferric () form and is hydrolyzed abiotically to insoluble ferric hydroxide (). There are three distinct types of ferrous iron-oxidizing microbes. The first are acidophiles, such as the bacteria Acidithiobacillus ferrooxidans and Leptospirillum ferrooxidans, as well as the archaeon Ferroplasma. These microbes oxidize iron in environments that have a very low pH and are important in acid mine drainage. The second type of microbes oxidize ferrous iron at near-neutral pH. These micro-organisms (for example Gallionella ferruginea, Leptothrix ochracea, or Mariprofundus ferrooxydans) live at the oxic-anoxic interfaces and are microaerophiles. The third type of iron-oxidizing microbes are anaerobic photosynthetic bacteria such as Rhodopseudomonas, which use ferrous iron to produce NADH for autotrophic carbon dioxide fixation. Biochemically, aerobic iron oxidation is a very energetically poor process which therefore requires large amounts of iron to be oxidized by the enzyme rusticyanin to facilitate the formation of proton motive force. Like sulfur oxidation, reverse electron flow must be used to form the NADH used for carbon dioxide fixation via the Calvin cycle. Nitrification Nitrification is the process by which ammonia () is converted to nitrate (). Nitrification is actually the net result of two distinct processes: oxidation of ammonia to nitrite () by nitrosifying bacteria (e.g. Nitrosomonas) and oxidation of nitrite to nitrate by the nitrite-oxidizing bacteria (e.g. Nitrobacter). Both of these processes are extremely energetically poor leading to very slow growth rates for both types of organisms. Biochemically, ammonia oxidation occurs by the stepwise oxidation of ammonia to hydroxylamine () by the enzyme ammonia monooxygenase in the cytoplasm, followed by the oxidation of hydroxylamine to nitrite by the enzyme hydroxylamine oxidoreductase in the periplasm. Electron and proton cycling are very complex but as a net result only one proton is translocated across the membrane per molecule of ammonia oxidized. Nitrite oxidation is much simpler, with nitrite being oxidized by the enzyme nitrite oxidoreductase coupled to proton translocation by a very short electron transport chain, again leading to very low growth rates for these organisms. Oxygen is required in both ammonia and nitrite oxidation, meaning that both nitrosifying and nitrite-oxidizing bacteria are aerobes. As in sulfur and iron oxidation, NADH for carbon dioxide fixation using the Calvin cycle is generated by reverse electron flow, thereby placing a further metabolic burden on an already energy-poor process. In 2015, two groups independently showed the microbial genus Nitrospira is capable of complete nitrification (Comammox). Anammox Anammox stands for anaerobic ammonia oxidation and the organisms responsible were relatively recently discovered, in the late 1990s. 
This form of metabolism occurs in members of the Planctomycetota (e.g. "Candidatus Brocadia anammoxidans") and involves the coupling of ammonia oxidation to nitrite reduction. As oxygen is not required for this process, these organisms are strict anaerobes. Hydrazine ( – rocket fuel) is produced as an intermediate during anammox metabolism. To deal with the high toxicity of hydrazine, anammox bacteria contain a hydrazine-containing intracellular organelle called the anammoxasome, surrounded by highly compact (and unusual) ladderane lipid membrane. These lipids are unique in nature, as is the use of hydrazine as a metabolic intermediate. Anammox organisms are autotrophs although the mechanism for carbon dioxide fixation is unclear. Because of this property, these organisms could be used to remove nitrogen in industrial wastewater treatment processes. Anammox has also been shown to have widespread occurrence in anaerobic aquatic systems and has been speculated to account for approximately 50% of nitrogen gas production in the ocean. Manganese oxidation In July 2020 researchers report the discovery of chemolithoautotrophic bacterial culture that feeds on the metal manganese after performing unrelated experiments and named its bacterial species Candidatus Manganitrophus noduliformans and Ramlibacter lithotrophicus. Phototrophy Many microbes (phototrophs) are capable of using light as a source of energy to produce ATP and organic compounds such as carbohydrates, lipids, and proteins. Of these, algae are particularly significant because they are oxygenic, using water as an electron donor for electron transfer during photosynthesis. Phototrophic bacteria are found in the phyla "Cyanobacteria", Chlorobiota, Pseudomonadota, Chloroflexota, and Bacillota. Along with plants these microbes are responsible for all biological generation of oxygen gas on Earth. Because chloroplasts were derived from a lineage of the Cyanobacteria, the general principles of metabolism in these endosymbionts can also be applied to chloroplasts. In addition to oxygenic photosynthesis, many bacteria can also photosynthesize anaerobically, typically using sulfide () as an electron donor to produce sulfate. Inorganic sulfur (), thiosulfate () and ferrous iron () can also be used by some organisms. Phylogenetically, all oxygenic photosynthetic bacteria are Cyanobacteria, while anoxygenic photosynthetic bacteria belong to the purple bacteria (Pseudomonadota), green sulfur bacteria (e.g., Chlorobium), green non-sulfur bacteria (e.g., Chloroflexus), or the heliobacteria (Low %G+C Gram positives). In addition to these organisms, some microbes (e.g. the Archaeon Halobacterium or the bacterium Roseobacter, among others) can utilize light to produce energy using the enzyme bacteriorhodopsin, a light-driven proton pump. However, there are no known Archaea that carry out photosynthesis. As befits the large diversity of photosynthetic bacteria, there are many different mechanisms by which light is converted into energy for metabolism. All photosynthetic organisms locate their photosynthetic reaction centers within a membrane, which may be invaginations of the cytoplasmic membrane (Pseudomonadota), thylakoid membranes ("Cyanobacteria"), specialized antenna structures called chlorosomes (Green sulfur and non-sulfur bacteria), or the cytoplasmic membrane itself (heliobacteria). 
Different photosynthetic bacteria also contain different photosynthetic pigments, such as chlorophylls and carotenoids, allowing them to take advantage of different portions of the electromagnetic spectrum and thereby inhabit different niches. Some groups of organisms contain more specialized light-harvesting structures (e.g. phycobilisomes in Cyanobacteria and chlorosomes in Green sulfur and non-sulfur bacteria), allowing for increased efficiency in light utilization. Biochemically, anoxygenic photosynthesis is very different from oxygenic photosynthesis. Cyanobacteria (and by extension, chloroplasts) use the Z scheme of electron flow in which electrons eventually are used to form NADH. Two different reaction centers (photosystems) are used and proton motive force is generated both by using cyclic electron flow and the quinone pool. In anoxygenic photosynthetic bacteria, electron flow is cyclic, with all electrons used in photosynthesis eventually being transferred back to the single reaction center. A proton motive force is generated using only the quinone pool. In heliobacteria, Green sulfur, and Green non-sulfur bacteria, NADH is formed using the protein ferredoxin, an energetically favorable reaction. In purple bacteria, NADH is formed by reverse electron flow due to the lower chemical potential of this reaction center. In all cases, however, a proton motive force is generated and used to drive ATP production via an ATPase. Most photosynthetic microbes are autotrophic, fixing carbon dioxide via the Calvin cycle. Some photosynthetic bacteria (e.g. Chloroflexus) are photoheterotrophs, meaning that they use organic carbon compounds as a carbon source for growth. Some photosynthetic organisms also fix nitrogen (see below). Nitrogen fixation Nitrogen is an element required for growth by all biological systems. While extremely common (80% by volume) in the atmosphere, dinitrogen gas () is generally biologically inaccessible due to its high activation energy. Throughout all of nature, only specialized bacteria and Archaea are capable of nitrogen fixation, converting dinitrogen gas into ammonia (), which is easily assimilated by all organisms. These prokaryotes, therefore, are very important ecologically and are often essential for the survival of entire ecosystems. This is especially true in the ocean, where nitrogen-fixing cyanobacteria are often the only sources of fixed nitrogen, and in soils, where specialized symbioses exist between legumes and their nitrogen-fixing partners to provide the nitrogen needed by these plants for growth. Nitrogen fixation can be found distributed throughout nearly all bacterial lineages and physiological classes but is not a universal property. Because the enzyme nitrogenase, responsible for nitrogen fixation, is very sensitive to oxygen which will inhibit it irreversibly, all nitrogen-fixing organisms must possess some mechanism to keep the concentration of oxygen low. Examples include: heterocyst formation (cyanobacteria e.g. Anabaena) where one cell does not photosynthesize but instead fixes nitrogen for its neighbors which in turn provide it with energy root nodule symbioses (e.g. Rhizobium) with plants that supply oxygen to the bacteria bound to molecules of leghaemoglobin anaerobic lifestyle (e.g. Clostridium pasteurianum) very fast metabolism (e.g. 
Azotobacter vinelandii) The production and activity of nitrogenases are very highly regulated, both because nitrogen fixation is an extremely energetically expensive process (16–24 ATP are used per N2 fixed) and because of the extreme sensitivity of the nitrogenase to oxygen. See also Lipophilic bacteria, a minority of bacteria with lipid metabolism References Further reading Metabolism Trophic ecology
Microbial metabolism
[ "Chemistry", "Biology" ]
7,867
[ "Biochemistry", "Metabolism", "Cellular processes" ]
5,330,453
https://en.wikipedia.org/wiki/Tropomyosin%20receptor%20kinase%20A
Tropomyosin receptor kinase A (TrkA), also known as high affinity nerve growth factor receptor, neurotrophic tyrosine kinase receptor type 1, or TRK1-transforming tyrosine kinase protein is a protein that in humans is encoded by the NTRK1 gene. This gene encodes a member of the neurotrophic tyrosine kinase receptor (NTKR) family. This kinase is a membrane-bound receptor that, upon neurotrophin binding, phosphorylates itself (autophosphorylation) and members of the MAPK pathway. The presence of this kinase leads to cell differentiation and may play a role in specifying sensory neuron subtypes. Mutations in this gene have been associated with congenital insensitivity to pain with anhidrosis, self-mutilating behaviors, intellectual disability and/or cognitive impairment and certain cancers. Alternate transcriptional splice variants of this gene have been found, but only three have been characterized to date. Function and Interaction with NGF TrkA is the high affinity catalytic receptor for the neurotrophin, Nerve Growth Factor, or "NGF". As a kinase, TrkA mediates the multiple effects of NGF, which include neuronal differentiation, neural proliferation, nociceptor response, and avoidance of programmed cell death. The binding of NGF to TrkA leads to a ligand-induced dimerization, and a proposed mechanism by which this receptor and ligand interact is that two TrkA receptors associate with a single NGF ligand. This interaction leads to a cross linking dimeric complex where parts of the ligand-binding domains on TrkA are associated with their respective ligands. TrkA has five binding domains on its extracellular portion, and the domain TrkA-d5 folds into an immunoglobulin-like domain which is critical and adequate for the binding of NGF. After being immediately bound by NGF, the NGF/TrkA complex is brought from the synapse to the cell body through endocytosis where it then activates the NGF-dependent transcriptional program. Upon activation, the tyrosine residues are phosphorylated within the cytoplasmic domain of TrkA, and these residues then recruit signaling molecules, following several pathways that lead to the differentiation and survival of neurons. Two pathways that this complex acts to promote growth is through the Ras/MAPK pathway and the PI3K/Akt pathway. Family members The three transmembrane receptors TrkA, TrkB, and TrkC (encoded by the genes NTRK1, NTRK2, and NTRK3 respectively) make up the Trk receptor family. This family of receptors are all activated by protein nerve growth factors, or neurotrophins. Also, there are other neurotrophic factors structurally related to NGF: BDNF (for Brain-Derived Neurotrophic Factor), NT-3 (for Neurotrophin-3) and NT-4 (for Neurotrophin-4). While TrkA mediates the effects of NGF, TrkB is bound and activated by BDNF, NT-4, and NT-3. Further, TrkC binds and is activated by NT-3. In one study, the Trk gene was removed from embryonic mice stem cells which led to severe neurological disease, causing most mice to die one month after birth. Thus, Trk is the mediator of developmental and growth processes of NGF, and plays a critical role in the development of the nervous system in many organisms. There is one other NGF receptor besides TrkA, called the "LNGFR" (for "Low-affinity nerve growth factor receptor "). As opposed to TrkA, the LNGFR plays a somewhat less clear role in NGF biology. Some researchers have shown the LNGFR binds and serves as a "sink" for neurotrophins. 
Cells which express both the LNGFR and the Trk receptors might therefore have a greater activity, since they have a higher "microconcentration" of the neurotrophin. It has also been shown, however, that in the absence of a co-expressed TrkA, the LNGFR may signal a cell to die via apoptosis; therefore, cells expressing the LNGFR in the absence of Trk receptors may die rather than live in the presence of a neurotrophin. Role in disease There are several studies that highlight TrkA's role in various diseases. In one study conducted on two rat models, inhibition of TrkA with AR786 led to a reduction in joint swelling, joint damage, and pain caused by inflammatory arthritis. Thus, blocking the binding of NGF allows for the alleviation of symptoms of inflammatory arthritis, potentially highlighting a model to aid human inflammatory arthritis. In one study done on patients with functional dyspepsia, scientists found a significant increase in TrkA and nerve growth factor in the gastric mucosa. The increase of TrkA and nerve growth factor is linked to indigestion and gastric symptoms in patients; thus, this increase may be linked with the development of functional dyspepsia. In one study, a total absence of the TrkA receptor was found in keratoconus-affected corneas, along with an increased level of the repressor isoform of the Sp3 transcription factor. Gene fusions involving NTRK1 have been shown to be oncogenic, leading to constitutive TrkA activation. In a research study by Vaishnavi A. et al., NTRK1 fusions were estimated to occur in 3.3% of lung cancers, as assessed through next-generation sequencing or fluorescence in situ hybridization. While TrkA is oncogenic in some contexts, in other contexts it has the ability to induce terminal differentiation in cancer cells, halting cellular division. In some cancers, like neuroblastoma, TrkA is seen as a good prognostic marker as it is linked to spontaneous tumor regression. Regulation The levels of distinct proteins can be regulated by the "ubiquitin/proteasome" system. In this system, a small (7–8 kDa) protein called "ubiquitin" is affixed to a target protein, which is thereby targeted for destruction by a structure called the "proteasome". TrkA is targeted for proteasome-mediated destruction by an "E3 ubiquitin ligase" called NEDD4-2. This mechanism may be a distinct way to control the survival of a neuron. The extent, and perhaps the type, of TrkA ubiquitination can be regulated by the other, unrelated receptor for NGF, p75NTR. Interactions TrkA has been shown to interact with: Abl gene, FRS2, Grb2, MATK, NGFB, PLCG1, RICS, SQSTM1, SH2B1, SH2B2, and SHC1. Ligands Small molecules such as amitriptyline and gambogic acid derivatives have been claimed to activate TrkA. Amitriptyline activates TrkA and facilitates the heterodimerization of TrkA and TrkB in the absence of NGF. Binding of amitriptyline to TrkA occurs at the leucine-rich region (LRR) of the extracellular domain of the receptor, which is distinct from the NGF binding site. Amitriptyline possesses neurotrophic activity both in vitro and in vivo (mouse model). Gambogic amide, a derivative of gambogic acid, selectively activates TrkA (but not TrkB and TrkC) both in vitro and in vivo by interacting with the cytoplasmic juxtamembrane domain of TrkA. ACD856 and ponazuril (ACD855) are positive allosteric modulators of both TrkB and TrkA. Role in cancer TrkA has a dual role in cancer.
TrkA was originally cloned from a colon tumor; the cancer arose via a translocation, which resulted in the activation of the TrkA kinase domain. Although originally identified as an oncogenic fusion in 1982, only recently has there been renewed interest in the Trk family as it relates to human cancers, because of the identification of NTRK1 (TrkA), NTRK2 (TrkB) and NTRK3 (TrkC) gene fusions and other oncogenic alterations in a number of tumor types. The mechanism of activation of the human Trk oncogene is suspected to involve a folding of its kinase domain, leading the receptor to remain constitutively active. In contrast, TrkA also has the potential to induce differentiation and spontaneous regression of cancer in infants. Inhibitors in development Several Trk inhibitors have been FDA approved and have been shown clinically to counteract the effects of Trk over-expression. Entrectinib (formerly RXDX-101) is an investigational drug developed by Ignyta, Inc., which has potential antitumor activity. It is a selective pan-Trk receptor tyrosine kinase inhibitor (TKI) targeting gene fusions in TrkA, TrkB, and TrkC (coded by the NTRK1, NTRK2, and NTRK3 genes) that is currently in phase 2 clinical testing. Larotrectinib is an inhibitor of all three Trk receptors (TrkA, TrkB, and TrkC) and is used as a treatment for tumors with Trk fusions. A clinical study analyzing the efficacy of the drug found that larotrectinib was an effective antitumor treatment and worked regardless of the age of the patient or tumor type; additionally, the drug did not have long-lasting side effects, highlighting its usefulness in treating Trk fusions. References External links GeneReviews/NCBI/NIH/UW entry on Hereditary Sensory and Autonomic Neuropathy IV Further reading Tyrosine kinase receptors
Tropomyosin receptor kinase A
[ "Chemistry" ]
2,141
[ "Tyrosine kinase receptors", "Signal transduction" ]
5,330,660
https://en.wikipedia.org/wiki/Tropomyosin%20receptor%20kinase%20B
Tropomyosin receptor kinase B (TrkB), also known as tyrosine receptor kinase B, or BDNF/NT-3 growth factors receptor or neurotrophic tyrosine kinase, receptor, type 2 is a protein that in humans is encoded by the NTRK2 gene. TrkB is a receptor for brain-derived neurotrophic factor (BDNF). The standard pronunciation for this protein is "track bee". Function Tropomyosin receptor kinase B is the high affinity catalytic receptor for several "neurotrophins", which are small protein growth factors that induce the survival and differentiation of distinct cell populations. The neurotrophins that activate TrkB are: BDNF (Brain Derived Neurotrophic Factor), neurotrophin-4 (NT-4), and neurotrophin-3 (NT-3). As such, TrkB mediates the multiple effects of these neurotrophic factors, which includes neuronal differentiation and survival. Research has shown that activation of the TrkB receptor can lead to down regulation of the KCC2 chloride transporter in cells of the CNS. In addition to the role of the pathway in neuronal development, BDNF signaling is also necessary for proper astrocyte morphogenesis and maturation, via the astrocytic TrkB.T1 isoform. The TrkB receptor is part of the large family of receptor tyrosine kinases. A "tyrosine kinase" is an enzyme which is capable of adding a phosphate group to certain tyrosines on target proteins, or "substrates". A receptor tyrosine kinase is a "tyrosine kinase" which is located at the cellular membrane, and is activated by binding of a ligand to the receptor's extracellular domain. Other examples of tyrosine kinase receptors include the insulin receptor, the IGF1 receptor, the MuSK protein receptor, the Vascular Endothelial Growth Factor (or VEGF) receptor, etc. Currently, there are three TrkB isoforms in the mammalian CNS. The full-length isoform (TK+) is a typical tyrosine kinase receptor, and transduces the BDNF signal via Ras-ERK, PI3K, and PLCγ. In contrast, two truncated isoforms (TK-: T1 and T2) possess the same extracellular domain, transmembrane domain, and first 12 intracellular amino acid sequences as TK+. However, the C-terminal sequences are isoform-specific (11 and 9 amino acids, respectively). T1 has the original signaling cascade that is involved in the regulation of cell morphology and calcium influx. Family members TrkB is part of a sub-family of protein kinases which includes also TrkA and TrkC. There are other neurotrophic factors structurally related to BDNF: NGF (for nerve growth factor), NT-3 (for neurotrophin-3) and NT-4 (for neurotrophin-4). While TrkB mediates the effects of BDNF, NT-4 and NT-3, TrkA is bound and thereby activated only by NGF. Further, TrkC binds and is activated by NT-3. TrkB binds BDNF and NT-4 more strongly than it binds NT-3. TrkC binds NT-3 more strongly than TrkB does. Role in cancer Although originally identified as an oncogenic fusion in 1982, only recently has there been a renewed interest in the Trk family as it relates to its role in human cancers because of the identification of NTRK1 (TrkA), NTRK2 (TrkB) and NTRK3 (TrkC) gene fusions and other oncogenic alterations in a number of tumor types. A number of Trk inhibitors are (in 2015) in clinical trials and have shown early promise in shrinking human tumors. Role in neurodegeneration TrkB and its ligand BDNF have been associated to both normal brain function and in the pathology and progression of Alzheimer's disease (AD) and other neurodegenerative disorders. 
First of all, BDNF/TrkB signalling has been implicated in long-term memory formation, the regulation of long-term potentiation, as well as hippocampal synaptic plasticity. In particular, neuronal activity has been shown to lead to an increase in TrkB mRNA transcription, as well as changes in TrkB protein trafficking, including receptor endocytosis or translocation. Both TrkB and BDNF are downregulated in the brain of early AD patients with mild cognitive impairments, while work in mice has shown that reducing TrkB levels in the brain of AD mouse models leads to a significant increase in memory deficits. In addition, combining the induction of adult hippocampal neurogenesis and increasing BDNF levels lead to an improved cognition, mimicking exercise benefits in AD mouse models. The effect of TrkB/BDNF signalling on AD pathology has been shown to be in part mediated by an increase in δ-secretase levels, via an upregulation of the JAK2/STAT3 pathway and C/EBPβ downstream of TrkB. Additionally, TrkB has been shown to reduce amyloid-β production by APP binding and phosphorylation, while TrkB cleavage by δ-secretase blocks normal TrkB activity. Dysregulation of the TrkB/BDNF pathway has been implicated in other neurological and neurodegenerative conditions, including stroke, Huntington's Disease, Parkinson's Disease, Amyotrophic lateral sclerosis and stress-related disorders.(Notaras and van den Buuse, 2020; Pradhan et al., 2019; Tejeda and Díaz-Guerra, 2017). As a drug target Entrectinib (formerly RXDX-101) is an investigational drug developed by Ignyta, Inc., which has potential antitumor activity. It is a selective pan-Trk receptor tyrosine kinase inhibitor (TKI) targeting gene fusions in trkA, trkB (this gene), and trkC (respectively, coded by NTRK1, NTRK2, and NTRK3 genes) that is currently in phase 2 clinical testing. In addition, TrkB/BDNF signalling has been the target for developing novel drugs for Alzheimer's Disease, Parkinson's Disease or other neurodegenerative and psychiatric disorders, aiming at either pharmacological modulation of the pathway (e.g. small molecule mimetics) or other means (e.g. exercise induced changes in TrkB signalling). Recent studies suggest that TrkB is the target of some antidepressants, including psychedelics. Ligands Agonists 3,7-Dihydroxyflavone 3,7,8,2'-Tetrahydroxyflavone 7,3′-Dihydroxyflavone 7,8,2'-Trihydroxyflavone 7,8,3'-Trihydroxyflavone Amitriptyline BNN-20 Brain-derived neurotrophic factor (BDNF) Deoxygedunin Diosmetin DMAQ-B1 Eutropoflavin (4'-DMA-7,8-DHF) HIOC LM22A-4 N-Acetylserotonin (NAS) Neurotrophin-3 (NT-3) Neurotrophin-4 (NT-4) Norwogonin (5,7,8-THF) R7 (prodrug of tropoflavin) R13 (prodrug of tropoflavin) TDP6 Tropoflavin (7,8-DHF) Antagonists ANA-12 Cyclotraxin B Gossypetin (3,5,7,8,3',4'-HHF) Positive allosteric modulators (2R,6R)-Hydroxynorketamine (nanomolar or micromolar range) ACD856 (nanomolar range) Antidepressants (e.g., fluoxetine, imipramine, others) (micromolar range) Ketamine (micromolar range) Lisuride (nanomolar range) Ponazuril (ACD855) (micromolar range) Serotonergic psychedelics (e.g., LSD, psilocin) (nanomolar range) Others Dehydroepiandrosterone (DHEA) Interactions TrkB has been shown to interact with: Brain-derived neurotrophic factor (BDNF), FYN, NCK2, PLCG1, Sequestosome 1, and SHC3. See also Trk receptor References Further reading External links Memories are made of this molecule - New Scientist, 15 January 2007. Tyrosine kinase receptors Developmental neuroscience
Tropomyosin receptor kinase B
[ "Chemistry" ]
1,965
[ "Tyrosine kinase receptors", "Signal transduction" ]
5,331,016
https://en.wikipedia.org/wiki/Caranna
Caranna is a hard, brittle, resinous gum, obtained from the West Indian tree Bursera acuminata (family Amyridaceae) and the South American trees Protium (plant) carana, P. altissimum, and Pachylobus hexandrus. It has an aromatic flavor, and was used in pre-modern medicine. References Resins Natural gums
Caranna
[ "Physics" ]
83
[ "Amorphous solids", "Unsolved problems in physics", "Resins" ]
5,331,179
https://en.wikipedia.org/wiki/KronoScope
KronoScope. Journal for the Study of Time is a peer-reviewed academic journal dedicated to the interdisciplinary study of time, both in the humanities and in the sciences. It is published biannually under the imprint of Brill Publishers on behalf of the International Society for the Study of Time. It is indexed in Sociological Abstracts. See also Julius Thomas Fraser Temporality References Time Sociology journals Brill Publishers academic journals
KronoScope
[ "Physics", "Mathematics" ]
85
[ "Physical quantities", "Time", "Time stubs", "Quantity", "Spacetime", "Wikipedia categories named after physical quantities" ]
5,332,185
https://en.wikipedia.org/wiki/Burckhardt%20Compression
Burckhardt Compression AG is a Winterthur-based Swiss firm specialising in reciprocating compressors. According to the enterprise, it is the world leader in this field, with its products being used around the world in various industrial applications. The company was founded by Franz Burckhardt as a small mechanical workshop in Basel in 1844, which he expanded to a factory making air and vacuum pumps. The firm was taken over by Sulzer in 1969 and became independent again in 2002 after a management buyout. In May 2006, Burckhardt Compression announced its intention to go public on the Swiss stock exchange, probably in June 2006. Business Areas Burckhardt Compression specialises in the manufacture of labyrinth piston compressors, process gas compressors, hyper compressors, and fuel gas compressors. It is the only company globally that produces all four types of these compressors, which are used in industries such as chemical, petrochemical, refinery, gas transport and storage, hydrogen mobility and energy, industrial gas, and gas extraction and processing. These compressors are essential for compressing, cooling, or liquefying gases like hydrocarbon gases or industrial gases. The company operates in two main divisions: - Systems Division: This division focuses on the development, manufacture, and sale of compressor systems and equipment tailored to meet specific customer requirements. - Services Division: This division provides a wide range of services to enhance the performance and reliability of compressors throughout their service life. Services include maintenance, repairs, overhauls, spare parts, and customer training. History Burckhardt Compression, formerly known as Maschinenfabrik Burckhardt, was founded in Basel in 1844, as a mechanical workshop by Franz Burckhardt. The company specialised in compressors and vacuum pumps from 1878, under the direction of Burckhardt's son, August Burckhardt. Early compressors from 1913 reportedly delivered 4350 psig (300 bar), increasing to 58,000 psig (4000 bar) by 1948 through the development of high-pressure technology. In 1969, Maschinenfabrik Burckhardt became part of Sulzer AG, at which point a second production plant was opened in Winterthur. In 2000, all business activities were combined in Winterthur - the Basel site was closed and in 2001, the headquarters were relocated to Winterthur. Following a management buyout in June 2002, the company was renamed Burckhardt Compression AG, and a holding structure was put in place, under the name Burckhardt Compression Holding AG. In June 2006, an initial public offering (IPO) (VTX:BCHN) took place. In December 2015, Burckhardt Compression acquired a 40% stake in Houston-based Arkos Field Services, a provider of gas compression services and components. which was increased to 60% in November 2019 as company gains increased access to the American market. In March 2016, the company acquired a majority stake in Shenyang Yuanda Compressor, a leading Chinese manufacturer of reciprocating compressors, thereby gaining local market proximity and expanding the portfolio to cover different market needs. In September 2016, the company acquired IKS Industrie- und Kompressorenservice GmbH based in Bremen and in June 2017 Burckhardt Compression strengthened its presence in Canada through the acquisition of CSM Compressor Supplies & Machine Work Ltd based in Edmonton and Drumheller, Alberta. In March 2020, the company acquired the global compressor business from The Japan Steel Works. 
In December 2021, the company expanded its service business in the maritime and petrochemical industries by acquiring 100% shares in Mark van Schaick BV, based in Rotterdam, the Netherlands. In 2023, Burckhardt Compression fully acquired Arkos Field Services and integrated it into its operations, aiming to better serve customers and achieve ambitious growth targets. In April of the same year, the company acquired Thailand-based SPAN Maintenance and Service Co. Ltd. and established Burckhardt Compression (Thailand) Co. Ltd. through this acquisition. The acquisition of the Thai-based Burckhardt-authorised service partner increased the company position in Southeast Asia and included taking over two dozen employees. Financial Results References Gas compressors Manufacturing companies of Switzerland Companies based in Winterthur Manufacturing companies established in 1844 Companies listed on the SIX Swiss Exchange
Burckhardt Compression
[ "Chemistry" ]
894
[ "Gas compressors", "Turbomachinery" ]
5,334,930
https://en.wikipedia.org/wiki/Partition%20topology
In mathematics, a partition topology is a topology that can be induced on any set X by partitioning X into disjoint subsets P; these subsets form the basis for the topology. There are two important examples which have their own names: The odd–even topology is the topology where X = {1, 2, 3, 4, ...} and P = {{2k − 1, 2k} : k = 1, 2, 3, ...}. Equivalently, P = {{1, 2}, {3, 4}, {5, 6}, ...}. The deleted integer topology is defined by letting X = (0, 1) ∪ (1, 2) ∪ (2, 3) ∪ ... ⊆ R and P = {(0, 1), (1, 2), (2, 3), ...}. The trivial partitions yield the discrete topology (each point of X is a set in P, so every subset of X is open) or the indiscrete topology (the entire set X is in P, so only ∅ and X are open). Any set X with a partition topology generated by a partition P can be viewed as a pseudometric space with a pseudometric given by: d(x, y) = 0 if x and y lie in the same partition element, and d(x, y) = 1 otherwise. This is not a metric unless P yields the discrete topology. The partition topology provides an important example of the independence of various separation axioms. Unless P is trivial, at least one set in P contains more than one point, and the elements of this set are topologically indistinguishable: the topology does not separate points. Hence X is not a Kolmogorov space, nor a T1 space, a Hausdorff space or an Urysohn space. In a partition topology the complement of every open set is also open, and therefore a set is open if and only if it is closed. Therefore, X is regular, completely regular, normal and completely normal. The quotient space obtained by collapsing each partition element to a point is the discrete topology. See also References Topological spaces
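The definitions above lend themselves to a quick finite check. The Python sketch below builds the partition topology on a small six-element set, verifies that every open set is also closed, and evaluates the induced pseudometric; the particular set, partition and variable names are illustrative choices, not taken from the article.

```python
# Finite sketch of a partition topology: open sets are exactly the unions of
# partition blocks, and the induced pseudometric is 0 within a block, 1 across blocks.
from itertools import combinations

X = {1, 2, 3, 4, 5, 6}
P = [{1, 2}, {3, 4}, {5, 6}]          # a partition of X (odd-even style blocks)

# Generate the topology: all unions of subfamilies of P (the empty union gives the empty set).
topology = set()
for r in range(len(P) + 1):
    for blocks in combinations(P, r):
        topology.add(frozenset().union(*blocks))

def d(x, y):
    """Pseudometric: 0 if x and y share a partition block, 1 otherwise."""
    return 0 if any(x in B and y in B for B in P) else 1

# Every open set is also closed: its complement is the union of the remaining blocks.
assert all(frozenset(X - U) in topology for U in topology)

print(sorted(map(sorted, topology)))
print(d(1, 2), d(1, 3))   # 0 1 -> points 1 and 2 are topologically indistinguishable
```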
Partition topology
[ "Mathematics" ]
267
[ "Topological spaces", "Mathematical structures", "Topology", "Space (mathematics)" ]
5,335,894
https://en.wikipedia.org/wiki/Methyllithium
Methyllithium is the simplest organolithium reagent, with the empirical formula CH3Li. This s-block organometallic compound adopts an oligomeric structure both in solution and in the solid state. This highly reactive compound, invariably used in solution with an ether as the solvent, is a reagent in organic synthesis as well as organometallic chemistry. Operations involving methyllithium require anhydrous conditions, because the compound is highly reactive towards water. Oxygen and carbon dioxide are also incompatible with MeLi. Methyllithium is usually not prepared, but purchased as a solution in various ethers. Synthesis In the direct synthesis, methyl bromide is treated with a suspension of lithium in diethyl ether. 2 Li + MeBr → LiMe + LiBr The lithium bromide forms a complex with the methyllithium. Most commercially available methyllithium consists of this complex. "Low-halide" methyllithium is prepared from methyl chloride. Lithium chloride precipitates from the diethyl ether since it does not form a strong complex with methyllithium. The filtrate consists of fairly pure methyllithium. Alternatively, commercial methyllithium can be treated with dioxane to precipitate LiBr(dioxane), which can be removed by filtration. The use of halide-free vs LiBr-MeLi has a decisive effect on some syntheses. Reactivity Methyllithium is both strongly basic and highly nucleophilic due to the partial negative charge on carbon and is therefore particularly reactive towards electron acceptors and proton donors. In contrast to n-BuLi, MeLi reacts only very slowly with THF at room temperature, and solutions in ether are indefinitely stable. Water and alcohols react violently. Most reactions involving methyllithium are conducted below room temperature. Although MeLi can be used for deprotonations, n-butyllithium is more commonly employed since it is less expensive and more reactive. Methyllithium is mainly used as the synthetic equivalent of the methyl anion synthon. For example, ketones react to give tertiary alcohols in a two-step process: Ph2CO + MeLi → Ph2C(Me)OLi Ph2C(Me)OLi + H+ → Ph2C(Me)OH + Li+ Nonmetal halides are converted to methyl compounds with methyllithium: PCl3 + 3 MeLi → PMe3 + 3 LiCl Such reactions more commonly employ the Grignard reagents methylmagnesium halides, which are often equally effective, and less expensive or more easily prepared in situ. It also reacts with carbon dioxide to give Lithium acetate: CH3Li + CO2 → CH3CO2−Li+ Transition metal methyl compounds can be prepared by reaction of MeLi with metal halides. Especially important are the formation of organocopper compounds (Gilman reagents), of which the most useful is lithium dimethylcuprate. This reagent is widely used for nucleophilic substitutions of epoxides, alkyl halides and alkyl sulfonates, as well as for conjugate additions to α,β-unsaturated carbonyl compounds by methyl anion. Many other transition metal methyl compounds have been prepared. ZrCl4 + 6 MeLi → Li2ZrMe6 + 4 LiCl Structure Two structures have been verified by single crystal X-ray crystallography as well as by 6Li, 7Li, and 13C NMR spectroscopy. The tetrameric structure is a distorted cubane-type cluster, with carbon and lithium atoms at alternate corners. The Li---Li distances are 2.68 Å, almost identical with the Li-Li bond in gaseous dilithium. The C-Li distances are 2.31 Å. Carbon is bonded to three hydrogen atoms and three Li atoms. 
The nonvolatility of (MeLi)4 and its insolubility in alkanes result from the fact that the clusters interact via further inter-cluster agostic interactions. In contrast, the bulkier tert-butyllithium cluster (t-BuLi)4, where inter-cluster interactions are precluded by steric effects, is volatile as well as soluble in alkanes. The hexameric form features hexagonal prisms with Li and C atoms again at alternate corners. The degree of aggregation, "n" for (MeLi)n, depends upon the solvent and the presence of additives (such as lithium bromide). Hydrocarbon solvents such as benzene favour formation of the hexamer, whereas ethereal solvents favour the tetramer. Bonding These clusters are considered "electron-deficient," that is, they do not follow the octet rule because the molecules lack sufficient electrons to form four 2-centered, 2-electron bonds around each carbon atom, in contrast to most organic compounds. The hexamer is a 30-electron compound (30 valence electrons). If one allocates 18 electrons for the strong C-H bonds, 12 electrons remain for Li-C and Li-Li bonding. There are six electrons for six metal-metal bonds and one electron per methyl-η3 lithium interaction. The strength of the C-Li bond has been estimated at around 57 kcal/mol from IR spectroscopic measurements. References Organolithium compounds Methylating agents Methyl complexes Pyrophoric materials
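As a quick illustration of the ketone addition described in the Reactivity section above (Ph2CO + MeLi → Ph2C(Me)OLi, followed by aqueous workup to the alcohol), the short script below works out the amounts for a hypothetical 10 mmol run. The 1.1 equivalents of MeLi and the 1.6 M titre of the commercial ether solution are assumptions made for the sake of the example, not values taken from the text.

```python
# Illustrative scale-up arithmetic for the ketone addition shown above:
#   Ph2CO + MeLi -> Ph2C(Me)OLi   (aqueous workup then gives Ph2C(Me)OH)
# Assumptions (not from the article): 10 mmol scale, 1.1 equiv MeLi,
# and a commercial 1.6 M MeLi solution in diethyl ether.

MW_BENZOPHENONE = 182.22   # g/mol, Ph2CO
SCALE_MOL = 0.010          # 10 mmol of ketone
EQUIV_MELI = 1.1           # slight excess of the organolithium
MELI_CONC = 1.6            # mol/L, assumed titre of the commercial solution

mass_ketone_g = SCALE_MOL * MW_BENZOPHENONE
vol_meli_mL = SCALE_MOL * EQUIV_MELI / MELI_CONC * 1000

print(f"benzophenone: {mass_ketone_g:.2f} g")   # ~1.82 g
print(f"MeLi solution: {vol_meli_mL:.1f} mL")   # ~6.9 mL of the 1.6 M solution
```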
Methyllithium
[ "Chemistry", "Technology" ]
1,176
[ "Organolithium compounds", "Methylating agents", "Methylation", "Reagents for organic chemistry" ]
5,337,925
https://en.wikipedia.org/wiki/Respirometry
Respirometry is a general term that encompasses a number of techniques for obtaining estimates of the rates of metabolism of vertebrates, invertebrates, plants, tissues, cells, or microorganisms via an indirect measure of heat production (calorimetry). Whole-animal metabolic rates The metabolism of an animal is estimated by determining rates of carbon dioxide production (VCO2) and oxygen consumption (VO2) of individual animals, either in a closed or an open-circuit respirometry system. Two measures are typically obtained: standard (SMR) or basal metabolic rate (BMR) and maximal rate (VO2max). SMR is measured while the animal is at rest (but not asleep) under specific laboratory (temperature, hydration) and subject-specific conditions (e.g., size or allometry), age, reproduction status, post-absorptive to avoid thermic effect of food). VO2max is typically determined during aerobic exercise at or near physiological limits. In contrast, field metabolic rate (FMR) refers to the metabolic rate of an unrestrained, active animal in nature. Whole-animal metabolic rates refer to these measures without correction for body mass. If SMR or BMR values are divided by the body mass value for the animal, then the rate is termed mass-specific. It is this mass-specific value that one typically hears in comparisons among species. Closed respirometry Respirometry depends on a "what goes in must come out" principle. Consider a closed system first. Imagine that we place a mouse into an air-tight container. The air sealed in the container initially contains the same composition and proportions of gases that were present in the room: 20.95% O2, 0.04% CO2, water vapor (the exact amount depends on air temperature, see dew point), 78% (approximately) N2, 0.93% argon and a variety of trace gases making up the rest (see Earth's atmosphere). As time passes, the mouse in the chamber produces CO2 and water vapor, but extracts O2 from the air in proportion to its metabolic demands. Therefore, as long as we know the volume of the system, the difference between the concentrations of O2 and CO2 at the start when we sealed the mouse into the chamber (the baseline or reference conditions) compared to the amounts present after the mouse has breathed the air at a later time must be the amounts of CO2/O2 produced/consumed by the mouse. Nitrogen and argon are inert gasses and therefore their fractional amounts are unchanged by the respiration of the mouse. In a closed system, the environment will eventually become hypoxic. Open respirometry For an open-system, design constraints include washout characteristics of the animal chamber and sensitivity of the gas analyzers. However, the basic principle remains the same: What goes in must come out. The primary distinction between an open and closed system is that the open system flows air through the chamber (i.e., air is pushed or pulled by pump) at a rate that constantly replenishes the O2 depleted by the animal while removing the CO2 and water vapor produced by the animal. The volumetric flow rate must be high enough to ensure that the animal never consumes all of the oxygen present in the chamber while at the same time, the rate must be low enough so that the animal consumes enough O2 for detection. For a 20 g mouse, flow rates of about 200 ml/min through 500 ml containers would provide a good balance. At this flow rate, about 40 ml of O2 is brought to the chamber and the entire volume of air in the chamber is exchanged within 5 minutes. 
For other smaller animals, chamber volumes can be much smaller and flow rates would be adjusted down as well. Note that for warm-blooded or endothermic animals (birds and mammals), chamber sizes and/or flow rates would be selected to accommodate their higher metabolic rates. Calculations Calculating rates of VO2 and/or VCO2 requires knowledge of the flow rates into and out of the chamber, plus fractional concentrations of the gas mixtures into and out of the animal chamber. In general, metabolic rates are calculated from steady-state conditions (i.e., the animal's metabolic rate is assumed to be constant). To know the rate of oxygen consumed, one needs to know the location of the flow meter relative to the animal chamber (if positioned before the chamber, the flow meter is "upstream"; if positioned after the chamber, the flow meter is "downstream"), and whether or not reactive gases are present (e.g., CO2, water, methane; see inert gas). For an open system with an upstream flow meter, and with water (e.g., by anhydrous calcium sulfate) and CO2 removed prior to the oxygen analyzer, a suitable equation is VO2 = FR × (FinO2 − FexO2) / (1 − FexO2). For an open system with a downstream flow meter, and with water and CO2 removed prior to the oxygen analyzer, a suitable equation is VO2 = FR × (FinO2 − FexO2) / (1 − FinO2), where FR is the volumetric flow rate adjusted to STP (see Standard conditions for temperature and pressure), FinO2 is the fractional amount of oxygen present in the incurrent air stream (the baseline or reference), and FexO2 is the fractional amount of oxygen present in the excurrent air stream (reflecting what the animal has consumed relative to baseline per unit time). For example, values for BMR of a 20 g mouse (Mus musculus) might be FR = 200 mL/min, with readings of fractional O2 concentration from an oxygen analyzer of FinO2 = 0.2095 and FexO2 = 0.2072. The calculated rate of oxygen consumption is 0.58 mL/min, or 35 mL/hour. Assuming an enthalpy of combustion for O2 of 20.1 joules per milliliter, the heat production (and therefore metabolism) of the mouse is then about 703.5 J/h; this arithmetic is reproduced in the short sketch below. Respirometry equipment For an open-flow system, the list of equipment and parts is long compared to the components of a closed system, but the chief advantage of the open system is that it permits continuous recording of metabolic rate. The risk of hypoxia is also much less in an open system. Pumps for air flow Vacuum pump: a pump is needed to push (i.e., upstream location) or pull (i.e., downstream location) air into and through the animal chamber and respirometry flow-through system. Subsample pump: to pull air through the analyzers, a small, stable, reliable pump is used. Flow meter and flow controllers Bubble flow meters: A simple, yet highly accurate way to measure flow rates involves timing the movement of bubbles of soap film up glass tubes between marks of known volume. The glass tube is connected at the bottom (for push systems) or at the top (for pull systems) to the air stream. A small rubber pipette bulb attached at the base of the tube acts as both a reservoir and delivery system for the soap bubbles. Operation is simple. First, wet the glass surface along the path bubbles travel (e.g., press the bulb so that copious amounts of soap are pushed up the glass by the air flow) to provide a virtually friction-free surface. Second, pinch the bulb so that one clean bubble is produced. 
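The following short Python sketch reproduces the worked mouse example from the Calculations passage above, using the upstream-flow-meter equation with water and CO2 scrubbed before the analyzer; only the figures quoted in the text are used.

```python
# Worked sketch of the open-flow VO2 calculation above (upstream flow meter,
# water and CO2 removed before the O2 analyzer). The 20.1 J/mL oxycalorific
# equivalent is the value stated in the text.

FR = 200.0        # volumetric flow rate, mL/min (STP-corrected)
FiO2 = 0.2095     # fractional O2 in incurrent (baseline) air
FeO2 = 0.2072     # fractional O2 in excurrent air

vo2_ml_min = FR * (FiO2 - FeO2) / (1.0 - FeO2)   # upstream-flow-meter equation
vo2_ml_h = vo2_ml_min * 60.0
heat_j_h = vo2_ml_h * 20.1                       # J/h, using 20.1 J per mL O2

print(f"VO2  = {vo2_ml_min:.2f} mL/min  ({vo2_ml_h:.0f} mL/h)")
print(f"heat = {heat_j_h:.0f} J/h")
# Output is close to the article's figures: ~0.58 mL/min, ~35 mL/h, ~700 J/h.
```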
With a stopwatch in hand, record the time required for the bubble to travel between marks on the glass. Note the volume recorded on the upper mark (e.g., 125 = 125 ml), divide the volume by the time required to travel between marks and the result is the flow rate (ml/s). These instruments can be purchased from a variety of sources, but they may also be constructed from appropriate-sized, glass volumetric pipettes. Acrylic flow meters : Under some circumstances of high flow rates we may use simple acrylic flow meters (0–2.5 liters/min) to control the flow rates through the metabolic chambers. The meters are located upstream from the metabolic chambers. The flow meters are simple to use but should be calibrated twice daily for use in the respirometry system: once before recording begins (but after the animal has been sealed inside the chamber!!) and again at the end of the recording (before the animal is removed from the chamber). Calibration must be done with a bubble flow meter because the calibration marks on the acrylic meters are only approximate. For proper calibration of flow rates remember that both barometric pressure and temperature of the air streaming through the flow meter (which we assume to be equal to room temperature) must be recorded. Mass flow meters: The equations required for calculating rates of oxygen consumption or carbon dioxide production assume that the flow rates into and out of the chambers are known exactly. We use mass flow meters which have the advantage of yielding flow rates independent of temperature and air pressure. Therefore, these flow rates can be considered to be corrected to standard conditions (Standard Temperature Pressure). We only measure and control flow at one location—downstream from the chamber. Therefore, we must assume that the inflow and outflow rates are identical. However, during construction of the respirometry system, flow rate must be measured at all steps, across all connections, to verify integrity of flow. Needle valves: Mass flow meters may be purchased with mass flow controllers which permit setting flow rates. These are expensive, however. Respirometry research often will attempt to measure more than one animal at a time, which would necessitate one chamber per animal and thus controlled flow through each chamber. An alternative and more cost-effective method to control flow would be via stainless steel or carbon steel needle valves. Needle valves plus mass flow meters provides a cost-effective means to achieve desired flow rates. The valves cost about $20. Tubing and chambers Tubing and connections : Various kinds of tubing can be used to connect the components of the respirometry system to and from the animal chamber. A variety of kinds of flexible tubing may be used, depending on the characteristics of the system. Acetyl, Bev-A-Line, Kynar, nylon, Tygon tubing and connectors may be used in regions of the system where oxidizing atmospheres are low (e.g., background levels of ozone only); Teflon tubing would be recommended if there is an expectation for appreciable amounts of ozone to be present because it is inert to ozone. Teflon tubes are more costly and lack flexibility. Metabolic chambers: Chambers may be glass jars with rubber stoppers for lids; syringe barrels for small animals and insects; or constructed from Plexiglas. Ideally, chambers should be constructed from inert materials; for example, the acrylic plastics can absorb O2 and may be a poor choice for respirometry with very small insects. 
Chambers need to be constructed in a manner that yields rapid mixing of gases within the chamber. The simplest metabolic chamber for a small vertebrate might be a glass jar with a stopper. The stoppers are fitted with two ports: short extensions of Teflon tubing are provided for line connections. Teflon tube extensions are pushed through the bulkhead and the line connection is finished by attaching a small hose clip to the base of the Teflon tube extension. Additionally, an extension to the inlet port inside the jar should be provided—this ensures that the animal's expiratory gases are not washed away by the in flow stream. The animal is sealed inside and the rubber stopper is held in place with Velcro straps. If an upstream system is used, any metabolic chamber leak will result in loss of animal air and, therefore, an underestimate of the animal's metabolic rate. When you close an animal inside a metabolic chamber, attention must be paid to the seal. To ensure tight seals before closing the lid, firmly work the stopper into the jar and make sure that it is even. Use 1–2 straps (2 are better) and pull tightly. Acrylic (Plexiglas) chambers will be constructed for some uses, but precise engineering will be needed to ensure proper seating; gaskets will help, and judicious use of tight-fitting clamps will minimize leaks. Scrubbing tubes: Water before and after the animal chamber must be removed. One arrangement would use a large acrylic column of Drierite (8 mesh (scale), i.e., relatively coarse) upstream (before the push pump, before the animal chamber) to dry incurrent airstream and several tubes with smaller mesh (10–20, i.e., relatively fine) Drierite to remove water after the animal chamber. To prepare a scrubbing tube, make sure there is a small amount of cotton at either end of the tube to prevent dust particles from traveling to the analyzers. Use small amounts of cotton, say around 0.005 g, just enough to keep the dust out of the tubing. Large amounts of cotton will block air flow when/if it gets damp. Pour the Drierite into the tube with a funnel, tap the tube on the bench to pack the grains tightly (to increase surface area – air + water rushes through loose Drierite, requiring frequent changes of scrubbers), and cap off with a small amount of cotton. To remove carbon dioxide] before and after the animal chamber, Ascarite II is used (Ascarite II is a registered trademark of the Arthur H. Thomas Co.). Ascarite II contains NaOH, which is caustic (so don't get any on your skin and keep away from water). A scrubbing tube is prepared by placing a small amount of cotton into the tube end, filling one-third of the way with 10–20 mesh Drierite, adding a small amount of cotton, then an additional third of the tube with the Ascarite II, another layer of cotton, followed by more Drierite and capping the tube off with another small amount of cotton. Tap the tube on the bench as each layer is added to pack the grains. Note: Driereite can be used over and over again (after heating in an oven), although indicating Drierite will lose color with repeated drying; Ascarite II is used once and will be considered a hazardous waste. Analyzers Carbon dioxide analyzer: CO2 analyzers typically use infrared-based detection methods to take advantage of the fact that CO2 will absorb infra-red light and re-emit light at slightly longer wavelengths. 
The panel meter on the analyzer displays over the entire 0.01 – 10% CO2 range and a voltage output proportional to CO2 concentration is also generated for data recording. Oxygen analyzer: Oxygen analyzers suitable for respirometry use a variety of oxygen sensors, including galvanic ("ambient temperature"), paramagnetic, polarographic (Clark-type electrodes), and zirconium ("high temperature") sensors. Galvanic O2 analyzers use a fuel cell containing an acidic electrolyte, a heavy-metal anode and a thin gas-permeable membrane. Since the partial pressure of O2 near the anode is zero, O2 is driven by diffusion to the anode via the membrane at a rate proportional to ambient O2 partial pressure. The fuel cell produces a voltage linearly proportional to the O2 partial pressure at the membrane. As long as cabinet temperature is stable, and provided that air flow across the fuel cell is stable and within range, the response will be 0.01% or better depending on supporting electronics, software, and other considerations. Finally, a computer data acquisition and control system would be a typical addition to complete the system. Instead of a chart recorder, continuous records of oxygen consumption and or carbon dioxide production are made with the assistance of an analog-to-digital converter coupled to a computer. Software captures, filters, converts, and displays the signal as appropriate to the experimenter's needs. A variety of companies and individuals service the respirometry community (e.g., Sable Systems, Qubit Systems, see also Warthog Systems). Mitochondrial metabolic rates Inside the body oxygen is delivered to cells and in the cells to mitochondria, where it is consumed in the process generating most of the energy required by the organism. Mitochondrial respirometry measures the consumption of oxygen by the mitochondria without involving an entire living animal and is the main tool to study mitochondrial function. Three different types of samples may be subjected to such respirometric studies: isolated mitochondria (from cell cultures, animals or plants); permeabilized cells (from cell cultures); and permeabilized fibers or tissues (from animals). In the latter two cases the cellular membrane is made permeable by the addition of chemicals leaving selectively the mitochondrial membrane intact. Therefore, chemicals that usually would not be able to cross the cell membrane can directly influence the mitochondria. By the permeabilization of the cellular membrane, the cell stops to exist as a living, defined organism, leaving only the mitochondria as still functional structures. Unlike whole-animal respirometry, mitochondrial respirometry takes place in solution, i.e. the sample is suspended in a medium. Today mitochondrial respirometry is mainly performed with a closed-chamber approach. Closed-chamber system The sample suspended in a suitable medium is placed in a hermetically closed metabolic chamber. The mitochondria are brought into defined “states” by the sequential addition of substrates or inhibitors. Since the mitochondria consume oxygen, the oxygen concentration drops. This change of oxygen concentration is recorded by an oxygen sensor in the chamber. From the rate of the oxygen decline (taking into account correction for oxygen diffusion) the respiratory rate of the mitochondria can be computed. Applications Basic research The functioning of mitochondria is studied in the field of bioenergetics. 
Functional differences between mitochondria from different species are studied by respirometry as an aspect of comparative physiology. Applied research Mitochondrial respirometry is used to study mitochondrial functionality in mitochondrial diseases or diseases with a (suspected) strong link to mitochondria, e.g. diabetes mellitus type 2, obesity and cancer. Other fields of application are e.g. sports science and the connection between mitochondrial function and aging. Equipment The usual equipment includes a seal-able metabolic chamber, an oxygen sensor, and devices for data recording, stirring, thermostatisation and a way to introduce chemicals into the chamber. As described above for whole-animal respirometry the choice of materials is very important. Plastic materials are not suitable for the chamber because of their oxygen storage capacity. When plastic materials are unavoidable (e.g. for o-rings, coatings of stirrers, or stoppers) polymers with a very low oxygen permeability (like PVDF as opposed to e.g. PTFE) may be used. Remaining oxygen diffusion into or out of the chamber materials can be handled by correcting the measured oxygen fluxes for the instrumental oxygen background flux. The entire instrument comprising the mentioned components is often called an oxygraph. The companies providing equipment for whole-animal rspirometry mentioned above are usually not involved in mitochondrial respiromety. The community is serviced at widely varying levels of price and sophistication by companies like Oroboros Instruments, Hansatech, Respirometer Systems & Applications, YSI Life Sciences or Strathkelvin Instruments . See also Basal metabolic rate Calorimetry Metabolism Respirometer VO2max References External links Eco-environment Technology AEI Technologies AquaResp – Aquatic Respirometry Freeware Challenge Technology Loligo Systems Oroboros Instruments Qubit Systems RSA Sable Systems Seahorse Bioscience Strathkelvin Instruments Warthog systems ECHO Respirometry System Metabolism
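As a rough sketch of how a closed-chamber oxygen decline is turned into a mitochondrial respiration rate (as described in the closed-chamber section above), the script below fits a straight line to a recorded O2 concentration trace and normalizes the slope by chamber volume and protein content. The trace, chamber volume and protein amount are invented placeholder values, and no instrument-specific background correction is included.

```python
# Minimal sketch: respiration rate from a closed-chamber O2 trace.
# All numbers below are made-up illustrations, not values from the article.
import numpy as np

time_s = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)                 # s
o2_conc = np.array([210.0, 207.9, 205.8, 203.6, 201.5, 199.4, 197.2])          # nmol O2 / mL

chamber_ml = 2.0       # closed-chamber volume
protein_mg = 0.1       # mitochondrial protein in the chamber

slope, _ = np.polyfit(time_s, o2_conc, 1)        # nmol O2 / mL / s (negative while respiring)
flux = -slope * chamber_ml / protein_mg          # nmol O2 / s / mg protein

print(f"O2 decline:  {slope:.4f} nmol/mL/s")
print(f"respiration: {flux:.2f} nmol O2 / s / mg protein")
```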
Respirometry
[ "Chemistry", "Biology" ]
4,307
[ "Biochemistry", "Metabolism", "Cellular processes" ]
43,042,198
https://en.wikipedia.org/wiki/Tony%20Orchard
Anthony "Tony" Frederick Orchard (13 March 1941 – 19 August 2005) was a pioneer of inorganic chemistry. His research contributed to laying the foundations of much modern consumer electronic technology. Tony Orchard was born in Carmarthen, Wales, and moved to Swansea. He studied Chemistry first at Wadham College, Oxford as an undergraduate and then towards a DPhil doctoral degree in theoretical inorganic chemistry at Merton College, Oxford. He left Merton College before he had completed his doctorate at the age of 26 to become a Fellow in Inorganic Chemistry at University College in Oxford. He stayed at University College until his death. During the 1970s, Orchard led a group of researchers working in the area of photoelectron spectroscopy. This enabled scientists to examine the electronic structure of materials. The research was important for technological innovations in modern electronics, helping with the development of advances such as the personal computer and mobile phone. He published the book Magnetochemistry in 2003. As well as his research contributions, Orchard also helped to improve the system of undergraduate applications for chemistry at Oxford University. Personal life Tony Orchard was an amateur sportsman, playing tennis and snooker. At an early age, he won snooker games with the later world champions Terry Griffiths and Ray Reardon. Orchard's friends included former United States president Bill Clinton, who he met during the 1960s when Clinton was studying at University College as a Rhodes Scholar. Orchard was married to his wife Jeanne and later divorced, with two sons and two daughters. He died aged 64 of colon cancer. References 1941 births 2005 deaths People from Carmarthen Alumni of Wadham College, Oxford Alumni of Merton College, Oxford Photochemists Spectroscopists Inorganic chemists Theoretical chemists British chemists Fellows of University College, Oxford Deaths from colorectal cancer in England
Tony Orchard
[ "Physics", "Chemistry" ]
360
[ "British inorganic chemists", "Quantum chemistry", "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Inorganic chemists", "Theoretical chemistry", "Photochemists", "Spectroscopists", "Theoretical chemists", "Spectroscopy" ]
43,043,614
https://en.wikipedia.org/wiki/CastAR
castAR (formerly Technical Illusions) was a Palo Alto–based technology startup company founded in March 2013 by Jeri Ellsworth and Rick Johnson. Its first product was to be the castAR, a pair of augmented reality and virtual reality glasses. castAR was a founding member of the nonprofit Immersive Technology Alliance. History castAR was founded by two former Valve employees; the castAR glasses were born out of work that started inside Valve. While still at Valve, their team had spent over a year working on the project. They obtained legal ownership of their work after their departure. In August 2015, Playground Global funded $15 million into castAR to build its product and create augmented-reality experiences. In August 2016, Darrell Rodriguez, former President of LucasArts, joined as the new CEO. In addition, Steve Parkis became President and COO, after leading teams at The Walt Disney Company and Zynga. In September 2016, they opened castAR Salt Lake City, a new development studio formed from a team hired out of the former Avalanche Software, which worked on the Disney Infinity series. In October 2016, they announced the acquisition of Eat Sleep Play, the developer best known for Twisted Metal, also in Salt Lake City, UT. In December 2016, Parkis, who had been President and COO, was named CEO to replace Rodriguez. In June 2017, it was reported by Polygon that CastAR was shutting down, laying off 70 employees. A core group of administrators was expected to remain, to sell off the company's technology. In September 2019 Jeri Ellsworth initiated a Kickstarter for a new device based on the same principles called Tilt Five. The company uses CastAR technology acquired from the former startup and is founded by CastAR alumni Jeri Ellsworth, Amy Herndon, Jamie Gennis, and Anthony Aquilio castAR The castAR glasses combine elements of augmented reality and virtual reality. After winning Educator's and Editor's Choice ribbons at the 2013 Bay Area Maker Faire, the castAR project was successfully crowdfunded via Kickstarter. castAR surpassed its funding goal two days after the project went live, and raised over $1 million on a $400,000 goal. castAR creates transparent stereoscopic images unique to each user by sending an image from tiny projectors on the glasses into the user's surroundings using a technology that Technical Illusions called "Projected Reality". The image bounces off a retro-reflective surface back to the wearer's eyes. castAR can also be used for virtual reality purposes, using its VR clip-on. Before the time of the 2017 company shutdown all Kickstarter funds had been paid back to the original backers. Along with the repayment, a coupon for a free set of the production AR glasses was given to each backer. This happened at the time of the 2015 Playground Global investment. See also Augmented reality Display technology Smartglasses References External links Jeri Ellsworth on the demise of CastAR Augmented reality Companies based in Palo Alto, California Display technology Eyewear companies of the United States Kickstarter-funded products Video game companies based in Utah Virtual reality companies
CastAR
[ "Engineering" ]
650
[ "Electronic engineering", "Display technology" ]
43,045,000
https://en.wikipedia.org/wiki/Building%20performance%20simulation
Building performance simulation (BPS) is the replication of aspects of building performance using a computer-based, mathematical model created on the basis of fundamental physical principles and sound engineering practice. The objective of building performance simulation is the quantification of aspects of building performance which are relevant to the design, construction, operation and control of buildings. Building performance simulation has various sub-domains; most prominent are thermal simulation, lighting simulation, acoustical simulation and air flow simulation. Most building performance simulation is based on the use of bespoke simulation software. Building performance simulation itself is a field within the wider realm of scientific computing. Introduction From a physical point of view, a building is a very complex system, influenced by a wide range of parameters. A simulation model is an abstraction of the real building which makes it possible to consider these influences at a high level of detail and to analyze key performance indicators without cost-intensive measurements. BPS is a technology of considerable potential that provides the ability to quantify and compare the relative cost and performance attributes of a proposed design in a realistic manner and at relatively low effort and cost. Energy demand, indoor environmental quality (incl. thermal and visual comfort, indoor air quality and moisture phenomena), HVAC and renewable system performance, urban-level modeling, building automation, and operational optimization are important aspects of BPS. Over the last six decades, numerous BPS computer programs have been developed. The most comprehensive listing of BPS software can be found in the BEST directory. Some of them only cover certain parts of BPS (e.g. climate analysis, thermal comfort, energy calculations, plant modeling, daylight simulation etc.). The core tools in the field of BPS are multi-domain, dynamic, whole-building simulation tools, which provide users with key indicators such as heating and cooling load, energy demand, temperature trends, humidity, thermal and visual comfort indicators, air pollutants, ecological impact and costs. A typical building simulation model has inputs for local weather, such as a Typical Meteorological Year (TMY) file; building geometry; building envelope characteristics; internal heat gains from lighting, occupants and equipment loads; heating, ventilation, and cooling (HVAC) system specifications; operation schedules; and control strategies. The ease of input and accessibility of output data vary widely between BPS tools. Advanced whole-building simulation tools are able to consider almost all of the following in some way, with different approaches. 
Necessary input data for a whole-building simulation: Climate: ambient air temperature, relative humidity, direct and diffuse solar radiation, wind speed and direction Site: location and orientation of the building, shading by topography and surrounding buildings, ground properties Geometry: building shape and zone geometry Envelope: materials and constructions, windows and shading, thermal bridges, infiltration and openings Internal gains: lights, equipment and occupants including schedules for operation/occupancy Ventilation system: transport and conditioning (heating, cooling, humidification) of air Room units: local units for heating, cooling and ventilation Plant: Central units for transformation, storage and delivery of energy to the building Controls: for window opening, shading devices, ventilation systems, room units, plant components Some examples for key performance indicators: Temperature trends: in zones, on surfaces, in construction layers, for hot or cold water supply or in double glass facades Comfort indicators: like PMV and PPD, radiant temperature asymmetry, CO2-concentration, relative humidity Heat balances: for zones, the whole building or single plant components Load profiles: for heating and cooling demand, electricity profile for equipment and lighting Energy demand: for heating, cooling, ventilation, light, equipment, auxiliary systems (e.g. pumps, fans, elevators) Daylight availability: in certain zone areas, at different time points with variable outside conditions Other use of BPS software System sizing: for HVAC components like air handling units, heat exchanger, boiler, chiller, water storage tanks, heat pumps and renewable energy systems. Optimizing control strategies: Controller setup for shading, window opening, heating, cooling and ventilation for increased operation performance. History The history of BPS is approximately as long as that of computers. The very early developments in this direction started in the late 1950s and early 1960s in the United States and Sweden. During this period, several methods had been introduced for analyzing single system components (e.g. gas boiler) using steady state calculations. The very first reported simulation tool for buildings was BRIS, introduced in 1963 by the Royal Institute of Technology in Stockholm. Until the late 1960s, several models with hourly resolution had been developed focusing on energy assessments and heating/cooling load calculations. This effort resulted in more powerful simulation engines released in the early 1970s, among those were BLAST, DOE-2, ESP-r, HVACSIM+ and TRNSYS. In the United States, the 1970s energy crisis intensified these efforts, as reducing the energy consumption of buildings became an urgent domestic policy interest. The energy crisis also initiated development of U.S. building energy standards, beginning with ASHRAE 90-75. The development of building simulation represents a combined effort between academia, governmental institutions, industry, and professional organizations. Over the past decades the building simulation discipline has matured into a field that offers unique expertise, methods and tools for building performance evaluation. Several review papers and state of the art analysis were carried out during that time giving an overview about the development. In the 1980s, a discussion about future directions for BPS among a group of leading building simulation specialists started. 
There was a consensus that most of the tools, that had been developed until then, were too rigid in their structure to be able to accommodate the improvements and flexibility that would be called for in the future. Around this time, the very first equation-based building simulation environment ENET was developed, which provided the foundation of SPARK. In 1989, Sahlin and Sowell presented a Neutral Model Format (NMF) for building simulation models, which is used today in the commercial software IDA ICE. Four years later, Klein introduced the Engineering Equation Solver (EES) and in 1997, Mattsson and Elmqvist reported on an international effort to design Modelica. BPS still presents challenges relating to problem representation, support for performance appraisal, enabling operational application, and delivering user education, training, and accreditation. Clarke (2015) describes a future vision of BPS with the following, most important tasks which should be addressed by the global BPS community. Better concept promotion Standardization of input data and accessibility of model libraries Standard performance assessment procedures Better embedding of BPS in practice Operational support and fault diagnosis with BPS Education, training, and user accreditation Accuracy In the context of building simulation models, error refers to the discrepancy between simulation results and the actual measured performance of the building. There are normally occurring uncertainties in building design and building assessment, which generally stem from approximations in model inputs, such as occupancy behavior. Calibration refers to the process of "tuning" or adjusting assumed simulation model inputs to match observed data from the utilities or Building Management System (BMS). The number of publications dealing with accuracy in building modeling and simulation increased significantly over the past decade. Many papers report large gaps between simulation results and measurements, while other studies show that they can match very well. The reliability of results from BPS depends on many different things, e.g. on the quality of input data, the competence of the simulation engineers and on the applied methods in the simulation engine. An overview about possible causes for the widely discussed performance gap from design stage to operation is given by de Wilde (2014) and a progress report by the Zero Carbon Hub (2013). Both conclude the factors mentioned above as the main uncertainties in BPS. ASHRAE Standard 140-2017 "Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs (ANSI Approved)" provides a method to validate the technical capability and range of applicability of computer programs to calculate thermal performance. ASHRAE Guideline 4-2014 provides performance indices criteria for model calibration. The performance indices used are normalized mean bias error (NMBE), coefficient of variation (CV) of the root mean square error (RMSE), and R2 (coefficient of determination). ASHRAE recommends a R2 greater than 0.75 for calibrated models. The criteria for NMBE and CV RMSE depends on if measured data is available at a monthly or hourly timescale. Technological aspects Given the complexity of building energy and mass flows, it is generally not possible to find an analytical solution, so the simulation software employs other techniques, such as response function methods, or numerical methods in finite differences or finite volume, as an approximation. 
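A minimal sketch of the calibration indices discussed above (NMBE, CV(RMSE) and R2, as used with ASHRAE Guideline 14-style calibration) is given below; the twelve monthly "measured" and "simulated" energy values are invented for illustration, and the n − 1 degrees-of-freedom convention is assumed.

```python
# Minimal sketch of the calibration indices discussed above (NMBE, CV(RMSE), R2).
# The monthly energy values are illustrative placeholders, not real project data.
import numpy as np

measured  = np.array([310, 280, 250, 200, 160, 140, 150, 155, 170, 210, 260, 300], float)
simulated = np.array([298, 290, 240, 207, 168, 131, 146, 160, 163, 220, 251, 305], float)

n = len(measured)
mean_m = measured.mean()
resid = measured - simulated

nmbe = resid.sum() / ((n - 1) * mean_m) * 100                    # normalized mean bias error, %
cv_rmse = np.sqrt((resid ** 2).sum() / (n - 1)) / mean_m * 100   # CV of the RMSE, %
r2 = 1 - (resid ** 2).sum() / ((measured - mean_m) ** 2).sum()   # coefficient of determination

print(f"NMBE     = {nmbe:6.2f} %")
print(f"CV(RMSE) = {cv_rmse:6.2f} %")
print(f"R2       = {r2:6.3f}   # ASHRAE recommends R2 > 0.75 for calibrated models")
```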
Most of today's whole-building simulation programs formulate models using imperative programming languages. These languages assign values to variables, declare the sequence of execution of these assignments and change the state of the program, as is done for example in C/C++, Fortran or MATLAB/Simulink. In such programs, model equations are tightly connected to the solution methods, often by making the solution procedure part of the actual model equations. The use of imperative programming languages limits the applicability and extensibility of models. Simulation engines that use symbolic Differential Algebraic Equations (DAEs) with general-purpose solvers offer more flexibility and increase model reuse, transparency and accuracy. Since some of these engines have been developed for more than 20 years (e.g. IDA ICE) and due to the key advantages of equation-based modeling, these simulation engines can be considered state-of-the-art technology. Applications Building simulation models may be developed for both new and existing buildings. Major use categories of building performance simulation include: Architectural design: quantitatively compare design or retrofit options in order to inform a more energy-efficient building design HVAC design: calculate thermal loads for sizing of mechanical equipment and help design and test system control strategies Building performance rating: demonstrate performance-based compliance with energy codes, green certification, and financial incentives Building stock analysis: support development of energy codes and standards and plan large-scale energy efficiency programs CFD in buildings: simulate boundary conditions such as surface heat fluxes and surface temperatures for a subsequent CFD study Software tools There are hundreds of software tools available for simulating the performance of buildings and building subsystems, which range in capability from whole-building simulations to model input calibration to building auditing. Among whole-building simulation software tools, it is important to draw a distinction between the simulation engine, which dynamically solves equations rooted in thermodynamics and building science, and the modeler application (interface). In general, BPS software can be classified into applications with an integrated simulation engine (e.g. EnergyPlus, ESP-r, TAS, IES-VE, IDA ICE), software that docks to a certain engine (e.g. DesignBuilder, eQuest, RIUSKA, Sefaira), and plugins for other software enabling certain performance analyses (e.g. DIVA for Rhino, Honeybee, Autodesk Green Building Studio). In practice, some tools do not meet these sharp classification criteria, such as ESP-r, which can also be used as a modeler application for EnergyPlus, and there are also other applications using the IDA simulation environment, which makes "IDA" the engine and "ICE" the modeler. Most modeler applications support the user with a graphical user interface to make data input easier. The modeler creates an input file for the simulation engine to solve. The engine returns output data to the modeler application or another visualization tool, which in turn presents the results to the user. For some software packages, the calculation engine and the interface may be the same product. 
BPS in practice Since the 1990s, building performance simulation has undergone the transition from a method used mainly for research to a design tool for mainstream industrial projects. However, the use in different countries still varies greatly. Building certification programs like LEED (USA), BREEAM (UK) or DGNB (Germany) showed to be a good driving force for BPS to find broader application. Also, national building standards that allow BPS based analysis are of good help for an increasing industrial adoption, such as in the United States (ASHRAE 90.1), Sweden (BBR), Switzerland (SIA) and the United Kingdom (NCM). The Swedish building regulations are unique in that computed energy use has to be verified by measurements within the first two years of building operation. Since the introduction in 2007, experience shows that highly detailed simulation models are preferred by modelers to reliably achieve the required level of accuracy. Furthermore, this has fostered a simulation culture where the design predictions are close to the actual performance. This in turn has led to offers of formal energy guarantees based on simulated predictions, highlighting the general business potential of BPS. Performance-based compliance In a performance-based approach, compliance with building codes or standards is based on the predicted energy use from a building simulation, rather than a prescriptive approach, which requires adherence to stipulated technologies or design features. Performance-based compliance provides greater flexibility in the building design as it allows designers to miss some prescriptive requirements if the impact on building performance can be offset by exceeding other prescriptive requirements. The certifying agency provides details on model inputs, software specifications, and performance requirements. The following is a list of U.S. based energy codes and standards that reference building simulations to demonstrate compliance: ASHRAE 90.1 International Energy Conservation Code (IECC) Leadership in Energy and Environmental Design (LEED) Green Globes California Title 24 EnergyStar Multifamily High rise Program Passive House Institute US (PHIUS) Living Building Challenge Professional associations and certifications Professional associations International Building Performance Simulation Association (IBPSA) American Society of Heating, Refrigerating, and Air-conditioning Engineers (ASHRAE) Certifications BEMP - Building Energy Modeling Professional, administered by ASHRAE BESA - Certified Building Energy Simulation Analyst, administered by AEE See also Energy modeling Computer simulation Energy signature References External links Bldg-sim mailing list for building simulation professionals: http://lists.onebuilding.org/listinfo.cgi/bldg-sim-onebuilding.org Simulation modeling instruction and discussion: http://energy-models.com/forum Architecture Building engineering Energy conservation Low-energy building
Building performance simulation
[ "Engineering" ]
2,986
[ "Building engineering", "Civil engineering", "Construction", "Architecture" ]
43,046,237
https://en.wikipedia.org/wiki/Fallypride
Fallypride is a high affinity dopamine D2/D3 receptor antagonist used in medical research, usually in the form of fallypride (18F) as a positron emission tomography (PET) radiotracer in human studies. References External links ChemSpider Typical antipsychotics Benzamides Pyrrolidines Phenol ethers D2 antagonists Radiopharmaceuticals Allylamines
Fallypride
[ "Chemistry" ]
93
[ "Chemicals in medicine", "Radiopharmaceuticals", "Medicinal radiochemistry" ]
43,047,182
https://en.wikipedia.org/wiki/Desmethoxyfallypride
Desmethoxyfallypride is a moderate affinity dopamine D2 receptor/D3 receptor antagonist used in medical research, usually in the form of the radiopharmaceutical [F-18]-desmethoxyfallypride (DMFP(18F)) which has been used in human studies as a positron emission tomography (PET) radiotracer. References Typical antipsychotics Salicylamide ethers Pyrrolidines D2 antagonists Organofluorides Radiopharmaceuticals Allyl compounds
Desmethoxyfallypride
[ "Chemistry" ]
121
[ "Chemicals in medicine", "Radiopharmaceuticals", "Medicinal radiochemistry" ]
40,116,145
https://en.wikipedia.org/wiki/Weak%20trace-class%20operator
In mathematics, a weak trace class operator is a compact operator on a separable Hilbert space H whose singular values are of the same order as the harmonic sequence. When the dimension of H is infinite, the ideal of weak trace-class operators is strictly larger than the ideal of trace class operators, and has fundamentally different properties. The usual operator trace on the trace-class operators does not extend to the weak trace class. Instead the ideal of weak trace-class operators admits an infinite number of linearly independent quasi-continuous traces, and it is the smallest two-sided ideal for which all traces on it are singular traces. Weak trace-class operators feature in the noncommutative geometry of French mathematician Alain Connes. Definition A compact operator A on an infinite dimensional separable Hilbert space H is weak trace class if μ(n,A) = O(n−1), where μ(A) is the sequence of singular values. In mathematical notation, the two-sided ideal of all weak trace-class operators is denoted L1,∞ = {A ∈ K(H) : μ(n,A) = O(n−1)}, where K(H) denotes the compact operators. The term weak trace-class, or weak-L1, is used because the operator ideal corresponds, in J. W. Calkin's correspondence between two-sided ideals of bounded linear operators and rearrangement invariant sequence spaces, to the weak-l1 sequence space. Properties The weak trace-class operators admit a quasi-norm defined by ||A||1,∞ = supn≥0 (n+1)μ(n,A), making L1,∞ a quasi-Banach operator ideal, that is, an ideal that is also a quasi-Banach space. See also Lp space Spectral triple Singular trace Dixmier trace References Operator algebras Hilbert spaces Von Neumann algebras
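The following sketch illustrates the definition numerically, assuming the usual weak-L1 quasi-norm supn (n+1)·μ(n,A); the matrix is a finite stand-in for a truncation of a compact operator, and the harmonic decay of its singular values is an assumed example.

```python
# Numerical illustration of weak trace-class behaviour on a finite matrix.
# Assumes the usual quasi-norm ||A||_{1,inf} = sup_n (n+1) * mu(n, A),
# where mu(n, A) is the n-th singular value in decreasing order (n = 0, 1, ...).
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Build an operator whose singular values decay like the harmonic sequence.
u, _ = np.linalg.qr(rng.standard_normal((n, n)))
v, _ = np.linalg.qr(rng.standard_normal((n, n)))
mu = 1.0 / np.arange(1, n + 1)            # target singular values ~ 1/(n+1)
A = u @ np.diag(mu) @ v.T

sv = np.linalg.svd(A, compute_uv=False)   # singular values, decreasing order
quasi_norm = np.max((np.arange(len(sv)) + 1) * sv)
print(f"sup_n (n+1) mu(n, A) = {quasi_norm:.4f}")   # close to 1 for harmonic decay
```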
Weak trace-class operator
[ "Physics" ]
333
[ "Hilbert spaces", "Quantum mechanics" ]
40,116,557
https://en.wikipedia.org/wiki/Calkin%20correspondence
In mathematics, the Calkin correspondence, named after mathematician John Williams Calkin, is a bijective correspondence between two-sided ideals of bounded linear operators of a separable infinite-dimensional Hilbert space and Calkin sequence spaces (also called rearrangement invariant sequence spaces). The correspondence is implemented by mapping an operator to its singular value sequence. It originated from John von Neumann's study of symmetric norms on matrix algebras. It provides a fundamental classification and tool for the study of two-sided ideals of compact operators and their traces, by reducing problems about operator spaces to (more resolvable) problems on sequence spaces. Definitions A two-sided ideal J of the bounded linear operators B(H) on a separable Hilbert space H is a linear subspace such that AB and BA belong to J for all operators A from J and B from B(H). A sequence space j within l∞ can be embedded in B(H) using an arbitrary orthonormal basis {en }n=0∞. Associate to a sequence a from j the bounded operator diag(a) = Σn an |en⟩⟨en|, where bra–ket notation has been used for the one-dimensional projections onto the subspaces spanned by individual basis vectors. The sequence of absolute values of the entries of a in decreasing order is called the decreasing rearrangement of a. The decreasing rearrangement can be denoted μ(n,a), n = 0, 1, 2, ... Note that it is identical to the singular values of the operator diag(a). Another notation for the decreasing rearrangement is a*. A Calkin (or rearrangement invariant) sequence space is a linear subspace j of the bounded sequences l∞ such that if a is a bounded sequence and μ(n,a) ≤ μ(n,b), n = 0, 1, 2, ..., for some b in j, then a belongs to j. Correspondence Associate to a two-sided ideal J the sequence space j given by j = {a ∈ l∞ : diag(a) ∈ J}. Associate to a sequence space j the two-sided ideal J given by J = {A ∈ B(H) : μ(A) ∈ j}. Here μ(A) and μ(a) are the singular values of the operators A and diag(a), respectively. Calkin's Theorem states that the two maps are inverse to each other. We obtain the Calkin correspondence: The two-sided ideals of bounded operators on an infinite dimensional separable Hilbert space and the Calkin sequence spaces are in bijective correspondence. It is sufficient to know the association only between positive operators and positive sequences, hence the map μ: J+ → j+ from a positive operator to its singular values implements the Calkin correspondence. Another way of interpreting the Calkin correspondence, since the sequence space j is equivalent as a Banach space to the operators in the operator ideal J that are diagonal with respect to an arbitrary orthonormal basis, is that two-sided ideals are completely determined by their diagonal operators. Examples Suppose H is a separable infinite-dimensional Hilbert space. Bounded operators. The improper two-sided ideal B(H) corresponds to l∞. Compact operators. The proper and norm closed two-sided ideal K(H) corresponds to c0, the space of sequences converging to zero. Finite rank operators. The smallest two-sided ideal F(H) of finite rank operators corresponds to c00, the space of sequences with finitely many non-zero terms. Schatten p-ideals. The Schatten p-ideals Lp, p ≥ 1, correspond to the lp sequence spaces. In particular, the trace class operators correspond to l1 and the Hilbert-Schmidt operators correspond to l2. Weak-Lp ideals. The weak-Lp ideals Lp,∞, p ≥ 1, correspond to the weak-lp sequence spaces. Lorentz ψ-ideals. 
The Lorentz ψ-ideals for an increasing concave function ψ : [0,∞) → [0,∞) correspond to the Lorentz sequence spaces. Notes References Operator algebras Hilbert spaces Von Neumann algebras
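A small numerical sketch of the objects above: a bounded sequence, its decreasing rearrangement μ(a), and the diagonal operator diag(a) built from the standard basis. The particular sequence is an arbitrary assumed example; the point is only that the singular values of diag(a) coincide with μ(a).

```python
# Decreasing rearrangement and the embedding a -> diag(a) on finite data.
import numpy as np

a = np.array([0.1, -0.7, 0.3, 0.05, -0.2])

# Decreasing rearrangement: absolute values sorted in decreasing order.
mu_a = np.sort(np.abs(a))[::-1]

# diag(a) = sum_n a_n |e_n><e_n| with respect to the standard basis.
diag_a = np.diag(a)

# The singular values of diag(a) are exactly mu(a).
sv = np.linalg.svd(diag_a, compute_uv=False)
print("mu(a)           :", mu_a)
print("singular values :", sv)
assert np.allclose(mu_a, sv)
```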
Calkin correspondence
[ "Physics" ]
823
[ "Hilbert spaces", "Quantum mechanics" ]
40,117,035
https://en.wikipedia.org/wiki/Commutator%20subspace
In mathematics, the commutator subspace of a two-sided ideal of bounded linear operators on a separable Hilbert space is the linear subspace spanned by commutators of operators in the ideal with bounded operators. Modern characterisation of the commutator subspace is through the Calkin correspondence and it involves the invariance of the Calkin sequence space of an operator ideal to taking Cesàro means. This explicit spectral characterisation reduces problems and questions about commutators and traces on two-sided ideals to (more resolvable) problems and conditions on sequence spaces. History Commutators of linear operators on Hilbert spaces came to prominence in the 1930s as they featured in the matrix mechanics, or Heisenberg, formulation of quantum mechanics. Commutator subspaces, though, received sparse attention until the 1970s. American mathematician Paul Halmos in 1954 showed that every bounded operator on a separable infinite dimensional Hilbert space is the sum of two commutators of bounded operators. In 1971 Carl Pearcy and David Topping revisited the topic and studied commutator subspaces for Schatten ideals. As a student American mathematician Gary Weiss began to investigate spectral conditions for commutators of Hilbert–Schmidt operators. British mathematician Nigel Kalton, noticing the spectral condition of Weiss, characterised all trace class commutators. Kalton's result forms the basis for the modern characterisation of the commutator subspace. In 2004 Ken Dykema, Tadeusz Figiel, Gary Weiss and Mariusz Wodzicki published the spectral characterisation of normal operators in the commutator subspace for every two-sided ideal of compact operators. Definition The commutator subspace of a two-sided ideal J of the bounded linear operators B(H) on a separable Hilbert space H is the linear span of operators in J of the form [A,B] = AB − BA for all operators A from J and B from B(H). The commutator subspace of J is a linear subspace of J denoted by Com(J) or [B(H),J]. Spectral characterisation The Calkin correspondence states that a compact operator A belongs to a two-sided ideal J if and only if the singular values μ(A) of A belong to the Calkin sequence space j associated to J. Normal operators that belong to the commutator subspace Com(J) can be characterised as those A such that μ(A) belongs to j and the Cesàro mean of the sequence μ(A) belongs to j. The following theorem is a slight extension to differences of normal operators (setting B = 0 in the following gives the statement of the previous sentence). Theorem. Suppose A,B are compact normal operators that belong to a two-sided ideal J. Then A − B belongs to the commutator subspace Com(J) if and only if the sequence of Cesàro means (1/(n+1)) Σk≤n (μ(k,A) − μ(k,B)), n = 0, 1, 2, ..., belongs to j, where j is the Calkin sequence space corresponding to J and μ(A), μ(B) are the singular values of A and B, respectively. Provided that the eigenvalue sequences of all operators in J belong to the Calkin sequence space j there is a spectral characterisation for arbitrary (non-normal) operators. It is not valid for every two-sided ideal but necessary and sufficient conditions are known. Nigel Kalton and American mathematician Ken Dykema introduced the condition first for countably generated ideals. Uzbek and Australian mathematicians Fedor Sukochev and Dmitriy Zanin completed the eigenvalue characterisation. Theorem. 
Suppose J is a two-sided ideal such that a bounded operator A belongs to J whenever there is a bounded operator B in J satisfying the condition (). If the bounded operators A and B belong to J, then A − B belongs to the commutator subspace Com(J) if and only if the sequence of Cesàro means (1/(n+1)) Σk≤n (λ(k,A) − λ(k,B)), n = 0, 1, 2, ..., belongs to j, where j is the Calkin sequence space corresponding to J and λ(A), λ(B) are the sequences of eigenvalues of the operators A and B, respectively, rearranged so that the absolute value of the eigenvalues is decreasing. Most two-sided ideals satisfy the condition in the Theorem, including all Banach ideals and quasi-Banach ideals. Consequences of the characterisation Every operator in J is a sum of commutators if and only if the corresponding Calkin sequence space j is invariant under taking Cesàro means. In symbols, Com(J) = J is equivalent to C(j) = j, where C denotes the Cesàro operator on sequences. In any two-sided ideal the difference between a positive operator and its diagonalisation is a sum of commutators. That is, A − diag(μ(A)) belongs to Com(J) for every positive operator A in J where diag(μ(A)) is the diagonalisation of A in an arbitrary orthonormal basis of the separable Hilbert space H. In any two-sided ideal satisfying () the difference between an arbitrary operator and its diagonalisation is a sum of commutators. That is, A − diag(λ(A)) belongs to Com(J) for every operator A in J where diag(λ(A)) is the diagonalisation of A in an arbitrary orthonormal basis of the separable Hilbert space H and λ(A) is an eigenvalue sequence. Every quasi-nilpotent operator in a two-sided ideal satisfying () is a sum of commutators. Application to traces A trace φ on a two-sided ideal J of B(H) is a linear functional φ : J → ℂ that vanishes on Com(J). The consequences above imply The two-sided ideal J has a non-zero trace if and only if C(j) ≠ j. φ(A) = φ ∘ diag(μ(A)) for every positive operator A in J where diag(μ(A)) is the diagonalisation of A in an arbitrary orthonormal basis of the separable Hilbert space H. That is, traces on J are in direct correspondence with symmetric functionals on j. In any two-sided ideal satisfying (), φ(A) = φ ∘ diag(λ(A)) for every operator A in J where diag(λ(A)) is the diagonalisation of A in an arbitrary orthonormal basis of the separable Hilbert space H and λ(A) is an eigenvalue sequence. In any two-sided ideal satisfying (), φ(Q) = 0 for every quasi-nilpotent operator Q from J and every trace φ on J. Examples Suppose H is a separable infinite dimensional Hilbert space. Compact operators. The compact linear operators K(H) correspond to the space of sequences converging to zero, c0. For a sequence converging to zero the Cesàro means also converge to zero. Therefore, C(c0) = c0 and Com(K(H)) = K(H). Finite rank operators. The finite rank operators F(H) correspond to the space of sequences with finitely many non-zero terms, c00. For a sequence (a1, a2, ... , aN, 0, 0 , ...) in c00 the Cesàro-mean condition holds if and only if a1 + a2 + ... + aN = 0. The kernel of the operator trace Tr on F(H) and the commutator subspace of the finite rank operators are equal, ker Tr = Com(F(H)) ⊊ F(H). Trace class operators. The trace class operators L1 correspond to the summable sequences. The condition that the Cesàro means be summable is stronger than the condition that a1 + a2 + ... = 0. An example is a sequence which has sum zero but does not have a summable sequence of Cesàro means. Hence Com(L1) ⊊ ker Tr ⊊ L1. Weak trace class operators. The weak trace class operators L1,∞ correspond to the weak-l1 sequence space. From the condition that the Cesàro means (1/(n+1)) Σk≤n μ(k,A) are O(n−1), or equivalently that the partial sums Σk≤n μ(k,A) are bounded, it is immediate that Com(L1,∞)+ = (L1)+. 
The commutator subspace of the weak trace class operators contains the trace class operators. The harmonic sequence 1,1/2,1/3,...,1/n,... belongs to l1,∞ and it has a divergent series, and therefore the Cesàro means of the harmonic sequence do not belong to l1,∞. In summary, L1 ⊊ Com(L1,∞) ⊊ L1,∞. Notes References Hilbert spaces Von Neumann algebras
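The weak trace-class example above can be checked numerically: the Cesàro means of the harmonic sequence behave like log(n)/n, so n·C(a)_n grows without bound and the Cesàro-mean sequence is not O(1/n). The snippet below is only a finite-length illustration of that asymptotic statement.

```python
# Cesaro means of the harmonic sequence 1, 1/2, 1/3, ...:
# C(a)_n = (a_1 + ... + a_n) / n ~ log(n) / n, so n * C(a)_n is unbounded.
import numpy as np

N = 10**6
a = 1.0 / np.arange(1, N + 1)                 # harmonic sequence
cesaro = np.cumsum(a) / np.arange(1, N + 1)   # Cesaro means

for n in (10, 10**3, 10**5, 10**6):
    print(f"n = {n:>7}:  n * C(a)_n = {n * cesaro[n - 1]:.3f}")  # grows like log(n)
```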
Commutator subspace
[ "Physics" ]
1,769
[ "Hilbert spaces", "Quantum mechanics" ]
40,119,423
https://en.wikipedia.org/wiki/Blaisdell%20Slow%20Sand%20Filter%20Washing%20Machine
The Blaisdell Slow Sand Filter Washing Machine at Yuma, Arizona is a device invented by Hiram W. Blaisdell to wash sand filters used in the treatment of drinking water. The machine was built in 1902 at Blaisdell's privately operated waterworks, which treated the muddy water of the Colorado River for local consumption. Blaisdell patented the device and marketed it throughout the United States. The Yuma filter is now on City of Yuma property, and has been preserved as the first of its kind. Description The Blaisdell machine traveled along steel tracks laid on top of the walls of rectangular filter basins, bridging the walls with its structure. The washing chamber was lowered from the moving bridge frame into the basin. The chamber measures about wide, deep and long, and contains a diameter circular washing unit. The washing unit stirred the surface of the sand bed, dislodging sediment and flushing it away through two suction pumps at the top of the box, avoiding contamination of the surrounding water. The mechanism was controlled by an operator in a corrugated metal enclosure. The Blaisdell machine was placed on the National Register of Historic Places on January 18, 1979. See also List of historic properties in Yuma, Arizona References External links Buildings and structures completed in 1902 Buildings and structures in Yuma, Arizona Historic American Engineering Record in Arizona Water treatment facilities National Register of Historic Places in Yuma County, Arizona Industrial equipment on the National Register of Historic Places Water supply infrastructure on the National Register of Historic Places 1902 establishments in Arizona Territory Sand
Blaisdell Slow Sand Filter Washing Machine
[ "Chemistry", "Engineering" ]
314
[ "Water treatment", "Water treatment facilities", "Industrial equipment on the National Register of Historic Places" ]
24,655,281
https://en.wikipedia.org/wiki/Euler%27s%20differential%20equation
In mathematics, Euler's differential equation is a first-order non-linear ordinary differential equation, named after Leonhard Euler. It is given by: dy/dx + √(a0 + a1y + a2y2 + a3y3 + a4y4) / √(a0 + a1x + a2x2 + a3x3 + a4x4) = 0. This is a separable equation and the solution is given by the following integral equation: ∫ dx/√(a0 + a1x + a2x2 + a3x3 + a4x4) + ∫ dy/√(a0 + a1y + a2y2 + a3y3 + a4y4) = C. References Eponymous equations of physics Mathematical physics Ordinary differential equations Leonhard Euler
Euler's differential equation
[ "Physics", "Mathematics" ]
66
[ "Equations of physics", "Applied mathematics", "Theoretical physics", "Eponymous equations of physics", "Mathematical physics" ]
24,655,745
https://en.wikipedia.org/wiki/C42H32O9
{{DISPLAYTITLE:C42H32O9}} The molecular formula C42H32O9 (molar mass: 680.69 g/mol, exact mass: 680.204633 u) may refer to: Miyabenol C, a resveratrol trimer Trans-Diptoindonesin B, an oligomeric stilbenoid Molecular formulas
C42H32O9
[ "Physics", "Chemistry" ]
83
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,661,015
https://en.wikipedia.org/wiki/C17H14O5
{{DISPLAYTITLE:C17H14O5}} The molecular formula C17H14O5 (molar mass: 298.29 g/mol, exact mass: 298.084125 u) may refer to: Fumarin, a coumarin derivative Pterocarpin, a pterocarpan Molecular formulas
C17H14O5
[ "Physics", "Chemistry" ]
73
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,662,110
https://en.wikipedia.org/wiki/Effective%20Polish%20space
In mathematical logic, an effective Polish space is a complete separable metric space that has a computable presentation. Such spaces are studied in effective descriptive set theory and in constructive analysis. In particular, standard examples of Polish spaces such as the real line, the Cantor set and the Baire space are all effective Polish spaces. Definition An effective Polish space is a complete separable metric space X with metric d such that there is a countable dense set C = (c0, c1,...) that makes the following two relations on computable (Moschovakis 2009:96-7): References Yiannis N. Moschovakis, 2009, Descriptive Set Theory, 2nd edition, American Mathematical Society. Computable analysis Effective descriptive set theory Computability theory
Effective Polish space
[ "Mathematics" ]
162
[ "Computability theory", "Mathematical logic stubs", "Mathematical logic" ]
24,662,403
https://en.wikipedia.org/wiki/Tertiapin
Tertiapin is a 21-amino acid peptide isolated from venom of the European honey bee (Apis mellifera). It blocks two different types of potassium channels, inward rectifier potassium channels (Kir) and calcium activated large conductance potassium channels (BK). Sources Tertiapin is a peptidic component of the venom of the European honey bee (Apis mellifera). Chemistry Tertiapin peptide is composed of 21 amino acids with the sequence: Ala-Leu-Cys-Asn-Cys-Asn-Arg-Ile-Ile-Ile-Pro-His-Met-Cys-Trp-Lys-Lys-Cys-Gly-Lys-Lys. The methionine residue is sensitive to oxidation, reducing the ability to block the ionic channels. Methionine can be substituted by glutamine in order to prevent the oxidation. The newly synthesized peptide is named Tertiapin-Q and does not show any functional change as compared to the original peptide, which makes it a more suitable research tool. Target and mode of action Tertiapin has been described as a potent potassium channel blocker, acting on two different types of K+ channels. Inward rectifier potassium channels Tertiapin binds specifically to different subunits of the inward rectifier potassium channel (Kir), namely GIRK1 (Kir 3.1), GIRK4 (Kir 3.4) and ROMK1 (Kir 1.1), inducing a dose-dependent block of the potassium current. It is thought that tertiapin binds to the Kir channel with its α-helix situated at the C-terminal of the peptide. This α-helix is plugged into the external end of the conduction pore, thereby blocking the channel. The N-terminal of the peptide sticks out of the extracellular side. Tertiapin has a high affinity for Kir channels with approximately Kd = 8 nM for GIRK1/4 channels and Kd = 2 nM for ROMK1 channels. In contrast to the voltage-gated K+ channels, Kir channels are more permeable to K+ during hyperpolarization than during depolarization. A voltage-dependent blockade by intracellular cations at voltages more positive than the K+ reversal potential is the mechanism underlying this feature. At more negative voltages the Kir channels are responsible for an inward K+ current. Therefore, Kir channels contribute to the maintenance of the resting potential, the duration of the action potential and the neuronal excitability. GIRK1 and -4 are subunits of the muscarinic potassium channels (KACh) and have an important role in the slowing down of the heart rate in response to parasympathetic stimulation via acetylcholine. KACh channels activate during hyperpolarization, prolonging the cardiac action potential by inflow of potassium ions and reducing the frequency of action potential generation. An inhibition by tertiapin will result in a shorter cardiac action potential with loss of parasympathetic control, resulting in a faster heart rate. ROMK is found in the kidneys where it contributes to K+ recycling. An inhibition will result in loss of potassium, as observed in Bartter syndrome, which can be caused by mutations in the ROMK channels. BK channels The second type of potassium channel that tertiapin blocks is the calcium activated large conductance potassium channel (BK). The block of BK channels is voltage-, concentration- and use-dependent, meaning the blockage changes with different stimulation voltages and frequencies, different concentrations and with the duration of application of tertiapin. The IC50 for BK channels is 5.8 nM. The BK channels have a role in the onset of the afterhyperpolarization, thereby shortening the action potential and enhancing the speed of repolarization. 
Total blockage by tertiapin prolongs the duration of the action potential and inhibits the afterhyperpolarization amplitude, leading to an increase of the neuronal excitability. Tertiapin inhibits the BK channels only after a minimal stimulation of 15 minutes, in contrast with less than a minute for the GIRK channels. For this reason it is thought that the mode of action of tertiapin is different for each channel type. Toxicity Tertiapin is a compound of the honey bee venom (apitoxin) that causes pain and signs of inflammation around the sting, but a great number of stings can be lethal (the LD50 is 18-22 stings per kg for humans). An anaphylactic shock can develop if a person has an allergy to the venom. In that case even one sting can be lethal. Therapeutic use Paradoxically, given the symptoms that follow a bee sting, bee venom is used for treatment of pain, inflammation (e.g. rheumatoid arthritis) and multiple sclerosis. Tertiapin may contribute to this effect by prolonging the depolarization phase by blocking the BK channels. Eventually this will lead to inactivation of the voltage-gated Na+ channels of the dorsal root ganglion neurons, reducing sensory transmission to the central nervous system. Excessive stimulation with acetylcholine can induce an AV-block in the heart, as shown in guinea pigs, which can be prevented by blockage of the KACh channels by tertiapin. This suggests a possible therapeutic role in excessive parasympathetic innervation or inferior myocardial infarction. References Ion channel toxins Neurotoxins
Tertiapin
[ "Chemistry" ]
1,163
[ "Neurochemistry", "Neurotoxins" ]
24,662,847
https://en.wikipedia.org/wiki/Discrepin
Discrepin (α-KTx15.6) is a peptide from the venom of the Venezuelan scorpion Tityus discrepans. It acts as a neurotoxin by irreversibly blocking A-type voltage-dependent K+-channels. Etymology and source Discrepin is named after its source: a Venezuelan scorpion called Tityus discrepans. Its systematic number is α-KTx15.6. Chemistry The subfamily α-KTx15 consists of 6 toxins. The first five toxins of this subfamily are very much alike, but discrepin only shares 50% amino acid homology with other members of this subfamily. Discrepin contains 38 amino acid residues. It has a polyglutamic acid at its N-terminal region. Discrepin has the α and β folds that are characteristic of scorpion toxins. It consists of one α-helix and three β-sheet strands. The α-helix is formed from amino acid Ser11 until Arg21. The three antiparallel β-sheet strands are formed from amino acid Ile2 until Lys7, Ala27 until Cys29 and Arg33 until Cys36. Target Discrepin blocks voltage-gated Shal-type (Kv4.x) K+ channels in cerebellar granular cells. These A-type K+ channels regulate firing frequency, spike initiation and the waveform of action potentials. Discrepin has so far only been tested in cerebellar cells; however, Kv4.x family channels are in general highly expressed in the brain, heart and smooth muscles. Competition experiments showed that discrepin inhibits the binding of scorpion toxin BmTx3 to its receptor site, whereas other K+ channel blockers (Kv1-, Kv3.4-, Kv4.2/4.3 family blockers) were unable to compete with this toxin. These results support the hypothesis that discrepin can bind to a very specific and unique type of Kv4.x receptor channels. The residues of discrepin that are important for blocking these channels have not yet been clarified. However, it is clear that the N-terminal plays a role in the binding affinity. The stoichiometry of toxin binding to the potassium channel is 1:1. Mode of action Discrepin specifically blocks the IA currents (fast transient, low-voltage-activated currents) of voltage-dependent K+ channels. Inhibition of these K+ currents occurs in an irreversible manner, i.e. washing out of the toxin gives no recovery of the currents. The kinetics of the channel are not affected by discrepin and the blockage is independent of the holding potential. Toxicity The half-maximal inhibitory concentration (IC50) is 190 ± 30 nM. References Ion channel toxins Neurotoxins
Discrepin
[ "Chemistry" ]
593
[ "Neurochemistry", "Neurotoxins" ]
24,663,108
https://en.wikipedia.org/wiki/Phorbol%2012%2C13-dibutyrate
Phorbol 12,13-dibutyrate (PDBu) is a phorbol ester which is one of the constituents of croton oil. As an activator of protein kinase C, it is a weak tumor promoter compared to 12-O-tetradecanoylphorbol-13-acetate. PDBu is widely used as a chemical reagent because of its solubility in water and other organic solvents. References Butyrate esters Diterpenes Alcohols Ketones Carcinogens Cyclopentenes Phorbol esters
Phorbol 12,13-dibutyrate
[ "Chemistry", "Environmental_science" ]
123
[ "Ketones", "Carcinogens", "Toxicology", "Functional groups" ]
24,664,232
https://en.wikipedia.org/wiki/Proximity%20analysis
Proximity analysis is a class of spatial analysis tools and algorithms that employ geographic distance as a central principle. Distance is fundamental to geographic inquiry and spatial analysis, due to principles such as the friction of distance, Tobler's first law of geography, and spatial autocorrelation, which are incorporated into analytical tools. Proximity methods are thus used in a variety of applications, especially those that involve movement and interaction. Distance measures All proximity analysis tools are based on a measure of distance between two locations. This may seem like a simple geometric measurement, but the nature of geographic phenomena and geographic activity requires several candidate methods to measure and express distance and distance-related measures. Euclidean distance, the straight-line geometric distance measured on a planar surface. In geographic information systems, this can be easily calculated from locations in a Cartesian projected coordinate system using the Pythagorean theorem. While it is the simplest method to measure distance, it rarely reflects actual geographic movement. Manhattan distance, the distance between two locations in a Cartesian (planar) coordinate system along a path that only follows the x and y axes (thus appearing similar to a path through a grid street network such as that of Manhattan). Geodesic distance, the shortest distance between two locations that stays on the surface of the Earth, following a great circle. On a sphere, the formula uses the spherical law of cosines, but the method is significantly more difficult on an ellipsoid. Network distance, a measurement between two locations along a route within a constrained linear space, such as a road or utility network. Abstract distance, a measurement of distance in a space that is only indirectly related to geographic space, or only metaphorically spatial. Examples include social networks of interpersonal connections, information spaces of related concepts, and the hypertext network of the World Wide Web. Although these are not inherently geographic, projecting them into an abstract space allows geographic tools such as proximity analysis to be used to study them. Cost distance, a measurement along a route (in any of the above spaces) in which geometric distance is replaced by some other quantity that accumulates along the route (and is thus proportional to distance), called a cost because it generally serves as an undesirable quantity to be minimized. Travel time is the most common cost measurement, but other costs include carbon emissions, fuel consumption, environmental impacts, and construction costs. Techniques There are a variety of tools, models, and algorithms that incorporate geographic distance, due to the variety of relevant problems and tasks. Buffers, a tool for determining the region that is within a specified distance of a set of geographic features. Cost distance analysis, algorithms for finding optimal routes through continuous space that minimize distance and/or other location dependent costs. Voronoi diagram, also known as Thiessen polygons, an algorithm for partitioning continuous space into a set of regions based on a set of point locations, such that each region consists of locations that are closer to one of the points than any others. Distance decay, based on the inverse square law, a mathematical model of how the influence of a phenomenon tends to be inversely proportional to the distance from it. The gravity model is a similar model. 
Location analysis, a set of (usually heuristic) algorithms for finding the optimal locations of a limited set of points (e.g., store locations) that minimize the aggregate distance to another set of points (e.g., customer locations). A commonly used example is Lloyd's algorithm. Distance matrix, an array containing the distances (Euclidean or otherwise) between any two points in a set. This is frequently used as the independent variable in statistical tests of whether the strength of a relationship is correlated with distance, such as the volume of trade between cities. Transport network analysis, a set of algorithms and tools for solving a number of distance routing problems when travel is constrained to a network of one-dimensional lines, such as roads and utility networks. For example, the common task of finding the shortest route from point A to point B is typically solved using Dijkstra's algorithm. References External links Proximity tools in Esri ArcGIS OGC ST_DWithin function (PostGIS implementation) Spatial analysis
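As a concrete illustration of the measures and techniques described above, the sketch below computes planar Euclidean and Manhattan distances, great-circle (haversine) distance assuming a spherical Earth of radius 6371 km, and shortest network distances on a toy road graph with Dijkstra's algorithm. The coordinates and the graph are invented example data, not output from any particular GIS package.

```python
# Illustrative proximity-analysis building blocks: three distance measures and
# Dijkstra shortest-path routing on a small, invented road network.
import heapq
import math

def euclidean(p, q):
    """Straight-line distance between two (x, y) points in a projected system."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def manhattan(p, q):
    """Distance along axis-parallel moves only."""
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

def haversine(lon1, lat1, lon2, lat2, radius_km=6371.0):
    """Great-circle distance in km between (lon, lat) points given in degrees,
    assuming a spherical Earth."""
    lon1, lat1, lon2, lat2 = map(math.radians, (lon1, lat1, lon2, lat2))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(h))

def dijkstra(graph, start):
    """Minimum accumulated cost from start to every reachable node."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, math.inf):
            continue                      # stale queue entry
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, math.inf):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

print(euclidean((0, 0), (3, 4)))                    # 5.0
print(manhattan((0, 0), (3, 4)))                    # 7
print(round(haversine(-87.6, 41.9, -73.9, 40.7)))   # roughly 1150 km

roads = {"A": [("B", 4.0), ("C", 2.0)],
         "B": [("D", 5.0)],
         "C": [("B", 1.0), ("D", 8.0)],
         "D": []}
print(dijkstra(roads, "A"))   # {'A': 0.0, 'B': 3.0, 'C': 2.0, 'D': 8.0}
```

The same Dijkstra routine works for any accumulated cost (travel time, fuel, emissions) simply by changing the edge weights.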
Proximity analysis
[ "Physics" ]
862
[ "Spacetime", "Space", "Spatial analysis" ]
24,664,397
https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20non-nucleoside%20reverse-transcriptase%20inhibitors
Non-nucleoside reverse-transcriptase inhibitors (NNRTIs) are antiretroviral drugs used in the treatment of human immunodeficiency virus (HIV). NNRTIs inhibit reverse transcriptase (RT), an enzyme that controls the replication of the genetic material of HIV. RT is one of the most popular targets in the field of antiretroviral drug development. Discovery and development of NNRTIs began in the late 1980s, and by the end of 2009 four NNRTIs had been approved by regulatory authorities and several others were undergoing clinical development. Drug resistance develops quickly if NNRTIs are administered as monotherapy and therefore NNRTIs are always given as part of combination therapy, the highly active antiretroviral therapy (HAART). History Acquired immunodeficiency syndrome (AIDS) is a leading cause of death in the world. It was identified as a disease in 1981. Two years later the etiologic agent of AIDS, HIV, was described. HIV is a retrovirus and has two major serotypes, HIV-1 and HIV-2. The pandemic mostly involves HIV-1, while HIV-2 has a lower morbidity rate and is mainly restricted to western Africa. In the year 2009 over 40 million people were infected worldwide with HIV and the number keeps growing. The vast majority of infected individuals live in the developing countries. HIV drugs do not cure HIV infection, but the treatment aims at improving the quality of patients' lives and decreasing mortality. In 2009, 25 antiretroviral drugs were available for the treatment of HIV infection. The drugs belong to six different classes that act at different targets. The most popular target in the field of antiretroviral drug development is the HIV-1 reverse transcriptase (RT) enzyme. There are two classes of drugs that target the HIV-1 RT enzyme, nucleoside/nucleotide reverse-transcriptase inhibitors (NRTIs/NtRTIs) and non-nucleoside reverse-transcriptase inhibitors (NNRTIs). Drugs in these classes are important components of the HIV combination therapy called highly active antiretroviral therapy, better known as HAART. In 1987, the first drug for the treatment of HIV infection was approved by the U.S. Food and Drug Administration (FDA). This was the NRTI called zidovudine. In the late 1980s, during further development of NRTIs, the field of NNRTI discovery began. The development of NNRTIs improved quickly into the 1990s and they soon became the third class of antiretroviral drugs, following the protease inhibitors. The NNRTIs are HIV-1 specific and have no activity against HIV-2 and other retroviruses. The first NNRTI, nevirapine, was discovered by researchers at Boehringer Ingelheim and approved by the FDA in 1996. In the next two years two other NNRTIs were approved by the FDA, delavirdine in 1997 and efavirenz in 1998. These three drugs are so-called first generation NNRTIs. The need for NNRTIs with a better resistance profile led to the development of the next generation of NNRTIs. Researchers at the Janssen Research Foundation and Tibotec discovered the first drug in this class, etravirine, which was approved by the FDA in 2008. The second drug in this class, rilpivirine, was also discovered by Tibotec and was approved by the FDA in 2011. In addition to these four NNRTIs several others are in clinical development. The HIV-1 reverse transcriptase enzyme Function Reverse transcriptase (RT) is an enzyme that controls the replication of the genetic material of HIV and other retroviruses. The enzyme has two enzymatic functions. 
Firstly, it acts as a polymerase: it transcribes the single-stranded RNA genome into single-stranded DNA and subsequently builds a complementary strand of DNA. This provides a DNA double helix which can be integrated in the host cell's chromosome. Secondly, it has ribonuclease H (RNase H) activity, as it degrades the RNA strand of the RNA-DNA intermediate that forms during viral DNA synthesis. Structure The HIV-1 RT is an asymmetric 1000-amino acid heterodimer composed of p66 (560 amino acids) and p51 (440 amino acids) subunits. The p66 subunit has two domains, a polymerase and a ribonuclease H domain. The polymerase domain contains four subdomains, which have been termed "fingers", "palm", "thumb" and "connection", and it is often compared to a right hand. The role of the p66 subunit is to carry out the activity of RT, as it contains the active sites of the enzyme. The p51 subunit is believed to play mainly a structural role. Binding and pharmacophore Despite the chemical diversity of NNRTIs they all bind at the same site in the RT. The binding occurs allosterically in a hydrophobic pocket located approximately 10 Å from the catalytic site in the palm domain of the p66 subunit of the enzyme. The NNRTI binding pocket (NNIBP) contains five aromatic (Tyr-181, Tyr-188, Phe-227 and Trp-229), six hydrophobic (Pro-59, Leu-100, Val-106, Val-179, Leu-234 and Pro-236) and five hydrophilic (Lys-101, Lys-103, Ser-105, Asp-132 and Glu-224) amino acids that belong to the p66 subunit and an additional two amino acids (Ile-135 and Glu-138) belonging to the p51 subunit. Each NNRTI interacts with different amino acid residues in the NNIBP. An important factor in the binding of the first generation NNRTIs, such as nevirapine, is the butterfly-like shape. Despite their chemical diversity they assume a very similar butterfly-like shape. Two aromatic rings of NNRTIs conform within the enzyme to resemble the wings of a butterfly (figure 2). The butterfly structure has a hydrophilic centre as a 'body' and two hydrophobic moieties representing the wings. Wing I is usually a heteroaromatic ring and wing II is a phenyl or allyl substituent. Wing I has a functional group at one side of the ring which is capable of accepting and/or donating hydrogen bonds with the main chain of the amino acids Lys-101 and Lys-103. Wing II interacts through π-π interactions with a hydrophobic pocket, formed in most part by the side chains of aromatic amino acids. On the butterfly body a hydrophobic part fills a small pocket which is mainly formed by the side chains of Lys-103, Val-106 and Val-179. However, many other NNRTIs have been found to bind to RT in different modes. Second generation NNRTIs, such as the diarylpyrimidines (DAPYs), have a horseshoe-like shape with two lateral hydrophobic wings and a pyrimidine ring as the central polar part. The NNIBP is elastic and its conformation depends on the size, specific chemical composition and binding mode of the NNRTI. The total structure of RT has segmental flexibility that depends on the nature of the bound NNRTI. It is important for the inhibitor to have flexibility to be able to bind in the modified pockets of a mutant target. Inhibitor flexibility may not affect the inhibitor-target interactions. Mechanism of action The NNRTIs act by binding non-competitively to the RT enzyme (figure 3). The binding causes a conformational change in the three-dimensional structure of the enzyme and creates the NNIBP. 
Binding of NNRTI to HIV-1 RT makes the p66 thumb domain hyper extended because it induces rotamer conformation changes in amino acid residues Tyr-181 and Tyr-188. This affects the catalytic activity of the enzyme and blocks HIV-1 replication by inhibiting the polymerase active site of the RT's p66 subunit. The global conformational change additionally destabilizes the enzyme on its nucleic acid template and reduces its ability to bind nucleotides. The transcription of the viral RNA is inhibited and therefore the replication rate of the virus is reduced. Although the exact molecular mechanism is still hypothetical, this has been demonstrated by multiple studies to be the primary mechanism of action. In addition to this proposed primary mechanism of action it has been shown that the NNRTIs have other mechanisms of action and interfere with various steps in the reverse transcriptase reaction. It has been suggested that the inhibition of reverse transcription by the NNRTIs may be due to effects on the RT RNase H activity and/or template/primer binding. Some NNRTIs interfere with HIV-1 Gag-Pol polyprotein processing by inhibiting the late stage of HIV-1 replication. It is important to gain a profound understanding of the various mechanisms of action of the NNRTIs in order to develop next-generation NNRTIs and to understand the mechanism of drug resistance. Drug discovery and design The development of effective anti-HIV drugs is difficult due to wide variations in nucleotide and amino acid sequences. The ideal anti-HIV compound would remain effective against drug-resistance mutations. Understanding the target RT enzyme and its structure, the mechanism of drug action and the consequences of drug-resistance mutations provides useful information that can help in designing more effective NNRTIs. The RT enzyme can undergo change due to mutations that can disturb NNRTI binding. Discovery The first two classes of compounds that were identified as NNRTIs were the 1-[(2-hydroxyethoxy)methyl]-6-(phenylthio)thymine (HEPT) and tetrahydroimidazo[4,5,1-jk][1,4]benzodiazepin-2(1H)-one and -thione (TIBO) compounds. The discovery of the TIBO compounds led to the definition of the NNRTI class in the late 1980s when they were unexpectedly found to inhibit RT. This finding initiated research on the mechanism of action of these compounds. The HEPT compounds were described before the TIBO compounds and were originally believed to be NRTIs. Later it was discovered that they shared a common mechanism of action with the TIBO compounds. Both the HEPT and TIBO compounds were the first to be identified as highly specific and potent HIV-1 RT inhibitors, not active against other RTs. These compounds do not interrupt cellular or mitochondrial DNA synthesis. The specificity of the NNRTIs for HIV-1 is considered the hallmark of the NNRTI drug class. Development First generation NNRTIs After the discovery of HEPT and TIBO, compound screening methods were used to develop BI-RG-587, the first NNRTI, commonly known as nevirapine. Like HEPT and TIBO, nevirapine blocked viral RT activity by non-competitive inhibition (with respect to dNTP binding). This reinforced the idea that the new class of anti-HIV inhibitors was inhibiting the activity of RT but not at the active site. Several molecular families of NNRTIs have emerged following screening and evolution of many molecules. Three NNRTI compounds of the first generation have been approved by the FDA for treating HIV-1 infection. 
Nevirapine was approved in 1996, delavirdine in 1997 and efavirenz in 1998 (table 1). Two of these drugs, nevirapine and efavirenz, are cornerstones of first line HAART while delavirdine is hardly used nowadays. The structure of these three drugs shows the wide array of rings, substituents, and bonds that allow activity against HIV-1 RT. This diversity demonstrates why so many non-nucleosides have been synthesised but does not explain why only three drugs have reached the market. The main problem has been the propensity of these compounds to select for resistance. Development from α-APA to ITU Crystal structure analysis showed that the first generation NNRTIs (for example TIBO, nevirapine and α-APA) bind HIV-1 RT in a "butterfly-like" conformation. These first generation NNRTIs were vulnerable to the common drug-resistance mutations like Tyr-181C and Tyr-188L/H. This triggered the need for finding new and more effective NNRTIs. ITU (imidoylthiourea), a promising series of NNRTIs, emerged from α-APA analogs (figure 4). The ITU compounds were obtained by extending the linker that binds the aryl side groups of the α-APA. A potent ITU compound, R100943, was obtained by an arrangement of the chemical composition of the side groups based on structure-activity relationships (SAR). A crystal structure of the HIV-1/R100943 complex demonstrated that ITU compounds are more flexible than the α-APA compound. The ITU compounds showed a distinct mode of binding, in which they bound in a "horseshoe" or "U" mode. The 2,6-dichlorophenyl part of R100943, which corresponds chemically to the wing II 2,6-dibromophenyl part of the α-APA, occupied the wing I part in the NNIBP, whereas the 4-cyanoanilino part of R100943 occupies the wing II position in the NNIBP. R100943 inhibited HIV-1 and was considerably effective against a number of key NNRTI-resistant mutants such as the G190A mutation, which caused high-level resistance to loviride (α-APA) and nevirapine. The G190A mutation was thought to cause resistance by occupying a part of the binding pocket that would otherwise be filled by the linker part of the butterfly shaped NNRTIs. R100943, in the horseshoe mode of binding, is located at a distance of approximately 6.0 Å from G190. When compared with nevirapine and loviride, which bind in the butterfly shape, the ITU derivatives revealed improved activity against Tyr-181C and Tyr-188L mutants. A structural study suggested that a potent TIBO compound could partly compensate for the effects of the Tyr-181C mutation by moving itself in the non-nucleoside inhibitor binding pocket (NNIBP) of the mutant RT. In this context, R100943 has torsional freedom that enables conformational alterations of the NNRTI. This torsional freedom could be used by the ITU derivative to bind to a mutated NNIBP, thus compensating for the effects of a resistance mutation. Nevertheless, the potency of R100943 against HIV-1 resistant mutants was not adequate for it to be considered as an effective drug candidate. Additionally, the chemical stability of the imidoylthiourea part of the ITU derivative was not favorable for an oral drug. Development from ITU to DATA Changes in the imidoylthiourea complexes led to the synthesis of a new class of compounds, diaryltriazine (DATA). In these compounds, the thiourea part of the ITU compounds was replaced by a triazine ring. The DATA compounds were more potent than the ITU compounds against common NNRTI resistant mutant strains. R106168, a prototype DATA compound, was rather easy to synthesize. 
Multiple substitutions were made at different positions on all of the three rings and on the linkers connecting the rings. In the pocket, most of the DATA derivatives adopted a horseshoe conformation. The two wings in R106168 (2,6-dichlorobenzyl and 4-cyanoanilino) occupied positions in the pocket similar to those of the two wings of the derivatives of ITU. The central part of the DATA compounds, in which the triazine ring replaced the thiourea group of ITU derivatives, is positioned between the side chains of L100 and V179. This removed a number of torsional degrees of freedom in the central part while keeping the flexibility between the triazine ring and the wings. Chemical substitution or modification in the three-aromatic-ring backbone of the DATA compounds had a substantial effect on the activity. R120393, a DATA analog, was designed with a chloroindole part in wing I to expand interactions with the side chain of conserved W229 of the polymerase primer grip loop. R120393 had a similar effect to R106168 against most of the NNRTI-resistant mutants. The chloroindole part interacted with the hydrophobic core of the pocket and influenced the binding mode of R120393 so that it went deeper into the pocket compared to the wing I position of other DATA analogs. Crystal structures showed that the DATA compounds could bind the NNIBP in different conformations. The capability to bind in multiple modes made the NNRTIs stronger against drug-resistance mutations. Variability between the inhibitors could be seen when the chemical composition, the size of wing I and the two linker groups connecting the rings were altered. The potency of the NNRTIs changed when the triazine nitrogen atoms were substituted with carbons. Next generation NNRTIs Researchers used a multi-disciplinary approach to design NNRTIs with a better resistance profile and an increased genetic barrier to the development of resistance. A new class of compounds, diarylpyrimidine (DAPY), was discovered by replacing the central triazine ring of the DATA compounds with a pyrimidine. This new class was more effective against drug resistant HIV-1 strains than the corresponding DATA analogs. The replacement enabled substitutions at the CH-group at the 5-position of the central aromatic ring. One of the first DAPY compounds, dapivirine (with R1 = 2,4,6-trimethylanilino, R2 = R3 = H and Y = NH), was found to be effective against drug-resistant HIV-1 strains. Systematic chemical substitutions were made at the R1, R2, R3 and Y positions to find new DAPY derivatives. This led to the discovery of etravirine, which has a bromine substitution at the 5-position (R3) of the pyrimidine ring (with R1 = 2,6-dimethyl-4-cyanoanilino, R2 = NH2 and Y = O) (figure 5). Etravirine was discovered by researchers at the Janssen Research Foundation and Tibotec and approved in 2008 by the FDA. It is used, in combination with other antiretroviral drugs, in treatment-experienced adult patients with multidrug-resistant HIV infection. Resistance When treating infection, whether bacterial or viral, there is always a risk that the infectious agent will develop drug resistance. The treatment of HIV infection is especially susceptible to drug resistance, which is a serious clinical concern in the chemotherapeutic treatment of the infection. Drug resistant HIV-strains emerge if the virus is able to replicate in the presence of the antiretroviral drugs. 
NNRTI-resistant HIV-strains carry mutations mainly in and around the NNIBP, affecting NNRTI binding directly by altering the size, shape and polarity of different areas of the pocket, or indirectly by affecting access to the pocket. Those mutations are primarily noted in domains which span amino acids 98-108, 178-190 or 225-238 of the p66 subunit. The most frequent mutations observed in viruses isolated from patients who have been on a failing NNRTI-containing regimen are Lys-103N and Tyr-181C. NNRTI resistance has been linked to over 40 amino acid substitutions in vitro and in vivo. Antiretroviral drugs are never used in monotherapy due to rapid resistance development. The highly active antiretroviral therapy (HAART) was introduced in 1996. The treatment regimen combines three drugs from at least two different classes of antiretroviral drugs. The advantage of etravirine over other NNRTIs is that multiple mutations are required for the development of drug resistance. The drug has also shown activity against viruses with common NNRTI resistance associated mutations and cross-resistance mutations. Current status Five drugs in the class of NNRTIs have been approved by regulatory authorities. These are the first generation NNRTIs nevirapine, delavirdine and efavirenz and the next generation NNRTIs etravirine and rilpivirine. Several other NNRTIs underwent clinical development but were discontinued due to unfavourable pharmacokinetic, efficacy and/or safety factors. Currently there are four other NNRTIs undergoing clinical development, including IDX899, RDEA-428 and lersivirine (table 2). Rilpivirine Rilpivirine is a DAPY compound like etravirine and was discovered when further optimization within this family of NNRTIs was conducted. Its resistance profile and genetic barrier to the development of resistance are comparable to those of etravirine in vitro. The advantage of rilpivirine over etravirine is its better bioavailability and easier formulation. Etravirine has required extensive chemical formulation work due to poor solubility and bioavailability. Rilpivirine was approved by the FDA for HIV therapy in May 2011 under the brand name Edurant. Edurant is approved for treatment-naive patients with a viral load of 100,000 copies/mL or less at therapy initiation. Its recommended dosage is 25 mg orally once daily with a meal, in combination with other antiretrovirals. It is contraindicated for use with proton pump inhibitors due to the increased gastric pH causing decreased rilpivirine plasma concentrations, potentially resulting in loss of virologic response and possible resistance. A fixed-dose drug combining rilpivirine with emtricitabine and tenofovir disoproxil (TDF) was approved by the U.S. Food and Drug Administration in August 2011 under the brand name Complera. A newer fixed-dose drug also combining rilpivirine with emtricitabine and tenofovir alafenamide (TAF) was approved in March 2016 under the brand name Odefsey. RDEA806 In 2007 a new family of triazole NNRTIs was presented by researchers from the pharmaceutical company Ardea Biosciences. The candidate selected from this screening was RDEA806, a member of the triazole family. Its resistance profile against selected NNRTI-resistant HIV-1 strains is similar to that of other next generation NNRTIs. The candidate entered phase IIb clinical trials at the end of 2009, but no further trials have been initiated. Ardea was sold to AstraZeneca in 2012. 
Fosdevirine (IDX899) Fosdevirine (also known as IDX899 and GSK-2248761) is another next generation NNRTI, developed by Idenix Pharmaceuticals and ViiV Healthcare. It belongs to the family of 3-phosphoindoles. In vitro studies have shown a resistance profile comparable to that of the other next generation NNRTIs. In November 2009 the candidate entered phase II clinical trials, but the trial and all further development were halted when 5 of 35 subjects receiving fosdevirine experienced delayed-onset seizures. Lersivirine (UK-453061) Lersivirine belongs to the pyrazole family and is another next generation NNRTI in clinical trials, developed by the pharmaceutical company ViiV Healthcare. Its resistance profile is similar to that of other next generation NNRTIs. At the end of 2009 lersivirine was in phase IIb. In February 2013, ViiV Healthcare announced the discontinuation of the development program investigating lersivirine. See also Antiretroviral drug Reverse-transcriptase inhibitor Protease inhibitor Entry inhibitor Discovery and development of HIV-protease inhibitors Discovery and development of CCR5-receptor antagonists Discovery and development of nucleoside and nucleotide reverse-transcriptase inhibitors References Non-Nucleoside Reverse Transcriptase Inhibitors, Discovery And Development Of Non-nucleoside reverse transcriptase inhibitors
Discovery and development of non-nucleoside reverse-transcriptase inhibitors
[ "Chemistry", "Biology" ]
5,133
[ "Life sciences industry", "Medicinal chemistry", "Drug discovery" ]
34,818,152
https://en.wikipedia.org/wiki/Federer%E2%80%93Morse%20theorem
In mathematics, the Federer–Morse theorem, introduced by Federer and Morse, states that if f is a surjective continuous map from a compact metric space X to a compact metric space Y, then there is a Borel subset Z of X such that f restricted to Z is a bijection from Z to Y. Moreover, the inverse of that restriction is a Borel section of f—it is a Borel isomorphism. See also Uniformization Hahn–Banach theorem References Further reading L. W. Baggett and Arlan Ramsay, A Functional Analytic Proof of a Selection Lemma, Can. J. Math., vol. XXXII, no 2, 1980, pp. 441–448. Theorems in topology
Federer–Morse theorem
[ "Mathematics" ]
150
[ "Mathematical problems", "Mathematical theorems", "Topology", "Theorems in topology" ]
34,821,892
https://en.wikipedia.org/wiki/Katherine%20A.%20Lathrop
Katherine Austin Lathrop (1915 – 2005) was an American nuclear medicine researcher, biochemist and member of the Manhattan Project. Lathrop conducted pioneering work on the effects of radiation exposure on animals and humans. Early career Lathrop was born in Lawton, Oklahoma, on June 16, 1915. She attended Oklahoma A&M, where she earned bachelor's degrees in home economics and chemistry. She met her husband, Clarence Lathrop, while they were both studying for master's degrees in chemistry. They married in 1938 and had five children. Upon completion of their master's degrees in 1939, the couple first moved to New Mexico and then to Wyoming in 1941. Lathrop became a research assistant at the University of Wyoming where she focused her efforts on research pertaining to poisonous plants that grew on the Great Plains. In 1944, Lathrop and her family moved to Chicago where Clarence pursued a medical degree at Northwestern University. They officially divorced in 1976. Involvement in the Manhattan Project Upon hearing her husband's friend talking about a secret project at the University of Chicago that hired scientists, she applied and was hired in the Biology Division of the Metallurgical Laboratory. Lathrop, who had previously avoided work that involved animal experimentation, was now studying the uptake, retention, distribution, and excretion of radioactive materials in animals. Lathrop's assignment in the project was to test the biological effects radiation had on animals. She worked on the Manhattan Project from 1945 to 1946. Post Manhattan Project In 1947, after the Manhattan Project had been dismantled, Lathrop remained on staff at the lab as an associate biochemist as it was renamed Argonne National Laboratory. In 1954, tired of an exhausting commute, Lathrop left Argonne to pursue a career at the Argonne Cancer Research Hospital. It had opened in 1953 on the University of Chicago campus, making it much closer to her home. Lathrop was hired by the US Atomic Energy Commission facility as a research associate under the guidance of acclaimed researcher Paul Harper. Their goal was to find ways to manipulate radiation to allow for cancer detection and treatment. Their groundbreaking work on using the gamma camera to scan the body is a method still in practice to this day. The pair also discovered that technetium-99m could be used as a scanning agent. She became a professor emeritus in 1985, published her last paper in 1999 and retired in 2000. Personal life In addition to her research and teaching career, Lathrop was involved in national societies. In 1966, she helped establish the SNM Medical Internal Radiation Dose Committee. She also was the first person to teach radiation safety to workers who would come into contact with radioactive material. After semi-retirement, she became very involved with the Daughters of the American Revolution and genealogy. Lathrop retired in 2000 due to multiple cerebral ischemic attacks. She died in Las Cruces, New Mexico, on March 10, 2005, from complications caused by dementia. Lathrop had five children and 10 grandchildren at the time of her death. 
References 1915 births 2005 deaths Nuclear chemists 20th-century American chemists Oklahoma State University alumni University of Chicago faculty Manhattan Project people American women chemists Place of birth missing Place of death missing Radiation health effects researchers Women radiobiologists Radiobiologists American women academics 21st-century American women Women on the Manhattan Project 20th-century American women scientists Chemists from Oklahoma
Katherine A. Lathrop
[ "Chemistry" ]
708
[ "Nuclear chemists" ]
34,823,206
https://en.wikipedia.org/wiki/Quantitative%20microbiological%20risk%20assessment
Quantitative microbiological risk assessment (QMRA) is the process of estimating the risk from exposure to microorganisms. The process involves measuring known microbial pathogens or indicators and running a Monte Carlo simulation to estimate the risk of transfer. If a dose-response model is available for the microbe, it can be used to estimate the probability of infection. QMRA has expanded to be used to estimate microbial risk in many fields, but is particularly important in assessments of food, water supply and human faeces/wastewater safety. QMRA to assess safety of sanitation systems The World Health Organisation's 2006 Guidelines for the Safe Use of Wastewater, Excreta and Greywater in Agriculture suggest that QMRA should be used to determine possible risk levels that can be achieved by sanitation systems. References Microbiology Risk management
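The entry mentions running a Monte Carlo simulation together with a dose-response model to estimate the probability of infection. The sketch below illustrates one common way this can be done, assuming an exponential dose-response model P(infection) = 1 − exp(−r·dose); the exposure model and all numerical values are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed exponential dose-response model: P(infection) = 1 - exp(-r * dose).
# r is a pathogen-specific fit parameter; the value below is illustrative only.
r = 0.05

def simulate_infection_risk(n_trials=100_000):
    # Illustrative exposure model: pathogen concentration (organisms/L) follows a
    # lognormal distribution and the ingested volume varies between exposures.
    concentration = rng.lognormal(mean=1.0, sigma=0.8, size=n_trials)  # organisms/L
    volume = rng.uniform(0.5, 2.0, size=n_trials)                      # L per day
    dose = concentration * volume                                      # organisms ingested
    p_infection = 1.0 - np.exp(-r * dose)
    return p_infection.mean()

print(f"Estimated daily infection risk: {simulate_infection_risk():.3f}")
```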
Quantitative microbiological risk assessment
[ "Chemistry", "Biology" ]
169
[ "Microbiology", "Microscopy" ]
34,827,966
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20logging
Nuclear magnetic resonance (NMR) logging is a type of well logging that uses the NMR response of a formation to directly determine its porosity and permeability, providing a continuous record along the length of the borehole. Background NMR logging exploits the large magnetic moment of hydrogen, which is abundant in rocks in the form of water. The NMR signal amplitude is proportional to the quantity of hydrogen nuclei present in a formation and can be calibrated to give a value for porosity that is free from lithology effects. Uniquely, a petrophysicist can also analyse the rate of decay of the NMR signal amplitude to obtain information on the permeability of the formation, a crucial quantity in hydrocarbon exploration. Relationship of NMR signal to pore size The most important mechanism affecting NMR relaxation is grain-surface relaxation. Molecules in fluids are in constant Brownian motion, diffusing about the pore space and bouncing off the grain surfaces. Upon interaction with the grain surface, hydrogen protons can transfer some nuclear spin energy to the grain (contributing to T1 relaxation) or irreversibly dephase (contributing to T2 relaxation). Therefore, the speed of relaxation most significantly depends on how often the hydrogen nuclei collide with the grain surface, and this is controlled by the surface-to-volume ratio of the pore in which the nuclei are located. Collisions are less frequent in larger pores, resulting in a slower decay of the NMR signal amplitude and allowing a petrophysicist to understand the distribution of pore sizes. See also Nuclear magnetic resonance Nuclear magnetic resonance in porous media Logging while drilling SNMR References Well logging
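The link between decay rate and pore size described above is usually summarized by the surface-relaxation relation below; this standard petrophysics expression is added here for illustration and does not appear in the original text.

```latex
% Surface relaxation: the T2 decay rate grows with the pore's surface-to-volume ratio
\frac{1}{T_2} \;\approx\; \frac{1}{T_{2,\mathrm{bulk}}} \;+\; \rho_2\,\frac{S}{V}
% T_{2,bulk} : relaxation time of the bulk pore fluid
% rho_2      : surface relaxivity of the grain surface
% S/V        : surface-to-volume ratio of the pore
%              (large pores -> small S/V -> slower signal decay)
```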
Nuclear magnetic resonance logging
[ "Engineering" ]
345
[ "Petroleum engineering", "Well logging" ]
34,828,378
https://en.wikipedia.org/wiki/Subsea%20valves
Subsea valves are used to isolate or control the flow of material through an undersea pipeline (submarine pipeline) or other apparatus. Most commonly used to transport oil and gas, they are designed to function in a sub-marine environment, withstanding the effects of raised external pressure, salt-water corrosion, and bubbles or debris in the material carried. Subsea valves undergo stringent testing to ensure high reliability. Usage Subsea valves are used in sub-marine environments, which can range in depth from shallow water (usually down to a depth of 75 meters) to deep water (down to 3500 meters). Various industries use subsea valves, with the oil and gas sector accounting for the majority, where there is a need to move material from, to, or below the seabed. Hazards to subsea valves External environmental factors to be considered specifically for subsea valves include waterproofing, increased ambient pressure, and long-term corrosion from the high salt content of seawater. Internal factors to consider for subsea valves are related to the type of flow material (what passes through the valve apparatus). Typically in subsea environments, the flows will either be liquid or gas based but due to location of the operation, the flow can contain a significant amount of sand and debris. This can present internal structural challenges. One of the most challenging aspects for subsea valve deployment is cavitation. This occurs when liquid, being pumped through various pieces of machinery including the subsea valve, contains bubbles (or cavities). When the bubbles move through the system into areas of higher pressure they will collapse, and on moving into areas with lower pressure they will expand. This can have several negative effects including: An increase in noise and more importantly vibration, which may cause damage to a number of machinery components including the subsea valve and in extreme cases cause total pump failure. The pump may undergo a reduction in capacity. Pressure may not be maintained, potentially causing fracturing within the pump. Overall pump efficiency drops. Due to the subsea valve not being easily accessible, it is of particular importance that it can function without hindrance, as replacement may be extremely costly. Subsea valve testing To overcome the problems associated with sub-marine environments, subsea valves are required to pass a number of stringent tests. These may include (as applicable): Gas testing according to API 6DSS / API 17D or API 6A (PSL 2, PSL 3, PSL 3G or PSL 4) Performance verification test according to API 6A PR 2 Hyperbaric testing Endurance Testing according to API6A Bending calculation and test Seismic test References https://www.mendeley.com/research/one-companys-experience-subsea-valve-testing/ Valves Control devices
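For context on the cavitation hazard described above, the likelihood of cavitation in a flowing liquid is often characterized by the dimensionless cavitation number; this standard fluid-mechanics relation is added here for reference and is not part of the original article.

```latex
% Cavitation number: low values of sigma indicate a higher likelihood of cavitation
\sigma \;=\; \frac{p - p_v}{\tfrac{1}{2}\,\rho\,v^{2}}
% p   : local absolute pressure in the liquid
% p_v : vapour pressure of the liquid at the operating temperature
% rho : liquid density,   v : characteristic flow velocity
```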
Subsea valves
[ "Physics", "Chemistry", "Engineering" ]
568
[ "Control devices", "Physical systems", "Control engineering", "Valves", "Hydraulics", "Piping" ]
34,831,297
https://en.wikipedia.org/wiki/Random%20coil%20index
Random coil index (RCI) predicts protein flexibility by calculating an inverse weighted average of backbone secondary chemical shifts and predicting values of model-free order parameters as well as per-residue RMSD of NMR and molecular dynamics ensembles from this parameter. The key advantages of this protocol over existing methods of studying protein flexibility are that it does not require prior knowledge of a protein's tertiary structure, that it is not sensitive to the protein's overall tumbling, and that it does not require additional NMR measurements beyond the standard experiments for backbone assignments. The application of secondary chemical shifts to characterize protein flexibility is based on the assumption that the proximity of chemical shifts to random coil values is a manifestation of increased protein mobility, while significant differences from random coil values are an indication of a relatively rigid structure. Even though chemical shifts of rigid residues may adopt random coil values as a result of comparable contributions of shielding and deshielding effects (e.g. from torsion angles, hydrogen bonds, ring currents, etc.), combining the chemical shifts from multiple nuclei into a single parameter allows one to decrease the influence of these flexibility false positives. The improved performance originates from the different probabilities of random coil chemical shifts from different nuclei being found among amino acid residues in flexible regions versus rigid regions. Typically, residues in rigid helices or rigid beta-strands are less likely to have more than one random coil chemical shift among their backbone shifts than residues in mobile regions. The actual calculation of the RCI involves several additional steps, including the smoothing of secondary shifts over several adjacent residues, the use of neighboring residue corrections, chemical shift re-referencing, gap filling, chemical shift scaling and numeric adjustments to prevent divide-by-zero problems. 13C, 15N and 1H secondary chemical shifts are then scaled to account for the characteristic resonance frequencies of these nuclei and to provide numeric consistency among different parts of the protocol. Once these scaling corrections have been done, the RCI is calculated. The "end-effect correction" can also be applied at this point. The last step of the protocol involves smoothing the initial set of RCI values by three-point averaging. See also Chemical Shift Chemical shift index Protein dynamics Protein NMR NMR Nuclear magnetic resonance spectroscopy Protein nuclear magnetic resonance spectroscopy Protein dynamics#Domains and protein flexibility Protein References Nuclear magnetic resonance Nuclear magnetic resonance software Protein methods Protein structure Biophysics Scientific techniques Chemistry software
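As a rough illustration of the inverse-weighted-average idea described above, the sketch below computes an RCI-like value from per-residue secondary chemical shifts. The nuclei weights, the smoothing window, and the floor used to avoid division by zero are illustrative assumptions, not the published RCI calibration.

```python
import numpy as np

def rci_like(sec_shifts, weights=None, window=3, floor=0.5):
    """Toy RCI-style calculation.

    sec_shifts: dict mapping nucleus name -> array of per-residue secondary
                chemical shifts (observed shift minus random coil value).
    weights:    per-nucleus scaling factors (illustrative defaults below).
    window:     number of adjacent residues used for smoothing.
    floor:      lower bound preventing divide-by-zero when shifts sit at
                random coil values.
    """
    if weights is None:
        weights = {"CA": 1.0, "CO": 1.0, "N": 0.4, "HA": 5.0}  # assumed values
    n_res = len(next(iter(sec_shifts.values())))

    # Weighted average of absolute secondary shifts per residue
    combined = np.zeros(n_res)
    for nuc, shifts in sec_shifts.items():
        combined += weights.get(nuc, 1.0) * np.abs(np.asarray(shifts, dtype=float))
    combined /= len(sec_shifts)

    # Smooth over adjacent residues, then invert: shifts close to random coil
    # values give a large RCI, i.e. predicted flexibility
    kernel = np.ones(window) / window
    smoothed = np.convolve(combined, kernel, mode="same")
    return 1.0 / np.maximum(smoothed, floor)

# Example with made-up shifts for a six-residue stretch
shifts = {"CA": [2.1, 2.3, 1.8, 0.2, 0.1, 0.3],
          "CO": [1.5, 1.7, 1.2, 0.1, 0.2, 0.1]}
print(rci_like(shifts).round(2))
```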
Random coil index
[ "Physics", "Chemistry", "Biology" ]
485
[ "Biochemistry methods", "Applied and interdisciplinary physics", "Nuclear magnetic resonance", "Chemistry software", "Nuclear magnetic resonance software", "Protein methods", "Protein biochemistry", "Biophysics", "nan", "Nuclear physics", "Structural biology", "Protein structure" ]
27,944,111
https://en.wikipedia.org/wiki/Physical%20Review%20E
Physical Review E is a peer-reviewed, scientific journal, published monthly by the American Physical Society. The main field of interest is collective phenomena of many-body systems. It is edited by Dario Corradini as of December 2024. While original research content requires subscription, editorials, news, and other non-research content is openly accessible. Scope Although the focus of this journal is many-body phenomena, the broad scope of the journal includes quantum chaos, soft matter physics, classical chaos, biological physics and granular materials. Also emphasized are statistical physics, equilibrium and transport properties of fluids, liquid crystals, complex fluids, polymers, chaos, fluid dynamics, plasma physics, classical physics, and computational physics. Former names This journal began as "Physical Review" in 1893. In 1913 the American Physical Society took over Physical Review. In 1970 Physical Review was subdivided into Physical Review A, B, C, and D. From 1990 until 1993 a process was underway which split the journal then entitled Physical Review A: General Physics into two journals. Hence, from 1993 until 2000, one of the split off journals became Physical Review E: Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics. In 2001 the journal was changed, in name, to its present title. As an aside, in January 2007, the section which published works on classical optics was transferred from Physical Review E to Physical Review A. This action unified the classical and quantum parts of optics into a single journal. Rapid Communications Physical Review E Rapid Communications was announced on June 7, 2010. This section (or feature) gives priority to results which are deemed significant, and merits a prominent display on the Physical Review E website. The specific article is displayed for several weeks, and is part of a rotation with other articles, also deemed significant. Abstracting and indexing Physical Review E is indexed in the following bibliographic databases: Science Citation Index Expanded Current Contents / Physical, Chemical & Earth Sciences Chemical Abstracts Service - CASSI Current Physics Index Inspec MEDLINE Index Medicus PubMed NLM catalog Physics Abstracts SPIN See also American Journal of Physics Annales Henri Poincaré Applied Physics Express CRC Handbook of Chemistry and Physics European Physical Journal E Journal of Physical and Chemical Reference Data Journal of Physics A References External links Editorial: 40th Anniversary of Physical Review A. American Physical Society. July 1, 2010. Academic journals established in 2001 Physics journals Fluid dynamics journals English-language journals Monthly journals American Physical Society academic journals
Physical Review E
[ "Chemistry" ]
500
[ "Fluid dynamics journals", "Fluid dynamics" ]
27,948,250
https://en.wikipedia.org/wiki/Carbocatalysis
Carbocatalysis is a form of catalysis that uses heterogeneous carbon materials for the transformation or synthesis of organic or inorganic substrates. The catalysts are characterized by their high surface areas, surface functionality, and large, aromatic basal planes. Carbocatalysis can be distinguished from supported catalysis (such as palladium on carbon) in that no metal is present, or if metals are present they are not the active species. As of 2010, the mechanisms of reactivity are not well understood. One of the most common examples of carbocatalysis is the oxidative dehydrogenation of ethylbenzene to styrene, discovered in the 1970s. Also in the industrial process of (non-oxidative) dehydrogenation of ethylbenzene, the potassium-promoted iron oxide catalyst is coated with a carbon layer as the active phase. In another early example, a variety of substituted nitrobenzenes were reduced to the corresponding anilines using hydrazine and graphite as the catalyst. The discovery of nanostructured carbon allotropes such as carbon nanotubes, fullerenes, or graphene promoted further developments. Oxidized carbon nanotubes were used to dehydrogenate n-butane to 1-butene, and to selectively oxidize acrolein to acrylic acid. Fullerenes were used in the catalytic reduction of nitrobenzene to aniline in the presence of H2. Graphene oxide was used as a carbocatalyst to facilitate the oxidation of alcohols to the corresponding aldehydes/ketones, the hydration of alkynes, and the oxidation of alkenes. References External links Carbon Materials for Catalysis Catalysis
Carbocatalysis
[ "Chemistry" ]
380
[ "Catalysis", "Chemical kinetics" ]
27,949,272
https://en.wikipedia.org/wiki/Synthesis%20of%20nucleosides
Synthesis of nucleosides involves the coupling of a nucleophilic, heterocyclic base with an electrophilic sugar. The silyl-Hilbert-Johnson (or Vorbrüggen) reaction, which employs silylated heterocyclic bases and electrophilic sugar derivatives in the presence of a Lewis acid, is the most common method for forming nucleosides in this manner. Introduction Nucleosides are typically synthesized through the coupling of a nucleophilic pyrimidine, purine, or other basic heterocycle with a derivative of ribose or deoxyribose that is electrophilic at the anomeric carbon. When an acyl-protected ribose is employed, selective formation of the β-nucleoside (possessing the S configuration at the anomeric carbon) results from neighboring group participation. Stereoselective synthesis of deoxyribonucleosides directly from deoxyribose derivatives is more difficult to achieve because neighboring group participation cannot take place. Three general methods have been used to synthesize nucleosides from nucleophilic bases and electrophilic sugars. The fusion method involves heating the base and acetyl-protected 1-acetoxyribose to 155 °C and results in the formation of the nucleoside with a maximum yield of 70%. (1) The metal salt method involves the combination of a metal salt of the heterocycle with a protected sugar halide. Silver and mercury salts were originally used; however, more recently developed methods use sodium salts. (2) The silyl-Hilbert-Johnson (SHJ) reaction (or Vorbrüggen reaction), the mildest general method for the formation of nucleosides, is the combination of a silylated heterocycle and protected sugar acetate (such as 1-O-acetyl-2,3,5-tri-O-benzoyl-beta-D-ribofuranose) in the presence of a Lewis acid. Problems associated with the insolubility of the heterocyclic bases and their metal salts are avoided; however, site selectivity is sometimes a problem when heterocycles containing multiple basic sites are used, as the reaction is often reversible. (3) Mechanism and Stereochemistry The Silyl-Hilbert-Johnson Reaction The mechanism of the SHJ reaction begins with the formation of the key cyclic cation 1. Nucleophilic attack at the anomeric position by the most nucleophilic nitrogen (N1) then occurs, yielding the desired β-nucleoside 2. A second reaction of this nucleoside with 1 generates bis(riboside) 3. Depending on the nature of the Lewis acid used, coordination of the nucleophile to the Lewis acid may be significant. Reaction of this "blocked" nucleophile with 1 results in undesired constitutional isomer 4, which may undergo further reaction to 3. Generally Lewis acid coordination is not a problem when a Lewis acid such as trimethylsilyl triflate is used; it is much more important when a stronger Lewis acid like tin(IV) chloride is employed. (4) 2-Deoxysugars are unable to form the cyclic cation intermediate 1 because of their missing benzoyl group; instead, under Lewis acidic conditions they form a resonance-stabilized oxocarbenium ion. The diastereoselectivity of nucleophilic attack on this intermediate is much lower than the stereoselectivity of attack on cyclic cation 1. Because of this low stereoselectivity, deoxyribonucleosides are usually synthesized using methods other than the SHJ reaction. Scope and Limitations The silyl-Hilbert-Johnson reaction is the most commonly used method for the synthesis of nucleosides from heterocyclic and sugar-based starting materials. 
However, the reaction suffers from some issues that are not associated with other methods, such as unpredictable site selectivity in some cases (see below). This section describes both derivatives of and alternatives to the SHJ reaction that are used for the synthesis of nucleosides. Silyl-Hilbert-Johnson Reactions Because most heterocyclic bases contain multiple nucleophilic sites, site selectivity is an important issue in nucleoside synthesis. Purine bases, for instance, react kinetically at N3 and thermodynamically at N1 (see Eq. (4)). Glycosylation of thymine with protected 1-acetoxy ribose produced 60% of the N1 nucleoside and 23% of the N3 nucleoside. Closely related triazines, on the other hand, react with complete selectivity to afford the N2 nucleoside. (5) The most nucleophilic nitrogen can be blocked through alkylation prior to nucleoside synthesis. Heating the blocked nucleoside in Eq. (6) in the presence of a protected sugar chloride provides the nucleoside in 59% yield. Reactions of this type are hampered by alkylation of the heterocycle by incipient alkyl chloride. (6) Silylated heterocyclic bases are susceptible to hydrolysis and somewhat difficult to handle as a result; thus, the development of a one-pot, one-step method for silylation and nucleoside synthesis represented a significant advance. The combination of trifluoroacetic acid (TFA), trimethylsilyl chloride (TMSCl), and hexamethyldisilazide (HMDS) generates trimethylsilyl trifluoroacetate in situ, which accomplishes both the silylation of the heterocycle and its subsequent coupling with the sugar. (7) Other Methods for Nucleoside Synthesis Transglycosylation, which involves the reversible transfer of a sugar moiety from one heterocyclic base to another, is effective for the conversion of pyrimidine nucleosides to purine nucleosides. Most other transglycosylation reactions are low yielding due to a small thermodynamic difference between equilibrating nucleosides. (8) Deoxyribose-derived electrophiles are unable to form the cyclic cation 1; as a result, the stereoselective synthesis of deoxyribonucleosides is more difficult than the synthesis of ribonucleosides. One solution to this problem involves the synthesis of a ribonucleoside, followed by protection of the 3'- and 5'-hydroxyl groups, removal of the 2'-hydroxyl group through a Barton deoxygenation, and deprotection. (9) Comparison with Other Methods A useful alternative to the methods described here that avoids the site selectivity concerns of the SHJ reaction is tandem Michael reaction/cyclization to simultaneously form the heterocyclic base and establish its connection to the sugar moiety. (10) A second alternative is enzymatic transglycosylation, which is completely kinetically controlled (avoiding issues of chemical transglycosylation associated with thermodynamic control). However, operational complications associated with the use of enzymes are a disadvantage of this method. (11) Experimental Conditions and Procedure Typical Conditions The sugar derivatives used for SHJ reactions should be purified, dried, and powdered before use. Heterocycles must not be too basic in order to avoid excessive complexation with the Lewis acid; amino-substituted heterocycles such as cytosine, adenine, and guanine react slowly or not at all under SHJ conditions (although their N-acetylated derivatives react more rapidly). Silylation is most commonly accomplished using HMDS, which evolves ammonia as the only byproduct of silylation. 
Catalytic or stoichiometric amounts of acidic additives such as trimethylsilyl chloride accelerate silylation; when such an additive is used, ammonium salts will appear in the reaction as a turbid impurity. Lewis acids should be distilled immediately before use for best results. More than about 1.2-1.4 equivalents of Lewis acid are rarely needed. Acetonitrile is the most common solvent employed for these reactions, although other polar solvents are also common. Workup of reactions employing TMSOTf involves treatment with an ice-cold solution of sodium bicarbonate and extraction of the resulting sodium salts. When tin(IV) chloride is used in 1,2-dichloroethane, workup involves the addition of pyridine and filtering of the resulting pyridine-tin complex, followed by extraction with aqueous sodium bicarbonate. Example Procedure (12) To a stirred mixture of 13.5 mL (4.09 mmol) of a 0.303 N standard solution of silylated N2-acetylguanine in 1,2-dichloroethane and 1.86 g (3.7 mmol) of benzoate-protected 1-acetoxy ribose in 35 mL of 1,2-dichloroethane was added 6.32 mL (4.46 mmol) of a 0.705 N standard solution of TMSOTf in 1,2-dichloroethane. The reaction mixture was heated at reflux for 1.5–4 hours, and then diluted with CH2Cl2. On workup with ice-cold NaHCO3 solution, there was obtained 2.32 g of crude product, which was kept for 42 hours in 125 mL of methanolic ammonia at 24°. After workup, recrystallization from H2O gave, in two crops, 0.69 g (66%) of pure guanosine, which was homogeneous (Rf 0.3) in the partition system n-butanol:acetic acid:H2O (5:1:4) and whose 1H NMR spectrum at 400 MHz in D2O showed only traces of the undesired N7-anomer of guanosine. 1H NMR (CDCl3): δ 3.55, 3.63, 3.90, 4.11, 4.43, 5.10, 5.20, 5.45, 5.72, 6.52, 7.97, 10.75. Prebiotic synthesis of nucleosides In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. Nam et al. demonstrated the direct condensation of nucleobases with ribose to give ribonucleosides in aqueous microdroplets, a key step leading to RNA formation. Also, a plausible prebiotic process for synthesizing pyrimidine and purine ribonucleosides and ribonucleotides using wet-dry cycles was presented by Becker et al. References Organic reactions
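As a quick check of the numbers quoted in the example procedure earlier in this entry, the snippet below reproduces the reported 66% yield of guanosine from 3.7 mmol of the protected sugar; the molar mass used for guanosine is supplied here as an assumption and is not stated in the original text.

```python
# Quantities reported in the example procedure
sugar_mmol = 3.7          # benzoate-protected 1-acetoxy ribose (limiting reagent)
product_mass_g = 0.69     # isolated guanosine, two crops

# Assumed molar mass of guanosine (C10H13N5O5), g/mol
guanosine_mw = 283.24

product_mmol = product_mass_g / guanosine_mw * 1000
yield_percent = 100 * product_mmol / sugar_mmol
print(f"{product_mmol:.2f} mmol guanosine -> {yield_percent:.0f}% yield")  # about 66%
```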
Synthesis of nucleosides
[ "Chemistry" ]
2,358
[ "Organic reactions" ]
26,061,258
https://en.wikipedia.org/wiki/Molecular%20Inversion%20Probe
Molecular Inversion Probe (MIP) belongs to the class of Capture by Circularization molecular techniques for performing genomic partitioning, a process through which one captures and enriches specific regions of the genome. Probes used in this technique are single stranded DNA molecules and, similar to other genomic partitioning techniques, contain sequences that are complementary to the target in the genome; these probes hybridize to and capture the genomic target. MIP stands unique from other genomic partitioning strategies in that MIP probes share the common design of two genomic target complementary segments separated by a linker region. With this design, when the probe hybridizes to the target, it undergoes an inversion in configuration (as suggested by the name of the technique) and circularizes. Specifically, the two target complementary regions at the 5’ and 3’ ends of the probe become adjacent to one another while the internal linker region forms a free hanging loop. The technology has been used extensively in the HapMap project for large-scale SNP genotyping as well as for studying gene copy alterations and characteristics of specific genomic loci to identify biomarkers for different diseases such as cancer. Key strengths of the MIP technology include its high specificity to the target and its scalability for high-throughput, multiplexed analyses where tens of thousands of genomic loci are assayed simultaneously. Technique Procedure Molecular Inversion Probe Structure The probes are designed with sequences that are complementary to the genomic target at its 5’ and 3’ ends . The internal region contains two universal PCR primer sites that are common to all MIPs as well as a probe-release site, which is usually a restriction site. If the identification of the captured genomic target is performed using array-based hybridization approaches, the internal region may optionally contain a probe-specific tag sequence that uniquely identifies the given probe as well as a tag-release site, which, similar to the probe-release site, is also a restriction site. Protocol Anneal probe to genomic target DNA Probes are added to the genomic DNA sample. After a denaturation followed by an annealing step, the target-complementary ends of the probe are hybridized to the target DNA. The probes then undergo circularization in this process. These probes, however, are designed such that a gap delimited by the hybridized ends of the probes remains over the target region. The size of the gap ranges from a single nucleotide for SNP genotyping to several hundred nucleotides for loci capture (e.g. exome capture). Gap filling The gap is filled by DNA polymerase using free nucleotides and the ends of the probe are ligated by ligase, resulting in a fully circularized probe. Remove non-reacted probes Since gap filling is not performed for non-reacted probes, they remain linear. Exonuclease treatment removes these non-reacted probes as well as any remaining linear DNA in the reaction. Probe release In some versions of the protocol, the probe-release site (commonly a restriction site) is cleaved by restriction enzymes such that the probe becomes linearized. In this linearized probe the universal PCR primer sequences are located at the 5’ and 3’ ends and the captured genomic target becomes part of the internal segment of the probe. Other protocols leave the probe as a circularized molecule. 
Captured target enrichment If the probe is linearized, traditional PCR amplification is performed to enrich the captured target using the universal primers of the probe. Otherwise, rolling circle amplification is performed for the circular probe. Captured target identification The captured target can be identified either via array-based hybridization approaches or by sequencing of the target. If array-based approach is used, the probe may optionally contain a probe-specific tag that uniquely identifies the probe as well as the genomic region targeted by it. The tags from each probe are released by cleaving the tag release site with restriction enzymes. These tags are then hybridized to the sequences that are placed on the array and are complementary to them. The captured target can also be identified by sequencing the probe, now also containing the target. Traditional Sanger sequencing or cheaper, more high-throughput technologies such as SOLiD, Illumina or Roche 454 can be used for this purpose. Multiplex analysis Although each probe examines one specific genomic locus, multiple probes can be combined into a single tube for multiplexed assay that simultaneously examines multiple loci. Currently, multiplexed MIP analysis can examine more than 55,000 loci in a single assay. Technique Development History Padlock Probe The design of the molecular inversion probes (MIP) originated from padlock probes, a molecular biology technique first reported by Nilsson et al. in 1994 . Similar to MIP, padlock probes are single stranded DNA molecules with two 20-nucleotide long segments complementary to the target connected by a 40-nucleotide long linker sequence. When the target complementary regions are hybridized to the DNA target, the padlock probes also become circularized. However, unlike MIP, padlock probes are designed such that the target complementary regions span the entire target region upon hybridization, leaving no gaps. Thus, padlock probes are only useful for detecting DNA molecules with known sequences. Nilsson et al. demonstrated the use of padlock probes to detect numerous DNA targets, including a synthetic oligonucleotide and a circular genomic clone. Padlock probes have high specificity towards their target and can distinguish target molecules that closely resemble one another. Nilsson et al. also demonstrated the use of padlock probes to differentiate between a normal and a mutant cystic fibrosis conductance receptor (CFCR) where the CFCR mutant had a 3bp deletion corresponding to one of the ends of the probe. Since ligation requires the ends of the probe to be immediately adjacent to one another when hybridized to the target, the 3bp deletion in the mutant prevented successful ligation. Padlock probes were also successfully used for in situ hybridization to detect alphoid repeats specific to chromosome 12 in a sample of chromosomes in metastasis state. Here, traditional, linear oligonucleotide probes failed to yield results. Thus, padlock probes possess sufficient specificity to detect single copy elements in the genome. Molecular Inversion Probe In order to perform SNP genotyping, Hardenbol et al. modified padlock probes such that when the probe is hybridized to the genomic target, there is a gap at the SNP position. Gap filling using a nucleotide that is complementary to the nucleotide at the SNP location determines the identity of the polymorphism. This design brings numerous benefits over the more traditional padlock probe technique. 
Using multiple padlock probes specific to a plausible SNP requires careful balancing of the concentration of these allele specific probes to ensure SNP counts at a given locus are properly normalized. In addition, with this design, bad probes affect all genotypes at a given locus equally. For instance, since MIP probes can assay multiple genotypes at a particular genomic locus, if the probe for a given locus does not work (e.g. fails to properly hybridize to the genomic target), none of the genotypes at this locus will be detected. In contrast, for padlock probes, one needs to design a distinct padlock probe to detect each plausible genotype a given locus (e.g. one padlock probe is needed for detecting "A" at a given SNP locus and another padlock probe is needed for detecting "T" at the locus). Thus, a bad padlock probe will only affect the detection of the specific genotype that the probe is designed to detect whereas a bad MIP probe will affect all genotypes at the locus. Using MIP, one avoids potential incorrect SNP calling since if the probe designed to assay a given locus does not work, no data is generated for this locus and no SNP calling is performed. In their procedure, Hardenbol et al. assayed more than 1000 SNP loci simultaneously in a single tube where the tube contained more than 1000 probes with distinct designs. The pool of probes was aliquoted into four tubes for four different reactions. In each reaction, a distinct nucleotide (A, T, C or G) was used for gap filling. Only when the nucleotide at the SNP locus was complementary to the applied nucleotide would the gap be closed by ligation and the probe be circularized. Identification of the captured SNPs was performed on genotyping arrays where each spot on the array contained sequences complementary to the locus-specific tags in the probes. Since the DNA array costs is a major contributor to the cost of this technique, the performance of four-chip-one-color detection was compared to two-chip-two color detection. The results were found to be similar in terms of SNP call rate and signal-to-noise ratio. In a recent report, this group successfully increased the level of multiplexing to simultaneously assay more than 10,000 SNP loci, using 12,000 distinct probes. The study examined SNP polymorphisms in 30 trio samples (each trio consisted of a mother, father and their child). Knowing the genotypes of the parents, the accuracy of the SNP genotypes predicted in the child was determined by examining whether a concordance existed between the expected Mendelian inheritance patterns and the predicted genotypes. Trio concordance rate has been found to be > 99.6%. In addition, a set of MIP-specific performance metrics was developed. This work set the framework for high-throughput SNP genotyping in the HapMap project. Connector Inversion Probe To capture longer genomic regions than a single nucleotide, Akhras et al. modified the design of MIP by extending the gap delimited by the hybridized probe ends and named the design Connector Inversion Probe (CIP). The gap corresponds to the genomic region of interest to be captured (e.g. exons). Gap filling reaction is achieved with DNA polymerase, using all four nucleotides. Identification of the captured regions can then be done by sequencing them using locus-specific primers that map to one of the target complementary ends of the probes. Akhras et al. also developed the multiplexing multiplex padlocks (MMP) barcode system in order to lower the costs of reagents. 
A single assay might involve DNA samples from multiple individuals and examine multiple genomic loci in each individual. A DNA barcode system that uniquely identifies each plausible combination of individual and genomic locus is represented as DNA tags that were inserted into the linker region of the probes. Thus, sequences from the captured regions would include the barcode, allowing the non-ambiguous determination of the individual and the genomic locus that the captured region belongs to. This group has also developed a software for designing locus-specific CIPs (CIP creator 1.0.1). Application Molecular Inversion Probe (MIP) is one of the techniques widely used to capture a small region of the genome for further examination. With the invention of the next generation sequencing technologies, the cost of sequencing whole genomes has decreased dramatically, however the cost is still too high for these sequencing machines to be used in practice in every laboratory. Instead, different genome partitioning techniques can be used to isolate smaller but highly specific regions of the genome for further analysis. MIP, for instance, can be used to capture targets for SNPgenotyping, copy number variation or allelic imbalance studies, to name a few. SNP Genotyping In SNP genotyping, the probes are separated into four reactions and a different type of nucleotide is added to each reaction. If the SNP at the target region is complementary to the added nucleotide, the ligation is successful and the probe becomes fully circularized. Since each probe hybridizes to exactly one SNP target in the genome, successfully circularized probes provide the nucleotide identities of the SNPs. The tag sequences from the four nucleotide-specific reactions are then hybridized to either four genotyping arrays or two, dual-colour arrays (one channel for each reaction). Analyzing which spots on the array are bound by the tags allows the determination of the SNP identities at the genomic loci represented by those tags. The SNPs targeted by MIP can then be used in areas of research such as quantitative trait loci (QTL) analysis or genome-wide association studies (GWAS) where the SNPs are used in either indirect linkage disequilibrium studies or directly screened for causative mutations. Copy Number Variation Detection Molecular inversion probe technique can also be used for copy number variation (CNV) detection. This dual role in SNP genotyping as well as CNV analysis of MIP is similar to the high-density SNP genotyping arrays which have recently been used for CNV detection and analysis as well. These techniques extract the allele-specific signal intensities from genotyping data and use that to generate CNV results. These techniques have higher precision and resolution than traditional techniques such as G-banded karyotypic analyses, fluorescence in situ hybridization (FISH) or array comparative genomic hybridization (aCGH). Current Research MIP has been used extensively in many areas of research; some of the examples of the use of this technique in recent literature are outlined below: Molecular inversion probe technique has been used in studying childhood brain tumors, the most common solid pediatric cancer and the leading cause of pediatric cancer mortality. Despite their high prevalence, little is known about the genetic events that contribute to the development and progression of pediatric gliomas. MIP has identified novel areas of copy number events in this cancer using minimal DNA. 
Identification of these events can in return lead to the understanding of the underlying mechanism of this disease. 45 pediatric leukemia samples were analyzed for gene copy aberrations using molecular inversion probe technology. The MIP analysis identified 69 regions of recurring copy number changes, of which 41 have not been identified with other DNA microarray platforms. Copy number gains and losses were validated in 98% of clinical karyotypes and 100% of fluorescence in situ hybridization studies available. In another study, the MIP was used to identify the association between the polymorphisms and haplotypes in the caspase-3, caspase-7, and caspase-8 genes and the risk for endometrial cancer. A recent study has demonstrated the success of MIP for copy number variation and genotyping studies in formalin-fixed paraffin embedded samples. These banked samples, usually with extensive follow-up information, underperform or suffer high failure rates compared to fresh frozen samples because of DNA degradation and cross-linking during the fixation and processing. The study, however, successfully applied MIP to obtain high quality copy number and genotyping data from formalin-fixed paraffin embedded samples. Molecular inversion probe technique has also been used in the field of pharmacogenomics. Genotyping of genes important in drug metabolism, excretion and transport using MIP has paved the way in understanding the patient-to-patient variability in responses to drugs. MIP Design and Optimization Probe Design Optimization Strategies To optimize the degree of multiplexing and the lengths of the captured regions, a number of factors should be considered when designing probes: The sequences of the probe that are complementary to the DNA target must be specific and map only to unique regions with reasonable sequence complexity in the genome . Genomic regions containing repeats should be treated with caution. For all probes used in a single assay, the annealing temperatures of the two target complementary ends of the probes should be similar such that hybridization of the two ends to their targets can be achieved at the same temperature. The GC content of the genomic targets should be similar and the targets lengths variability should be restricted such that all gaps can be filled under similar elongation timeframes. The lengths of the genomic targets cannot be too long (current successful applications worked with 100 to 200bp target lengths), otherwise steric effects may interfere with successful hybridization of the probes to their targets. The tags from each probe used for microarray-based captured region identification should have similar melting temperatures as well as maximal orthogonal base complexities. These ensure that all tags can be hybridized to the array under similar conditions and that cross-hybridizations are minimized, respectively. MIP Protocol Optimization Strategies A number of experimental conditions can be modified for optimization, these include: Hybridization and gap-fill time Probes, Ligase and DNA polymerase concentrations Enrichment of the captured target by either rolling circle amplification or linearizing the probes to perform multi-template PCR using the universal primers, common for all probes Captured target identification via either array-based hybridization approaches or direct sequencing of the target These factors are critical since in one study, proper optimization strategies increased target capture efficiency from 18 to 91 percent. 
Performance Metrics Turner et al. 2009 summarized two metrics that are commonly reported in MIP-based genomic capture experiments that identify the target by sequencing. Capture Uniformity: analogous to recall – the fraction of genomic targets that are captured with confidence. Specifically, the relative abundance of sequence reads that are mapped to each genomic target. Capture Specificity: analogous to precision – the fraction of sequence reads that actually map to the genomic targets of interest. These two metrics are directly affected by the quality of the batch of probes. To improve the results for low-quality probes, higher sequencing depths can be used. The amount of sequencing needed scales nearly exponentially with decreases in uniformity and specificity. Hardenbol et al. 2005 proposed a set of metrics that concern SNP genotyping using MIPs. Signal/noise ratio: Ratio of true genotype counts over background counts Probe conversion rate: Number of genomic SNP loci for which probes can be designed and successfully assayed. In other words, this metric concerns the fraction of probes that produce genotyping results. Call rate: For a given SNP locus, the number of DNA samples whose genotypes at this locus can be measured. In other words, the amount of supporting evidence for the genotype(s) assigned to the given SNP locus. Completeness: For the set of SNPs assayed, the total fraction of genotypes that are successfully obtained. Accuracy: For the set of SNPs assayed, the fraction of detected genotypes that are correct. This is commonly measured by the repeatability of the results. An inherent trade-off exists between probe conversion rate and accuracy. Removing probes that yielded incorrect genotypes increases the accuracy but decreases the probe conversion rate. In contrast, using a lenient probe acceptance threshold increases probe conversion rate but decreases the accuracy. Other Genomic Partitioning Techniques To reduce the costs from sequencing whole genomes, many methods that enrich specific genomic regions of interest have been proposed. Other Capture by Circularization Methods Gene selector method: An initial multiplex PCR step is performed to enrich the targets of interest. The PCR products are circularized upon hybridization to target-specific probes with sequences complementary to the two primers used in the PCR step. Capture by selective circularization method: The genomic DNA is digested into fragments with restriction enzymes. Using selector probes with flanking regions that are complementary to the target of interest, the digested DNA fragments are circularized upon hybridization to the selector probes. Performance Comparisons between Genomic Partitioning Techniques Each method demonstrates trade-offs between uniformity, capture specificity, cost, scalability and availability. In terms of capture specificity, Capture by Circularization methods demonstrate the best results. This is due to the fact that all methods in this class require two ends of the same DNA molecule (e.g. two ends of MIP probes) to simultaneously bind to a single cognate partner molecule (e.g. genomic target region) in the proper configuration for successful ligation. In contrast, Capture by Circularization methods demonstrate less uniformity compared to other methods. This is because the probe design for each distinct genomic target is unique and thus the performance between individual probes may vary.
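Returning to the two sequencing-based metrics defined under Performance Metrics above, both can be computed directly from per-target read counts. The sketch below is one plausible way to do so; the uniformity definition used here (fraction of targets reaching at least 20% of the mean on-target coverage) is an illustrative choice, since the text does not fix an exact formula.

```python
def capture_metrics(on_target_reads, total_reads, min_fraction_of_mean=0.2):
    """Compute capture specificity and a simple uniformity measure.

    on_target_reads: dict of target locus -> number of reads mapped to it.
    total_reads:     total number of sequence reads in the experiment.
    """
    on_target_total = sum(on_target_reads.values())
    # Specificity (precision-like): fraction of all reads that hit any target
    specificity = on_target_total / total_reads

    # Uniformity (recall-like): fraction of targets captured "with confidence",
    # here taken as reaching a fixed fraction of the mean on-target coverage
    mean_cov = on_target_total / len(on_target_reads)
    captured = sum(1 for c in on_target_reads.values()
                   if c >= min_fraction_of_mean * mean_cov)
    uniformity = captured / len(on_target_reads)
    return specificity, uniformity

# Hypothetical experiment: four targets, 1,000 reads in total
reads = {"exon1": 300, "exon2": 250, "exon3": 20, "exon4": 180}
spec, unif = capture_metrics(reads, total_reads=1_000)
print(f"specificity={spec:.2f}, uniformity={unif:.2f}")  # 0.75, 0.75
```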
Regarding scalability, high specificity of Capture by Circularization and Solution-based Capture methods make them the most appropriate for studies which involve large number of genomic targets and many samples. Array-based Capture techniques are appropriate for studying many genomic targets but with fewer samples due to limited resolution and specificity of microarrays. Multiplex PCR methods are most appropriate for small-scale studies due to it ease of use and availability of reagents. The costs associated with each technique are difficult to compare given the vast choices of designs and experimental conditions. However, for every technique, attaining a high multiplexing level where many loci are assayed simultaneously amortizes the costs. Advantages of MIP Unlike some of the other genotyping techniques, the need to PCR amplify the DNA sample prior to MIP application is eliminated. This is beneficial when examining a large number of target sequences simultaneously when cross-talk between primer pairs is likely to happen High specificity: High specificity is achieved by that i) Unlike other highly multiplexed genotyping techniques, MIP utilizes enzymatic steps (DNA polymerization and ligation) in solution to capture specific loci, which is then followed by an amplification step. Such a combination of enzymatic steps confers a high degree of specificity on the MIP assay ii) Exonuclease treatment removes non-reacted, linear probes iii) The tag sequences are selected in a way to increase specificity at hybridization and thus prevent cross-talk at the detection step iv) Target complementary sequences at both ends of the probe are physically limited to interact locally Built-in quality control of the signal to noise ratio: the MIP technique examines the possibility of all four bases for each SNP position. A homozygous SNP is expected to have a single signal and a heterozygous SNP to have two signals. Thus, the signal to noise ratio can be monitored using the background alleles and if a call has a suspicious signal, it can be discarded from the downstream analysis High levels of multiplexing (on the order of 104-105 probes in one assay) can be achieved Low amount of sample DNA (e.g. 0.2 ng/SNP) is needed since the MIP probes can be applied directly to genomic DNA instead of shotgun libraries High concordance: trio concordance rate is found to be > 99.6% Reproducibility: genotyping the same individual several times showed that the genotyped SNPs were concordant (99.9%) High dynamic range: in CNV detection studies, up to 60 copies of amplified regions can be detected in the genome Since MIP requires only 40 base-pairs of intact genomic DNA, its use in degraded samples, such as formaldehyde fixed paraffin embedded samples, may offer distinct advantages Simple infrastructure (only common bench-top reagents and tools are required) and simple design make this technique broadly applicable in many laboratories The choices of the platform for identifying the captured target are very flexible such that cost-efficiency may be improved. For instance, the captured targets can be directly sequenced, bypassing the need for sequencing library construction. Limitations of MIP Sensitivity and uniformity are relatively low compared to other genomic capture techniques since not all targets can be captured under the same experimental conditions for high-throughput runs that involve multiple probes. However, a recent study that used probes with longer linker regions improved uniformity. 
The plausible sizes of the target that can be captured are limited since i) Large gap region leads to steric constraints for the intramolecular circularization of the probe and ii) Large gap requires longer probes be synthesized, increasing the costs. The degree of multiplexing is constrained by the multiplexing capability of the method chosen for target identification. If array-based detection methods are used, the number of targets that can be assayed is limited by the available spots on the array. Since a distinct probe is needed to capture each region, it is costly to assay many regions. However, with multiplexity, the costs are amortized. For instance, at a multiplexity level of 1000, the costs become $0.01 per probe for each assay. MIP reaction conditions may require optimization, which is particularly important for assaying heterozygotic sites. See also International HapMap Project Exome Exon trapping Polymerase chain reaction Rolling circle replication DNA microarray DNA sequencing References External links Molecular Inversion Probe Protocol at National Center for Biotechnology Information (NCBI) Connector Inversion Probe (CIP) Creator Software Genomics techniques
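As a simple illustration of the four-reaction gap-fill genotyping scheme described in the Technique Development History section of this entry, the sketch below calls a genotype at one SNP locus by comparing the signals from the A, C, G and T gap-fill reactions against the background. The acceptance threshold and the example counts are assumptions for demonstration, not values from the cited studies.

```python
def call_genotype(counts, min_ratio=5.0):
    """Call a genotype at one SNP locus from the four gap-fill reactions.

    counts:    dict of nucleotide -> signal (read count or array intensity)
               from the A, C, G and T gap-fill reactions.
    min_ratio: an allele is accepted if its signal exceeds min_ratio times the
               background (mean of the two weakest channels); 5.0 is illustrative.
    """
    ordered = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    background = (ordered[2][1] + ordered[3][1]) / 2 or 1.0  # avoid division by zero
    alleles = [base for base, signal in ordered[:2] if signal / background >= min_ratio]
    if not alleles:
        return "no call"
    return alleles[0] * 2 if len(alleles) == 1 else "".join(sorted(alleles))

# Hypothetical signals for two loci
print(call_genotype({"A": 880, "C": 12, "G": 790, "T": 9}))   # heterozygous -> "AG"
print(call_genotype({"A": 950, "C": 15, "G": 22, "T": 11}))   # homozygous  -> "AA"
```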
Molecular Inversion Probe
[ "Chemistry", "Biology" ]
5,256
[ "Genetics techniques", "Genomics techniques", "Molecular biology techniques" ]
31,741,941
https://en.wikipedia.org/wiki/Epistemic%20feedback
The term "epistemic feedback" is a form of feedback which refers to an interplay between what is being observed (or measured) and the result of the observation. The concept can apply to a process to obtain information, where the process itself changes the information when being obtained. For example, instead of quietly asking customers for their opinions about food in a restaurant, making an announcement about food quality, as being tested in a survey, could cause cooks to focus on producing high-quality results. The concept can also apply to changing the method of observation, rather than affecting the data. For example, if after asking several customers about food, they noted the food as generally good or fair, then the questions might be changed to ask more specifically which food items were most/least liked. Hence, the interplay can alter either the observations, or the method of observation, or both. Viewing negative or positive effects The effects of epistemic feedback can be viewed as either negative or positive depending on the goal of the observations. When trying to get a secret survey of results, epistemic feedback can be seen as a negative factor which distorts the original data. However if the goal is to improve quality, then epistemic feedback could be a positive factor to periodically report areas which need improvement. The risk comes when the feedback temporarily slants the evaluation of quality so that long-term performance is hindered by distortion in the way results were measured. Methods to compensate for feedback Some methods to compensate for epistemic feedback are to use a "double-blind study" or to conduct secret surveys to quietly check the results. Also, "controlled experiments" can be used, where the outcome is adjusted for the placebo effect of reactions to unchanged parameters. Additionally, longitudinal studies, re-assessing the results over a long period of time, can reduce the impact of short-term feedback on the observed results. See also Reactivity (psychology) Self-determination theory Motivation Experimenter effect Observer-expectancy effect Reflexivity (social theory) Pygmalion effect Placebo effect Novelty effect References Epistemics Measurement Control theory Electronic feedback
Epistemic feedback
[ "Physics", "Mathematics" ]
433
[ "Physical quantities", "Applied mathematics", "Control theory", "Quantity", "Measurement", "Size", "Dynamical systems" ]
31,745,436
https://en.wikipedia.org/wiki/Iterated%20filtering
Iterated filtering algorithms are a tool for maximum likelihood inference on partially observed dynamical systems. Stochastic perturbations to the unknown parameters are used to explore the parameter space. Applying sequential Monte Carlo (the particle filter) to this extended model results in the selection of the parameter values that are more consistent with the data. Appropriately constructed procedures, iterating with successively diminished perturbations, converge to the maximum likelihood estimate. Iterated filtering methods have so far been used most extensively to study infectious disease transmission dynamics. Case studies include cholera, Ebola virus, influenza, malaria, HIV, pertussis, poliovirus and measles. Other areas which have been proposed to be suitable for these methods include ecological dynamics and finance. The perturbations to the parameter space play several different roles. Firstly, they smooth out the likelihood surface, enabling the algorithm to overcome small-scale features of the likelihood during early stages of the global search. Secondly, Monte Carlo variation allows the search to escape from local minima. Thirdly, the iterated filtering update uses the perturbed parameter values to construct an approximation to the derivative of the log likelihood even though this quantity is not typically available in closed form. Fourthly, the parameter perturbations help to overcome numerical difficulties that can arise during sequential Monte Carlo. Overview The data are a time series collected at times . The dynamic system is modeled by a Markov process which is generated by a function in the sense that where is a vector of unknown parameters and is some random quantity that is drawn independently each time is evaluated. An initial condition at some time is specified by an initialization function, . A measurement density completes the specification of a partially observed Markov process. We present a basic iterated filtering algorithm (IF1) followed by an iterated filtering algorithm implementing an iterated, perturbed Bayes map (IF2). Procedure: Iterated filtering (IF1) Input: A partially observed Markov model specified as above; Monte Carlo sample size ; number of iterations ; cooling parameters and ; covariance matrix ; initial parameter vector for to draw for set for set for to draw for set for set for draw such that set and for set to the sample mean of , where the vector has components set to the sample variance of set Output: Maximum likelihood estimate Variations For IF1, parameters which enter the model only in the specification of the initial condition, , warrant some special algorithmic attention since information about them in the data may be concentrated in a small part of the time series. Theoretically, any distribution with the requisite mean and variance could be used in place of the normal distribution. It is standard to use the normal distribution and to reparameterise to remove constraints on the possible values of the parameters. Modifications to the IF1 algorithm have been proposed to give superior asymptotic performance. 
Procedure: Iterated filtering (IF2) Input: A partially observed Markov model specified as above; Monte Carlo sample size ; number of iterations ; cooling parameter ; covariance matrix ; initial parameter vectors for to set for set for for to draw for set for set for draw such that set and for set for Output: Parameter vectors approximating the maximum likelihood estimate, Software "pomp: statistical inference for observed Markov processes" : R package. References Dynamical systems Monte Carlo methods Nonlinear filters
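A minimal sketch of the perturbed-parameter idea behind these procedures is given below for a toy partially observed model. It keeps only the essential steps (perturb the parameters, weight particles by the measurement density, resample, cool the perturbations) and omits many details of the published IF1/IF2 algorithms; the toy state and measurement models and all numerical settings are assumptions chosen for illustration. For real analyses the R package pomp cited above would normally be used.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Toy partially observed Markov model (assumed for illustration) ---
def step(x, theta):              # X_n = theta + 0.8*(X_{n-1} - theta) + process noise
    return theta + 0.8 * (x - theta) + rng.normal(0.0, 0.5, size=x.shape)

def meas_density(y, x):          # Y_n ~ Normal(X_n, 1)
    return np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2 * np.pi)

# Simulate data with true theta = 2.0
true_theta, T = 2.0, 100
x, ys = 0.0, []
for _ in range(T):
    x = true_theta + 0.8 * (x - true_theta) + rng.normal(0.0, 0.5)
    ys.append(x + rng.normal(0.0, 1.0))
ys = np.array(ys)

# --- IF2-style iterated filtering with cooled parameter perturbations ---
J, M, sigma0, cooling = 500, 30, 0.2, 0.9
thetas = rng.uniform(-5, 5, size=J)            # initial parameter swarm
for m in range(M):
    sigma = sigma0 * cooling ** m              # perturbation size shrinks each iteration
    xs = np.zeros(J)                           # particle states, reinitialized each pass
    for n in range(T):
        thetas = thetas + rng.normal(0.0, sigma, size=J)   # perturb parameters
        xs = step(xs, thetas)                              # propagate states
        w = meas_density(ys[n], xs) + 1e-300               # weight by the data point
        idx = rng.choice(J, size=J, p=w / w.sum())         # resample particles
        xs, thetas = xs[idx], thetas[idx]                  # ...and their parameters

print(f"estimated theta ~ {thetas.mean():.2f} (true value {true_theta})")
```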
Iterated filtering
[ "Physics", "Mathematics" ]
685
[ "Monte Carlo methods", "Mechanics", "Dynamical systems", "Computational physics" ]
31,746,215
https://en.wikipedia.org/wiki/Iwasawa%20algebra
In mathematics, the Iwasawa algebra Λ(G) of a profinite group G is a variation of the group ring of G with p-adic coefficients that take the topology of G into account. More precisely, Λ(G) is the inverse limit of the group rings Zp[G/H] as H runs through the open normal subgroups of G. Commutative Iwasawa algebras were introduced by Kenkichi Iwasawa in his study of Zp extensions in Iwasawa theory, and non-commutative Iwasawa algebras of compact p-adic analytic groups were introduced later. Iwasawa algebra of the p-adic integers In the special case when the profinite group G is isomorphic to the additive group of the ring of p-adic integers Zp, the Iwasawa algebra Λ(G) is isomorphic to the ring of formal power series Zp[[T]] in one variable over Zp. The isomorphism is given by identifying 1 + T with a topological generator of G. This ring is a 2-dimensional complete Noetherian regular local ring, and in particular a unique factorization domain. It follows from the Weierstrass preparation theorem for formal power series over a complete local ring that the prime ideals of this ring are as follows: Height 0: the zero ideal. Height 1: the ideal (p), and the ideals generated by irreducible distinguished polynomials (polynomials with leading coefficient 1 and all other coefficients divisible by p). Height 2: the maximal ideal (p,T). Finitely generated modules The rank of a finitely generated module is the number of times the module Zp[[T]] occurs in it. This is well-defined and is additive for short exact sequences of finitely-generated modules. The rank of a finitely generated module is zero if and only if the module is a torsion module, which happens if and only if the support has dimension at most 1. Many of the modules over this algebra that occur in Iwasawa theory are finitely generated torsion modules. The structure of such modules can be described as follows. A quasi-isomorphism of modules is a homomorphism whose kernel and cokernel are both finite groups, in other words modules with support either empty or the height 2 prime ideal. For any finitely generated torsion module there is a quasi-isomorphism to a finite sum of modules of the form Zp[[T]]/(f^n) where f is a generator of a height 1 prime ideal. Moreover, the number of times any module Zp[[T]]/(f^n) occurs in the module is well defined and independent of the composition series. The torsion module therefore has a characteristic power series, a formal power series given by the product of the power series f^n, that is uniquely defined up to multiplication by a unit. The ideal generated by the characteristic power series is called the characteristic ideal of the Iwasawa module. More generally, any generator of the characteristic ideal is called a characteristic power series. The μ-invariant of a finitely-generated torsion module is the number of times the module Zp[[T]]/(p) occurs in it. This invariant is additive on short exact sequences of finitely generated torsion modules (though it is not additive on short exact sequences of finitely generated modules). It vanishes if and only if the finitely generated torsion module is finitely generated as a module over the subring Zp. The λ-invariant is the sum of the degrees of the distinguished polynomials that occur. 
In other words, if the module is pseudo-isomorphic to a direct sum of modules Zp[[T]]/(p^ai) and Zp[[T]]/(fj^nj), where the fj are distinguished polynomials, then μ = Σ ai and λ = Σ nj deg(fj). In terms of the characteristic power series, the μ-invariant is the minimum of the (p-adic) valuations of the coefficients and the λ-invariant is the power of T at which that minimum first occurs. If the rank, the μ-invariant, and the λ-invariant of a finitely generated module all vanish, the module is finite (and conversely); in other words its underlying abelian group is a finite abelian p-group. These are the finitely generated modules whose support has dimension at most 0. Such modules are Artinian and have a well defined length, which is finite and additive on short exact sequences. Iwasawa's theorem Write νn for the element 1 + γ + γ^2 + ... + γ^(p^n − 1), where γ is a topological generator of Γ. Iwasawa showed that if X is a finitely generated torsion module over the Iwasawa algebra and X/νnX has order p^(en), then en = μp^n + λn + c for n sufficiently large, where μ, λ, and c depend only on X and not on n. Iwasawa's original argument was ad hoc, and Serre pointed out that Iwasawa's result could be deduced from standard results about the structure of modules over integrally closed Noetherian rings such as the Iwasawa algebra. In particular this applies to the case when p^(en) is the largest power of p dividing the order of the ideal class group of the cyclotomic field generated by the roots of unity of order p^(n+1). The Ferrero–Washington theorem states that μ=0 in this case. Higher rank and non-commutative Iwasawa algebras More general Iwasawa algebras are of the form Λ(G), where G is a compact p-adic Lie group. The case above corresponds to G = Zp. A classification of modules over Λ(G) up to pseudo-isomorphism is possible in case G is commutative. For non-commutative G, Λ(G)-modules are classified up to so-called pseudo-null modules. References Number theory
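As a small illustration of the characterization just given, the following Python sketch reads the μ- and λ-invariants off the coefficients of a (truncated) characteristic power series: μ is the minimum p-adic valuation of the coefficients and λ is the first power of T at which that minimum occurs. The example series is an illustrative assumption.

def p_adic_valuation(n, p):
    """Largest k with p**k dividing n (0 is treated as having infinite valuation)."""
    if n == 0:
        return float("inf")
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def mu_lambda(coeffs, p):
    """coeffs[i] is the coefficient of T**i in a truncated characteristic power series."""
    vals = [p_adic_valuation(c, p) for c in coeffs]
    mu = min(vals)                # minimum p-adic valuation of the coefficients
    lam = vals.index(mu)          # first power of T attaining that minimum
    return mu, lam

# Example (assumed): f(T) = 9 + 3*T + 6*T**2 + T**3 over Z_3  ->  mu = 0, lambda = 3
print(mu_lambda([9, 3, 6, 1], p=3))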
Iwasawa algebra
[ "Mathematics" ]
1,177
[ "Discrete mathematics", "Number theory" ]
31,746,484
https://en.wikipedia.org/wiki/Center%20for%20Nanotechnology%20in%20Society
The Center for Nanotechnology in Society at the University of California at Santa Barbara (CNS-UCSB) is funded by the National Science Foundation and "serves as a national research and education center, a network hub among researchers and educators concerned with societal issues concerning nanotechnologies, and a resource base for studying these issues in the US and abroad." The CNS-UCSB began its operations in January 2006. Nanotechnology (sometimes shortened to nanotech or nano) is the study of manipulating matter on an atomic and molecular scale. Generally, nanotechnology deals with structures sized between 1 and 100 nanometre in at least one dimension, and involves developing materials or devices possessing at least one dimension within that size. Quantum mechanical effects are very important at this scale. CNS-UCSB looks at the societal implications of nano, including governance, economics, technological development, potential environmental and health risks (risk perception), and "social risks" such as distribution of benefits. History The Center received its first five years of funding from the U.S. National Science Foundation. The Center aims to disseminate both its technological and social scientific findings on nanoscale science to policymakers and those outside the nano field, and to facilitate broader public participation in the nanotechnological enterprise. It does this through public engagement between academic researchers with regulators, educators, industrial scientists, and policy makers, as well as community-based organizations and NGOs. The Center’s education and outreach programs include students and people outside the nanotech field. Focus The Center has three main areas of research: the historical context of the nano-enterprise; innovation processes and global diffusion of nanotech; risk perception and the public sphere. Partnerships CNS–UCSB researchers collaborate with the California NanoSystems Institute, UC Santa Cruz, UC Berkeley, the Science History Institute (formerly the Chemical Heritage Foundation), Duke University, Rice University, SUNY Levin Institute, and SUNY New Paltz in the US, and Cardiff University, UK, University of British Columbia, Canada, University of Edinburgh, UK, University of East Anglia, UK, as well as institutes and centers in China and East Asia. References External links Center for Nanotechnology in Society CNS Nanoscience and Nanosociety: Risk Innovation Global Energy History Innovation Group: Center for Nanotechnology in Society Technology assessment organisations Nanotechnology institutions Research institutes in California Educational organizations established in 2006
Center for Nanotechnology in Society
[ "Materials_science", "Technology" ]
494
[ "Nanotechnology", "Nanotechnology institutions", "Technology assessment organisations" ]
31,747,442
https://en.wikipedia.org/wiki/Von%20Babo%27s%20law
Von Babo's law (sometimes styled Babo's law) is an experimentally determined scientific law formulated by German chemist Lambert Heinrich von Babo in 1857. It states that the vapor pressure of a solution decreases as the concentration of the solute increases. The law is related to other laws concerning the vapor pressure of solutions, such as Henry's law and Raoult's law. References See also Henry's law Raoult's law Empirical laws Eponymous laws of physics Solutions
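As a numerical illustration of the law's content (lower vapor pressure at higher solute concentration), the following Python sketch uses the closely related Raoult's law for an ideal solution; the pure-solvent vapor pressure used for water at 25 °C is an approximate, assumed value.

p_pure = 3.17   # vapor pressure of pure water at 25 C, kPa (approximate, assumed)

for x_solute in (0.00, 0.05, 0.10, 0.20):
    p = (1.0 - x_solute) * p_pure          # Raoult's law for the solvent
    print(f"solute mole fraction {x_solute:.2f}: vapor pressure {p:.2f} kPa")
# The relative lowering (p_pure - p) / p_pure equals the solute mole fraction,
# i.e. the vapor pressure falls as the solute concentration rises.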
Von Babo's law
[ "Physics", "Chemistry" ]
97
[ "Thermodynamics stubs", "Homogeneous chemical mixtures", "Thermodynamics", "Solutions", "Physical chemistry stubs" ]
31,747,782
https://en.wikipedia.org/wiki/6-simplex%20honeycomb
In six-dimensional Euclidean geometry, the 6-simplex honeycomb is a space-filling tessellation (or honeycomb). The tessellation fills space by 6-simplex, rectified 6-simplex, and birectified 6-simplex facets. These facet types occur in proportions of 1:1:1 respectively in the whole honeycomb. A6 lattice This vertex arrangement is called the A6 lattice or 6-simplex lattice. The 42 vertices of the expanded 6-simplex vertex figure represent the 42 roots of the Coxeter group. It is the 6-dimensional case of a simplectic honeycomb. Around each vertex figure are 126 facets: 7+7 6-simplex, 21+21 rectified 6-simplex, 35+35 birectified 6-simplex, with the count distribution from the 8th row of Pascal's triangle. The A6* lattice (also called A6^7) is the union of seven A6 lattices, and has the vertex arrangement of the dual to the omnitruncated 6-simplex honeycomb, and therefore the Voronoi cell of this lattice is the omnitruncated 6-simplex. Related polytopes and honeycombs Projection by folding The 6-simplex honeycomb can be projected into the 3-dimensional cubic honeycomb by a geometric folding operation that maps two pairs of mirrors into each other, sharing the same vertex arrangement: See also Regular and uniform honeycombs in 6-space: 6-cubic honeycomb 6-demicubic honeycomb Truncated 6-simplex honeycomb Omnitruncated 6-simplex honeycomb 222 honeycomb Notes References Norman Johnson Uniform Polytopes, Manuscript (1991) Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10] (1.9 Uniform space-fillings) (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] Honeycombs (geometry) 7-polytopes
6-simplex honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
520
[ "Tessellation", "Crystallography", "Honeycombs (geometry)", "Symmetry" ]
3,968,707
https://en.wikipedia.org/wiki/Exner%20equation
The Exner equation describes conservation of mass between sediment in the bed of a channel and sediment that is being transported. It states that bed elevation increases (the bed aggrades) proportionally to the amount of sediment that drops out of transport, and conversely decreases (the bed degrades) proportionally to the amount of sediment that becomes entrained by the flow. It was developed by the Austrian meteorologist and sedimentologist Felix Maria Exner, from whom it derives its name. It is typically applied to sediment in a fluvial system such as a river. The Exner equation states that the change in bed elevation, η, over time, t, is equal to one over the grain packing density, ε0, times the negative divergence of sediment flux, qs: ∂η/∂t = −(1/ε0) ∇·qs. Note that ε0 can also be expressed as (1 − λp), where λp equals the bed porosity. Good values of ε0 for natural systems range from 0.45 to 0.75. A typical value for spherical grains is 0.64, as given by random close packing. An upper bound for close-packed spherical grains is 0.74048 (see sphere packing for more details); this degree of packing is extremely improbable in natural systems, making random close packing the more realistic upper bound on grain packing density. Often, for reasons of computational convenience and/or lack of data, the Exner equation is used in its one-dimensional form. This is generally done with respect to the downstream direction x, as one is typically interested in the downstream distribution of erosion and deposition through a river reach: ∂η/∂t = −(1/ε0) ∂qs/∂x, where qs is scalar sediment flux in the downstream direction. References Geomorphology Sedimentology Partial differential equations Conservation equations
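A minimal one-dimensional sketch of the equation in Python/NumPy is shown below; the grid, time step and prescribed downstream sediment-flux profile are illustrative assumptions, chosen so that flux decreases downstream and the bed aggrades.

import numpy as np

nx, dx, dt, nsteps = 100, 1.0, 0.1, 500
eps0 = 0.64                                  # grain packing density (1 - porosity)
eta = np.zeros(nx)                           # bed elevation
x = np.arange(nx) * dx

# Assumed downstream sediment flux q_s(x) that decreases downstream, so sediment
# drops out of transport and the bed aggrades; the flux is held fixed for simplicity.
qs = 1.0e-2 * np.exp(-x / 30.0)

for _ in range(nsteps):
    dqs_dx = np.gradient(qs, dx)             # 1-D divergence of the sediment flux
    eta += -dt / eps0 * dqs_dx               # Exner: d(eta)/dt = -(1/eps0) * d(q_s)/dx

print("max aggradation:", eta.max())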
Exner equation
[ "Physics", "Mathematics" ]
333
[ "Conservation laws", "Mathematical objects", "Equations", "Conservation equations", "Symmetry", "Physics theorems" ]
3,969,818
https://en.wikipedia.org/wiki/Vaccination%20Act
The UK Vaccination Acts of 1840, 1853, 1867 and 1898 were a series of legislative Acts passed by the Parliament of the United Kingdom regarding the vaccination policy of the country. Provisions The 1840 act The Vaccination Act 1840 (3 & 4 Vict. c. 29): Made variolation illegal. Provided optional vaccination free of charge. In general, the disadvantages of variolation are the same as those of vaccination, but added to them is the general agreement that variolation was always more dangerous than vaccination. The 1853 act By the Vaccination Act 1853 (16 & 17 Vict. c. 100) it was required: That every child, whose health permits, shall be vaccinated within three, or in case of orphanage within four months of birth, by the public vaccinator of the district, or by some other medical practitioner. That notice of this requirement, and information as to the local arrangements for public vaccination, shall, whenever a birth is registered, be given by the registrar of births to the parents or guardians of the child. That every medical practitioner who, whether in public or private practice, successfully vaccinates a child shall send to the local registrar of births a certificate that he has done so; and the registrar shall keep a minute of all the notices given, and an account of all the certificates thus received. That parents or guardians who, without sufficient reason, after having duly received the registrar's notice of the requirement of Vaccination, either omit to have a child duly vaccinated, or, this being done, omit to have it inspected as to the results of vaccination, shall be liable to a penalty of £1; and all penalties shall be recoverable under the Summary Jurisdiction Act 1848 and shall be paid toward the local poor-rate. The 1867 act The Vaccination Act 1867 (30 & 31 Vict. c. 84) consolidated and updated the existing laws relating to vaccination, and was repealed by the National Health Service Act 1946. The poor-law guardians were to control vaccination districts formed out of the parishes, and pay vaccinators from 1s to 3s per child vaccinated in the district (the amount paid varied with how far they had to travel). Within seven days of the birth of a child being registered, the registrar was to deliver a notice of vaccination; if the child was not presented to be vaccinated within three months, or brought for inspection afterwards, the parents or guardians were liable to a summary conviction and fine of 20s. The Act also provided that any person who produced or attempted to inoculate another with smallpox could be imprisoned for a month. The 1871 act In 1871 another act, the Vaccination Act 1871 (34 & 35 Vict. c. 98) was passed appointing a Vaccination Officer, also authorising a defendant to appear in a court of law by any member of his family, or any other person authorised by him. This act also confirmed the principle of compulsion, which evidently sparked hostility and opposition to the practice. The 1874 act The Vaccination Act 1874 (37 & 38 Vict. c. 75) clarified the role of the Local Government Board in making regulations for guardians to implement the 1871 act. The 1889 royal commission A royal commission was established in 1889, which issued six reports between 1892 and 1896. Its recommendations, including the abolition of cumulative penalties and the use of safer vaccine, were incorporated into the Vaccination Act 1898. 
The 1898 and 1907 acts In 1898 a new vaccination law was passed, in some respects modifying, but not superseding, previous acts, giving conditional exemption of conscientious objectors, (and substituting calf lymph for humanised lymph). It removed cumulative penalties and introduced a conscience clause, allowing parents who did not believe vaccination was efficacious or safe to obtain a certificate of exemption. The Vaccination Act 1898 (61 & 62 Vict. c. 49) purported to give liberty of non-vaccination, but this liberty was not really obtained. Parents applying for a certificate of exemption had to satisfy two magistrates, or one stipendiary, of their conscientious objections. Some stipendiaries, and many of the magistrates, refused to be satisfied, and imposed delays. Unless the exemption was obtained before the child was four months old, it was too late. The consequence was that in the year 1906, only about 40,000 exemptions were obtained in England and Wales. In the year 1907 the Government recognised that the magistrates had practically declined to carry out the law of 1898, and, consequently, a new law, the Vaccination Act 1907 (7 Edw. 7. c. 31), was passed. Under this law the parent escaped penalties for the non-vaccination of his child if within four months from the birth he made a statutory declaration that he confidently believed that vaccination would be prejudicial to the health of the child, and within seven days thereafter delivered, or sent by post, the declaration to the Vaccination Officer of the district. References Further reading Acts of the Parliament of the United Kingdom Medical controversies in the United Kingdom Smallpox vaccines Health law in the United Kingdom Vaccine controversies Vaccination law
Vaccination Act
[ "Chemistry", "Biology" ]
1,099
[ "Biotechnology law", "Vaccination law", "Drug safety", "Vaccine controversies", "Vaccination" ]
3,973,645
https://en.wikipedia.org/wiki/Taylor%20expansions%20for%20the%20moments%20of%20functions%20of%20random%20variables
In probability theory, it is possible to approximate the moments of a function f of a random variable X using Taylor expansions, provided that f is sufficiently differentiable and that the moments of X are finite. A simulation-based alternative to this approximation is the application of Monte Carlo simulations. First moment Given and , the mean and the variance of , respectively, a Taylor expansion of the expected value of can be found via Since the second term vanishes. Also, is . Therefore, . It is possible to generalize this to functions of more than one variable using multivariate Taylor expansions. For example, Second moment Similarly, The above is obtained using a second order approximation, following the method used in estimating the first moment. It will be a poor approximation in cases where is highly non-linear. This is a special case of the delta method. Indeed, we take . With , we get . The variance is then computed using the formula . An example is, The second order approximation, when X follows a normal distribution, is: First product moment To find a second-order approximation for the covariance of functions of two random variables (with the same function applied to both), one can proceed as follows. First, note that . Since a second-order expansion for has already been derived above, it only remains to find . Treating as a two-variable function, the second-order Taylor expansion is as follows: Taking expectation of the above and simplifying—making use of the identities and —leads to . Hence, Random vectors If X is a random vector, the approximations for the mean and variance of are given by Here and denote the gradient and the Hessian matrix respectively, and is the covariance matrix of X. See also Propagation of uncertainty WKB approximation Delta method Notes Further reading Statistical approximations Algebra of random variables Moment (mathematics)
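The approximations can be checked numerically against the Monte Carlo alternative mentioned above. The Python sketch below uses the standard second-order result E[f(X)] ≈ f(μ) + f''(μ)σ²/2 and the first-order (delta-method) result var[f(X)] ≈ f'(μ)²σ²; the choice f = exp with a normal X is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.2
x = rng.normal(mu, sigma, 1_000_000)

f, fp, fpp = np.exp, np.exp, np.exp          # f = exp has f' = f'' = exp

mc_mean, mc_var = f(x).mean(), f(x).var()
taylor_mean = f(mu) + 0.5 * fpp(mu) * sigma**2   # second-order approximation of the mean
taylor_var = fp(mu) ** 2 * sigma**2              # first-order (delta-method) approximation of the variance

print(f"mean: Monte Carlo {mc_mean:.4f}  vs  Taylor {taylor_mean:.4f}")
print(f"var : Monte Carlo {mc_var:.5f}  vs  Taylor {taylor_var:.5f}")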
Taylor expansions for the moments of functions of random variables
[ "Physics", "Mathematics" ]
380
[ "Mathematical analysis", "Moments (mathematics)", "Physical quantities", "Mathematical relations", "Statistical approximations", "Approximations", "Moment (physics)" ]
3,973,930
https://en.wikipedia.org/wiki/Electrophoretic%20mobility%20shift%20assay
An electrophoretic mobility shift assay (EMSA) or mobility shift electrophoresis, also referred to as a gel shift assay, gel mobility shift assay, band shift assay, or gel retardation assay, is a common affinity electrophoresis technique used to study protein–DNA or protein–RNA interactions. This procedure can determine if a protein or mixture of proteins is capable of binding to a given DNA or RNA sequence, and can sometimes indicate if more than one protein molecule is involved in the binding complex. Gel shift assays are often performed in vitro concurrently with DNase footprinting, primer extension, and promoter-probe experiments when studying transcription initiation, DNA replication, DNA repair or RNA processing and maturation, as well as pre-mRNA splicing. Although precursors can be found in earlier literature, most current assays are based on methods described by Garner and Revzin and Fried and Crothers. Principle A mobility shift assay is electrophoretic separation of a protein–DNA or protein–RNA mixture on a polyacrylamide or agarose gel for a short period (about 1.5-2 hr for a 15- to 20-cm gel). The speed at which different molecules (and combinations thereof) move through the gel is determined by their size and charge, and to a lesser extent, their shape (see gel electrophoresis). The control lane (DNA probe without protein present) will contain a single band corresponding to the unbound DNA or RNA fragment. However, assuming that the protein is capable of binding to the fragment, the lane with a protein that binds present will contain another band that represents the larger, less mobile complex of nucleic acid probe bound to protein which is 'shifted' up on the gel (since it has moved more slowly). Under the correct experimental conditions, the interaction between the DNA (or RNA) and protein is stabilized and the ratio of bound to unbound nucleic acid on the gel reflects the fraction of free and bound probe molecules as the binding reaction enters the gel. This stability is in part due to a "caging effect", in that the protein, surrounded by the gel matrix, is unable to diffuse away from the probe before they recombine. If the starting concentrations of protein and probe are known, and if the stoichiometry of the complex is known, the apparent affinity of the protein for the nucleic acid sequence may be determined. Unless the complex is very long lived under gel conditions, or dissociation during electrophoresis is taken into account, the number derived is an apparent Kd. If the protein concentration is not known but the complex stoichiometry is, the protein concentration can be determined by increasing the concentration of DNA probe until further increments do not increase the fraction of protein bound. By comparison with a set of standard dilutions of free probe run on the same gel, the number of moles of protein can be calculated. Variants and additions An antibody that recognizes the protein can be added to this mixture to create an even larger complex with a greater shift. This method is referred to as a supershift assay, and is used to unambiguously identify a protein present in the protein – nucleic acid complex. Often, an extra lane is run with a competitor oligonucleotide to determine the most favorable binding sequence for the binding protein. The use of different oligonucleotides of defined sequence allows the identification of the precise binding site by competition (not shown in diagram). 
Variants of the competition assay are useful for measuring the specificity of binding and for measurement of association and dissociation kinetics. Thus, EMSA might also be used as part of a SELEX experiment to select for oligonucleotides that do actually bind a given protein. Once DNA-protein binding is determined in vitro, a number of algorithms can narrow the search for identification of the transcription factor. Consensus sequence oligonucleotides for the transcription factor of interest will be able to compete for the binding, eliminating the shifted band, and must be confirmed by supershift. If the predicted consensus sequence fails to compete for binding, identification of the transcription factor may be aided by Multiplexed Competitor EMSA (MC-EMSA), whereby large sets of consensus sequences are multiplexed in each reaction, and where one set competes for binding, the individual consensus sequences from this set are run in a further reaction. For visualization purposes, the nucleic acid fragment is usually labelled with a radioactive, fluorescent or biotin label. Standard ethidium bromide staining is less sensitive than these methods and can lack the sensitivity to detect the nucleic acid if small amounts of nucleic acid or single-stranded nucleic acid(s) are used in these experiments. When using a biotin label, streptavidin conjugated to an enzyme such as horseradish peroxidase is used to detect the DNA fragment. While isotopic DNA labeling has little or no effect on protein binding affinity, use of non-isotopic labels including fluorophores or biotin can alter the affinity and/or stoichiometry of the protein interaction of interest. Competition between fluorophore- or biotin-labeled probe and unlabeled DNA of the same sequence can be used to determine whether the label alters binding affinity or stoichiometry. References External links Chemiluminescent Gel Shift Protocol Genetics techniques Molecular genetics Molecular biology Protein methods Proteomics Analytical chemistry Laboratory techniques Electrophoresis Biological techniques and tools
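As a sketch of how an apparent Kd can be extracted from the bound fraction measured on a gel (see the Principle section above), the following Python example fits a simple binding isotherm, bound fraction = [P]/(Kd + [P]), which assumes the probe concentration is well below Kd; the titration points and band intensities are made-up illustrative data.

import numpy as np
from scipy.optimize import curve_fit

protein_nM = np.array([0.0, 5.0, 10.0, 20.0, 50.0, 100.0])      # titrated protein
frac_bound = np.array([0.00, 0.20, 0.33, 0.48, 0.70, 0.82])     # shifted band / total lane signal

def isotherm(p, kd):
    return p / (kd + p)

(kd_fit,), _ = curve_fit(isotherm, protein_nM, frac_bound, p0=[10.0])
print(f"apparent Kd ~ {kd_fit:.1f} nM")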
Electrophoretic mobility shift assay
[ "Chemistry", "Engineering", "Biology" ]
1,155
[ "Biochemistry methods", "Genetics techniques", "Instrumental analysis", "Protein methods", "Protein biochemistry", "Genetic engineering", "Biochemical separation processes", "Molecular biology techniques", "Molecular genetics", "nan", "Molecular biology", "Biochemistry", "Electrophoresis" ]
3,975,868
https://en.wikipedia.org/wiki/Quantum%20metrology
Quantum metrology is the study of making high-resolution and highly sensitive measurements of physical parameters using quantum theory to describe the physical systems, particularly exploiting quantum entanglement and quantum squeezing. This field promises to develop measurement techniques that give better precision than the same measurement performed in a classical framework. Together with quantum hypothesis testing, it represents an important theoretical model at the basis of quantum sensing. Mathematical foundations A basic task of quantum metrology is estimating the parameter of the unitary dynamics where is the initial state of the system and is the Hamiltonian of the system. is estimated based on measurements on Typically, the system is composed of many particles, and the Hamiltonian is a sum of single-particle terms where acts on the kth particle. In this case, there is no interaction between the particles, and we talk about linear interferometers. The achievable precision is bounded from below by the quantum Cramér-Rao bound as where is the number of independent repetitions and is the quantum Fisher information. Examples One example of note is the use of the NOON state in a Mach–Zehnder interferometer to perform accurate phase measurements. A similar effect can be produced using less exotic states such as squeezed states. In quantum illumination protocols, two-mode squeezed states are widely studied to overcome the limit of classical states represented in coherent states. In atomic ensembles, spin squeezed states can be used for phase measurements. Applications An important application of particular note is the detection of gravitational radiation in projects such as LIGO or the Virgo interferometer, where high-precision measurements must be made for the relative distance between two widely separated masses. However, the measurements described by quantum metrology are currently not used in this setting, being difficult to implement. Furthermore, there are other sources of noise affecting the detection of gravitational waves which must be overcome first. Nevertheless, plans may call for the use of quantum metrology in LIGO. Scaling and the effect of noise A central question of quantum metrology is how the precision, i.e., the variance of the parameter estimation, scales with the number of particles. Classical interferometers cannot overcome the shot-noise limit. This limit is also frequently called standard quantum limit (SQL) where is the number of particles. Shot-noise limit is known to be asymptotically achievable using coherent states and homodyne detection. Quantum metrology can reach the Heisenberg limit given by However, if uncorrelated local noise is present, then for large particle numbers the scaling of the precision returns to shot-noise scaling Relation to quantum information science There are strong links between quantum metrology and quantum information science. It has been shown that quantum entanglement is needed to outperform classical interferometry in magnetometry with a fully polarized ensemble of spins. It has been proved that a similar relation is generally valid for any linear interferometer, independent of the details of the scheme. Moreover, higher and higher levels of multipartite entanglement is needed to achieve a better and better accuracy in parameter estimation. 
Additionally, entanglement in multiple degrees of freedom of quantum systems (known as "hyperentanglement"), can be used to enhance precision, with enhancement arising from entanglement in each degree of freedom. See also Dimensional metrology Forensic metrology Smart Metrology Time metrology References Quantum information science Quantum optics Metrology
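The shot-noise versus Heisenberg scaling discussed above can be illustrated numerically. The Python sketch below uses the standard fact that, for pure states, the quantum Fisher information equals four times the variance of the generator, and compares a product state with a GHZ state under a collective spin generator; these particular states and the generator are conventional illustrative choices.

import numpy as np

def collective_sz(n):
    """H = (1/2) * sum_k sigma_z^(k) as a dense matrix on n qubits."""
    sz = np.diag([0.5, -0.5])
    h = np.zeros((2**n, 2**n))
    for k in range(n):
        op = np.array([[1.0]])
        for j in range(n):
            op = np.kron(op, sz if j == k else np.eye(2))
        h += op
    return h

def qfi_pure(state, h):
    """Quantum Fisher information of a pure state under exp(-i*h*theta): 4 * variance of h."""
    mean = state.conj() @ h @ state
    mean_sq = state.conj() @ h @ h @ state
    return float(4 * (mean_sq - mean**2).real)

for n in range(2, 7):
    plus = np.ones(2**n) / np.sqrt(2**n)                   # product state |+>^n  -> F_Q = n (shot noise)
    ghz = np.zeros(2**n); ghz[0] = ghz[-1] = 1/np.sqrt(2)  # GHZ state            -> F_Q = n^2 (Heisenberg)
    h = collective_sz(n)
    print(n, round(qfi_pure(plus, h), 3), round(qfi_pure(ghz, h), 3))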
Quantum metrology
[ "Physics" ]
696
[ "Quantum optics", "Quantum mechanics" ]
30,774,055
https://en.wikipedia.org/wiki/Auslander%E2%80%93Buchsbaum%20theorem
In commutative algebra, the Auslander–Buchsbaum theorem states that regular local rings are unique factorization domains. The theorem was first proved by Maurice Auslander and David Buchsbaum. They showed that regular local rings of dimension 3 are unique factorization domains, and Masayoshi Nagata had previously shown that this implies that all regular local rings are unique factorization domains. References Commutative algebra Theorems in ring theory
Auslander–Buchsbaum theorem
[ "Mathematics" ]
77
[ "Fields of abstract algebra", "Commutative algebra" ]
30,774,748
https://en.wikipedia.org/wiki/Moser%27s%20worm%20problem
Moser's worm problem (also known as mother worm's blanket problem) is an unsolved problem in geometry formulated by the Austrian-Canadian mathematician Leo Moser in 1966. The problem asks for the region of smallest area that can accommodate every plane curve of length 1. Here "accommodate" means that the curve may be rotated and translated to fit inside the region. In some variations of the problem, the region is restricted to be convex. Examples For example, a circular disk of radius 1/2 can accommodate any plane curve of length 1 by placing the midpoint of the curve at the center of the disk. Another possible solution has the shape of a rhombus with vertex angles of 60° and 120° and with a long diagonal of unit length. However, these are not optimal solutions; other shapes are known that solve the problem with smaller areas. Solution properties It is not completely trivial that a minimum-area cover exists. An alternative possibility would be that there is some minimal area that can be approached but not actually attained. However, there does exist a smallest convex cover. Its existence follows from the Blaschke selection theorem. It is also not trivial to determine whether a given shape forms a cover. conjectured that a shape accommodates every unit-length curve if and only if it accommodates every unit-length polygonal chain with three segments, a more easily tested condition, but showed that no finite bound on the number of segments in a polychain would suffice for this test. Known bounds The problem remains open, but over a sequence of papers researchers have tightened the gap between the known lower and upper bounds. In particular, constructed a (nonconvex) universal cover and showed that the minimum shape has area at most 0.260437; and gave weaker upper bounds. In the convex case, improved an upper bound to 0.270911861. used a min-max strategy for area of a convex set containing a segment, a triangle and a rectangle to show a lower bound of 0.232239 for a convex cover. In the 1970s, John Wetzel conjectured that a 30° circular sector of unit radius is a cover with area . Two proofs of the conjecture were independently claimed by and by . If confirmed, this will reduce the upper bound for the convex cover by about 3%. See also Moving sofa problem, the problem of finding a maximum-area shape that can be rotated and translated through an L-shaped corridor Kakeya set, a set of minimal area that can accommodate every unit-length line segment (with translations allowed, but not rotations) Lebesgue's universal covering problem, find the smallest convex area that can cover any planar set of unit diameter Bellman's lost-in-a-forest problem, find the shortest path to escape from a forest of known size and shape. Notes References . . . . . . . . Discrete geometry Unsolved problems in geometry Recreational mathematics Eponyms in geometry Curves Area
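The areas of the example covers and of the conjectured sector cover can be reproduced with elementary geometry, as in the Python sketch below: the rhombus area follows from its diagonals, and a 30° sector of unit radius has area π/12 ≈ 0.2618; the printed bounds 0.260437 and 0.232239 restate the figures given above.

import math

disk = math.pi * 0.5**2                   # disk of radius 1/2 accommodating any unit curve
s = 1 / math.sqrt(3)                      # rhombus with 60/120 degree angles and long diagonal 1 has side 1/sqrt(3)
rhombus = 0.5 * 1.0 * s                   # area = (product of the diagonals) / 2, diagonals 1 and 1/sqrt(3)
sector = math.pi / 12                     # 30-degree circular sector of unit radius (Wetzel's conjectured cover)

print(f"disk {disk:.6f}, rhombus {rhombus:.6f}, 30-degree sector {sector:.6f}")
print("known nonconvex upper bound 0.260437, convex lower bound 0.232239")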
Moser's worm problem
[ "Physics", "Mathematics" ]
616
[ "Scalar physical quantities", "Unsolved problems in mathematics", "Discrete mathematics", "Geometry problems", "Physical quantities", "Eponyms in geometry", "Recreational mathematics", "Quantity", "Discrete geometry", "Unsolved problems in geometry", "Size", "Geometry", "Wikipedia categories...
30,774,917
https://en.wikipedia.org/wiki/Recombinant%20AAV%20mediated%20genome%20engineering
Recombinant adeno-associated virus (rAAV) based genome engineering is a genome editing platform centered on the use of recombinant AAV vectors that enables insertion, deletion or substitution of DNA sequences into the genomes of live mammalian cells. The technique builds on Mario Capecchi and Oliver Smithies' Nobel Prize–winning discovery that homologous recombination (HR), a natural hi-fidelity DNA repair mechanism, can be harnessed to perform precise genome alterations in mice. rAAV mediated genome-editing improves the efficiency of this technique to permit genome engineering in any pre-established and differentiated human cell line, which, in contrast to mouse ES cells, have low rates of HR. The technique has been widely adopted for use in engineering human cell lines to generate isogenic human disease models. It has also been used to optimize bioproducer cell lines for the biomanufacturing of protein vaccines and therapeutics. In addition, due to the non-pathogenic nature of rAAV, it has emerged as a desirable vector for performing gene therapy in live patients. rAAV Vector The rAAV genome is built of single-stranded deoxyribonucleic acid (ssDNA), either positive- or negative-sensed, which is about 4.7 kilobases long. These single-stranded DNA viral vectors have high transduction rates and have a unique property of stimulating endogenous HR without causing double strand DNA breaks in the genome, which is typical of other homing endonuclease mediated genome editing methods. Capabilities Users can design a rAAV vector to any target genomic locus and perform both gross and subtle endogenous gene alterations in mammalian somatic cell-types. These include gene knock-outs for functional genomics, or the ‘knock-in’ of protein tag insertions to track translocation events at physiological levels in live cells. Most importantly, rAAV targets a single allele at a time and does not result in any off-target genomic alterations. Because of this, it is able to routinely and accurately model genetic diseases caused by subtle SNPs or point mutations that are increasingly the targets of novel drug discovery programs. Applications To date, the use of rAAV mediated genome engineering has been published in over 2100 peer reviewed scientific journals. Another emerging application of rAAV based genome editing is for gene therapy in patients, due to the accuracy and lack of off-target recombination events afforded by the approach. See also Biological engineering Genome engineering Homing endonuclease Homologous recombination Meganuclease Zinc finger nuclease Isogenic human disease models Cas9 References Sources Endogenous Expression of Oncogenic PI3K Mutation Leads to Activated PI3K Signaling and an Invasive Phenotype Poster Presented at AACR/EORTC Molecular Targets and Cancer Therapeutics, Boston, USA, Nov. 2009 Endogenous Expression of Oncogenic PI3K Mutation Leads to accumulation of anti-apoptotic proteins in mitochondria Poster Presented at AACR 2010, Washington, D.C., USA, April. 2010 The use of ‘X-MAN’ isogenic cell lines to define PI3-kinase inhibitor activity profiles Poster Presented at AACR 2010, Washington, D.C., USA, April. 2010 The use of ‘X-MAN’ mutant PI3CA increases the expression of individual tubulin isoforms and promoted resistance to anti-mitotic chemotherapy drugs Poster Presented at AACR 2010, Washington, D.C., USA, April. 2010 Biochemistry Recombinant proteins
Recombinant AAV mediated genome engineering
[ "Chemistry", "Biology" ]
743
[ "Biochemistry", "Recombinant proteins", "Biotechnology products", "nan" ]
30,775,584
https://en.wikipedia.org/wiki/Louis%20Leithold
Louis Leithold (San Francisco, United States, 16 November 1924 – Los Angeles, 29 April 2005) was an American mathematician and teacher. He is best known for authoring The Calculus, a classic textbook about calculus that changed the teaching methods for calculus in world high schools and universities. Known as "a legend in AP calculus circles," Leithold was the mentor of Jaime Escalante, the Los Angeles high-school teacher whose story is the subject of the 1988 movie Stand and Deliver. Biography Leithold attended the University of California, Berkeley, where he attained his B.A., M.A. and PhD. He went on to teach at Phoenix College (Arizona) (which has a math scholarship in his name), California State University, Los Angeles, the University of Southern California, Pepperdine University, and The Open University (UK). In 1968, Leithold published The Calculus, a "blockbuster best-seller" which simplified the teaching of calculus. At age 72, after his retirement from Pepperdine, he began teaching calculus at Malibu High School, in Malibu, California, drilling his students for the Advanced Placement Calculus exam, and achieving considerable success. He regularly assigned two hours of homework per night, and had two training sessions at his own house that ran Saturdays or Sundays from 9AM to 4PM before the AP test. His teaching methods were praised for their liveliness, and his love for the topic was well known. He also taught workshops for calculus teachers. One of the people he influenced was Jaime Escalante, who taught math to minority students at Garfield High School in East Los Angeles. Escalante's subsequent success as a teacher is portrayed in the 1988 film Stand and Deliver. Leithold died of natural causes the week before his class (which he had been "relentlessly drilling" for eight months) was to take the AP exam; his students went on to receive top scores. A memorial service was held in Glendale, and a scholarship established in his name. Leithold experienced a notable legal event in his personal life in 1959 when he and his then-wife, musician Dr. Thyra N. Pliske, adopted a minor child, Gordon Marc Leithold. The couple eventually divorced in 1962, with an Arizona court granting Thyra custody of the child and Louis receiving certain visitation rights. Thyra later married Gilbert Norman Plass, and the family moved to Dallas, Texas in 1963. In 1965, Louis filed a suit against his former wife and her new husband in the Juvenile Court of Dallas County, Texas. The suit, titled "Application for Modification of Visitation and Custody," sought significant changes to the Arizona decree based on allegations of changed conditions and circumstances. Following a hearing, the Dallas court modified the Arizona decree with respect to Louis' visitation rights. His son died in 1994, at the age of 35, in Houston, Texas. He was an art collector, and had art by Vasa Mihich. He also used art by Patrick Caulfield in his Calculus book. References University of California, Berkeley alumni Writers from San Francisco Schoolteachers from Arizona 20th-century American mathematicians 21st-century American mathematicians 1924 births 2005 deaths History of calculus American science writers Educators from California California State University, Los Angeles faculty University of Southern California faculty Pepperdine University faculty 20th-century American educators
Louis Leithold
[ "Mathematics" ]
672
[ "Mathematics of infinitesimals", "History of calculus", "Calculus" ]
30,776,767
https://en.wikipedia.org/wiki/Grasshopper%203D
Grasshopper is a visual programming language and environment that runs within the Rhinoceros 3D computer-aided design (CAD) application. The program was created by David Rutten, at Robert McNeel & Associates. Programs are created by dragging components onto a canvas. The outputs of those components are then connected to the inputs of subsequent components. Overview Grasshopper is primarily used to build generative algorithms, such as for generative art. Many of Grasshopper's components create 3D geometry. Programs may also contain other types of algorithms including numeric, textual, audio-visual and haptic applications. Advanced uses of Grasshopper include parametric modelling for structural engineering, architecture and fabrication, lighting performance analysis for energy efficient architecture, and building energy use. The first version of Grasshopper, then named Explicit History, was released in September 2007. Grasshopper was made part of the standard Rhino toolset in Rhino 6.0, and continues to be. AEC Magazine stated that Grasshopper is "Popular among students and professionals, McNeel Associate’s Rhino modelling tool is endemic in the architectural design world. The new Grasshopper environment provides an intuitive way to explore designs without having to learn to script." Research supporting this claim has come from product design and architecture. See also Architectural engineering Comparison of computer-aided design software Design computing Parametric design Generative design Responsive computer-aided design Visual programming language References Further reading K Lagios, J Niemasz and C F Reinhart, "Animated Building Performance Simulation (ABPS) - Linking Rhinoceros/Grasshopper with Radiance/Daysim", Accepted for Publication in the Proceedings of SimBuild 2010, New York City, August 2010 (full article). J Niemasz, J Sargent, C F Reinhart, "Solar Zoning and Energy in Detached Residential Dwellings", Proceedings of SimAUD 2011, Boston, April 2011 Arturo Tedeschi, Architettura Parametrica - Introduzione a Grasshopper, II edizione, Le Penseur, Brienza 2010, Arturo Tedeschi, Parametric Architecture with Grasshopper, Le Penseur, Brienza 2011, Arturo Tedeschi, AAD Algorithms-Aided Design, Parametric Strategies using Grasshopper, Le Penseur, Brienza 2014, Pedro Molina-Siles, Parametric Environment. The Handbook of grasshopper. Nodes & Exercises , Universitat Politècnica de València, 2016. Diego Cuevas, Advanced 3D Printing with Grasshopper: Clay and FDM (2020). External links Computer-aided design Building engineering software Computer-aided design software
Grasshopper 3D
[ "Engineering" ]
531
[ "Building engineering", "Computer-aided design", "Design engineering", "Building engineering software" ]
30,778,796
https://en.wikipedia.org/wiki/Raman%20cooling
In atomic physics, Raman cooling is a sub-recoil cooling technique that allows the cooling of atoms using optical methods below the limitations of Doppler cooling, Doppler cooling being limited by the recoil energy of a photon given to an atom. This scheme can be performed in simple optical molasses or in molasses where an optical lattice has been superimposed, which are called respectively free space Raman cooling and Raman sideband cooling. Both techniques make use of Raman scattering of laser light by the atoms. Two photon Raman process The transition between two hyperfine states of the atom can be triggered by two laser beams: the first beam excites the atom to a virtual excited state (for example because its frequency is lower than the real transition frequency), and the second beam de-excites the atom to the other hyperfine level. The frequency difference of the two beams is exactly equal to the transition frequency between the two hyperfine levels. Raman transitions are good for cooling due to the extremely narrow line width of Raman transitions between levels that have long lifetimes, and to exploit the narrow line width the difference in frequency between the two laser beams must be controlled very precisely. The illustration of this process is shown in the example schematic illustration of a two-photon Raman process. It enables the transition between the two levels and . The intermediate, virtual level is represented by the dashed line, and is red-detuned with respect to the real excited level, . The frequency difference here matches exactly the energy difference between and . Free space Raman cooling In this scheme, a pre-cooled cloud of atoms (whose temperature is of a few tens of microkelvins) undergoes a series of pulses of Raman-like processes. The beams are counter-propagating, and their frequencies are just as what has been described above, except that the frequency is now slightly red-detuned (detuning ) with respect to its normal value. Thus, atoms moving towards the source of the laser 2 with a sufficient velocity will be resonant with the Raman pulses, thanks to the Doppler effect. They will be excited to the state, and get a momentum kick decreasing the modulus of their velocity. If the propagation directions of the two lasers are interchanged, then the atoms moving in the opposite direction will be excited and get the momentum kick that will decrease the modulus of their velocities. By regularly exchanging the lasers propagating directions and varying the detuning , one can manage to have all atoms for which the initial velocity satisfies in the state , while the atoms such that are still in the state. A new beam is then switched on, whose frequency is exactly the transition frequency between and . This will optically pump the atoms from the state to the state, and the velocities will be randomized by this process, such that a fraction of the atoms in will acquire a velocity . By repeating this process several times (eight in the original paper, see references), the temperature of the cloud can be lowered to less than a microkelvin. Raman sideband cooling Raman sideband cooling is a method to prepare atoms in the vibrational ground state of a periodic potential and cool them below recoil limit. 
It can be implemented inside an optical dipole trap where cooling with less loss of trapped atoms could be achieved in comparison to evaporative cooling, can be implemented as a mid-stage cooling to improve the efficiency and speed of evaporative cooling, and is generally extremely insensitive to the traditional limitations of laser cooling to low temperatures at high densities. It has been successfully applied to cooling ions, as well as atoms like caesium, potassium, and lithium, etc. General Raman sideband cooling scheme The main method of Raman sideband cooling utilizes the two photon Raman process to connect levels that differ by one harmonic oscillator energy. Since the atoms are not in their ground state, they will be trapped in one of the excited levels of the harmonic oscillator. The aim of Raman sideband cooling is to put the atoms into the ground state of the harmonic potential. For a general example of a scheme, Raman beams (red in the included diagram) are two different photons ( and ) that are linearly polarized differently such that we have a change in angular momentum, shifting from to , but lowering from to vibrational levels. Then, we utilize repumping with a single beam (blue in the included diagram) that does not change vibrational levels (i.e. keeping us in , thus lowering the state of the harmonic potential in the site. Degenerate Raman sideband cooling in an optical lattice This more specific cooling scheme starts from atoms in a magneto-optical trap, using Raman transitions inside an optical lattice to bring the atoms to their vibrational ground states. An optical lattice is a spatially periodic potential formed by the interference of counter-propagating beams. An optical lattice is ramped up, such that an important fraction of the atoms are then trapped. If the lasers of the lattice are powerful enough, each site can be modeled as a harmonic trap. The optical lattice should provide a tight binding for the atoms, to prevent them from interacting with the scattered resonant photons and suppress the heating from them. This can be quantified in terms of Lamb-Dicke parameter , which gives the ratio of the ground state wave-packet size to the wavelength of the interacting laser light. In an optical lattice, can be interpreted as the ratio of photon recoil energy to the energy separation in the vibrational modes: where is recoil energy and is vibrational energy. is the Lamb-Dicke limit. In this regime, vibrational energy is larger than the recoil energy, and scattered photons cannot change the vibrational state of the atom. For specifically degenerate Raman sideband cooling, we can consider a two level atom, the ground state of which has a quantum number of , such that it is three-fold degenerate with , or . A magnetic field is added, which lifts the degeneracy in due to the Zeeman effect. Its value is exactly tuned such that the Zeeman splitting between and and between and is equal to the spacing of two levels in the harmonic potential created by the lattice. By means of Raman processes, an atom can be transferred to a state where the magnetic moment has decreased by one and the vibrational state has also decreased by one (red arrows on the above image). After that, the atoms which are in the lowest vibrational state of the lattice potential (but with ) are optically pumped to the state (role of the and light beams). 
Since the temperature of the atoms is low enough with respect to the pumping beam frequencies, the atom is very likely not to change its vibrational state during the pumping process. Thus it ends up in a lower vibrational state, which is how it is cooled. In order to reach this efficient transfer to the lower vibrational state at each step, the parameters of the laser, i.e. power and timing, should be carefully tuned. In general, these parameters are different for different vibrational states because the strength of the coupling (Rabi frequency) depends on the vibrational level. Additional complication to this naive picture arises from the recoil of photons, which drive this transition. The last complication can be generally avoided by performing cooling in the previously mentioned Lamb-Dicke regime, where the atom is trapped so strongly in the optical lattice that it effectively does not change its momentum due to the photon recoils. The situation is similar to the Mössbauer effect. This cooling scheme allows one to obtain a rather high density of atoms at a low temperature using only optical techniques. For instance, the Bose–Einstein condensation of caesium was achieved for the first time in an experiment that used Raman sideband cooling as its first step. Recent experiments have shown it is even sufficient to attain Bose–Einstein condensation directly. See also Laser cooling Sub-Doppler cooling References Atomic physics Cooling technology
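The Lamb-Dicke parameter discussed above can be estimated with a few lines of Python, using η² = (recoil energy)/(ħω), i.e. the ratio of photon recoil energy to vibrational level spacing quoted in the text; the caesium mass, lattice wavelength and trap frequency below are assumed, illustrative values.

import numpy as np

hbar = 1.054571817e-34          # J*s
m_cs = 132.905 * 1.660539e-27   # caesium-133 mass, kg
wavelength = 852e-9             # m, near the Cs D2 line (assumed lattice/Raman wavelength)
omega = 2 * np.pi * 100e3       # trap (vibrational) angular frequency, rad/s (assumed)

k = 2 * np.pi / wavelength
E_recoil = (hbar * k) ** 2 / (2 * m_cs)      # photon recoil energy
eta = np.sqrt(E_recoil / (hbar * omega))     # Lamb-Dicke parameter

print(f"recoil energy / h = {E_recoil / (2*np.pi*hbar) / 1e3:.1f} kHz, eta = {eta:.2f}")
# eta << 1 means scattered photons are unlikely to change the vibrational state (Lamb-Dicke regime).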
Raman cooling
[ "Physics", "Chemistry" ]
1,670
[ "Quantum mechanics", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
30,780,490
https://en.wikipedia.org/wiki/Groupoid%20algebra
In mathematics, the concept of groupoid algebra generalizes the notion of group algebra. Definition Given a groupoid G (in the sense of a category with all morphisms invertible) and a field K, it is possible to define the groupoid algebra KG as the algebra over K formed by the vector space having the elements of (the morphisms of) G as generators and having the multiplication of these elements defined by g·h = gh, whenever this product is defined, and g·h = 0 otherwise. The product is then extended by linearity. Examples Some examples of groupoid algebras are the following: Group rings Matrix algebras Algebras of functions Properties When a groupoid has a finite number of objects and a finite number of morphisms, the groupoid algebra is a direct sum of tensor products of group algebras and matrix algebras. See also Hopf algebra Partial group algebra Notes References Abstract algebra
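A concrete illustration in Python: for the pair groupoid on two objects, the morphisms are ordered pairs (i, j), composed by (i, j)(j, k) = (i, k) with all other products undefined, and the resulting groupoid algebra is the algebra of 2×2 matrices, as the check at the end shows. Representing algebra elements as dictionaries of coefficients is an implementation choice.

import numpy as np

morphisms = [(i, j) for i in range(2) for j in range(2)]

def multiply(a, b):
    """Product of two groupoid-algebra elements, extended bilinearly from the generators."""
    out = {m: 0.0 for m in morphisms}
    for (i, j), ca in a.items():
        for (k, l), cb in b.items():
            if j == k:                      # composition (i, j)*(k, l) is defined only when j == k
                out[(i, l)] += ca * cb      # ... and then equals (i, l); otherwise it contributes 0
    return out

def to_matrix(a):
    m = np.zeros((2, 2))
    for (i, j), c in a.items():
        m[i, j] = c
    return m

a = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 4.0}
b = {(0, 0): 5.0, (0, 1): 6.0, (1, 0): 7.0, (1, 1): 8.0}

print(to_matrix(multiply(a, b)))
print(to_matrix(a) @ to_matrix(b))          # same result: the groupoid-algebra product matches the matrix product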
Groupoid algebra
[ "Mathematics" ]
176
[ "Abstract algebra", "Algebra" ]
30,780,965
https://en.wikipedia.org/wiki/Bidirectional%20transformation
In computer programming, bidirectional transformations (bx) are programs in which a single piece of code can be run in several ways, such that the same data are sometimes considered as input, and sometimes as output. For example, a bx run in the forward direction might transform input I into output O, while the same bx run backward would take as input versions of I and O and produce a new version of I as its output. Bidirectional model transformations are an important special case in which a model is input to such a program. Some bidirectional languages are bijective. The bijectivity of a language is a severe restriction of its power, because a bijective language is merely relating two different ways to present the very same information. More general is a lens language, in which there is a distinguished forward direction ("get") that takes a concrete input to an abstract output, discarding some information in the process: the concrete state includes all the information that is in the abstract state, and usually some more. The backward direction ("put") takes a concrete state and an abstract state and computes a new concrete state. Lenses are required to obey certain conditions to ensure sensible behaviour. The most general case is that of symmetric bidirectional transformations. Here the two states that are related typically share some information, but each also includes some information that is not included in the other. Usage Bidirectional transformations can be used to: Maintain the consistency of several sources of information Provide an 'abstract view' to easily manipulate data and write them back to their source Definition Bidirectional transformations fall into various well-studied categories. A lens is a pair of functions , relating a source and a view . If these functions obey the three lens laws: PutGet: GetPut: PutPut: It is called a well-behaved lens. A related notion is that of a prism, in which the signatures of the functions are instead , . Unlike a lens, a prism may not always give a view; also unlike a lens, given a prism, a view is sufficient to construct a source. If lenses allow "focusing" (viewing, updating) on a part of a product type, prisms allow focusing (possible viewing, building) on a part of a sum type. Both lenses and prisms, as well as other constructions such as traversals, are more general notion of bidirectional transformations known as optics. Examples of implementations Boomerang is a programming language that allows writing lenses to process text data formats bidirectionally Augeas is a configuration management library whose lens language is inspired by the Boomerang project biXid is a programming language for processing XML data bidirectionally XSugar allows translation from XML to non-XML formats See also Bidirectionalization Reverse computation Transformation language References External links Mathematical relations
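A minimal well-behaved lens can be written in a few lines of Python; the source/view types (a dict record and one of its fields) are illustrative assumptions, and the assertions check the three laws in their standard equational form (PutGet, GetPut, PutPut).

from dataclasses import dataclass
from typing import Callable

@dataclass
class Lens:
    get: Callable[[dict], object]          # get : S -> V
    put: Callable[[dict, object], dict]    # put : S x V -> S

name_lens = Lens(
    get=lambda s: s["name"],
    put=lambda s, v: {**s, "name": v},
)

s = {"name": "alice", "id": 7}
v = "bob"

assert name_lens.get(name_lens.put(s, v)) == v                                    # PutGet: get(put(s, v)) = v
assert name_lens.put(s, name_lens.get(s)) == s                                    # GetPut: put(s, get(s)) = s
assert name_lens.put(name_lens.put(s, v), "carol") == name_lens.put(s, "carol")   # PutPut: put(put(s, v), v') = put(s, v')
print("all three lens laws hold for this example")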
Bidirectional transformation
[ "Mathematics" ]
589
[ "Predicate logic", "Mathematical analysis", "Mathematical relations", "Basic concepts in set theory" ]
30,782,834
https://en.wikipedia.org/wiki/Matched%20Z-transform%20method
The matched Z-transform method, also called the pole–zero mapping or pole–zero matching method, and abbreviated MPZ or MZT, is a technique for converting a continuous-time filter design to a discrete-time filter (digital filter) design. The method works by mapping all poles and zeros of the s-plane design to z-plane locations , for a sample interval . So an analog filter with transfer function: is transformed into the digital transfer function The gain must be adjusted to normalize the desired gain, typically set to match the analog filter's gain at DC by setting and and solving for . Since the mapping wraps the s-plane's axis around the z-plane's unit circle repeatedly, any zeros (or poles) greater than the Nyquist frequency will be mapped to an aliased location. In the (common) case that the analog transfer function has more poles than zeros, the zeros at may optionally be shifted down to the Nyquist frequency by putting them at , causing the transfer function to drop off as in much the same manner as with the bilinear transform (BLT). While this transform preserves stability and minimum phase, it preserves neither time- nor frequency-domain response and so is not widely used. More common methods include the BLT and impulse invariance methods. MZT does provide less high frequency response error than the BLT, however, making it easier to correct by adding additional zeros, which is called the MZTi (for "improved"). A specific application of the matched Z-transform method in the digital control field is with the Ackermann's formula, which changes the poles of the controllable system; in general from an unstable (or nearby) location to a stable location. References Control theory Digital signal processing Filter theory
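A short Python sketch of the mapping is given below: poles and zeros are moved with z = exp(sT), and the gain is then chosen so that the digital filter's DC response (z = 1) matches the analog response at s = 0. The one-pole example filter and the sample interval are illustrative assumptions, and the sketch does not add the optional zeros at the Nyquist frequency discussed above.

import numpy as np

def matched_z(zeros_s, poles_s, k_analog, T):
    zeros_z = np.exp(np.asarray(zeros_s) * T)
    poles_z = np.exp(np.asarray(poles_s) * T)
    # DC gain of the analog filter H(s) at s = 0 and of the unit-gain digital filter at z = 1:
    dc_analog = k_analog * np.prod(0 - np.asarray(zeros_s)) / np.prod(0 - np.asarray(poles_s))
    dc_digital_unit_gain = np.prod(1 - zeros_z) / np.prod(1 - poles_z)
    k_digital = dc_analog / dc_digital_unit_gain
    return zeros_z, poles_z, k_digital

# Analog one-pole low-pass H(s) = 1 / (s + 1), sampled at T = 0.1 s.
zeros_z, poles_z, k = matched_z(zeros_s=[], poles_s=[-1.0], k_analog=1.0, T=0.1)
print(poles_z, k)   # pole at exp(-0.1) ~ 0.905; gain chosen so H(z = 1) equals H(s = 0) = 1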
Matched Z-transform method
[ "Mathematics", "Engineering" ]
376
[ "Telecommunications engineering", "Applied mathematics", "Control theory", "Filter theory", "Dynamical systems" ]
30,783,530
https://en.wikipedia.org/wiki/Kinetic%20diameter
Kinetic diameter is a measure applied to atoms and molecules that expresses the likelihood that a molecule in a gas will collide with another molecule. It is an indication of the size of the molecule as a target. The kinetic diameter is not the same as atomic diameter defined in terms of the size of the atom's electron shell, which is generally a lot smaller, depending on the exact definition used. Rather, it is the size of the sphere of influence that can lead to a scattering event. Kinetic diameter is related to the mean free path of molecules in a gas. Mean free path is the average distance that a particle will travel without collision. For a fast moving particle (that is, one moving much faster than the particles it is moving through) the kinetic diameter is given by d^2 = 1/(π l n), where d is the kinetic diameter, r is the kinetic radius, r = d/2, l is the mean free path, and n is the number density of particles. However, a more usual situation is that the colliding particle being considered is indistinguishable from the population of particles in general. Here, the Maxwell–Boltzmann distribution of energies must be considered, which leads to the modified expression d^2 = 1/(√2 π l n). List of diameters The following table lists the kinetic diameters of some common molecules; Dissimilar particles Collisions between two dissimilar particles occur when a beam of fast particles is fired into a gas consisting of another type of particle, or two dissimilar molecules randomly collide in a gas mixture. For such cases, the above formula for scattering cross section has to be modified. The scattering cross section, σ, in a collision between two dissimilar particles or molecules is defined by the sum of the kinetic radii of the two particles, σ = π(r1 + r2)^2, where r1, r2 are half the kinetic diameters (i.e., the kinetic radii) of the two particles, respectively. We define an intensive quantity, the scattering coefficient α, as the product of the gas number density and the scattering cross section, α = nσ. The mean free path is the inverse of the scattering coefficient, l = 1/α. For similar particles, r1 = r2 and σ = πd^2, as before. References Bibliography Breck, Donald W., "Zeolite Molecular Sieves: Structure, Chemistry, and Use", New York: Wiley, 1974. Freude, D., Molecular Physics, chapter 2, 2004 unpublished draft, retrieved and archived 18 October 2015. Ismail, Ahmad Fauzi; Khulbe, Kailash; Matsuura, Takeshi, Gas Separation Membranes: Polymeric and Inorganic, Springer, 2015. Joos, Georg; Freeman, Ira Maximilian, Theoretical Physics, Courier Corporation, 1958. Li, Jian-Min; Talu, Orhan, "Effect of structural heterogeneity on multicomponent adsorption: benzene and p-xylene mixture on silicalite", in Suzuki, Motoyuki (ed), Fundamentals of Adsorption, pp. 373-380, Elsevier, 1993. Matteucci, Scott; Yampolskii, Yuri; Freeman, Benny D.; Pinnau, Ingo, "Transport of gases and vapors in glassy and rubbery polymers" in, Yampolskii, Yuri; Freeman, Benny D.; Pinnau, Ingo, Materials Science of Membranes for Gas and Vapor Separation, pp. 1-47, John Wiley & Sons, 2006. Molecular physics
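The Maxwell–Boltzmann form of the relation can be exercised numerically, as in the Python sketch below, which converts between kinetic diameter and mean free path at room temperature and atmospheric pressure; the nitrogen kinetic diameter and the ideal-gas number density are assumed, illustrative inputs.

import numpy as np

k_B = 1.380649e-23        # J/K
T, p = 298.15, 101325.0   # room temperature (K) and atmospheric pressure (Pa)
d_N2 = 3.64e-10           # assumed kinetic diameter of N2, m

n = p / (k_B * T)                                   # number density from the ideal-gas law
mfp = 1.0 / (np.sqrt(2) * np.pi * d_N2**2 * n)      # mean free path, l = 1/(sqrt(2) * pi * d^2 * n)
print(f"n = {n:.3e} 1/m^3, mean free path = {mfp*1e9:.0f} nm")

# Inverting the same relation recovers the kinetic diameter from a measured mean free path:
d_back = np.sqrt(1.0 / (np.sqrt(2) * np.pi * mfp * n))
print(f"recovered d = {d_back*1e10:.2f} angstrom")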
Kinetic diameter
[ "Physics", "Chemistry" ]
712
[ "Molecular physics", "Atomic, molecular, and optical physics" ]
30,783,771
https://en.wikipedia.org/wiki/C15H17N
The molecular formula C15H17N (molar mass: 211.30 g/mol, exact mass: 211.1361 u) may refer to: 2,2-Diphenylpropylamine 2,3-Diphenylpropylamine 3,3-Diphenylpropylamine Molecular formulas
C15H17N
[ "Physics", "Chemistry" ]
84
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
21,681,084
https://en.wikipedia.org/wiki/Aanderaa%E2%80%93Karp%E2%80%93Rosenberg%20conjecture
In theoretical computer science, the Aanderaa–Karp–Rosenberg conjecture (also known as the Aanderaa–Rosenberg conjecture or the evasiveness conjecture) is a group of related conjectures about the number of questions of the form "Is there an edge between vertex u and vertex v?" that have to be answered to determine whether or not an undirected graph has a particular property such as planarity or bipartiteness. They are named after Stål Aanderaa, Richard M. Karp, and Arnold L. Rosenberg. According to the conjecture, for a wide class of properties, no algorithm can guarantee that it will be able to skip any questions: any algorithm for determining whether the graph has the property, no matter how clever, might need to examine every pair of vertices before it can give its answer. A property satisfying this conjecture is called evasive. More precisely, the Aanderaa–Rosenberg conjecture states that any deterministic algorithm must test at least a constant fraction of all possible pairs of vertices, in the worst case, to determine any non-trivial monotone graph property. In this context, a property is monotone if it remains true when edges are added; for example, planarity is not monotone, but non-planarity is monotone. A stronger version of this conjecture, called the evasiveness conjecture or the Aanderaa–Karp–Rosenberg conjecture, states that exactly n(n − 1)/2 tests are needed for a graph with n vertices. Versions of the problem for randomized algorithms and quantum algorithms have also been formulated and studied. The deterministic Aanderaa–Rosenberg conjecture was proven by Rivest and Vuillemin, but the stronger Aanderaa–Karp–Rosenberg conjecture remains unproven. Additionally, there is a large gap between the conjectured lower bound and the best proven lower bound for randomized and quantum query complexity. Example The property of being non-empty (that is, having at least one edge) is monotone, because adding another edge to a non-empty graph produces another non-empty graph. There is a simple algorithm for testing whether a graph is non-empty: loop through all of the pairs of vertices, testing whether each pair is connected by an edge. If an edge is ever found in this way, break out of the loop, and report that the graph is non-empty, and if the loop completes without finding any edges, then report that the graph is empty. On some graphs (for instance the complete graphs) this algorithm will terminate quickly, without testing every pair of vertices, but on the empty graph it tests all possible pairs before terminating. Therefore, the query complexity of this algorithm is n(n − 1)/2: in the worst case, the algorithm performs n(n − 1)/2 tests. The algorithm described above is not the only possible method of testing for non-emptiness, but the Aanderaa–Karp–Rosenberg conjecture implies that every deterministic algorithm for testing non-emptiness has the same worst-case query complexity, n(n − 1)/2. That is, the property of being non-empty is evasive. For this property, the result is easy to prove directly: if an algorithm does not perform all n(n − 1)/2 tests, it cannot distinguish the empty graph from a graph that has one edge connecting one of the untested pairs of vertices, and must give an incorrect answer on one of these two graphs. Definitions In the context of this article, all graphs will be simple and undirected, unless stated otherwise. This means that the edges of the graph form a set (and not a multiset) and each edge is a pair of distinct vertices.
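As a concrete illustration of the non-emptiness example above, here is a minimal Python sketch (not from the article; the adjacency-oracle interface and query counter are assumptions for demonstration) of the brute-force test, which the empty graph forces to make all n(n − 1)/2 queries:

```python
from itertools import combinations

def is_nonempty(n, has_edge):
    """Test whether an n-vertex graph has at least one edge, using only
    adjacency queries has_edge(u, v). Returns (answer, number_of_queries)."""
    queries = 0
    for u, v in combinations(range(n), 2):
        queries += 1
        if has_edge(u, v):
            return True, queries   # may stop early on dense graphs
    return False, queries          # the empty graph forces all n*(n-1)//2 queries

n = 6
print(is_nonempty(n, lambda u, v: False))               # (False, 15) -> worst case n(n-1)/2
print(is_nonempty(n, lambda u, v: u == 0 and v == 1))   # stops after the first query here
```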
Graphs are assumed to have an implicit representation in which each vertex has a unique identifier or label and in which it is possible to test the adjacency of any two vertices, but for which adjacency testing is the only allowed primitive operation. Informally, a graph property is a property of a graph that is independent of labeling. More formally, a graph property is a mapping from the class of all graphs to such that isomorphic graphs are mapped to the same value. For example, the property of containing at least one vertex of degree two is a graph property, but the property that the first vertex has degree two is not, because it depends on the labeling of the graph (in particular, it depends on which vertex is the "first" vertex). A graph property is called non-trivial if it does not assign the same value to all graphs. For instance, the property of being a graph is a trivial property, since all graphs possess this property. On the other hand, the property of being empty is non-trivial, because the empty graph possesses this property, but non-empty graphs do not. A graph property is said to be monotone if the addition of edges does not destroy the property. Alternately, if a graph possesses a monotone property, then every supergraph of this graph on the same vertex set also possesses it. For instance, the property of being nonplanar is monotone: a supergraph of a nonplanar graph is itself nonplanar. However, the property of being regular is not monotone. The big O notation is often used for query complexity. In short, is (read as "of the order of ") if there exist positive constants and such that, for all , . Similarly, is if there exist positive constants and such that, for all , . Finally, is if it is both and . Query complexity The deterministic query complexity of evaluating a function on bits (where the bits may be labeled as ) is the number of bits that have to be read in the worst case by a deterministic algorithm that computes the function. For instance, if the function takes the value 0 when all bits are 0 and takes value 1 otherwise (this is the OR function), then its deterministic query complexity is exactly . In the worst case, regardless of the order it chooses to examine its input, the first bits read could all be 0, and the value of the function now depends on the last bit. If an algorithm doesn't read this bit, it might output an incorrect answer. (Such arguments are known as adversary arguments.) The number of bits read are also called the number of queries made to the input. One can imagine that the algorithm asks (or queries) the input for a particular bit and the input responds to this query. The randomized query complexity of evaluating a function is defined similarly, except the algorithm is allowed to be randomized. In other words, it can flip coins and use the outcome of these coin flips to decide which bits to query in which order. However, the randomized algorithm must still output the correct answer for all inputs: it is not allowed to make errors. Such algorithms are called Las Vegas algorithms. (A different class of algorithms, Monte Carlo algorithms, are allowed to make some error.) Randomized query complexity can be defined for both Las Vegas and Monte Carlo algorithms, but the randomized version of the Aanderaa–Karp–Rosenberg conjecture is about the Las Vegas query complexity of graph properties. Quantum query complexity is the natural generalization of randomized query complexity, of course allowing quantum queries and responses. 
Quantum query complexity can also be defined with respect to Monte Carlo algorithms or Las Vegas algorithms, but it is usually taken to mean Monte Carlo quantum algorithms. In the context of this conjecture, the function to be evaluated is the graph property, and the input can be thought of as a string of size , describing for each pair of vertices whether there is an edge with that pair as its endpoints. The query complexity of any function on this input is at most , because an algorithm that makes queries can read the whole input and determine the input graph completely. Deterministic query complexity For deterministic algorithms, originally conjectured that for all nontrivial graph properties on vertices, deciding whether a graph possesses this property requires The non-triviality condition is clearly required because there are trivial properties like "is this a graph?" which can be answered with no queries at all. The conjecture was disproved by Aanderaa, who exhibited a directed graph property (the property of containing a "sink") which required only queries to test. A sink, in a directed graph, is a vertex of indegree and outdegree zero. The existence of a sink can be tested with less than queries. An undirected graph property which can also be tested with queries is the property of being a scorpion graph, first described in . A scorpion graph is a graph containing a three-vertex path, such that one endpoint of the path is connected to all remaining vertices, while the other two path vertices have no incident edges other than the ones in the path. Then Aanderaa and Rosenberg formulated a new conjecture (the Aanderaa–Rosenberg conjecture) which says that deciding whether a graph possesses a non-trivial monotone graph property requires queries. This conjecture was resolved by by showing that at least queries are needed to test for any nontrivial monotone graph property. Through successive improvements this bound was further increased to . Richard Karp conjectured the stronger statement (which is now called the evasiveness conjecture or the Aanderaa–Karp–Rosenberg conjecture) that "every nontrivial monotone graph property for graphs on vertices is evasive." A property is called evasive if determining whether a given graph has this property sometimes requires all possible queries. This conjecture says that the best algorithm for testing any nontrivial monotone property must (in the worst case) query all possible edges. This conjecture is still open, although several special graph properties have shown to be evasive for all . The conjecture has been resolved for the case where is a prime power using a topological approach. The conjecture has also been resolved for all non-trivial monotone properties on bipartite graphs. Minor-closed properties have also been shown to be evasive for large . In the conjecture was generalized to properties of other (non-graph) functions too, conjecturing that any non-trivial monotone function that is weakly symmetric is evasive. This case is also solved when is a prime power. Randomized query complexity Richard Karp also conjectured that queries are required for testing nontrivial monotone properties even if randomized algorithms are permitted. No nontrivial monotone property is known which requires less than queries to test. A linear lower bound (i.e., ) on all monotone properties follows from a very general relationship between randomized and deterministic query complexities. 
The first superlinear lower bound for all monotone properties was given by who showed that queries are required. This was further improved by to , and then by This was subsequently improved to the current best known lower bound (among bounds that hold for all monotone properties) of by . Some recent results give lower bounds which are determined by the critical probability of the monotone graph property under consideration. The critical probability is defined as the unique number in the range such that a random graph (obtained by choosing randomly whether each edge exists, independently of the other edges, with probability per edge) possesses this property with probability equal to . showed that any monotone property with critical probability requires queries. For the same problem, showed a lower bound of . As in the deterministic case, there are many special properties for which an lower bound is known. Moreover, better lower bounds are known for several classes of graph properties. For instance, for testing whether the graph has a subgraph isomorphic to any given graph (the so-called subgraph isomorphism problem), the best known lower bound is due to . Quantum query complexity For bounded-error quantum query complexity, the best known lower bound is as observed by Andrew Yao. It is obtained by combining the randomized lower bound with the quantum adversary method. The best possible lower bound one could hope to achieve is , unlike the classical case, due to Grover's algorithm which gives an -query algorithm for testing the monotone property of non-emptiness. Similar to the deterministic and randomized case, there are some properties which are known to have an lower bound, for example non-emptiness (which follows from the optimality of Grover's algorithm) and the property of containing a triangle. There are some graph properties which are known to have an lower bound, and even some properties with an lower bound. For example, the monotone property of nonplanarity requires queries, and the monotone property of containing more than half the possible number of edges (also called the majority function) requires queries. Notes References Further reading . Conjectures Graph theory Combinatorics Unsolved problems in computer science Computational complexity theory
Aanderaa–Karp–Rosenberg conjecture
[ "Mathematics" ]
2,606
[ "Discrete mathematics", "Unsolved problems in mathematics", "Unsolved problems in computer science", "Graph theory", "Combinatorics", "Conjectures", "Mathematical relations", "Statements in graph theory", "Mathematical problems" ]
21,681,203
https://en.wikipedia.org/wiki/Stratospheric%20aerosol%20injection
Stratospheric aerosol injection (SAI) is a proposed method of solar geoengineering (or solar radiation modification) to reduce global warming. This would introduce aerosols into the stratosphere to create a cooling effect via global dimming and increased albedo, which occurs naturally from volcanic winter. It appears that stratospheric aerosol injection, at a moderate intensity, could counter most changes to temperature and precipitation, take effect rapidly, have low direct implementation costs, and be reversible in its direct climatic effects. The Intergovernmental Panel on Climate Change concludes that it "is the most-researched [solar geoengineering] method that it could limit warming to below ." However, like other solar geoengineering approaches, stratospheric aerosol injection would do so imperfectly and other effects are possible, particularly if used in a suboptimal manner. Various forms of sulfur have been shown to cool the planet after large volcanic eruptions. However, as of 2021, there has been little research and existing natural aerosols in the stratosphere are not well understood. So there is no leading candidate material. Alumina, calcite and salt are also under consideration. The leading proposed method of delivery is custom aircraft. Scientific basis Natural and anthropogenic sulfates There is a wide range of particulate matter suspended in the atmosphere at various height and in various sizes. By far the best-studied are the various sulfur compounds collectively referred to sulfate aerosols. This group includes inorganic sulfates (SO42-,HSO4− and H2SO4): organic sulfur compounds are sometimes included as well, but are of lower importance. Sulfate aerosols can be anthropogenic (through the combustion of fossil fuels with a high sulfur content, primarily coal and certain less-refined fuels, like aviation and bunker fuel), biogenic from hydrosphere and biosphere, geological via volcanoes or weather-driven from wildfires and other natural combustion events. Inorganic aerosols are mainly produced when sulfur dioxide reacts with water vapor to form gaseous sulfuric acid and various salts (often through an oxidation reaction in the clouds), which are then thought to experience hygroscopic growth and coagulation and then shrink through evaporation as microscopic liquid droplets or fine (diameter of about 0.1 to 1.0 micrometre) sulfate solid particles in a colloidal suspension, with smaller particles at times coagulating into larger ones. The other major source are chemical reactions with dimethyl sulfide (DMS), predominantly sourced from marine plankton, with a smaller contribution from swamps and other such wetlands. And sometimes, aerosols are produced from photochemical decomposition of COS (carbonyl sulfide), or when solid sulfates in the sea salt spray can react with gypsum dust particles). Pollution controls and the discovery of radiative effects The discovery of these negative effects spurred the rush to reduce atmospheric sulfate pollution, typically through flue-gas desulfurization installations at power plants, such as wet scrubbers or fluidized bed combustion. In the United States, this began with the passage of the Clean Air Act in 1970, which was strengthened in 1977 and 1990. According to the EPA, from 1970 to 2005, total emissions of the six principal air pollutants, including sulfates, dropped by 53% in the US. By 2010, it valued the healthcare savings from these reductions at $50 billion annually. 
In Europe, it was estimated in 2021 that the 18 coal-fired power plants in the western Balkans which lack controls on sulfur dioxide pollution have emitted two-and-half times more of it than all 221 coal plants in the European Union which are fitted with these technologies. Globally, the uptake of treaties such as the 1985 Helsinki Protocol on the Reduction of Sulfur Emissions and its successors had gradually spread from the developed to the developing countries. While China and India have seen decades in rapid growth of sulfur emissions while they declined in the U.S. and Europe, they have also peaked in the recent years. In 2005, China was the largest polluter, with its estimated emissions increasing by 27% since 2000 alone and roughly matching the U.S. emissions in 1980. That year was also the peak, and a consistent decline was recorded since then. Similarly, India's sulfur dioxide emissions appear to have been largely flat in the 2010s, as more coal-fired power plants were fitted with pollution controls even as the newer ones were still coming online. Yet, around the time these treaties and technology improvements were taking place, evidence was coming in that sulfate aerosols were affecting both the visible light received by the Earth and its surface temperature. On one hand, the study of volcanic eruptions, notably 1991 eruption of Mount Pinatubo in the Philippines, had shown that the mass formation of sulfate aerosols by these eruptions formed a subtle whitish haze in the sky, reducing the amount of Sun's radiation reaching the Earth's surface and rapidly losing the heat they absorb back to space, as well increasing clouds' albedo (i.e. making them more reflective) by changing their consistency to a larger amount of smaller droplets, which was the principal reason for a clear drop in global temperatures for several years in their wake. On the other hand, multiple studies have shown that between 1950s and 1980s, the amount of sunlight reaching the surface declined by around 4–5% per decade, even though the changes in solar radiation at the top of the atmosphere were never more than 0.1-0.3%. Yet, this trend (commonly described as global dimming) began to reverse in the 1990s, consistent with the reductions in anthropogenic sulfate pollution, while at the same time, climate change accelerated. Areas like eastern United States went from seeing cooling in contrast to the global trend to becoming global warming hotspots as their enormous levels of air pollution were reduced, even as sulfate particles still accounted for around 25% of all particulates. As the real world had shown the importance of sulfate aerosol concentrations to the global climate, research into the subject accelerated. Formation of the aerosols and their effects on the atmosphere can be studied in the lab, with methods like ion-chromatography and mass spectrometry Samples of actual particles can be recovered from the stratosphere using balloons or aircraft, and remote satellites were also used for observation. This data is fed into the climate models, as the necessity of accounting for aerosol cooling to truly understand the rate and evolution of warming had long been apparent, with the IPCC Second Assessment Report being the first to include an estimate of their impact on climate, and every major model able to simulate them by the time IPCC Fourth Assessment Report was published in 2007. Many scientists also see the other side of this research, which is learning how to cause the same effect artificially. 
While discussed around the 1990s, if not earlier, stratospheric aerosol injection as a solar geoengineering method is best associated with Paul Crutzen's detailed 2006 proposal. Deploying in the stratosphere ensures that the aerosols are at their most effective, and that the progress of clean air measures would not be reversed: more recent research estimated that even under the highest-emission scenario RCP 8.5, the addition of stratospheric sulfur required to avoid relative to now (and relative to the preindustrial) would be effectively offset by the future controls on tropospheric sulfate pollution, and the amount required would be even less for less drastic warming scenarios. This spurred a detailed look at its costs and benefits, but even with hundreds of studies into the subject completed by the early 2020s, some notable uncertainties remain. Proposed methods Materials Various forms of sulfur were proposed as the injected substance, as this is in part how volcanic eruptions cool the planet. Precursor gases such as sulfur dioxide and hydrogen sulfide have been considered. According to estimates, "one kilogram of well placed sulfur in the stratosphere would roughly offset the warming effect of several hundred thousand kilograms of carbon dioxide." One study calculated the impact of injecting sulfate particles, or aerosols, every one to four years into the stratosphere in amounts equal to those lofted by the volcanic eruption of Mount Pinatubo in 1991, but did not address the many technical and political challenges involved in potential solar geoengineering efforts. Use of gaseous sulfuric acid appears to reduce the problem of aerosol growth. Materials such as photophoretic particles, metal oxides (as in Welsbach seeding, and titanium dioxide), and diamond are also under consideration. Delivery Various techniques have been proposed for delivering the aerosol or precursor gases. The required altitude to enter the stratosphere is the height of the tropopause, which varies from 11 kilometres (6.8 mi/36,000 ft) at the poles to 17 kilometers (11 mi/58,000 ft) at the equator. Civilian aircraft including the Boeing 747-400 and Gulfstream G550/650, could be modified at relatively low cost to deliver sufficient amounts of required material according to one study, but a later metastudy suggests a new aircraft would be needed but easy to develop. Military aircraft such as the F15-C variant of the F-15 Eagle have the necessary flight ceiling, but limited payload. Military tanker aircraft such as the KC-135 Stratotanker and KC-10 Extender also have the necessary ceiling at latitudes closer to the poles and have greater payload capacity. Modified artillery might have the necessary capability, but requires a polluting and expensive propellant charge to loft the payload. Railgun artillery could be a non-polluting alternative. High-altitude balloons can be used to lift precursor gases, in tanks, bladders or in the balloons' envelope. Injection system The latitude and distribution of injection locations has been discussed by various authors. While a near-equatorial injection regime will allow particles to enter the rising leg of the Brewer-Dobson circulation, several studies have concluded that a broader, and higher-latitude, injection regime will reduce injection mass flow rates and/or yield climatic benefits. 
Concentration of precursor injection in a single longitude appears to be beneficial, with condensation onto existing particles reduced, giving better control of the size distribution of aerosols resulting. The long residence time of carbon dioxide in the atmosphere may require a millennium-timescale commitment to aerosol injection if aggressive emissions abatement is not pursued simultaneously. Welsbach seeding (metal oxide particles) Welsbach seeding is a patented solar radiation modification method, involving seeding the stratosphere with small (10 to 100 micron) metal oxide particles (thorium dioxide, aluminium oxide). The purpose of the Welsbach seeding would be to "(reduce) atmospheric warming due to the greenhouse effect resulting from a greenhouse gases layer," by converting radiative energy at near-infrared wavelengths into radiation at far-infrared wavelengths, permitting some of the converted radiation to escape into space, thus cooling the atmosphere. The seeding as described would be performed by airplanes at altitudes between 7 and 13 kilometres. The method was patented by Hughes Aircraft Company in 1991, US patent 5003186. Quote from the patent: "This invention relates to a method for the reduction of global warming resulting from the greenhouse effect, and in particular to a method which involves the seeding of the earth's stratosphere with Welsbach-like materials." This is not considered to be a viable option by current geoengineering experts. Advantages The advantages of this approach in comparison to other possible means of solar geoengineering are: Mimics a natural process: Stratospheric sulfur aerosols are created by existing natural processes (especially volcanoes), whose impacts have been studied via observations. This contrasts with other, more speculative solar geoengineering techniques which do not have natural analogs (e.g., space sunshade). Technological feasibility: In contrast to other proposed solar geoengineering techniques, such as marine cloud brightening, much of the required technology is pre-existing: chemical manufacturing, artillery shells, high-altitude aircraft, weather balloons, etc. Unsolved technical challenges include methods to deliver the material in controlled diameter with good scattering properties. Scalability: Some solar geoengineering techniques, such as cool roofs and ice protection, can only provide a limited intervention in the climate due to insufficient scale—one cannot reduce the temperature by more than a certain amount with each technique. Research has suggested that this technique may have a high radiative 'forcing potential'., yet can be finely tuned according to how much cooling is needed. Speed: A common argument is that stratospheric aerosol injection can take place quickly, and would be able to buy time for carbon sequestration projects such as carbon dioxide air capture to be implemented and start acting over decades and centuries. Cost A study in 2020 looked at the cost of SAI through to the year 2100. It found that relative to other climate interventions and solutions, SAI remains inexpensive. However, at about $18 billion per year per degree Celsius of warming avoided (in 2020 USD), a solar geoengineering program with substantial climate impact would lie well beyond the financial reach of individuals, small states, or other non-state potential rogue actors. The annual cost of delivering a sufficient amount of sulfur to counteract expected greenhouse warming is estimated at $5–10 billion US dollars. 
SAI is expected to have low direct financial costs of implementation, relative to the expected costs of both unabated climate change and aggressive mitigation. Early studies suggest that stratospheric aerosol injection might have a relatively low direct cost. One analysis estimated the annual cost of delivering 5 million tons of an albedo enhancing aerosol to an altitude of 20 to 30 km is at US$2 billion to 8 billion, an amount which they suggest would be sufficient to offset the expected warming during the next century. In comparison, the annual cost estimates for climate damage or emission mitigation range from US$200 billion to 2 trillion. A 2016 study found the cost per 1 W/m2 of cooling to be between 5–50 billion USD/yr. Because larger particles are less efficient at cooling and drop out of the sky faster, the unit-cooling cost is expected to increase over time as increased dose leads to larger, but less efficient, particles by mechanism such as coalescence and Ostwald ripening. Assume RCP8.5, -5.5 W/m2 of cooling would be required by 2100 to maintain 2020 climate. At the dose level required to provide this cooling, the net efficiency per mass of injected aerosols would reduce to below 50% compared to low-level deployment (below 1W/m2). At a total dose of -5.5 W/m2, the cost would be between 55–550 billion USD/yr when efficiency reduction is also taken into account, bringing annual expenditure to levels comparable to other mitigation alternatives. Problematic aspects Uncertainties It is uncertain how effective any solar geoengineering technique would be, due to the difficulties modeling their impacts and the complex nature of the global climate system. Certain efficacy issues are specific to stratospheric aerosols. Lifespan of aerosols: Tropospheric sulfur aerosols are short-lived. Delivery of particles into the lower stratosphere in the arctic will typically ensure that they remain aloft only for a few weeks or months, as air in this region is predominantly descending. To ensure endurance, higher-altitude delivery is needed, ensuring a typical endurance of several years by enabling injection into the rising leg of the Brewer-Dobson circulation above the tropical tropopause. Further, sizing of particles is crucial to their endurance. Aerosol delivery: There are two proposals for how to create a stratospheric sulfate aerosol cloud, either through the release of a precursor gas () or the direct release of sulfuric acid () and these face different challenges. If gas is released it will oxidize to form and then condense to form droplets far from the injection site. Releasing would not allow control over the size of the particles that are formed but would not require a sophisticated release mechanism. Simulations suggest that as the release rate is increased there would be diminishing returns on the cooling effect, as larger particles would be formed which have a shorter lifetime and are less effective scatterers of light. If is released directly then the aerosol particles would form very quickly and in principle the particle size could be controlled although the engineering requirements for this are uncertain. Assuming a technology for direct release could be conceived and developed, it would allow control over the particle size to possibly alleviate some of the inefficiencies associated with release. 
Strength of cooling: The magnitude of the effect of forcing from aerosols by decreasing insolation received at the surface is not completely certain, as its scientific modelling involves complex calculations due to different confounding factors and parameters such as optical properties, spatial and temporal distribution of emission or injection, albedo, geography, loading, rate of transport of sulfate, global burden, atmospheric chemistry, mixing and reactions with other compounds and aerosols, particle size, relative humidity, and clouds. Along with others, aerosol size distribution and hygroscopicity have particularly high uncertainty due to being closely related to sulfate aerosol interactions with other aerosols which affects the amount of radiation reflected. As of 2021, state-of-the-art CMIP6 models estimate that total cooling from the currently present aerosols is between to ; the IPCC Sixth Assessment Report uses the best estimate of , but there's still a lot of contradictory research on the impacts of aerosols of clouds which can alter this estimate of aerosol cooling, and consequently, our knowledge of how many millions of tons must be deployed annually to achieve the desired effect. Hydrological cycle: Since the historical global dimming from tropospheric sulfate pollution is already well-known to have reduced rainfall in certain areas, and is likely to have weakened Monsoon of South Asia and contributed to or even outright caused the 1984 Ethiopian famine, the impact on the hydrological cycle and patterns is one of the most-discussed uncertainties of the different stratospheric aerosol injection proposals. It has been suggested that while changes in precipitation from stratospheric aerosol injection are likely to be more manageable than the changes expected under future warming, one of the main impacts it would have on mortality is by shifting the habitat of mosquitoes and thus substantially affecting the distribution and spread of vector-borne diseases. Considering the already-extensive present-day mosquito habitat, it is currently unclear whether those changes are likely to be positive or negative. Unintended possible side effects Solar geoengineering in general poses various problems and risks. However, certain problems are specific to or more pronounced with stratospheric sulfide injection. Ozone depletion: a potential side effect of sulfur aerosols; and these concerns have been supported by modelling. However, this may only occur if high enough quantities of aerosols drift to, or are deposited in, polar stratospheric clouds before the levels of CFCs and other ozone destroying gases fall naturally to safe levels because stratospheric aerosols, together with the ozone destroying gases, are responsible for ozone depletion. The injection of other aerosols that may be safer such as calcite has therefore been proposed. The injection of non-sulfide aerosols like calcite (limestone) would also have a cooling effect while counteracting ozone depletion and would be expected to reduce other side effects. Whitening of the sky: Volcanic eruptions are known to affect the appearance of sunsets significantly, and a change in sky appearance after the eruption of Mount Tambora in 1816 "The Year Without A Summer" was the inspiration for the paintings of J. M. W. Turner. Since stratospheric aerosol injection would involve smaller quantities of aerosols, it is expected to cause a subtler change to sunsets and a slight hazing of blue skies. 
How stratospheric aerosol injection may affect clouds remains uncertain. Stratospheric temperature change: Aerosols can also absorb some radiation from the Sun, the Earth, and the surrounding atmosphere. This changes the surrounding air temperature and could potentially impact the stratospheric circulation, which in turn may impact the surface circulation. Deposition and acid rain: The surface deposition of sulfate injected into the stratosphere may also have an impact on ecosystems. However, the amount and wide dispersal of injected aerosols means that their impact on particulate concentrations and acidity of precipitation would be very small. Ecological consequences: The consequences of stratospheric aerosol injection on ecological systems are unknown and potentially vary by ecosystem with differing impacts on marine versus terrestrial biomes. Mixed effects on agriculture: A historical study in 2018 found that stratospheric sulfate aerosols injected by the volcanic eruptions of Chicón (1982) and Mount Pinatubo (1991) had mixed effects on global crop yields of certain major crops. Based on several studies, the IPCC Sixth Assessment Report suggests that crop yields and carbon sinks would be largely unaffected or may even increase slightly, because reduced photosynthesis due to lower sunlight would be offset by CO2 fertilization effect and the reduction in thermal stress, but there's less confidence about how the specific ecosystems may be affected. Inhibition of Solar Energy Technologies: Uniformly reduced net shortwave radiation would hurt solar photovoltaics by the same 2–5% as for plants. the increased scattering of collimated incoming sunlight would more drastically reduce the efficiencies (by 11% for RCP8.5) of concentrating solar thermal power for both electricity production and chemical reactions, such as solar cement production. Governance aspects Most of the existing governance of stratospheric sulfate aerosols is from that which is applicable to solar radiation management more broadly. However, some existing legal instruments would be relevant to stratospheric sulfate aerosols specifically. At the international level, the Convention on Long-Range Transboundary Air Pollution (CLRTAP Convention) obligates those countries which have ratified it to reduce their emissions of particular transboundary air pollutants. Notably, both solar radiation management and climate change (as well as greenhouse gases) could satisfy the definition of "air pollution" which the signatories commit to reduce, depending on their actual negative effects. Commitments to specific values of the pollutants, including sulfates, are made through protocols to the CLRTAP Convention. Full implementation or large scale climate response field tests of stratospheric sulfate aerosols could cause countries to exceed their limits. However, because stratospheric injections would be spread across the globe instead of concentrated in a few nearby countries, and could lead to net reductions in the "air pollution" which the CLRTAP Convention is to reduce so they may be allowed. The stratospheric injection of sulfate aerosols would cause the Vienna Convention for the Protection of the Ozone Layer to be applicable due to their possible deleterious effects on stratospheric ozone. That treaty generally obligates its Parties to enact policies to control activities which "have or are likely to have adverse effects resulting from modification or likely modification of the ozone layer." 
The Montreal Protocol to the Vienna Convention prohibits the production of certain ozone depleting substances, via phase outs. Sulfates are presently not among the prohibited substances. In the United States, the Clean Air Act might give the United States Environmental Protection Agency authority to regulate stratospheric sulfate aerosols. Outdoors research In 2009, a Russian team tested aerosol formation in the lower troposphere using helicopters. In 2015, David Keith and Gernot Wagner described a potential field experiment, the Stratospheric Controlled Perturbation Experiment (SCoPEx), using stratospheric calcium carbonate injection, but as of October 2020 the time and place had not yet been determined. SCoPEx is in part funded by Bill Gates. Sir David King, a former chief scientific adviser to the government of the United Kingdom, stated that SCoPEX and Gates' plans to dim the sun with calcium carbonate could have disastrous effects. In 2012, the Bristol University-led Stratospheric Particle Injection for Climate Engineering (SPICE) project planned on a limited field test to evaluate a potential delivery system. The group received support from the EPSRC, NERC and STFC to the tune of £2.1 million and was one of the first UK projects aimed at providing evidence-based knowledge about solar radiation management. Although the field testing was cancelled, the project panel decided to continue the lab-based elements of the project. Furthermore, a consultation exercise was undertaken with members of the public in a parallel project by Cardiff University, with specific exploration of attitudes to the SPICE test. This research found that almost all of the participants in the poll were willing to allow the field trial to proceed, but very few were comfortable with the actual use of stratospheric aerosols. A campaign opposing geoengineering led by the ETC Group drafted an open letter calling for the project to be suspended until international agreement is reached, specifically pointing to the upcoming convention of parties to the Convention on Biological Diversity in 2012. History Mikhail Budyko is believed to have been the first, in 1974, to put forth the concept of artificial solar radiation management with stratospheric sulfate aerosols if global warming ever became a pressing issue. Such controversial climate engineering proposals for global dimming have sometimes been called a "Budyko Blanket". See also References External links What can we do about climate change?, Oceanography magazine Global Warming and Ice Ages: Prospects for Physics-Based Modulation of Global Change, Lawrence Livermore National Laboratory The Geoengineering Option:A Last Resort Against Global Warming?, Council on Foreign Relations Geo-Engineering Climate Change with Sulfate Aerosols, Pacific Northwest National Laboratory Geo-Engineering Research, Parliamentary Office of Science and Technology Geo-engineering Options for Mitigating Climate Change, Department of Energy and Climate Change Unilateral Geoengineering, Council on Foreign Relations PBS NewsHour published on 27 March 2019 animation of SCoPEx Climate change policy Planetary engineering Climate engineering Aerosols
Stratospheric aerosol injection
[ "Chemistry", "Engineering" ]
5,449
[ "Planetary engineering", "Geoengineering", "Aerosols", "Colloids" ]
21,685,565
https://en.wikipedia.org/wiki/Karlsruhe%20Nuclide%20Chart
The Karlsruhe Nuclide Chart is a widespread table of nuclides in print. Characteristics It is a two-dimensional graphical representation in the Segrè-arrangement with the neutron number N on the abscissa and the proton number Z on the ordinate. Each nuclide is represented at the intersection of its respective neutron and proton number by a small square box with the chemical symbol and the nucleon number A. By columnar subdivision of such a field, in addition to ground states also nuclear isomers can be shown. The coloring of a field (segmented if necessary) shows in addition to the existing text entries the observed types of radioactive decay of the nuclide and a rough classification of their relative shares: stable, nonradioactive nuclides completely black, primordial radionuclides partially black, proton emission orange, alpha decay yellow, beta plus decay/electron capture red, isomeric transition (gamma decay, internal conversion) white, beta minus decay blue, spontaneous fission green, cluster emission violet, neutron emission light blue. For each radionuclide its field includes (if known) information about its half-life and essential energies of the emitted radiation, for stable nuclides and primordial radionuclides there are data on mole fraction abundances in the natural isotope mixture of the corresponding chemical element. Furthermore, for many nuclides cross sections for nuclear reactions with thermal neutrons are quoted, usually for the (n, γ)-reaction (neutron capture), partly fission cross sections for the induced nuclear fission and cross sections for the (n, α)-reaction or (n, p)-reaction. For the chemical elements cross sections and standard atomic weights (both averaged over natural isotopic composition) are specified (the relative atomic masses partially as an interval to reflect the variability of the composition of the element's natural isotope mixture). For the nuclear fission of 235U and 239Pu with thermal neutrons, percentage isobaric chain yields of fission products are listed. History, editions The first printed edition of the Karlsruhe Nuclide Chart of 1958 in the form of a wall chart was created by Walter Seelmann-Eggebert and his assistant Gerda Pfennig. Walter Seelmann-Eggebert was director of the Radiochemistry Institute in the 1956 founded "Kernreaktor Bau- und Betriebsgesellschaft mbH" in Karlsruhe, Germany (a predecessor institution of the later "(Kern-)Forschungszentrum Karlsruhe", nowadays Karlsruhe Institute of Technology) and appointed professor of radiochemistry at the Karlsruhe Technical University. Radiochemical isotope courses were held at the institute, and in the context of these teaching courses the Karlsruhe Nuclide Chart arose, which was intended to be a well-structured overview of the essential properties of the nuclides already known at that time. In the following decades, the Karlsruhe Nuclide Chart was published and revised several times. In addition to other co-authors, Seelmann-Eggebert († 1988) was involved up to the 5th edition in 1981, Pfennig († 2017) up to the 9th edition in 2015. In 2006, the management of the Karlsruhe Nuclide Chart changed over from Forschungszentrum Karlsruhe to the Institute for Transuranium Elements (ITU) of the Joint Research Centre (JRC) of the European Commission (EC), then in 2012 to Nucleonica GmbH, a spin-off company of the JRC-ITU. 
The following summary table regarding the individual editions of the Karlsruhe Nuclide Chart also expresses the scientific progress in the field of discovery/exploration of the nuclides and new chemical elements. Versions The Karlsruhe Nuclide Chart is primarily published as a fold-out chart (size A4) or as a wall chart (size 0.96 m × 1.40 m). There are also larger sizes (roll map, auditorium chart and "carpet"). Since 2014, an internet-based version "Karlsruhe Nuclide Chart Online (KNCO)" with regular updates is offered via the Nucleonica nuclear science internet portal. To support nuclear education, a simplified school version, the KNClight has been developed. The largest known version of the Karlsruhe Nuclide Chart is located in the Reactor Institute Delft, being 13 m × 19 m in size. References Tables of nuclides
Karlsruhe Nuclide Chart
[ "Chemistry" ]
933
[ "Tables of nuclides", "Isotopes" ]
21,689,094
https://en.wikipedia.org/wiki/Sommerfeld%20expansion
A Sommerfeld expansion is an approximation method developed by Arnold Sommerfeld for a certain class of integrals which are common in condensed matter and statistical physics. Physically, the integrals represent statistical averages using the Fermi–Dirac distribution. When the inverse temperature β is a large quantity, the integral can be expanded in terms of β as

\int_{-\infty}^{\infty} \frac{H(\varepsilon)}{e^{\beta(\varepsilon - \mu)} + 1}\, d\varepsilon = \int_{-\infty}^{\mu} H(\varepsilon)\, d\varepsilon + \frac{\pi^2}{6} \left(\frac{1}{\beta}\right)^2 H'(\mu) + O\!\left(\frac{1}{\beta\mu}\right)^4,

where H'(μ) is used to denote the derivative of H(ε) evaluated at ε = μ and where the O notation refers to limiting behavior of order (1/(βμ))⁴. The expansion is only valid if H(ε) vanishes as ε → −∞ and goes no faster than polynomially in ε as ε → ∞. If the integral is from zero to infinity, then the integral in the first term of the expansion is from zero to μ and the second term is unchanged. Application to the free electron model Integrals of this type appear frequently when calculating electronic properties, like the heat capacity, in the free electron model of solids. In these calculations the above integral expresses the expected value of the quantity H(ε). For these integrals we can then identify β as the inverse temperature and μ as the chemical potential. Therefore, the Sommerfeld expansion is valid for large βμ (low temperature) systems. Derivation to second order in temperature We seek an expansion that is second order in temperature, i.e., to τ², where τ = kBT is the product of temperature and the Boltzmann constant. Begin with a change of variables to ε = μ + τx: Divide the range of integration, , and rewrite using the change of variables : Next, employ an algebraic 'trick' on the denominator of , to obtain: Return to the original variables with in the first term of . Combine to obtain: The numerator in the second term can be expressed as an approximation to the first derivative, provided τ is sufficiently small and H(ε) is sufficiently smooth: to obtain, The definite integral is known to be: ∫₀^∞ x/(1 + eˣ) dx = π²/12. Hence, Higher order terms and a generating function We can obtain higher order terms in the Sommerfeld expansion by use of a generating function for moments of the Fermi distribution. This is given by Here and Heaviside step function subtracts the divergent zero-temperature contribution. Expanding in powers of gives, for example A similar generating function for the odd moments of the Bose function is Notes References Equations of physics Statistical mechanics Particle statistics
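A small numerical sketch (not part of the article; the test function H(ε) = ε^{3/2}, the free-electron density-of-states form, is an illustrative assumption) comparing the exact Fermi–Dirac integral with the two-term Sommerfeld approximation in the low-temperature regime:

```python
import numpy as np
from scipy import integrate

def fermi_integral(H, mu, beta):
    """Numerically evaluate the exact integral of H(e) / (exp(beta*(e - mu)) + 1) over [0, inf)."""
    f = lambda e: H(e) / (np.exp(min(beta * (e - mu), 700.0)) + 1.0)  # clamp exponent to avoid overflow
    val, _ = integrate.quad(f, 0.0, np.inf, limit=200)
    return val

def sommerfeld_two_term(H, dH, mu, beta):
    """Two-term Sommerfeld approximation: integral_0^mu H(e) de + (pi^2/6) * (1/beta)^2 * H'(mu)."""
    lead, _ = integrate.quad(H, 0.0, mu)
    return lead + (np.pi**2 / 6.0) * dH(mu) / beta**2

H  = lambda e: e**1.5        # assumed test function (free-electron-like density of states)
dH = lambda e: 1.5 * e**0.5

mu, beta = 1.0, 50.0         # beta*mu = 50, i.e. a low-temperature regime
print(fermi_integral(H, mu, beta))            # ~0.4010
print(sommerfeld_two_term(H, dH, mu, beta))   # ~0.4010, agreeing to order (1/(beta*mu))^4
```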
Sommerfeld expansion
[ "Physics", "Mathematics" ]
440
[ "Equations of physics", "Particle statistics", "Mathematical objects", "Equations", "Statistical mechanics" ]
21,689,632
https://en.wikipedia.org/wiki/Market%20design
Market design is an interdisciplinary, engineering-driven approach to economics and a practical methodology for creation of markets of certain properties, which is partially based on mechanism design. In market design, the focus is on the rules of exchange, meaning who gets allocated what and by what procedure. Market design is concerned with the workings of particular markets in order to fix them when they are broken or to build markets when they are missing. Practical applications of market design theory has included labor market matching (e.g. the national residency match program), organ transplantation, school choice, university admissions, and more. Auction theory Early research on auctions focused on two special cases: common value auctions in which buyers have private signals of an items true value and private value auctions in which values are identically and independently distributed. Milgrom and Weber (1982) present a much more general theory of auctions with positively related values. Each of n buyers receives a private signal . Buyer i’s value is strictly increasing in and is an increasing symmetric function of . If signals are independently and identically distributed, then buyer i’s expected value is independent of the other buyers’ signals. Thus, the buyers’ expected values are independently and identically distributed. This is the standard private value auction. For such auctions the revenue equivalence theorem holds. That is, expected revenue is the same in the sealed first-price and second-price auctions. Milgrom and Weber assumed instead that the private signals are “affiliated”. With two buyers, the random variables and with probability density function are affiliated if , for all and all . Applying Bayes’ Rule it follows that , for all and all . Rearranging this inequality and integrating with respect to it follows that , for all and all. (1) It is this implication of affiliation that is critical in the discussion below. For more than two symmetrically distributed random variables, let be a set of random variables that are continuously distributed with joint probability density function f(v) . The n random variables are affiliated if for all and in where . Revenue Ranking Theorem (Milgrom and Weber) ) Suppose each of n buyers receives a private signal . Buyer i’s value is strictly increasing in and is an increasing symmetric function of . If signals are affiliated, the equilibrium bid function in a sealed first-price auction is smaller than the equilibrium expected payment in the sealed second price auction. The intuition for this result is as follows: In the sealed second-price auction the expected payment of a winning bidder with value v is based on their own information. By the revenue equivalence theorem if all buyers had the same beliefs, there would be revenue equivalence. However, if values are affiliated, a buyer with value v knows that buyers with lower values have more pessimistic beliefs about the distribution of values. In the sealed high-bid auction such low value buyers therefore bid lower than they would if they had the same beliefs. Thus the buyer with value v does not have to compete so hard and bids lower as well. Thus the informational effect lowers the equilibrium payment of the winning bidder in the sealed first-price auction. Equilibrium bidding in the sealed first- and second-price auctions We consider here the simplest case in which there are two buyers and each buyer’s value depends only on his own signal. 
Then the buyers’ values are private and affiliated. In the sealed second-price (or Vickrey auction), it is a dominant strategy for each buyer to bid his value. If both buyers do so, then a buyer with value v has an expected payment of (2) . In the sealed first-price auction, the increasing bid function B(v) is an equilibrium if bidding strategies are mutual best responses. That is, if buyer 1 has value v, their best response is to bid b = B(v) if they believes that their opponent is using this same bidding function. Suppose buyer 1 deviates and bids b = B(z) rather than B(v) . Let U(z) be their resulting payoff. For B(v) to be an equilibrium bid function, U(z) must take on its maximum at x = v. With a bid of b = B(z) buyer 1 wins if , that is, if . The win probability is then so that buyer 1's expected payoff is . Taking logs and differentiating by z, . (3) The first term on the right hand side is the proportional increase in the win probability as the buyer raises his bid from to . The second term is the proportional drop in the payoff if the buyer wins. We have argued that, for equilibrium, U(z) must take on its maximum at z = v . Substituting for z in (3) and setting the derivative equal to zero yields the following necessary condition. . (4) Proof of the revenue ranking theorem Buyer 1 with value x has conditional p.d.f. . Suppose that he naively believes that all other buyers have the same beliefs. In the sealed high bid auction he computes the equilibrium bid function using these naive beliefs. Arguing as above, condition (3) becomes . (3’) Since x > v it follows by affiliation (see condition (1)) that the proportional gain to bidding higher is bigger under the naive beliefs that place higher mass on higher values. Arguing as before, a necessary condition for equilibrium is that (3’) must be zero at x = v. Therefore, the equilibrium bid function satisfies the following differential equation. . (5) Appealing to the revenue equivalence theorem, if all buyers have values that are independent draws from the same distribution then the expected payment of the winner is the same in the two auctions. Therefore, . Thus, to complete the proof we need to establish that . Appealing to (1), it follows from (4) and (5) that for all v < x. Therefore, for any v in the interval [0,x] . Suppose that . Since the equilibrium bid of a buyer with value 0 is zero, there must be some y < x such that and . But this is impossible since we have just shown that over such an interval, is decreasing. Since it follows that the winner bidder's expected payment is lower in the sealed high-bid auction. Ascending auctions with package bidding Milgrom has also contributed to the understanding of combinatorial auctions. In work with Larry Ausubel (Ausubel and Milgrom, 2002), auctions of multiple items, which may be substitutes or complements, are considered. They define a mechanism, the “ascending proxy auction,” constructed as follows. Each bidder reports his values to a proxy agent for all packages that the bidder is interested in. Budget constraints can also be reported. The proxy agent then bids in an ascending auction with package bidding on behalf of the real bidder, iteratively submitting the allowable bid that, if accepted, would maximize the real bidder's profit (value minus price), based on the reported values. The auction is conducted with negligibly small bid increments. After each round, provisionally winning bids are determined that maximize the total revenue from feasible combinations of bids. 
All of a bidder's bids are kept live throughout the auction and are treated as mutually exclusive. The auction ends after a round occurs with no new bids. The ascending proxy auction may be viewed either as a compact representation of a dynamic combinatorial auction or as a practical direct mechanism, the first example of what Milgrom would later call a “core selecting auction.” They prove that, with respect to any reported set of values, the ascending proxy auction always generates a core outcome, i.e. an outcome that is feasible and unblocked. Moreover, if bidders’ values satisfy the substitutes condition, then truthful bidding is a Nash equilibrium of the ascending proxy auction and yields the same outcome as the Vickrey–Clarke–Groves (VCG) mechanism. However, the substitutes condition is robustly a necessary as well as a sufficient condition: if just one bidder's values violate the substitutes condition, then with appropriate choice of three other bidders with additively-separable values, the outcome of the VCG mechanism lies outside the core; and so the ascending proxy auction cannot coincide with the VCG mechanism and truthful bidding cannot be a Nash equilibrium. They also provide a complete characterization of substitutes preferences: Goods are substitutes if and only if the indirect utility function is submodular. Ausubel and Milgrom (2006a, 2006b) exposit and elaborate on these ideas. The first of these articles, entitled "The Lovely but Lonely Vickrey Auction", made an important point in market design. The VCG mechanism, while highly attractive in theory, suffers from a number of possible weaknesses when the substitutes condition is violated, making it a poor candidate for empirical applications. In particular, the VCG mechanism may exhibit: low (or zero) seller revenues; non-monotonicity of the seller's revenues in the set of bidders and the amounts bid; vulnerability to collusion by a coalition of losing bidders; and vulnerability to the use of multiple bidding identities by a single bidder. This may explain why the VCG auction design, while so lovely in theory, is so lonely in practice. Additional work in this area by Milgrom together with Larry Ausubel and Peter Cramton has been particularly influential in practical market design. Ausubel, Cramton and Milgrom (2006) together proposed a new auction format that is now called the combinatorial clock auction (CCA), which consists of a clock auction stage followed by a sealed-bid supplementary round. All of the bids are interpreted as package bids; and the final auction outcome is determined using a core selecting mechanism. The CCA was first used in the United Kingdom's 10–40 GHz spectrum auction of 2008. Since then, it has become a new standard for spectrum auctions: it has been utilized for major spectrum auctions in Austria, Denmark, Ireland, the Netherlands, Switzerland and the UK; and it is slated to be used in forthcoming auctions in Australia and Canada. At the 2008 Nemmers Prize conference, Penn State University economist Vijay Krishna and Larry Ausubel highlighted Milgrom's contributions to auction theory and their subsequent impact on auction design. Matching theory According to economic theory, under certain conditions, the voluntary exchanges of all economic agents will lead to the maximum welfare of those engaged in the exchanges. 
In reality, however, the situation is different: we usually face market failures, and of course, we sometimes face conditions or constraints such as congested markets, repugnant markets, and unsafe markets. This is where market designers try to create interactive platforms with specific rules and constraints to achieve optimal situations. It is claimed that such platforms provide maximum efficiency and benefit to society. Matching refers to the idea of establishing a proper relationship between the two sides of the market, the demanders of a good or service and its suppliers. This theory explores who achieves what in economic interactions. The idea of matching emerged in the form of theoretical efforts by mathematicians such as Shapley and Gale. It matured with the efforts of economists such as Roth, and now market design and matching are among the most important branches of microeconomics and game theory. Milgrom has also contributed to the understanding of matching market design. In work with John Hatfield (Hatfield and Milgrom, 2005), he shows how to generalize the stable marriage matching problem to allow for “matching with contracts”, where the terms of the match between agents on either side of the market arise endogenously through the matching process. They show that a suitable generalization of the deferred acceptance algorithm of David Gale and Lloyd Shapley finds a stable matching in their setting; moreover, the set of stable matchings forms a lattice, and similar vacancy chain dynamics are present. The observation that stable matchings form a lattice was a well-known result that provided the key to their insight into generalizing the matching model. They observed (as did some other contemporary authors) that the lattice of stable matchings was reminiscent of the conclusion of Tarski's fixed point theorem, which states that an increasing function from a complete lattice to itself has a nonempty set of fixed points that form a complete lattice. But it was not apparent what the lattice was, or what the increasing function was. Hatfield and Milgrom observed that the accumulated offers and rejections formed a lattice, and that the bidding process in an auction and the deferred acceptance algorithm were examples of a cumulative offer process that was an increasing function in this lattice. Their generalization also shows that certain package auctions (see also: Paul Milgrom: Policy) can be thought of as a special case of matching with contracts, where there is only one agent (the auctioneer) on one side of the market and contracts include both the items to be transferred and the total transfer price as terms. Thus, two of market design's great success stories, the deferred acceptance algorithm as applied to the medical match, and the simultaneous ascending auction as applied to the FCC spectrum auctions, have a deep mathematical connection. In addition, this work (in particular, the "cumulative offer" variation of the deferred acceptance algorithm) has formed the basis of recently proposed redesigns of the mechanisms used to match residents to hospitals in Japan and cadets to branches in the US Army. Application In general, the topics studied by market designers relate to various problems in matching markets. Alvin Roth has divided the obstacles to matching market participants into three main categories: Sometimes, the market participants do not know about each other because of "market thinness." In this case, the market suffers from a lack of enough thickness.
In some cases, the cause of dysfunctionality is market congestion and the lack of opportunities for market participants to know each other. In these cases, the excessive market thickness causes the market parties not to have enough time to choose their preferred options. In some markets, due to special arrangements, there is a possibility of strategic behavior by market participants, and therefore people do not really reflect their preferences. In these cases, the market is not safe for expressing actual preferences. The solution of market designers in the face of these problems is to propose the creation of a centralized clearing house to receive the preference information of market participants and use appropriate matching algorithms. The aggregation of information, the design of some rules, and the use of these algorithms lead to the appropriate matching of market participants, the safety of the market environment, and improved market allocation. In this formulation, the mechanism acts as a communication system between the parties of an economic interaction that determines the outcome of this interaction based on pre-determined rules and the signals received from market participants. Therefore, the purpose of market design is simply to determine the rules of the game to optimize the game's outcome. Market design and matching in the labor market As mentioned, in some markets, the pricing mechanism may not allocate resources optimally. One such market is the labor market. Usually, employers or firms do not reduce the offered wage to such an extent that supply and demand in the labor market are equal. What is important for firms is to choose exactly "the most appropriate worker." In some labor markets, choosing "the most appropriate employer" is also important for job seekers. Since the process of informing market participants about each other's preferences is disrupted, rules should be designed to improve market performance. Market design and matching in the kidney transplant market Another important application of matching is the kidney transplant market. Kidney transplant applicants often face the problem of a lack of compatible kidneys. Market designers try to make the kidney exchange market more efficient by designing systems to match kidney applicants and kidney donors. Two general types of communication between kidney applicants and donors are chain and cyclical systems of exchanges. In cyclic exchange, kidney donors and recipients form a cycle for kidney exchange. Simplifying participants’ messages Milgrom has contributed to the understanding of the effect of simplifying the message space in practical market design. He observed, and developed as an important design element of many markets, the notion of conflation: the idea of restricting a participant's ability to convey rich preferences by forcing them to enter the same value for different preferences. An example of conflation arises in Gale and Shapley's deferred acceptance algorithm for hospital and doctor matching when hospitals are allowed to submit only responsive preferences (i.e., the ranking of doctors and capacities) even though they could conceivably be asked to submit general substitutes preferences. In Internet sponsored-search auctions, advertisers are allowed to submit a single per-click bid, regardless of which ad positions they win.
A similar, earlier idea of a conflated generic-item auction is an important component of the Combinatorial Clock Auction (Ausubel, Cramton and Milgrom, 2006), widely used in spectrum auctions including the UK's recent 800 MHz / 2.6 GHz auction, and has also been proposed for Incentive Auctions. Bidders are allowed to express only the quantity of frequencies in the allocation stage of the auction without regard to the specific assignment (which is decided in a later assignment stage). Milgrom (2010) shows that, with a certain “outcome closure property,” conflation adds no new unintended outcomes as equilibria, and argues that, by thickening the markets, it may intensify price competition and increase revenue. As a concrete application of the idea of simplifying messages, Milgrom (2009) defines assignment messages of preferences. In assignment messages, an agent can encode certain nonlinear preferences involving various substitution possibilities into linear objectives by describing multiple “roles” that objects can play in generating utility, with the utility thus generated being added up. The valuation over a set of objects is the maximum value that can be achieved by optimally assigning them to various roles. Assignment messages can also be applied to resource allocation without money; see, for example, the problem of course allocation in schools, as analyzed by Budish, Che, Kojima, and Milgrom (2013). In doing so, the paper has provided a generalization of the Birkhoff–von Neumann theorem (a mathematical property of doubly stochastic matrices) and applied it to analyze when a given random assignment can be "implemented" as a lottery over feasible deterministic outcomes. A more general language, the endowed assignment message, is studied by Hatfield and Milgrom (2005). Milgrom provides an overview of these issues in Milgrom (2011). See also Designing Economic Mechanisms References External links Nemmers Prize Lecture, 2008 National Science Foundations LiveScience Program Interview, 2012 Auction theory Economic theories Labour economics Mathematical and quantitative methods (economics) Microeconomic theories Mechanism design
Market design
[ "Mathematics" ]
3,891
[ "Game theory", "Mechanism design", "Auction theory" ]
21,691,608
https://en.wikipedia.org/wiki/Circuit%20total%20limitation
Circuit total limitation (CTL) is one of the present-day standards for electrical panels sold in the United States according to the National Electrical Code. This standard requires an electrical panel to provide a physical mechanism to prevent installing more circuit breakers than it was designed for. This has generally been implemented by restricting the use of tandem (duplex) breakers to replace standard single pole breakers. Code Requirement The 1965 edition of the NEC, Article 384-15, was the first reference to the circuit total limitation of panelboards. In current editions of the code, this language is located at Article 408.54, now titled "Maximum Number of Overcurrent Devices." Non-CTL panels have not been made by reputable manufacturers since 1965, though this may change due to the 2008 repeal. Non-CTL for replacement only Circuit breaker panels and panelboards built prior to 1965 did not have circuit total limiting devices or features built in. To support these old panels, non-CTL circuit breakers that bypass the rejection feature are still sold "for replacement use only." As a result, numerous unsafe situations have arisen in which panels were dangerously overloaded because these non-CTL breakers continue to be used. With the use of non-CTL breakers, panels can be configured with a total number of circuits in excess of the designed capacity of the panel. The 2008 code did away with the previous 42-circuit limitation on panelboards. One can now order panelboards with as many as 84 circuit places, and a corresponding ampacity rating. If a panelboard with a sufficient number of breaker positions is installed in the first place, the need for non-CTL breakers should be eliminated. In its 2019 catalog, Eaton specifies that its non-CTL breakers are "Suitable for use in plug-on neutral style loadcenters," which negates the replacement-only rule. Gallery See also National Electrical Code References External links Electrical engineering Electrical wiring
Circuit total limitation
[ "Physics", "Engineering" ]
391
[ "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
37,351,627
https://en.wikipedia.org/wiki/Astrophysics%20Research%20Institute
The Astrophysics Research Institute (ARI) is an astronomy and astrophysics research institute in Merseyside, UK. Formed in 1992, it stood on the Twelve Quays site in Birkenhead from 1998 until June 2013, when it relocated to the Liverpool Science Park in Liverpool. It is ranked in the top 1% of institutions in the field of space science as measured by total citations, and there are over 90 staff members and research students working at the institute, which lies within the administration of Liverpool John Moores University's Faculty of Engineering and Technology. Research The research conducted at the Institute covers many areas of astronomy and astrophysics, such as supernovae, star formation and galaxy clusters. This work is funded by external organisations, such as the Science and Technology Facilities Council, and the Higher Education Funding Council for England. The institute also maintains the Liverpool Telescope, which is located on the island of La Palma in the Canary Islands. Education The institute offers two undergraduate courses: a 3-year BSc (Hons) in Physics and Astronomy, as well as a 4-year MPhys (Hons) in Astrophysics. Both undergraduate courses are taught as a joint degree by the Astrophysics Research Institute of Liverpool John Moores University and the Department of Physics at the University of Liverpool. The courses are accredited by the Institute of Physics. Postgraduate courses are made available at PhD and Master's level, with two MSc courses taught via distance learning. Unaccredited short courses are also made available to those who do not have a scientific or mathematical background. The Astronomy by Distance Learning courses are taught by CD-ROM, DVD and website material without the need for classroom sessions. Each of the courses provides an introduction to astronomy as well as to specialist areas such as supernovae. Awards In 2006, the institute received the "Queen's Anniversary Prize" for higher education in recognition of its development of the robotic telescope. In 2007, the "Times Higher Education Supplement Award" for 'project of the year' was given for the use of the RINGO optical polarimeter at the Liverpool Telescope in measuring gamma-ray bursts. RINGO has since been decommissioned and an updated polarimeter named MOPTOP has since entered operation. Director External links Liverpool John Moores University References Astronomy in the United Kingdom Astrophysics research institutes Research institutes established in 1992 Research institutes in Merseyside Science and Technology Facilities Council 1992 establishments in the United Kingdom
Astrophysics Research Institute
[ "Physics" ]
484
[ "Astrophysics research institutes", "Astrophysics" ]
37,358,308
https://en.wikipedia.org/wiki/Piola%20transformation
The Piola transformation maps vectors between Eulerian and Lagrangian coordinates in continuum mechanics. It is named after Gabrio Piola. Definition Let F(x̂) = B x̂ + b, with an invertible matrix B, be an affine transformation mapping a reference domain K̂ ⊂ R^d, a domain with Lipschitz boundary, onto a domain K. The mapping which takes a vector field q̂ : K̂ → R^d to the field q(x) = (1/|det B|) B q̂(x̂), where x = F(x̂), is called the Piola transformation. The usual definition takes the absolute value of the determinant, although some authors make it just the determinant. Note: for a more general definition in the context of tensors and elasticity, as well as a proof of the property that the Piola transform conserves the flux of tensor fields across boundaries, see Ciarlet's book. See also Piola–Kirchhoff stress tensor Raviart–Thomas basis functions Raviart–Thomas Element References Continuum mechanics
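A minimal numerical sketch of the definition above, using the convention with the absolute value of the determinant; the function and variable names are illustrative and not taken from any particular library:

import numpy as np

def piola_transform(B, q_hat):
    # Piola transform of a reference vector field value q_hat under the
    # affine map x = B @ x_hat + b (the offset b does not enter the formula).
    return (B @ q_hat) / abs(np.linalg.det(B))

# Example: a stretching map with det B = 1, so the scaling factor is 1 and
# only the directional part of the field value is transformed.
B = np.array([[2.0, 0.0], [0.0, 0.5]])
q_hat = np.array([1.0, 0.0])
print(piola_transform(B, q_hat))  # [2. 0.]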
Piola transformation
[ "Physics" ]
154
[ "Classical mechanics stubs", "Classical mechanics", "Continuum mechanics" ]
41,553,887
https://en.wikipedia.org/wiki/Janet%20basis
In mathematics, a Janet basis is a normal form for systems of linear homogeneous partial differential equations (PDEs) that removes the inherent arbitrariness of any such system. It was introduced in 1920 by Maurice Janet. It was first called the Janet basis by Fritz Schwarz in 1998. The left hand sides of such systems of equations may be considered as differential polynomials of a ring, and Janet's normal form as a special basis of the ideal that they generate. By abuse of language, this terminology will be applied both to the original system and the ideal of differential polynomials generated by the left hand sides. A Janet basis is the predecessor of a Gröbner basis introduced by Bruno Buchberger for polynomial ideals. In order to generate a Janet basis for any given system of linear PDEs a ranking of its derivatives must be provided; then the corresponding Janet basis is unique. If a system of linear PDEs is given in terms of a Janet basis its differential dimension may easily be determined; it is a measure for the degree of indeterminacy of its general solution. In order to generate a Loewy decomposition of a system of linear PDEs its Janet basis must be determined first. Generating a Janet basis Any system of linear homogeneous PDEs is highly non-unique, e.g. an arbitrary linear combination of its elements may be added to the system without changing its solution set. A priori it is not known whether it has any nontrivial solutions. More generally, the degree of arbitrariness of its general solution is not known, i.e. how many undetermined constants or functions it may contain. These questions were the starting point of Janet's work; he considered systems of linear PDEs in any number of dependent and independent variables and generated a normal form for them. Here, mainly linear PDEs in the plane with the coordinates x and y will be considered; the number of unknown functions is one or two. Most results described here may be generalized in an obvious way to any number of variables or functions. In order to generate a unique representation for a given system of linear PDEs, at first a ranking of its derivatives must be defined. Definition: A ranking of derivatives is a total ordering such that for any two derivatives δ1 and δ2, and any derivation operator ∂, the relations δ1 ≤ ∂δ1 and (δ1 ≤ δ2 implies ∂δ1 ≤ ∂δ2) are valid. A derivative δ2 is called higher than δ1 if δ2 > δ1 in this ordering. The highest derivative in an equation is called its leading derivative. For the derivatives up to order two of a single function depending on x and y, two possible orders are the LEX order and the GRLEX order. Here the usual notation is used. If the number of functions is higher than one, these orderings have to be generalized appropriately, e.g. suitable extensions of these orderings to several functions may be applied. The first basic operation to be applied in generating a Janet basis is the reduction of one equation w.r.t. another one. In colloquial terms this means the following: Whenever a derivative occurring in the first equation may be obtained from the leading derivative of the second by suitable differentiation, this differentiation is performed and a suitable multiple of the result is subtracted from the first equation. Reduction w.r.t. a system of PDEs means reduction w.r.t. all elements of the system. A system of linear PDEs is called autoreduced if all possible reductions have been performed. The second basic operation for generating a Janet basis is the inclusion of integrability conditions.
They are obtained as follows: If two equations are such that by suitable differentiations two new equations may be obtained with like leading derivatives, then by cross-multiplication with the leading coefficients and subtraction of the resulting equations a new equation is obtained; it is called an integrability condition. If it does not vanish upon reduction w.r.t. the remaining equations of the system, it is included as a new equation in the system. It may be shown that repeating these operations always terminates after a finite number of steps with a unique answer which is called the Janet basis for the input system. Janet has organized them in terms of the following algorithm. Janet's algorithm: Given a system of linear differential polynomials, the Janet basis corresponding to it is returned. S1: (Autoreduction) Autoreduce the system. S2: (Completion) Complete the system by adding the equations needed for determining the integrability conditions. S3: (Integrability conditions) Find all pairs of leading terms such that differentiation w.r.t. a nonmultiplier on one side and multipliers on the other leads to like leading derivatives, and determine the corresponding integrability conditions. S4: (Reduction of integrability conditions) Reduce all integrability conditions w.r.t. the current system. S5: (Termination?) If all reduced integrability conditions are zero, return the current system; otherwise add the nonzero ones to the system, reorder properly and go to S1. Here, autoreduction is a subalgorithm that returns its argument with all possible reductions performed, and completion adds certain equations to the system in order to facilitate determining the integrability conditions. To this end the variables are divided into multipliers and non-multipliers; details may be found in the above references. Upon successful termination a Janet basis for the input system will be returned. Example 1: Let a system be given with GRLEX ordering. Step S1 returns the autoreduced system. Steps S3 and S4 generate the integrability condition and reduce it; the Janet basis for the originally given system allows only the trivial solution. The next example involves two unknown functions, both depending on x and y. Example 2: Consider a system in GRLEX ordering. The system is already autoreduced, i.e. step S1 returns it unchanged. Step S3 generates two integrability conditions. They are reduced in step S4; in step S5 they are included in the system, and the algorithm starts again with step S1 on the extended system. After a few more iterations the Janet basis is finally obtained. It yields a general solution with two undetermined constants. Application of Janet bases The most important application of a Janet basis is its use for deciding the degree of indeterminacy of a system of linear homogeneous partial differential equations. The answer in the above Example 1 is that the system under consideration allows only the trivial solution. In the second Example 2 a two-dimensional solution space is obtained. In general, the answer may be more involved; there may be infinitely many free constants in the general solution; they may be obtained from the Loewy decomposition of the respective Janet basis. Furthermore, the Janet basis of a module allows one to read off a Janet basis for the syzygy module. Janet's algorithm has been implemented in Maple. External links www.alltypes.de – Implementation of Janet's basis References Computer algebra Differential algebra
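As a small illustration of the integrability-condition step (an example chosen here for brevity, not one of Janet's original examples): consider the system z_x − y z = 0, z_y = 0 for a single unknown function z(x, y). Differentiating the first equation w.r.t. y gives z_xy − z − y z_y = 0, and differentiating the second w.r.t. x gives z_xy = 0; subtracting the two and reducing with z_y = 0 yields the integrability condition z = 0. Including it and reducing once more leaves the Janet basis {z = 0}, so the system allows only the trivial solution, the same kind of conclusion as in Example 1 above.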
Janet basis
[ "Mathematics", "Technology" ]
1,361
[ "Differential algebra", "Computational mathematics", "Computer algebra", "Fields of abstract algebra", "Computer science", "Algebra" ]
41,555,364
https://en.wikipedia.org/wiki/C19H29NO2
{{DISPLAYTITLE:C19H29NO2}} The molecular formula C19H29NO2 (molar mass: 303.44 g/mol, exact mass: 303.2198 u) may refer to: Bornaprolol Nexeridine Molecular formulas
C19H29NO2
[ "Physics", "Chemistry" ]
61
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
41,555,646
https://en.wikipedia.org/wiki/State-transition%20equation
The state-transition equation is defined as the solution of the linear homogeneous state equation. The linear time-invariant state equation given by dx(t)/dt = A x(t) + B u(t) + E w(t), with state vector x(t), control vector u(t), vector w(t) of additive disturbances, and fixed matrices A, B and E, can be solved by using either the classical method of solving linear differential equations or the Laplace transform method. The Laplace transform solution is presented in the following equations. The Laplace transform of the above equation yields sX(s) − x(0) = A X(s) + B U(s) + E W(s), where x(0) denotes the initial-state vector evaluated at t = 0. Solving for X(s) gives X(s) = (sI − A)^(−1) x(0) + (sI − A)^(−1) [B U(s) + E W(s)]. So, the state-transition equation can be obtained by taking the inverse Laplace transform as x(t) = φ(t) x(0) + ∫_0^t φ(t − τ) [B u(τ) + E w(τ)] dτ, where φ(t) = exp(At) is the state transition matrix. The state-transition equation as derived above is useful only when the initial time is defined to be at t = 0. In the study of control systems, especially discrete-data control systems, it is often desirable to break up a state-transition process into a sequence of transitions, so a more flexible initial time must be chosen. Let the initial time be represented by t0 and the corresponding initial state by x(t0), and assume that the input and the disturbance are applied at t ≥ 0. Starting with the above equation by setting t = t0 and solving for x(0), we get x(t) = φ(t − t0) x(t0) + ∫_t0^t φ(t − τ) [B u(τ) + E w(τ)] dτ. Once the state-transition equation is determined, the output vector can be expressed as a function of the initial state. See also Control theory Control engineering Automatic control Feedback Process control PID loop External links Control System Toolbox for design and analysis of control systems. http://web.mit.edu/2.14/www/Handouts/StateSpaceResponse.pdf Wikibooks:Control Systems/State-Space Equations http://planning.cs.uiuc.edu/node411.html Control theory
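A minimal numerical sketch of the state-transition equation above, with the disturbance term w omitted for brevity and all names chosen for illustration; the convolution integral is approximated with a simple midpoint rule:

import numpy as np
from scipy.linalg import expm

def state_transition(A, B, x0, u, t, n_steps=2000):
    # x(t) = expm(A t) x0 + integral_0^t expm(A (t - tau)) B u(tau) dtau,
    # with the integral approximated by the midpoint rule.
    dtau = t / n_steps
    taus = (np.arange(n_steps) + 0.5) * dtau
    forced = sum(expm(A * (t - tau)) @ (B @ u(tau)) for tau in taus) * dtau
    return expm(A * t) @ x0 + forced

# Example: a double integrator driven by a constant unit input.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
x0 = np.zeros(2)
print(state_transition(A, B, x0, lambda tau: np.array([1.0]), t=2.0))
# Expected result is approximately [t^2/2, t] = [2.0, 2.0]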
State-transition equation
[ "Mathematics" ]
334
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
41,555,934
https://en.wikipedia.org/wiki/Knowledge%20Based%20Software%20Assistant
The Knowledge Based Software Assistant (KBSA) was a research program funded by the United States Air Force. The goal of the program was to apply concepts from artificial intelligence to the problem of designing and implementing computer software. Software would be described by models in very high level languages (essentially equivalent to first order logic) and then transformation rules would transform the specification into efficient code. The air force hoped to be able to generate the software to control weapons systems and other command and control systems using this method. As software was becoming ever more critical to USAF weapons systems it was realized that improving the quality and productivity of the software development process could have significant benefits for the military, as well as for information technology in other major US industries. History In the early 1980s the United States Air Force realized that they had received significant benefits from applying artificial intelligence technologies to solving expert problems such as the diagnosis of faults in aircraft. The air force commissioned a group of researchers from the artificial intelligence and formal methods communities to develop a report on how such technologies might be used to aid in the more general problem of software development. The report described a vision for a new approach to software development. Rather than define specifications with diagrams and manually transform them to code as was the current process, the Knowledge Based Software Assistant (KBSA) vision was to define specifications in very high level languages and then to use transformation rules to gradually refine the specification into efficient code on heterogeneous platforms. Each step in the design and refinement of the system would be recorded as part of an integrated repository. In addition to the artifacts of software development, the processes (the various definitions and transformations) would also be recorded in a way that they could be analyzed and also replayed later as needed. The idea was that each step would be a transformation that took into account various non-functional requirements for the implemented system, for example requirements to use specific programming languages such as Ada or to harden code for real-time, mission-critical fault tolerance. The air force decided to fund further research on this vision through their Rome Air Development Center laboratory at Griffiss Air Force Base in New York. The majority of the early research was conducted at the Kestrel Institute in Northern California (with Stanford University) and the Information Sciences Institute (ISI) in Southern California (with USC and UCLA). The Kestrel Institute focused primarily on the provably correct transformation of logical models to efficient code. ISI focused primarily on the front end of the process: defining specifications that could map to logical formalisms but were in formats that were intuitive and familiar to systems analysts. In addition, Raytheon did a project to investigate informal requirements gathering and Honeywell and Harvard University did work on underlying frameworks, integration, and activity coordination. Although not primarily funded by the KBSA program, the MIT Programmer's Apprentice project also had many of the same goals and used the same techniques as KBSA. In the later stages of the KBSA program (starting in 1991) researchers developed prototypes that were used on medium to large scale software development problems.
Also, in these later stages the emphasis shifted from a pure KBSA approach to more general questions of how to use knowledge-based technology to supplement and augment existing and future computer-aided software engineering (CASE) tools. In these later stages there was significant interaction between the KBSA community and the object-oriented and software engineering communities. For example, KBSA concepts and researchers played an important role in the mega-programming and user centered software engineering programs sponsored by the Defense Advanced Research Projects Agency (DARPA). In these later stages the program changed its name to Knowledge-Based Software Engineering (KBSE). The name change reflected the different research goal, no longer to create a totally new all encompassing tool that would cover the complete software life cycle but to gradually work knowledge-based technology into existing tools. Companies such as Andersen Consulting (one of the largest system integrators and at the time vendor of their own CASE tool) played a major role in the program in these later stages. Key concepts Transformation rules The transformation rules that KBSA used were different than traditional rules for expert systems. Transformation rules matched against specification and implementation languages rather than against facts in the world. It was possible to specify transformations using patterns, wildcards, and recursion on both the right and left hand sides of a rule. The left hand expression would specify patterns in the existing knowledge base to search for. The right hand expression could specify a new pattern to transform the left hand side into. For example, transform a set theoretic data type into code using an Ada set library. The initial purpose for transformation rules was to refine a high level logical specification into well designed code for a specific hardware and software platform. This was inspired by early work on theorem proving and automatic programming. However, researchers at the Information Sciences Institute (ISI) developed the concept of evolution transformations. Rather than transforming a specification into code an evolution transformation was meant to automate various stereotypical changes at the specification level, for example developing a new superclass by extracting various capabilities from an existing class that can be shared more generally. Evolution transformations were developed at approximately the same time as the emergence of the software patterns community and the two groups shared concepts and technology. Evolution transformations were essentially what is known as refactoring in the object-oriented software patterns community. Knowledge-based repository A key concept of KBSA was that all artifacts: requirements, specifications, transformations, designs, code, process models, etc. were represented as objects in a knowledge-based repository. The original KBSA report describes what was called a Wide Spectrum Language. The requirement was for a knowledge representation framework that could support the entire life cycle: requirements, specification, and code as well as the software process itself. The core representation for the knowledge base was meant to utilize the same framework although various layers could be added to support specific presentations and implementations. These early knowledge-base frameworks were developed primarily by ISI and Kestrel building on top of Lisp and Lisp machine environments. 
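A minimal sketch of the kind of pattern-based transformation rule described above; this is not drawn from any actual KBSA system, and all names, node kinds, and structures are illustrative only. The rule matches a high-level "set membership" specification node and rewrites it into a lower-level linear-search implementation node, applied bottom-up over the whole specification tree:

# Hypothetical illustration of a KBSA-style transformation rule: the left hand
# side is a specification pattern, the right hand side an implementation pattern.
from dataclasses import dataclass

@dataclass
class Node:
    kind: str          # e.g. "member?", "linear-search", "var"
    children: tuple    # sub-expressions or leaf values

def rule_member_to_search(node: Node) -> Node:
    """If the node is a set-membership test 'x in S', rewrite it into an
    explicit linear-search implementation node; otherwise return it unchanged."""
    if node.kind == "member?" and len(node.children) == 2:
        element, collection = node.children
        return Node("linear-search", (collection, element))
    return node

def apply_everywhere(rule, node: Node) -> Node:
    """Apply the rule bottom-up over the whole specification tree."""
    rewritten = Node(node.kind, tuple(apply_everywhere(rule, c) if isinstance(c, Node) else c
                                      for c in node.children))
    return rule(rewritten)

spec = Node("member?", (Node("var", ("x",)), Node("var", ("employees",))))
print(apply_everywhere(rule_member_to_search, spec))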
The Kestrel environment was eventually turned into a commercial product called Refine, which was developed and supported by a spin-off company from Kestrel called Reasoning Systems Incorporated. The Refine language and environment also proved to be applicable to the problem of software reverse engineering: taking legacy code that is critical to the business but that lacks proper documentation and using tools to analyze it and transform it to a more maintainable form. With the growing concern over the Y2K problem, reverse engineering was a major business concern for many large US corporations and it was a focus area for KBSA research in the 1990s. There was significant interaction between the KBSA communities and the Frame language and object-oriented communities. The early KBSA knowledge-bases were implemented in object-based languages rather than object-oriented. Objects were represented as classes and sub-classes but it was not possible to define methods on the objects. In later versions of KBSA such as the Andersen Consulting Concept Demo, the specification language was expanded to support message passing as well. Intelligent Assistant KBSA took a different approach than traditional expert systems when it came to how to solve problems and work with users. In the traditional expert system approach the user answers a series of interactive questions and the system provides a solution. The KBSA approach left the user in control. Whereas an expert system tried, to some extent, to replace and remove the need for the expert, the intelligent assistant approach in KBSA sought to re-invent the process with technology. This led to a number of innovations at the user interface level. An example of the collaboration between the object-oriented community and KBSA was the architecture used for KBSA user interfaces. KBSA systems utilized a model-view-controller (MVC) user interface. This was an idea incorporated from Smalltalk environments. The MVC architecture was especially well suited to the KBSA user interface. KBSA environments featured multiple heterogeneous views of the knowledge-base. It might be useful to look at an emerging model from the standpoint of entities and relations, object interactions, class hierarchies, dataflow, and many other possible views. The MVC architecture facilitated this. With the MVC architecture, the underlying model was always the knowledge base, which was a meta-model description of the specification and implementation languages. When an analyst made some change via a particular diagram (e.g. added a class to the class hierarchy), that change was made at the underlying model level and the various views of the model were all automatically updated. One of the benefits of using a transformation was that many aspects of the specification and implementation could be modified at once. For small scale prototypes the resulting diagrams were simple enough that basic layout algorithms combined with reliance on users to clean up diagrams was sufficient. However, when a transformation can radically redraw models with tens or even hundreds of nodes and links, the constant updating of the various views becomes a task in itself. Researchers at Andersen Consulting incorporated work from the University of Illinois on graph theory to automatically update the various views associated with the knowledge base and to generate graphs that have minimal intersection of links and also take into account domain and user specific layout constraints.
Another concept used to provide intelligent assistance was automatic text generation. Early research at ISI investigated the feasibility of extracting formal specifications from informal natural language text documents. They determined that the approach was not viable. Natural language is by nature simply too ambiguous to serve as a good format for defining a system. However, natural language generation was seen to be feasible as a way to generate textual descriptions that could be read by managers and non-technical personnel. This was especially appealing to the air force since by law they required all contractors to generate various reports that describe the system from different points of view. Researchers at ISI and later Cogentext and Andersen Consulting demonstrated the viability of the approach by using their own technology to generate the documentation required by their air force contracts. References Expert systems Formal methods Specification languages United States Air Force Theoretical computer science
Knowledge Based Software Assistant
[ "Mathematics", "Technology", "Engineering" ]
2,029
[ "Specification languages", "Theoretical computer science", "Applied mathematics", "Software engineering", "Information systems", "Expert systems", "Formal methods" ]
41,558,399
https://en.wikipedia.org/wiki/Decomposable%20measure
In mathematics, a decomposable measure (also known as a strictly localizable measure) is a measure that is a disjoint union of finite measures. This is a generalization of σ-finite measures, which are the same as those that are a disjoint union of countably many finite measures. There are several theorems in measure theory such as the Radon–Nikodym theorem that are not true for arbitrary measures but are true for σ-finite measures. Several such theorems remain true for the more general class of decomposable measures. This extra generality is not used much as most decomposable measures that occur in practice are σ-finite. Examples Counting measure on an uncountable measure space with all subsets measurable is a decomposable measure that is not σ-finite. Fubini's theorem and Tonelli's theorem hold for σ-finite measures but can fail for this measure. Counting measure on an uncountable measure space with not all subsets measurable is generally not a decomposable measure. The one-point space of measure infinity is not decomposable. References Bibliography Second printing. Measures (measure theory)
Decomposable measure
[ "Physics", "Mathematics" ]
253
[ "Measures (measure theory)", "Quantity", "Physical quantities", "Size" ]
41,558,978
https://en.wikipedia.org/wiki/Patchy%20particles
Patchy particles are micron- or nanoscale colloidal particles that are anisotropically patterned, either by modification of the particle surface chemistry ("enthalpic patches"), through particle shape ("entropic patches"), or both. The particles have a repulsive core and highly interactive surfaces that allow for this assembly. The placement of these patches on the surface of a particle promotes bonding with patches on other particles. Patchy particles are used as a shorthand for modelling anisotropic colloids, proteins and water and for designing approaches to nanoparticle synthesis. Patchy particles have a valency of two (Janus particles) or higher. Patchy particles of valency three or more experience liquid-liquid phase separation. Some phase diagrams of patchy particles do not follow the law of rectilinear diameters. Assembly of patchy particles Simulations The interaction between patchy particles can be described by a combination of two discontinuous potentials: a hard sphere potential accounting for the repulsion between the cores of the particles, and an attractive square-well potential for the attraction between the patches. With the interaction potential in hand one can use different methods to compute thermodynamic properties. Molecular dynamics Using a continuous representation of the discontinuous potential described above enables the simulation of patchy particles using molecular dynamics. Monte Carlo One type of simulation involves a Monte Carlo method, where the choice of "moves" ensures that the system reaches equilibrium. One type of move is rototranslation. This is carried out by choosing a random particle, random angular and radial displacements, and a random axis of rotation. Rotational degrees of freedom need to be determined prior to the simulation. The particle is then rotated/moved according to these values. Also, the integration time step needs to be controlled because it will affect the resulting shape/size of the particle. Another type of simulation uses the grand-canonical ensemble. In the grand-canonical ensemble, the system is in equilibrium with a thermal bath and reservoir of particles. Volume, temperature, and chemical potential are fixed. Because these quantities are held fixed, the number of particles (n) changes. This is typically used to monitor phase behaviour. With these additional moves, a particle is added at a random orientation and random position. Other simulations involve biased Monte Carlo moves. One type is the aggregation volume-bias move. It consists of two moves: the first tries to form a bond between two previously unbonded particles, and the second tries to break an existing bond by separation. The first aggregation volume-bias move reflects the following procedure: two particles, I and J, which are not neighboring particles, are chosen, and particle J is moved inside the bonding volume of particle I. This process is carried out uniformly. Another aggregation volume-bias move follows a method of randomly choosing a particle J that is bonded to I. Particle J is then moved outside the bonding volume of particle I, resulting in the two particles no longer being bonded. A third type of aggregation volume-bias move takes a particle I bonded to particle J and inserts it into the bonding volume of a third particle. Grand-canonical ensemble simulations are improved by aggregation volume-bias moves: when these moves are applied, the rate of monomer formation and depletion is enhanced and the efficiency of the grand-canonical ensemble moves increases. A second biased Monte Carlo simulation is virtual move Monte Carlo.
This is a cluster move algorithm. It was designed to improve relaxation times in strongly interacting, low density systems and to better approximate diffusive dynamics in the system. This simulation is well suited to self-assembling and polymeric systems, for which it can find natural moves that relax the system. Self-assembly Self-assembly is also a method to create patchy particles. This method allows formation of complex structures like chains, sheets, rings, icosahedra, square pyramids, tetrahedra, and twisted staircase structures. By coating the surface of particles with highly anisotropic, highly directional, weakly interacting patches, the arrangement of the attractive patches can organize disordered particles into structures. The coating and the arrangement of the attractive patches are what contribute to the size, shape, and structure of the resulting particle. Emergent valence self-assembly This approach involves developing entropic patches that will self-assemble into simple cubic, body-centered cubic (bcc), diamond, and dodecagonal quasicrystal structures. The local coordination shell partially dictates the structure that is assembled. Spheres are simulated with cubic, octahedral, and tetrahedral faceting. This allows for entropic patches to self-assemble. Tetrahedrally faceted spheres are produced by beginning with simple spheres. In coordination with the faces of a tetrahedron, the sphere is sliced at four equal facets. Monte Carlo simulations were performed to determine different forms of α, the faceting amount. The particular faceting amount determines the lattice that assembles. Simple cubic lattices are achieved in a similar way, by slicing cubic facets into spheres. A bcc crystal is achieved by faceting a sphere octahedrally. The faceting amount, α, is used in the emergent valence self-assembly to determine what crystal structure will form. A perfect sphere is set as α=0. The shape that is faceted to the sphere is defined at α=1. By varying the faceting amount between α=0 and α=1, the lattice can be changed. Changes include effects on self-assembly, packing structure, amount of coordination of the faceting patch to the sphere, shape of the faceting patch, type of crystal lattice formed, and the strength of the entropic patch. See also Self-assembly Janus particles Entropic force References Related reading Amar B. Pawar and Ilona Kretzschmar "Fabrication, Assembly, and Application of Patchy Particles", Macromolecular Rapid Communications 31 pp. 150-168 (2010) Willem K. Kegel and Henk N. W. Lekkerkerker "Colloidal gels: Clay goes patchy", Nature Materials 10 pp. 5-6 (2011) Zhenping He and Ilona Kretzschmar "Template-Assisted Fabrication of Patchy Particles with Uniform Patches", Langmuir 28 pp. 9915-9919 (2011) SKlogWiki page on Patchy Particles Colloids Soft matter
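A minimal sketch of the Metropolis-style rototranslation move described in the Monte Carlo section above; none of this is taken from the cited studies, and the energy function, step sizes, and variable names are placeholders chosen for illustration:

import numpy as np

def rotation_matrix(axis, angle):
    # Rodrigues' formula for a rotation about a unit axis.
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def rototranslation_move(positions, patch_dirs, energy, beta,
                         max_disp=0.1, max_angle=0.2, rng=np.random.default_rng()):
    # Attempt one Metropolis rototranslation move on a randomly chosen particle.
    i = rng.integers(len(positions))
    new_pos, new_dirs = positions.copy(), patch_dirs.copy()
    new_pos[i] += rng.uniform(-max_disp, max_disp, size=3)        # small random translation
    R = rotation_matrix(rng.normal(size=3), rng.uniform(-max_angle, max_angle))
    new_dirs[i] = patch_dirs[i] @ R.T                             # rotate the particle's patches
    dE = energy(new_pos, new_dirs) - energy(positions, patch_dirs)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):              # Metropolis acceptance
        return new_pos, new_dirs, True
    return positions, patch_dirs, False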
Patchy particles
[ "Physics", "Chemistry", "Materials_science" ]
1,315
[ "Soft matter", "Chemical mixtures", "Condensed matter physics", "Colloids" ]
41,559,308
https://en.wikipedia.org/wiki/Sampling%20in%20order
In statistics, some Monte Carlo methods require independent observations in a sample to be drawn from a one-dimensional distribution in sorted order. In other words, all n order statistics are needed from the n observations in a sample. The naive method performs a sort and takes O(n log n) time. There are also O(n) algorithms which are better suited for large n. The special case of drawing n sorted observations from the uniform distribution on [0,1] is equivalent to drawing from the uniform distribution on an n-dimensional simplex; this task is a part of sequential importance resampling. Further reading Monte Carlo methods
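A minimal sketch of one of the O(n) approaches mentioned above, using the standard fact that normalized partial sums of exponential spacings are distributed as uniform order statistics; the function and variable names are illustrative:

import numpy as np

def sorted_uniforms(n, rng=np.random.default_rng()):
    # Draw n observations from Uniform(0,1) already in ascending order, in O(n) time.
    # The partial sums of n+1 i.i.d. Exponential(1) variables, divided by their total,
    # have the same joint law as the order statistics of n uniform draws.
    e = rng.exponential(size=n + 1)
    return np.cumsum(e[:-1]) / e.sum()

print(sorted_uniforms(5))  # e.g. [0.08 0.21 0.47 0.66 0.91], already sorted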
Sampling in order
[ "Physics" ]
129
[ "Monte Carlo methods", "Computational physics" ]
41,561,062
https://en.wikipedia.org/wiki/Base%20anhydride
A base anhydride is an oxide of a chemical element from group 1 or 2 (the alkali metals and alkaline earth metals, respectively). They are obtained by removing water from the corresponding hydroxide base. If water is added to a base anhydride, a corresponding hydroxide salt can be [re]-formed. Base anhydrides are not Brønsted–Lowry bases because they are not proton acceptors. However, they are Lewis bases, because they will share an electron pair with some Lewis acids, most notably acidic oxides. They are potent alkalis and will produce alkali burns on skin, because their affinity for water (that is, their affinity for being slaked) makes them react with body water. Examples Quicklime (calcium oxide) is a base anhydride. It reacts with water to become hydrated lime (calcium hydroxide), which is a strong base, chemically akin to lye. This reaction is exothermic. CaO + H2O → Ca(OH)2 (ΔHr = −63.7kJ/mol of CaO) Sodium oxide reacts readily and irreversibly with water to give sodium hydroxide: Na2O + H2O → 2 NaOH See also Acid anhydride Acidic oxide References Acid–base chemistry
Base anhydride
[ "Chemistry" ]
281
[ "Equilibrium chemistry", "Acid–base chemistry", "nan" ]
41,564,074
https://en.wikipedia.org/wiki/C14H16N2O3
{{DISPLAYTITLE:C14H16N2O3}} The molecular formula C14H16N2O3 (molar mass: 260.29 g/mol, exact mass: 260.1161 u) may refer to: Nadoxolol Phetharbital, or phenetharbital Molecular formulas
C14H16N2O3
[ "Physics", "Chemistry" ]
73
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,181,831
https://en.wikipedia.org/wiki/Hyalobarrier
Hyalobarrier is a substance used to keep tissue apart after surgery and therefore prevent adhesions. It contains autocross-linked hyaluronan and is highly viscous due to condensation. Hyaluronan is present in cartilage and skin, hence there is a natural metabolic pathway for it. This gel is used to separate organs and tissue after surgery. Scientific documentation so far covers the gynaecology speciality, i.e. laparoscopic surgery, hysteroscopy/hysteroscopic surgery, but also open surgery. According to data in a Cochrane Collaboration review, barrier agents may be a little more effective in preventing adhesions than no intervention. The Cochrane report also states that the incidence of postsurgical adhesions is as high as 50 to 100%. In a recent review by C. Sutton (University of Surrey, Guildford, UK), it is stated that Hyalobarrier is the only anti-adhesive substance that has published data for intrauterine use. Additional information Laparoscopy Asherman's syndrome References Biomaterials Implants (medicine) Medical equipment
Hyalobarrier
[ "Physics", "Biology" ]
227
[ "Biomaterials", "Materials stubs", "Biotechnology stubs", "Medical equipment", "Materials", "Medical technology stubs", "Matter", "Medical technology" ]
23,183,287
https://en.wikipedia.org/wiki/Lamb%E2%80%93M%C3%B6ssbauer%20factor
In physics, the Lamb–Mössbauer factor (LMF, after Willis Lamb and Rudolf Mössbauer) or elastic incoherent structure factor (EISF) is the ratio of elastic to total incoherent neutron scattering, or the ratio of recoil-free to total nuclear resonant absorption in Mössbauer spectroscopy. The corresponding factor for coherent neutron or X-ray scattering is the Debye–Waller factor; often, that term is used in a more generic way to include the incoherent case as well. When first reporting on recoil-free resonance absorption, Mössbauer (1959) cited relevant theoretical work by Lamb (1939). The first use of the term "Mössbauer–Lamb factor" seems to be by Tzara (1961); from 1962 on, the form "Lamb–Mössbauer factor" came into widespread use. Singwi and Sjölander (1960) pointed out the close relation to incoherent neutron scattering. With the invention of backscattering spectrometers, it became possible to measure the Lamb–Mössbauer factor as a function of the wavenumber (whereas Mössbauer spectroscopy operates at a fixed wavenumber). Subsequently, the term elastic incoherent structure factor became more frequent. References Scattering Spectroscopy Condensed matter physics
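A commonly quoted closed form, stated here for orientation rather than taken from the sources above, is f = exp(−k²⟨x²⟩) in the Gaussian (harmonic) approximation, where k is the wavenumber of the resonant radiation and ⟨x²⟩ is the mean-square displacement of the emitting or absorbing nucleus along the direction of k. The factor approaches 1 for a rigidly bound nucleus and falls off as thermal motion increases, mirroring the behaviour of the Debye–Waller factor mentioned above.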
Lamb–Mössbauer factor
[ "Physics", "Chemistry", "Materials_science", "Astronomy", "Engineering" ]
275
[ "Spectroscopy stubs", "Materials science stubs", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Scattering stubs", "Phases of matter", "Astronomy stubs", "Materials science", "Scattering", "Particle physics", "Condensed matter physics", "Nuclear physics", "C...
38,799,183
https://en.wikipedia.org/wiki/Electron-hole%20droplets
Electron-hole droplets are a condensed phase of excitons in semiconductors. The droplets are formed at low temperatures and high exciton densities, the latter of which can be created with intense optical excitation or electronic excitation in a p-n junction. Discovery Evidence for electron-hole droplets was first observed by J. R. Haynes of Bell Labs in 1966, who observed a frequency shift in the spectrum radiated by silicon at low temperatures (~3 K). The shift was attributed to the recombination of a bound state of two excitons (electron-hole pairs). V. M. Asnin and A. A. Rogachev discovered metallic conduction in germanium at low temperatures when the density of excitons exceeded the amount required to transition into a metallic state. References Condensed matter physics
Electron-hole droplets
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
172
[ "Materials science stubs", "Phases of matter", "Materials science", "Condensed matter physics", "Condensed matter stubs", "Matter" ]
38,802,940
https://en.wikipedia.org/wiki/List%20of%20bioluminescent%20organisms
Bioluminescence is the production of light by living organisms. This list of bioluminescent organisms is organized by the environment, covering terrestrial, marine, and microorganisms. Terrestrial animals Certain arthropods Coleoptera (beetles) Lampyridae (Fireflies) Phengodidae (Glowworm beetles) railroad worms Rhagophthalmidae Sinopyrophoridae Certain Elateridae (Click beetles) Pyrophorini Balgus Campyloxenus Diptera (true flies) Certain keroplatid fungus gnats Myriapoda Certain centipedes Geophilus carpophagus Orphnaeus brevilabiatus Certain millipedes Genus Motyxia Some land snails Quantula striata Quantula weinkauffiana Genus Phuphania Annelids Octochaetus multiporus Microscolex phosphoreus Pontodrilus litoralis Marine animals Fish Anglerfish Gulper eel Lanternfish Stomiiformes Marine hatchetfish Viperfish Black dragonfish Midshipman fish Pineconefish Lanterneye fish Some Squaliformes Lantern sharks Kitefin sharks Velvet dogfish Invertebrates A deep-sea species of carnivorous sponge (Cladorhizidae) Many Cnidarians Sea pens Renilla reniformis Coral Certain Jellyfish Aequorea victoria Atolla jellyfish Helmet jellyfish Certain Ctenophores (comb jellies) Some Tunicates: Larvaceans Salps Ascidiacea Doliolida Pyrosomes Certain echinoderms (e.g. Ophiurida) Amphiura filiformis Ophiopsila aranea Ophiopsila californica Amphipholis squamata Many Crustaceans: Seed shrimp (Myodocopa) Copepods Lophogastrids (Gnathophausia) Amphipods Krill Decapods (shrimp and prawn) Genus Heterocarpus. Two species of Chaetognaths (arrow worms) Caecosagitta macrocephala Eukrohnia fowleri Annelida Genus Tomopteris Genus Swima Certain Polynoidae Mollusca Certain Bivalves (clams) Plocamopherus maderae Pholas dactylus Certain Nudibranchs (sea slugs) Certain sea snails Hinea brasiliana Many Cephalopods Certain Octopuses Bolitaenidae Stauroteuthis Vampire squid Sepiolida Many Teuthida (squid) Cranchiidae Colossal Squid Mastigoteuthidae Histioteuthidae Enoploteuthoidea Firefly squid Freshwater animals Latia, a genus of four species of freshwater snail Fungi Bacteria Photorhabdus luminescens Certain species of the family Vibrionaceae (e.g. Vibrio fischeri, Vibrio harveyi, Photobacterium phosphoreum) Certain species of the family Shewanellaceae, (e.g. Shewanella hanedai and Shewanella woodyi) Other microorganisms Protists Certain Dinoflagellates (e.g. Noctiluca scintillans, Pyrodinium bahamense, Pyrocystis fusiformis and Lingulodinium polyedra References Bioluminescent List of
List of bioluminescent organisms
[ "Chemistry", "Biology" ]
697
[ "Organisms by adaptation", "Lists of biota", "Bioluminescent organisms", "Bioluminescence" ]
26,065,582
https://en.wikipedia.org/wiki/Preclinical%20imaging
Preclinical imaging is the visualization of living animals for research purposes, such as drug development. Imaging modalities have long been crucial to the researcher in observing changes, either at the organ, tissue, cell, or molecular level, in animals responding to physiological or environmental changes. Imaging modalities that are non-invasive and in vivo have become especially important to study animal models longitudinally. Broadly speaking, these imaging systems can be categorized into primarily morphological/anatomical and primarily molecular imaging techniques. Techniques such as high-frequency micro-ultrasound, magnetic resonance imaging (MRI) and computed tomography (CT) are usually used for anatomical imaging, while optical imaging (fluorescence and bioluminescence), positron emission tomography (PET), and single photon emission computed tomography (SPECT) are usually used for molecular visualizations. These days, many manufacturers provide multi-modal systems combining the advantages of anatomical modalities such as CT and MR with the functional imaging of PET and SPECT. As in the clinical market, common combinations are SPECT/CT, PET/CT and PET/MR. Micro-ultrasound Principle: High-frequency micro-ultrasound works through the generation of harmless sound waves from transducers into living systems. As the sound waves propagate through tissue, they are reflected back and picked up by the transducer, and can then be translated into 2D and 3D images. Micro-ultrasound is specifically developed for small animal research, with frequencies ranging from 15 MHz to 80 MHz. Strengths: Micro-ultrasound is the only real-time imaging modality per se, capturing data at up to 1000 frames per second. This means that not only is it more than capable of visualizing blood flow in vivo, it can even be used to study high speed events such as blood flow and cardiac function in mice. Micro-ultrasound systems are portable, do not require any dedicated facilities, and are extremely cost-effective compared to other systems. It also does not run the risk of confounding results through side-effects of radiation. Currently, imaging at resolutions down to 30 μm is possible, allowing the visualization of tiny vasculature in cancer angiogenesis. To image capillaries, this resolution can be further improved to 3–5 μm with the injection of microbubble contrast agents. Furthermore, microbubbles can be conjugated to markers such as activated glycoprotein IIb/IIIa (GPIIb/IIIa) receptors on platelets and clots, αvβ3 integrin, as well as vascular endothelial growth factor receptors (VEGFR), in order to provide molecular visualization. Thus, it is capable of a wide range of applications that can only be achieved through dual imaging modalities such as micro-MRI/PET. Micro-ultrasound devices have unique properties pertaining to an ultrasound research interface, where users of these devices get access to raw data typically unavailable on most commercial ultrasound (micro and non-micro) systems. Weaknesses: Unlike micro-MRI, micro-CT, micro-PET, and micro-SPECT, micro-ultrasound has a limited depth of penetration. As frequency increases (and so does resolution), maximum imaging depth decreases. Typically, micro-ultrasound can image tissue to a depth of around 3 cm below the skin, and this is more than sufficient for small animals such as mice. The performance of ultrasound imaging is often perceived to be linked to the experience and skills of the operator.
However, this is changing rapidly as systems are being designed as user-friendly devices that produce highly reproducible results. One other potential disadvantage of micro-ultrasound is that the targeted microbubble contrast agents cannot diffuse out of vasculature, even in tumors. However, this may actually be advantageous for applications such as tumor perfusion and angiogenesis imaging. Cancer Research: Advances in micro-ultrasound have aided cancer research in a plethora of ways. For example, researchers can easily quantify tumor size in two and three dimensions. In addition, blood flow speed and direction can also be observed through ultrasound. Furthermore, micro-ultrasound can be used to detect and quantify cardiotoxicity in response to anti-tumor therapy, since it is the only imaging modality that has instantaneous image acquisition. Because of its real-time nature, micro-ultrasound can also guide micro-injections of drugs, stem cells, etc. into small animals without the need for surgical intervention. Contrast agents can be injected into the animal to perform real-time tumor perfusion and targeted molecular imaging and quantification of biomarkers. Recently, micro-ultrasound has even been shown to be an effective method of gene delivery. Functional ultrasound brain imaging Unlike conventional micro-ultrasound devices with limited blood-flow sensitivity, dedicated real-time ultrafast ultrasound scanners with appropriate sequences and processing have been shown to be able to capture very subtle hemodynamic changes in the brain of small animals in real time. These data can then be used to infer neuronal activity through neurovascular coupling. The functional ultrasound imaging (fUS) technique can be seen as an analogue to functional magnetic resonance imaging (fMRI). fUS can be used for brain angiography, brain functional activity mapping, and brain functional connectivity studies from mice to primates, including awake animals. Micro-PAT Principle: Photoacoustic tomography (PAT) works on the natural tendency of tissues to thermoelastically expand when stimulated with externally applied electromagnetic waves, such as short laser pulses. This causes ultrasound waves to be emitted from these tissues, which can then be captured by an ultrasound transducer. The thermoelastic expansion and the resulting ultrasound wave are dependent on the wavelength of light used. PAT allows for complete non-invasiveness when imaging the animal. This is especially important when working with brain tumor models, which are notoriously hard to study. Strengths: Micro-PAT can be described as an imaging modality that is applicable in a wide variety of functions. It combines the high sensitivity of optical imaging with the high spatial resolution of ultrasound imaging. For this reason, it can not only image structure, but also distinguish between different tissue types, study hemodynamic responses, and even track molecular contrast agents conjugated to specific biological molecules. Furthermore, it is non-invasive and can be quickly performed, making it ideal for longitudinal studies of the same animal. Weaknesses: Because micro-PAT is still limited by the penetrating strength of light and sound, it does not have unlimited depth of penetration. However, it is sufficient to pass through a rat skull and image up to a few centimeters down, which is more than sufficient for most animal research.
Strengths: Micro-PAT can be described as an imaging modality applicable to a wide variety of functions. It combines the high sensitivity of optical imaging with the high spatial resolution of ultrasound imaging. For this reason, it can not only image structure, but also distinguish between different tissue types, study hemodynamic responses, and even track molecular contrast agents conjugated to specific biological molecules. Furthermore, it is non-invasive and can be performed quickly, making it ideal for longitudinal studies of the same animal.

Weaknesses: Because micro-PAT is still limited by the penetration of both light and sound, it does not have unlimited depth of penetration. However, it can pass through the rat skull and image up to a few centimeters down, which is enough for most animal research. Another drawback of micro-PAT is that it relies on the optical absorbance of tissue to generate signal, so poorly vascularized tissue such as the prostate is difficult to visualize. To date, three commercially available systems are on the market, from VisualSonics, iThera and Endra, the last being the only machine offering true 3D image acquisition.

Cancer research: The study of brain cancers has been significantly hampered by the lack of an easy imaging modality for studying animals in vivo. To do so, a craniotomy is often needed, in addition to hours of anesthesia, mechanical ventilation, and other procedures that significantly alter experimental parameters. For this reason, many researchers have been content to sacrifice animals at different time points and study brain tissue with traditional histological methods. Compared to an in vivo longitudinal study, many more animals are needed to obtain significant results, and the sensitivity of the entire experiment is cast into doubt. As stated earlier, the problem is not reluctance by researchers to use in vivo imaging modalities, but rather a lack of suitable ones. For example, although optical imaging provides fast functional data and oxy- and deoxyhemoglobin analysis, it requires a craniotomy and only provides a few hundred micrometres of penetration depth. Furthermore, it is focused on one area of the brain, while research has made it abundantly clear that brain function is interrelated as a whole. On the other hand, micro-fMRI is extremely expensive, and offers poor resolution and long image acquisition times when scanning the entire brain. It also provides little information about the vasculature. Micro-PAT has been demonstrated to be a significant enhancement over existing in vivo neuro-imaging devices. It is fast, non-invasive, and provides a wealth of data output. Micro-PAT can image the brain with high spatial resolution, detect molecularly targeted contrast agents, simultaneously quantify functional parameters such as oxygen saturation (SO2) and total hemoglobin (HbT), and provide complementary information from functional and molecular imaging that is extremely useful in tumor quantification and cell-centered therapeutic analysis.

Micro-MRI

Principle: Magnetic resonance imaging (MRI) exploits the alignment of the nuclear magnetic moments of different atoms inside a magnetic field to generate images. MRI machines consist of large magnets that generate magnetic fields around the target of analysis. These magnetic fields cause nuclei with a non-zero spin quantum number, such as those of hydrogen, gadolinium, and manganese, to align their magnetic dipoles along the magnetic field. A radio-frequency (RF) signal closely matching the Larmor precession frequency of the target nuclei is applied, perturbing the nuclei's alignment with the magnetic field. After the RF pulse, the nuclei relax and emit a characteristic RF signal, which is captured by the machine. From these data, a computer generates an image of the subject based on the resonance characteristics of different tissue types. Since 2012, the use of cryogen-free magnet technology has greatly reduced infrastructure requirements and dependency on the availability of increasingly hard-to-obtain cryogenic coolants.
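The Larmor frequency mentioned above scales linearly with field strength, f = (γ/2π) · B0, where γ/2π is about 42.58 MHz/T for hydrogen. The field strengths in the sketch below are typical values chosen for illustration rather than taken from this article.

```python
# Illustrative sketch: Larmor frequency f = (gamma / 2*pi) * B0 for hydrogen (1H).
GAMMA_OVER_2PI_MHZ_PER_T = 42.58  # gyromagnetic ratio of 1H divided by 2*pi, in MHz/T

def larmor_mhz(b0_tesla: float) -> float:
    return GAMMA_OVER_2PI_MHZ_PER_T * b0_tesla

for b0 in (1.5, 7.0, 9.4, 14.0):  # example field strengths in tesla
    print(f"B0 = {b0:>4.1f} T -> RF around {larmor_mhz(b0):6.1f} MHz")
```

This is why the RF electronics of a 14-tesla animal scanner operate near 600 MHz, and why the transmit and receive chain must be retuned when imaging nuclei other than hydrogen.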
Strengths: The advantage of micro-MRI is its good spatial resolution, down to 100 μm and even 25 μm in very high-strength magnetic fields. It also has excellent contrast resolution for distinguishing between normal and pathological tissue. Micro-MRI can be used in a wide variety of applications, including anatomical, functional, and molecular imaging. Furthermore, since micro-MRI's mechanism is based on a magnetic field, it is much safer than radiation-based imaging modalities such as micro-CT and micro-PET.

Weaknesses: One of the biggest drawbacks of micro-MRI is its cost. Depending on the magnetic field strength (which determines resolution), systems used for animal imaging, with flux densities between 1.5 and 14 teslas, range from $1 million to over $6 million, with most systems costing around $2 million. Furthermore, the image acquisition time is extremely long, extending into minutes and even hours, which may negatively affect animals that are anesthetized for long periods. In addition, micro-MRI typically captures a snapshot of the subject in time, and thus it is poorly suited to studying blood flow and other real-time processes. Even with recent advances in high-strength functional micro-MRI, there is still a lag of around 10–15 seconds to reach peak signal intensity, making important information such as blood flow velocity quantification difficult to access.

Cancer research: Micro-MRI is often used to image the brain because of its ability to penetrate the skull non-invasively. Because of its high resolution, micro-MRI can also detect small early-stage tumors. Antibody-bound paramagnetic nanoparticles can be used to increase resolution and to visualize molecular expression in the system.

Stroke and traumatic brain injury research: Micro-MRI is often used for anatomical imaging in stroke and traumatic brain injury research. Molecular imaging is a new area of research.

Micro-CT

Principle: Computed tomography (CT) imaging works with X-rays emitted from a focused radiation source that is rotated around the test subject, which is placed in the middle of the CT scanner. The X-rays are attenuated at different rates depending on the density of the tissue they pass through, and are then picked up by sensors on the opposite side of the CT scanner from the emission source. In contrast to traditional 2D X-ray imaging, because the emission source in a CT scanner is rotated around the animal, a series of 2D projections can be combined into 3D structures by the computer.

Strengths: Micro-CT can have excellent spatial resolution, reaching 6 μm when combined with contrast agents. However, the radiation dose needed to achieve this resolution is lethal to small animals, and a spatial resolution of 50 μm is a better representation of the practical limits of micro-CT. Image acquisition times are also reasonable, in the range of minutes for small animals. In addition, micro-CT is excellent for bone imaging.

Weaknesses: One of the major drawbacks of micro-CT is the radiation dose delivered to test animals. Although this is generally not lethal, the radiation is high enough to affect the immune system and other biological pathways, which may ultimately change experimental outcomes. Also, radiation may affect tumor size in cancer models as it mimics radiotherapy, and thus extra control groups might be needed to account for this potential confounding variable. In addition, the contrast resolution of micro-CT is quite poor, making it unsuitable for distinguishing between similar tissue types, such as normal versus diseased tissue.
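The contrast behaviour described above (excellent for bone, poor for separating similar soft tissues) follows directly from Beer-Lambert attenuation, I = I0 · exp(−μx). The attenuation coefficients below are rough, assumed order-of-magnitude values for a typical CT energy; they are not taken from this article.

```python
# Illustrative sketch (assumed values): X-ray transmission I/I0 = exp(-mu * x)
# through 1 cm of material, showing why CT separates bone from soft tissue easily
# but struggles to separate two similar soft tissues.
import math

THICKNESS_CM = 1.0
mu_per_cm = {                      # assumed linear attenuation coefficients (1/cm)
    "cortical bone": 1.0,
    "muscle": 0.20,
    "tumor-like soft tissue": 0.21,
}

for material, mu in mu_per_cm.items():
    transmission = math.exp(-mu * THICKNESS_CM)
    print(f"{material:>22}: {transmission:.3f} of the beam transmitted")
```

Bone attenuates several times more strongly than soft tissue, giving strong contrast, whereas the two soft tissues differ by only about one percent in transmission, which is why contrast agents or a second modality are usually needed to tell them apart.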
Cancer research: Micro-CT is most often used as an anatomical imaging system in animal research because of the benefits mentioned earlier. Contrast agents can also be injected to study blood flow. However, contrast agents for micro-CT, such as iodine, are difficult to conjugate to molecular targets, so micro-CT is rarely used for molecular imaging. As such, micro-CT is often combined with micro-PET/SPECT for anatomical and molecular imaging in research.

Micro-PET

Principle: Positron emission tomography (PET) images living systems by recording high-energy γ-rays emitted from within the subject. The radiation source is a positron-emitting radioisotope bound to a biological molecule, such as 18F-FDG (fludeoxyglucose), which is injected into the test subject. As the radioisotope decays, it emits positrons that annihilate with electrons found naturally in the body. Each annihilation produces two γ-rays travelling roughly 180° apart, which are picked up by detectors on opposite sides of the PET machine. This allows individual emission events to be localized within the body, and the data set is reconstructed to produce images.

Strengths: The strength of micro-PET is that, because the radiation source is within the animal, it has practically unlimited depth of imaging. The acquisition time is also reasonably fast, usually on the order of minutes. Since different tissues have different rates of uptake of radiolabelled molecular probes, micro-PET is also extremely sensitive to molecular details, and only nanograms of molecular probes are needed for imaging.

Weaknesses: Radioactive isotopes used in micro-PET have very short half-lives (110 min for 18F-FDG). To generate these isotopes, cyclotrons in radiochemistry laboratories are needed in close proximity to the micro-PET machines. Also, radiation may affect tumor size in cancer models as it mimics radiotherapy, and thus extra control groups might be needed to account for this potential confounding variable. Micro-PET also has a poor spatial resolution of around 1 mm. To conduct well-rounded research involving not only molecular but also anatomical imaging, micro-PET needs to be used in conjunction with micro-MRI or micro-CT, which further decreases accessibility for many researchers because of the high cost and the need for specialized facilities.

Cancer research: PET is widely used in clinical oncology, so results from small-animal research are easily translated. Because of the way 18F-FDG is metabolized by tissues, it produces intense radiolabelling in most cancers, such as brain and liver tumors. Almost any biological compound can be traced by micro-PET, as long as it can be conjugated to a radioisotope, which makes the technique well suited to studying novel pathways.
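The practical consequence of the 110-minute half-life quoted above can be made concrete with a simple decay calculation, A(t) = A0 · (1/2)^(t/T½). The delay times in the sketch below are hypothetical; they merely illustrate why the cyclotron has to be close to the scanner.

```python
# Illustrative sketch: fraction of 18F activity remaining after a given delay
# between isotope production and injection (half-life ~110 min, as quoted above).
HALF_LIFE_MIN = 110.0

def fraction_remaining(delay_min: float) -> float:
    return 0.5 ** (delay_min / HALF_LIFE_MIN)

for delay in (30, 110, 220, 440):  # hypothetical transport and handling delays, in minutes
    print(f"after {delay:>3} min: {fraction_remaining(delay) * 100:5.1f}% of the activity remains")
```

After four half-lives (a little over seven hours) only about 6% of the dose remains, so the tracer effectively cannot be shipped over long distances.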
Micro-SPECT

Principle: Similar to PET, single photon emission computed tomography (SPECT) also images living systems through γ-rays emitted from within the subject. Unlike PET, the radioisotopes used in SPECT (such as technetium-99m) emit γ-rays directly, rather than through annihilation events between a positron and an electron. These rays are then captured by a γ-camera rotated around the subject and subsequently rendered into images.

Strengths: The benefit of this approach is that the nuclear isotopes are much more readily available, cheaper, and longer-lived than micro-PET isotopes. Like micro-PET, micro-SPECT has very good sensitivity, and only nanograms of molecular probes are needed. Furthermore, by using radioisotopes of different energies conjugated to different molecular targets, micro-SPECT has the advantage over micro-PET of being able to image several molecular events simultaneously. At the same time, unlike micro-PET, micro-SPECT can reach very high spatial resolution by exploiting the pinhole collimation principle (Beekman et al.). In this approach, placing the object (e.g. a rodent) close to the pinhole aperture produces a highly magnified projection on the detector surface, which effectively compensates for the intrinsic resolution of the crystal.

Weaknesses: Micro-SPECT still involves considerable radiation, which may affect physiological and immunological pathways in small animals. Also, radiation may affect tumor size in cancer models as it mimics radiotherapy, and thus extra control groups might be needed to account for this potential confounding variable. Micro-SPECT can also be up to two orders of magnitude less sensitive than PET. Furthermore, labeling compounds with micro-SPECT isotopes requires chelating moieties, which may alter their biochemical or physical properties.

Cancer research: Micro-SPECT is often used in cancer research for molecular imaging of cancer-specific ligands. It can also be used to image the brain because of its penetration power. Since newer radioisotopes involve nanoparticles, such as 99mTc-labelled iron oxide nanoparticles, they could potentially be combined with drug delivery systems in the future. A number of small-animal SPECT systems developed by different groups are available commercially.
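The pinhole geometry described above can be made quantitative. With the object at distance a from the pinhole and the detector at distance b behind it, the magnification is M = b/a, and the reconstructed resolution is approximately sqrt((d_eff · (1 + 1/M))² + (R_int / M)²), where d_eff is the effective pinhole diameter and R_int the intrinsic detector resolution. All numerical values in the sketch below are plausible assumptions for illustration, not specifications of any particular system.

```python
# Illustrative sketch (assumed values): how pinhole magnification compensates for
# the intrinsic resolution of the gamma-camera crystal in small-animal SPECT.
import math

D_EFF_MM = 1.0    # assumed effective pinhole diameter
R_INT_MM = 3.0    # assumed intrinsic crystal resolution
B_MM = 200.0      # assumed pinhole-to-detector distance

def system_resolution_mm(a_mm: float) -> float:
    m = B_MM / a_mm                            # magnification
    geometric = D_EFF_MM * (1.0 + 1.0 / m)     # pinhole aperture contribution
    detector = R_INT_MM / m                    # de-magnified intrinsic resolution
    return math.sqrt(geometric ** 2 + detector ** 2)

for a in (25.0, 50.0, 100.0):  # assumed object-to-pinhole distances in mm
    print(f"object at {a:>5.1f} mm (M = {B_MM / a:>4.1f}x): "
          f"~{system_resolution_mm(a):.2f} mm system resolution")
```

Moving the animal closer to the pinhole raises the magnification, so the 3 mm crystal blur contributes less and less; multi-pinhole systems with smaller apertures can push the overall resolution well below a millimetre.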
Combined PET-MR

Principle: PET-MR technology for small-animal imaging offers a major advance in high-performance functional imaging, particularly when combined with a cryogen-free MRI system. A PET-MR system provides superior soft-tissue contrast and molecular imaging capability for visualisation, quantification and translational studies. A preclinical PET-MR system can be used for simultaneous multi-modality imaging. The use of cryogen-free magnet technology also greatly reduces infrastructure requirements and dependency on the availability of increasingly hard-to-obtain cryogenic coolants.

Strengths: Researchers can use the PET or MRI subsystem on its own, or combine them for multi-modality imaging. PET and MRI acquisitions can be carried out independently (using either the PET or MRI system as a standalone device), in sequence (with a clip-on PET in front of the bore of the MRI system), or simultaneously (with the PET inserted inside the MRI magnet). This provides a much more accurate picture far more quickly. Operating the PET and MRI systems simultaneously also increases workflow within a laboratory. The MR-PET system from MR Solutions incorporates silicon photomultipliers (SiPMs), which significantly reduce the size of the system and avoid the problems of using photomultiplier tubes or other legacy detector types within the magnetic field of the MRI. The performance characteristics of SiPMs are similar to those of a conventional PMT, but with the practical advantages of solid-state technology.

Weaknesses: Because this is a combination of imaging systems, the weaknesses associated with each imaging modality are largely compensated for by the other. In sequential PET-MR, the operator needs to allow a little time to transfer the subject between the PET and MR acquisition positions; this is avoided in simultaneous PET-MR. In sequential PET-MR systems, however, the PET ring itself is easy to clip on or off and transfer between rooms for independent use. The researcher requires sufficient knowledge, and therefore training, to interpret images and data from the two different systems.

Cancer research: The combination of MR and PET imaging is far more time-efficient than using one technique at a time. Images from the two modalities can also be registered far more precisely, since the time delay between modalities is small for sequential PET-MR systems and effectively non-existent for simultaneous systems. This means that there is little to no opportunity for gross movement of the subject between acquisitions.

Combined SPECT-MR

Principle: Combined SPECT-MR systems for small-animal imaging are based on multi-pinhole technology, allowing high resolution and high sensitivity. When coupled with cryogen-free MRI, the combined SPECT-MR technology dramatically increases the workflow in research laboratories while reducing laboratory infrastructure requirements and vulnerability to cryogen supply shortages.

Strengths: Research facilities no longer need to purchase multiple systems and can choose between different imaging configurations. The SPECT or MRI equipment can each be used as a standalone device on a bench, or sequential imaging can be carried out by clipping the SPECT module onto the MRI system. The animal is translated automatically from one modality to the other along the same axis. By inserting a SPECT module inside the MRI magnet, simultaneous acquisition of SPECT and MRI data is possible. Laboratory workflow can be increased by acquiring multiple modalities from the same subject in one session, or by operating the SPECT and MRI systems separately and imaging different subjects at the same time. SPECT-MR is available in configurations with different trans-axial fields of view, allowing imaging of animals from mice to rats.

Weaknesses: Because this is a combination of imaging systems, the weaknesses associated with either imaging modality are largely offset by the other. In sequential SPECT-MR, the operator needs to allow a little time to transfer the subject between the SPECT and MR acquisition positions; this is avoided in simultaneous SPECT-MR. In sequential systems, however, the SPECT module is easy to clip on or off and transfer between rooms. The researcher has to have sufficient knowledge, and therefore training, to interpret the outputs of the two different systems.

Cancer research: The combination of non-invasive MRI and SPECT provides results far more quickly than using one technique at a time. Images from the two modalities can also be registered far more precisely, since the time delay between modalities is small for sequential SPECT-MR systems and effectively non-existent for simultaneous systems. This means that there is little to no opportunity for gross movement of the subject between acquisitions. With separate, independent operation of the MRI and SPECT systems, workflow can easily be increased.

Optical imaging

Principle: Optical imaging is divided into fluorescence and bioluminescence. Fluorescence imaging works on the basis of fluorochromes inside the subject that are excited by an external light source and emit light of a different wavelength in response. Traditional fluorochromes include GFP, RFP, and their many mutants. However, significant challenges emerge in vivo because of the autofluorescence of tissue at wavelengths below 700 nm.
This has led to a transition to near-infrared dyes and infrared fluorescent proteins (700–800 nm), which are much better suited to in vivo imaging because of the much lower tissue autofluorescence and deeper tissue penetration at these wavelengths. Bioluminescence imaging, on the other hand, is based on light generated by chemiluminescent enzymatic reactions. In both fluorescence and bioluminescence imaging, the light signals are captured by charge-coupled device (CCD) cameras cooled to as low as −150 °C, making them extremely light-sensitive. In cases where more light is produced, less sensitive cameras or even the naked eye can be used to visualize the image.

Strengths: Optical imaging is fast and easy to perform, and is relatively inexpensive compared to many of the other imaging modalities. Furthermore, it is extremely sensitive, being able to detect molecular events at concentrations in the 10⁻¹⁵ M range. In addition, since bioluminescence imaging does not require external excitation of the reporter but relies on the catalytic reaction itself, the signal directly reflects the biological or molecular process and has almost no background noise.

Weaknesses: A major weakness of optical imaging has been the depth of penetration, which, for visible dyes, is only a few millimeters. Near-infrared fluorescence has made depths of several centimeters feasible. Since light in the infrared region has the best penetration depth, numerous fluorochromes have been designed specifically to be optimally excited in this region. In terms of resolution, fluorescence imaging is limited by the diffraction of light to about 270 nm, and bioluminescence has a resolution of about 1–10 mm depending on the acquisition time, compared with 100 μm for MRI and 30 μm for micro-ultrasound.

Cancer research: Because of its limited depth of penetration, optical imaging is typically used only for molecular purposes, not anatomical imaging. At visible wavelengths it is restricted to subcutaneous models of cancer; near-infrared fluorescence, however, has made orthotopic models feasible. Specific protein expression in cancer, and the effects of drugs on that expression, are often studied in vivo with genetically engineered light-emitting reporter genes. This also allows the identification of mechanisms for tissue-selective gene targeting in cancer and beyond.
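The ~270 nm figure quoted above is essentially the Abbe diffraction limit, d ≈ λ / (2 · NA). The wavelengths and numerical apertures in the sketch below are generic assumed values, used only to show where that number comes from and how the limit coarsens for the near-infrared light favoured in vivo.

```python
# Illustrative sketch (assumed values): Abbe diffraction limit d = lambda / (2 * NA).
cases = [
    ("green emission, high-NA objective", 540e-9, 1.0),
    ("near-infrared emission, high-NA objective", 750e-9, 1.0),
    ("near-infrared emission, low-NA macro lens", 750e-9, 0.1),
]

for label, wavelength_m, numerical_aperture in cases:
    d_nm = wavelength_m / (2.0 * numerical_aperture) * 1e9
    print(f"{label:>42}: ~{d_nm:6.0f} nm")
```

Only the first case approaches the quoted ~270 nm; whole-animal optical imaging uses low-NA optics and heavily scattered light, so its practical resolution is orders of magnitude coarser, as the text notes for bioluminescence.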
Combined PET-optical imaging, fluorescence

Principle: Dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies or red blood cells, which allows positron emission tomography (PET) and fluorescence imaging of cancer and hemorrhages, respectively. A Human-Derived, Genetic, Positron-emitting and Fluorescent (HD-GPF) reporter system uses a human, non-immunogenic protein, PSMA, together with a small molecule that is both positron-emitting (boron-bound 18F) and fluorescent, for dual-modality PET and fluorescence imaging of genome-modified cells (e.g. cancer cells, CRISPR/Cas9-edited cells, or CAR T-cells) in an entire mouse. The combination of these imaging modalities was predicted by the 2008 Nobel laureate Roger Y. Tsien to compensate for the weaknesses of single imaging techniques.

Strengths: This approach combines the strengths of PET and fluorescence optical imaging. PET allows labelled cells to be located anatomically in entire animals or humans, because the 18F radiolabel within the body gives nearly unlimited depth of penetration; its 110-minute half-life also limits the radioactive exposure of the animal or human. Optical imaging provides higher, sub-cellular resolution of about 270 nm (the diffraction limit of light), allowing single cells to be imaged and the signal to be localized to the cell membrane, endosomes, cytoplasm, or nucleus (see the figure of multicolor HeLa cells). The technique has been used to label small molecules, antibodies, cells (cancer cells and red blood cells), cerebrospinal fluid, and hemorrhages, to guide prostate cancer removal, and to image genome-edited cells expressing a genetically encoded human protein, PSMA, including CRISPR/Cas9-edited and CAR T-cells.

Weaknesses: Combining PET and optical imaging pairs two imaging agents whose weaknesses compensate for each other. 18F has a half-life of 110 min, so the PET signal is not permanent; fluorescent small molecules, by contrast, give a lasting signal as long as they are stored in the dark and not photobleached. Currently, no single instrument can image both the PET signal and fluorescence at subcellular resolution (see the figure of multicolor HeLa cells). Multiple instruments are therefore required to image PET, whole-organ fluorescence, and single-cell fluorescence at sub-cellular resolution.

References Imaging Medical physics Medical imaging
Preclinical imaging
[ "Physics" ]
5,835
[ "Applied and interdisciplinary physics", "Medical physics" ]
26,067,017
https://en.wikipedia.org/wiki/Warburg%20coefficient
The Warburg coefficient (or Warburg constant), denoted \(A_W\) or \(\sigma\), is the coefficient characterizing the diffusion of ions in solution, associated with the Warburg element, \(Z_W\). The Warburg coefficient has units of \(\Omega\,\mathrm{s}^{-1/2}\).

The value of \(A_W\) can be obtained from the gradient of the Warburg plot, a linear plot of the real impedance (\(Z'\)) against the reciprocal of the square root of the angular frequency (\(\omega^{-1/2}\)). This relation should always yield a straight line, as it is unique for a Warburg element.

Alternatively, the value of \(A_W\) can be found from

\[
A_W = \frac{R T}{n^2 F^2 A \sqrt{2}} \left( \frac{1}{C_O^b \sqrt{D_O}} + \frac{1}{C_R^b \sqrt{D_R}} \right)
\]

where \(R\) is the ideal gas constant; \(T\) is the thermodynamic temperature; \(F\) is the Faraday constant; \(n\) is the valency; \(D\) is the diffusion coefficient of the species, with subscripts \(O\) and \(R\) standing for the oxidized and reduced species respectively; \(C^b\) is the bulk concentration of the \(O\) and \(R\) species; and \(A\) denotes the surface area. The expression is sometimes written more compactly in terms of the electrolyte concentration \(C\) and the fraction \(\theta\) of the \(O\) and \(R\) species present. The equation for \(A_W\) applies to both reversible and quasi-reversible reactions for which both halves of the couple are soluble.

References Electrochemistry
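As a numerical check of the expression above, the sketch below evaluates \(A_W\) for a hypothetical one-electron couple; all parameter values (concentrations, diffusion coefficients, electrode area) are invented for illustration and are not from the article.

```python
# Illustrative sketch (assumed values): Warburg coefficient
# A_W = R*T / (n^2 * F^2 * A * sqrt(2)) * (1/(C_O*sqrt(D_O)) + 1/(C_R*sqrt(D_R)))
import math

R = 8.314          # J/(mol*K), ideal gas constant
T = 298.15         # K, assumed room temperature
F = 96485.0        # C/mol, Faraday constant
n = 1              # assumed one-electron transfer
A = 1e-4           # m^2, assumed electrode area (1 cm^2)
C_O = 1.0          # mol/m^3, assumed bulk concentration of the oxidized species (1 mM)
C_R = 1.0          # mol/m^3, assumed bulk concentration of the reduced species (1 mM)
D_O = 7e-10        # m^2/s, assumed diffusion coefficient of the oxidized species
D_R = 7e-10        # m^2/s, assumed diffusion coefficient of the reduced species

A_W = (R * T) / (n**2 * F**2 * A * math.sqrt(2)) * (
    1.0 / (C_O * math.sqrt(D_O)) + 1.0 / (C_R * math.sqrt(D_R))
)
print(f"A_W ~ {A_W:.1f} ohm / s^0.5")
```

A value of this order (roughly 1.4 × 10² Ω s⁻¹ᐟ²) would appear directly as the slope of the Warburg plot for such a cell.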
Warburg coefficient
[ "Chemistry" ]
220
[ "Electrochemistry", "Physical chemistry stubs", "Electrochemistry stubs" ]
26,068,514
https://en.wikipedia.org/wiki/Leo%20V%20%28dwarf%20galaxy%29
Leo V is a dwarf spheroidal galaxy situated in the constellation Leo and discovered in 2007 in data obtained by the Sloan Digital Sky Survey. The galaxy is located at a distance of about 180 kpc from the Sun and is moving away from the Sun at a velocity of about 173 km/s. It is classified as a dwarf spheroidal galaxy (dSph), meaning that it has an approximately spherical shape with a half-light radius of about 130 pc.

Leo V is one of the smallest and faintest satellites of the Milky Way: its integrated luminosity is about 10,000 times that of the Sun, which is much lower than the luminosity of a typical globular cluster. However, its mass is about 330,000 solar masses, which means that Leo V's mass-to-light ratio is around 75. Such a relatively high mass-to-light ratio implies that Leo V is dominated by dark matter.

The stellar population of Leo V consists mainly of old stars formed more than 12 billion years ago. The metallicity of these stars is also very low, meaning that they contain roughly 100 times fewer heavy elements than the Sun.

The galaxy is located only 3 degrees away from another Milky Way satellite, Leo IV, which is about 20 kpc closer to the Sun. These two galaxies may be physically associated with each other, and there is evidence that they are connected by a bridge of stars.

Notes References Dwarf spheroidal galaxies 4713563 Leo (constellation) Local Group Milky Way Subgroup ?
Leo V (dwarf galaxy)
[ "Astronomy" ]
316
[ "Leo (constellation)", "Constellations" ]