30,536,078
https://en.wikipedia.org/wiki/Triangular%20matrix%20ring
In algebra, a triangular matrix ring, also called a triangular ring, is a ring constructed from two rings and a bimodule. Definition If $R$ and $S$ are rings and $M$ is an $(S,R)$-bimodule, then the triangular matrix ring $T = \begin{pmatrix} R & 0 \\ M & S \end{pmatrix}$ consists of 2-by-2 matrices of the form $\begin{pmatrix} r & 0 \\ m & s \end{pmatrix}$, where $r \in R$, $s \in S$ and $m \in M$, with ordinary matrix addition and matrix multiplication as its operations. References Ring theory
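For concreteness, the multiplication can be written out explicitly (an illustrative addition; the notation follows the reconstruction above, with $M$ an $(S,R)$-bimodule):

```latex
% Multiplication in the triangular matrix ring T = ( R 0 ; M S ):
% the bimodule structure of M supplies the products m r' (right R-action)
% and s m' (left S-action).
\begin{pmatrix} r & 0 \\ m & s \end{pmatrix}
\begin{pmatrix} r' & 0 \\ m' & s' \end{pmatrix}
=
\begin{pmatrix} r r' & 0 \\ m r' + s m' & s s' \end{pmatrix}
```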
Triangular matrix ring
[ "Mathematics" ]
77
[ "Fields of abstract algebra", "Ring theory" ]
30,542,341
https://en.wikipedia.org/wiki/RST%20model
The Russo–Susskind–Thorlacius model, or RST model for short, is a modification of the CGHS model that takes care of conformal anomalies and renders it analytically soluble. In the CGHS model, if we include Faddeev–Popov ghosts to gauge-fix diffeomorphisms in the conformal gauge, they contribute an anomaly of −24, while each matter field contributes an anomaly of +1. So, unless N = 24, there will be gravitational anomalies. To the CGHS action, a Polyakov-type correction term proportional to $R\,\Box^{-1}R$ is added, with a coefficient κ equal to either $N/12$ or $(N-24)/12$, depending upon whether the ghosts are taken into account. The $\Box^{-1}$ factor makes the term nonlocal. In the conformal gauge the term takes a local form, and it might then appear as if the theory is local; but this overlooks the fact that the Raychaudhuri equations are still nonlocal. References Anomalies (physics) Conformal field theory General relativity
RST model
[ "Physics" ]
202
[ "General relativity", "Relativity stubs", "Theory of relativity" ]
34,577,295
https://en.wikipedia.org/wiki/Latexin%20family
In molecular biology, the latexin family is a family of proteins consisting of several animal-specific latexins and latexin-related proteins that belong to MEROPS proteinase inhibitor family I47, clan IH. Latexin, a protein possessing inhibitory activity against rat carboxypeptidase A1 (CPA1) and CPA2 (MEROPS peptidase family M14A), is expressed in a neuronal subset in the cerebral cortex and in cells of other neural and non-neural tissues of the rat. OCX-32, the 32 kDa eggshell matrix protein, is present at high levels in the uterine fluid during the terminal phase of eggshell formation, and is localised predominantly in the outer eggshell. The timing of OCX-32 secretion into the uterine fluid suggests that it may play a role in the termination of mineral deposition. OCX-32 possesses limited identity (32%) to two unrelated proteins: latexin, and a skin protein that is encoded by a retinoic acid receptor-responsive gene, TIG1. Tazarotene-induced gene 1 (TIG1) is a putative transmembrane protein with a small N-terminal intracellular region, a single membrane-spanning hydrophobic region, and a large C-terminal extracellular region containing a glycosylation signal. TIG1 is up-regulated by retinoic acid receptor-specific but not by retinoid X receptor-specific synthetic retinoids. TIG1 may be a tumour suppressor gene whose diminished expression is involved in the malignant progression of prostate cancer. References Protein families
Latexin family
[ "Biology" ]
341
[ "Protein families", "Protein classification" ]
34,577,664
https://en.wikipedia.org/wiki/Phosphoryl%20fluoride
Phosphoryl fluoride (commonly called phosphorus oxyfluoride) is a compound with the chemical formula POF3. It is a colorless gas that hydrolyzes rapidly. It has a critical temperature of 73 °C and a critical pressure of 4.25 bar. Synthesis and reactions Phosphorus oxyfluoride is prepared by partial hydrolysis of phosphorus pentafluoride: PF5 + H2O → POF3 + 2 HF. Phosphorus oxyfluoride is the progenitor of the simple fluorophosphoric acids by hydrolysis. The sequence starts with difluorophosphoric acid: POF3 + H2O → HPO2F2 + HF. The next steps give monofluorophosphoric acid and phosphoric acid: HPO2F2 + H2O → H2PO3F + HF; H2PO3F + H2O → H3PO4 + HF. Phosphoryl fluoride combines with dimethylamine to produce dimethylaminophosphoryl difluoride as well as difluorophosphate and hexafluorophosphate ions. References Oxyfluorides Phosphorus oxohalides Phosphorus(V) compounds
Phosphoryl fluoride
[ "Chemistry" ]
201
[ "Inorganic compounds", "Inorganic compound stubs" ]
34,578,727
https://en.wikipedia.org/wiki/Variable%20renewable%20energy
Variable renewable energy (VRE) or intermittent renewable energy sources (IRES) are renewable energy sources that are not dispatchable due to their fluctuating nature, such as wind power and solar power, as opposed to controllable renewable energy sources, such as dammed hydroelectricity or bioenergy, or relatively constant sources, such as geothermal power. The use of small amounts of intermittent power has little effect on grid operations. Using larger amounts of intermittent power may require upgrades or even a redesign of the grid infrastructure. Options to absorb large shares of variable energy into the grid include using storage, improved interconnection between different variable sources to smooth out supply, using dispatchable energy sources such as hydroelectricity and having overcapacity, so that sufficient energy is produced even when weather is less favourable. More connections between the energy sector and the building, transport and industrial sectors may also help. Background and terminology The penetration of intermittent renewables in most power grids is low: global electricity generation in 2021 was 7% wind and 4% solar. However, in 2021 Denmark, Luxembourg and Uruguay generated over 40% of their electricity from wind and solar. Characteristics of variable renewables include their unpredictability, variability, and low operating costs. These, along with renewables typically being asynchronous generators, provide a challenge to grid operators, who must make sure supply and demand are matched. Solutions include energy storage, demand response, availability of overcapacity and sector coupling. Smaller isolated grids may be less tolerant to high levels of penetration. Matching power demand to supply is not a problem specific to intermittent power sources. Existing power grids already contain elements of uncertainty including sudden and large changes in demand and unforeseen power plant failures. Though power grids are already designed to have some capacity in excess of projected peak demand to deal with these problems, significant upgrades may be required to accommodate large amounts of intermittent power. Several key terms are useful for understanding the issue of intermittent power sources. These terms are not standardized, and variations may be used. Most of these terms also apply to traditional power plants. Intermittency or variability is the extent to which a power source fluctuates. This has two aspects: a predictable variability, such as the day-night cycle, and an unpredictable part (imperfect local weather forecasting). The term intermittent can be used to refer to the unpredictable part, with variable then referring to the predictable part. Dispatchability is the ability of a given power source to add output on demand. The concept is distinct from intermittency; dispatchability is one of several ways system operators match supply (generator's output) to system demand (technical loads). Penetration is the amount of electricity generated from a particular source as a percentage of annual consumption. Nominal power or nameplate capacity is the theoretical output registered with authorities for classifying the unit. For intermittent power sources, such as wind and solar, nameplate power is the source's output under ideal conditions, such as maximum usable wind or high sun on a clear summer day. 
Capacity factor, average capacity factor, or load factor is the ratio of actual electrical generation over a given period of time, usually a year, to the maximum possible generation in that time period. In other words, it is the ratio between how much electricity a plant produced and how much electricity it would have produced if it were running at its nameplate capacity for the entire time period; for example, a 100 MW wind farm that generates 262,800 MWh over a year (8,760 hours) has a capacity factor of 30%. Firm capacity or firm power is "guaranteed by the supplier to be available at all times during a period covered by a commitment". Capacity credit: the amount of conventional (dispatchable) generation capacity that can potentially be removed from the system while maintaining reliability, usually expressed as a percentage of the nominal power. Foreseeability or predictability is how accurately the operator can anticipate the generation: for example, tidal power varies with the tides but is completely foreseeable because the orbit of the Moon can be predicted exactly, and improved weather forecasts can make wind power more predictable. Sources Dammed hydroelectricity, biomass and geothermal are dispatchable as each has a store of potential energy; wind and solar without storage can be decreased (curtailed) but are not dispatchable. Wind power Grid operators use day-ahead forecasting to determine which of the available power sources to use the next day, and weather forecasting is used to predict the likely wind power and solar power output available. Although wind power forecasts have been used operationally for decades, the IEA is organizing international collaboration to further improve their accuracy. Wind-generated power is a variable resource, and the amount of electricity produced at any given point in time by a given plant will depend on wind speeds, air density, and turbine characteristics, among other factors. If wind speed is too low then the wind turbines will not be able to make electricity, and if it is too high the turbines will have to be shut down to avoid damage. While the output from a single turbine can vary greatly and rapidly as local wind speeds vary, as more turbines are connected over larger and larger areas the average power output becomes less variable. Intermittence: Regions smaller than synoptic scale (less than about 1000 km long, the size of an average country) have mostly the same weather and thus around the same wind power, unless local conditions favor special winds. Some studies show that wind farms spread over a geographically diverse area will as a whole rarely stop producing power altogether. This is rarely the case for smaller areas with uniform geography such as Ireland, Scotland and Denmark, which have several days per year with little wind power. Capacity factor: Wind power typically has an annual capacity factor of 25–50%, with offshore wind outperforming onshore wind. Dispatchability: Because wind power is not by itself dispatchable, wind farms are sometimes built with storage. Capacity credit: At low levels of penetration, the capacity credit of wind is about the same as the capacity factor. As the concentration of wind power on the grid rises, the capacity credit percentage drops. Variability: Site dependent. Sea breezes are much more constant than land breezes. Seasonal variability may reduce output by 50%. Reliability: A wind farm has high technical reliability when the wind blows. That is, the output at any given time will only vary gradually due to falling wind speeds or storms, the latter necessitating shutdowns. 
A typical wind farm is unlikely to have to shut down in less than half an hour even in extreme conditions, whereas an equivalent-sized power station can fail totally and instantaneously, without warning. The total shutdown of wind turbines is predictable via weather forecasting. The average availability of a wind turbine is 98%, and when a turbine fails or is shut down for maintenance it only affects a small percentage of the output of a large wind farm. Predictability: Although wind is variable, it is also predictable in the short term. There is an 80% chance that wind output will change less than 10% in an hour and a 40% chance that it will change 10% or more in 5 hours. Because wind power is generated by large numbers of small generators, individual failures do not have large impacts on power grids. This feature of wind has been referred to as resiliency. Solar power Intermittency inherently affects solar energy, as the production of renewable electricity from solar sources depends on the amount of sunlight at a given place and time. Solar output varies throughout the day and through the seasons, and is affected by dust, fog, cloud cover, frost or snow. Many of the seasonal factors are fairly predictable, and some solar thermal systems make use of heat storage to produce grid power for a full day. Variability: In the absence of an energy storage system, solar does not produce power at night, produces little in bad weather, and varies between seasons. In many countries, solar produces most energy in seasons with low wind availability, and vice versa. Capacity factor: Standard photovoltaic solar has an annual average capacity factor of 10–20%, but panels that move to track the sun have a capacity factor of up to 30%; thermal solar parabolic trough plants with storage reach 56%, and thermal solar power towers with storage 73%. The impact of intermittency of solar-generated electricity will depend on the correlation of generation with demand. For example, solar thermal power plants such as Nevada Solar One are somewhat matched to summer peak loads in areas with significant cooling demands, such as the south-western United States. Thermal energy storage systems like the small Spanish Gemasolar Thermosolar Plant can improve the match between solar supply and local consumption. The improved capacity factor using thermal storage represents a decrease in maximum capacity, and extends the total time the system generates power. Run-of-the-river hydroelectricity In many countries new large dams are no longer being built, because of the environmental impact of reservoirs; run-of-the-river projects have continued to be built. The absence of a reservoir results in both seasonal and annual variations in the electricity generated. Tidal power Tidal power is the most predictable of all the variable renewable energy sources. The tides reverse twice a day, but they are never intermittent; on the contrary, they are completely reliable. Wave power Waves are primarily created by wind, so the power available from waves tends to follow that available from wind, but, due to the mass of the water, it is less variable than wind power. Wind power is proportional to the cube of the wind speed, while wave power is proportional to the square of the wave height. Solutions for their integration The displaced dispatchable generation could be coal, natural gas, biomass, nuclear, geothermal or storage hydro. Rather than starting and stopping nuclear or geothermal, it is cheaper to use them as constant base load power. 
Any power generated in excess of demand can displace heating fuels, be converted to storage or sold to another grid. Biofuels and conventional hydro can be saved for later when intermittents are not generating power. Some forecast that "near-firm" renewable power (batteries with solar and/or wind) will be cheaper than existing nuclear by the late 2020s; therefore, they say, base load power will not be needed. Alternatives to burning coal and natural gas which produce fewer greenhouse gases may eventually make fossil fuels a stranded asset that is left in the ground. Highly integrated grids favor flexibility and performance over cost, resulting in more plants that operate for fewer hours and lower capacity factors. All sources of electrical power have some degree of variability, as do demand patterns which routinely drive large swings in the amount of electricity that suppliers feed into the grid. Wherever possible, grid operations procedures are designed to match supply with demand at high levels of reliability, and the tools to influence supply and demand are well developed. The introduction of large amounts of highly variable power generation may require changes to existing procedures and additional investments. A reliable renewable power supply can be achieved through the use of backup or extra infrastructure and technology, using mixed renewables to produce electricity above the intermittent average, which may be used to meet regular and unanticipated supply demands. Additionally, the storage of energy to fill the shortfall from intermittency, or for emergencies, can be part of a reliable power supply. In practice, as the power output from wind varies, partially loaded conventional plants, which are already present to provide response and reserve, adjust their output to compensate. While low penetrations of intermittent power may use existing levels of response and spinning reserve, the larger overall variations at higher penetration levels will require additional reserves or other means of compensation. Operational reserve All managed grids already have existing operational and "spinning" reserve to compensate for existing uncertainties in the power grid. The addition of intermittent resources such as wind does not require 100% "back-up" because operating reserves and balancing requirements are calculated on a system-wide basis, and not dedicated to a specific generating plant. Some gas or hydro power plants are partially loaded and then controlled to change as demand changes or to replace rapidly lost generation. The ability to change as demand changes is termed "response". The ability to quickly replace lost generation, typically within timescales of 30 seconds to 30 minutes, is termed "spinning reserve". Generally, thermal plants running as peaking plants will be less efficient than if they were running as base load. Hydroelectric facilities with storage capacity, such as the traditional dam configuration, may be operated as base load or peaking plants. Grids can contract for grid battery plants, which provide immediately available power for an hour or so; this gives time for other generators to be started up in the event of a failure, and greatly reduces the amount of spinning reserve required. Demand response Demand response is a change in consumption of energy to better align with supply. It can take the form of switching off loads, or absorbing additional energy to correct supply/demand imbalances. 
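As a toy illustration of price-driven demand response (my example, not from the article; the prices and the flexible load are hypothetical):

```python
# Shift a flexible load into the cheapest hours of a (hypothetical)
# day-ahead price curve: the basic logic behind time-of-use incentives.
prices = [90, 40, 25, 30, 80, 120]   # $/MWh for six hours (made-up values)
flexible_hours = 2                   # the load can run in any 2 of them
cheapest = sorted(range(len(prices)), key=prices.__getitem__)[:flexible_hours]
print(sorted(cheapest))              # -> [2, 3]: run during the price dip
```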
Incentives have been widely created in the American, British and French systems for the use of these systems, such as favorable rates or capital cost assistance, encouraging consumers with large loads to take them offline whenever there is a shortage of capacity, or conversely to increase load when there is a surplus. Certain types of load control allow the power company to turn loads off remotely if insufficient power is available. In France large users such as CERN cut power usage as required by the system operator, EDF, under the encouragement of the EJP tariff. Energy demand management refers to incentives to adjust use of electricity, such as higher rates during peak hours. Real-time variable electricity pricing can encourage users to adjust usage to take advantage of periods when power is cheaply available and avoid periods when it is more scarce and expensive. Some loads, such as desalination plants, electric boilers and industrial refrigeration units, are able to store their output (water and heat). Several papers also concluded that Bitcoin mining loads would reduce curtailment, hedge electricity price risk, stabilize the grid, increase the profitability of renewable energy power stations and therefore accelerate the transition to sustainable energy. But others argue that Bitcoin mining can never be sustainable. Instantaneous demand reduction: Most large systems also have a category of loads which instantly disconnect when there is a generation shortage, under some mutually beneficial contract. This can give instant load reductions or increases. Storage At times of low load where non-dispatchable output from wind and solar may be high, grid stability requires lowering the output of various dispatchable generating sources or even increasing controllable loads, possibly by using energy storage to time-shift output to times of higher demand. Such mechanisms can include: Pumped storage hydropower is the most prevalent existing technology used, and can substantially improve the economics of wind power. The availability of hydropower sites suitable for storage will vary from grid to grid. Typical round-trip efficiency is 80%. Traditional lithium-ion is the most common battery type used for grid-scale storage. Rechargeable flow batteries can serve as a large-capacity, rapid-response storage medium. Hydrogen can be created through electrolysis and stored for later use. Flywheel energy storage systems have some advantages over chemical batteries. Along with substantial durability, which allows them to be cycled frequently without noticeable life reduction, they also have very fast response and ramp rates: they can go from full discharge to full charge within a few seconds. They can be manufactured using non-toxic and environmentally friendly materials, and are easily recyclable once the service life is over. Thermal energy storage stores heat. Stored heat can be used directly for heating needs or converted into electricity; in the context of a CHP plant, a heat storage can serve as a functional electricity storage at comparably low cost. Ice storage air conditioning: Ice can be stored interseasonally and can be used as a source of air-conditioning during periods of high demand. Present systems only need to store ice for a few hours but are well developed. Storage of electrical energy results in some lost energy because storage and retrieval are not perfectly efficient. Storage also requires capital investment and space for storage facilities. 
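A minimal sketch of storage time-shifting (my illustration; the 80% round-trip efficiency is the figure quoted above for pumped hydro, and the hourly numbers are hypothetical; losses are applied on discharge):

```python
# Shift surplus VRE energy to hours of deficit through a storage unit.
EFFICIENCY = 0.80  # round-trip efficiency, as quoted for pumped hydro

def dispatch(net_load_mwh, capacity_mwh):
    """net_load_mwh: demand minus VRE per hour (negative = surplus).
    Returns residual load after storage time-shifting."""
    stored = 0.0
    residual = []
    for net in net_load_mwh:
        if net < 0:   # surplus hour: charge storage up to capacity
            stored = min(capacity_mwh, stored - net)
            residual.append(0.0)
        else:         # deficit hour: discharge, paying the round-trip loss
            served = min(net, stored * EFFICIENCY)
            stored -= served / EFFICIENCY
            residual.append(net - served)
    return residual

# 150 MWh of surplus covers a later 120 MWh deficit; the rest is unserved.
print(dispatch([-100, -50, 120, 60], capacity_mwh=200))  # [0.0, 0.0, 0.0, 60.0]
```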
Geographic diversity and complementing technologies The variability of production from a single wind turbine can be high. Combining any additional number of turbines, for example in a wind farm, results in lower statistical variation, as long as the correlation between the output of each turbine is imperfect; the correlations are always imperfect due to the distance between turbines. Similarly, geographically distant wind turbines or wind farms have lower correlations, reducing overall variability. Since wind power is dependent on weather systems, there is a limit to the benefit of this geographic diversity for any power system. Multiple wind farms spread over a wide geographic area and gridded together produce power more constantly and with less variability than smaller installations. Wind output can be predicted with some degree of confidence using weather forecasts, especially from large numbers of turbines/farms. The ability to predict wind output is expected to increase over time as data is collected, especially from newer facilities. Electricity produced from solar energy tends to counterbalance the fluctuating supplies generated from wind: normally it is windiest at night and during cloudy or stormy weather, and there is more sunshine on clear days with less wind. Besides, wind energy often peaks in the winter season, whereas solar energy peaks in the summer season; the combination of wind and solar reduces the need for dispatchable backup power. In some locations, electricity demand may have a high correlation with wind output, particularly in locations where cold temperatures drive electric consumption, as cold air is denser and carries more energy. The allowable penetration may be increased with further investment in standby generation. For instance, some days could produce 80% of power from intermittent wind, while on the many windless days dispatchable power like natural gas, biomass and hydro could substitute. Areas with existing high levels of hydroelectric generation may ramp up or down to incorporate substantial amounts of wind. Norway, Brazil, and Manitoba all have high levels of hydroelectric generation; Quebec produces over 90% of its electricity from hydropower, and Hydro-Québec is the largest hydropower producer in the world. The U.S. Pacific Northwest has been identified as another region where wind energy is complemented well by existing hydropower. Storage capacity in hydropower facilities will be limited by the size of the reservoir, and by environmental and other considerations. Connecting grid internationally It is often feasible to export energy to neighboring grids at times of surplus, and import energy when needed. This practice is common in Europe and between the US and Canada. Integration with other grids can lower the effective concentration of variable power: for instance, Denmark's high penetration of VRE, in the context of the German/Dutch/Scandinavian grids with which it has interconnections, is considerably lower as a proportion of the total system. Hydroelectricity that compensates for variability can be used across countries. The capacity of power transmission infrastructure may have to be substantially upgraded to support export/import plans. Some energy is lost in transmission. The economic value of exporting variable power depends in part on the ability of the exporting grid to provide the importing grid with useful power at useful times for an attractive price. 
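The variance-reduction effect described at the start of the geographic diversity section can be made concrete with a toy calculation (my illustration; the correlation values are hypothetical): for n identical farms with pairwise output correlation rho, the standard deviation of the total grows more slowly than the mean whenever rho < 1.

```python
import math

def aggregate_std(n: int, unit_std: float, rho: float) -> float:
    """Std dev of the summed output of n identical farms with pairwise correlation rho."""
    return math.sqrt(n * unit_std**2 + n * (n - 1) * rho * unit_std**2)

# Relative variability (std/mean) of 25 farms vs. a single farm (std/mean = 1):
for rho in (1.0, 0.5, 0.1):
    print(rho, round(aggregate_std(25, 1.0, rho) / 25, 3))
# -> 1.0 at rho=1 (no benefit), 0.721 at rho=0.5, 0.369 at rho=0.1
```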
Sector coupling Demand and generation can be better matched when sectors such as mobility, heat and gas are coupled with the power system. The electric vehicle market is, for instance, expected to become the largest source of storage capacity. This may be a more expensive option, appropriate for high penetration of variable renewables, compared to other sources of flexibility. The International Energy Agency says that sector coupling is needed to compensate for the mismatch between seasonal demand and supply. Electric vehicles can be charged during periods of low demand and high production, and in some places send power back to the grid (vehicle-to-grid). Penetration Penetration refers to the proportion of a primary energy (PE) source in an electric power system, expressed as a percentage. There are several methods of calculation, yielding different penetrations. The penetration can be calculated either as: the nominal capacity (installed power) of a PE source divided by the peak load within an electric power system; or the nominal capacity (installed power) of a PE source divided by the total capacity of the electric power system; or the electrical energy generated by a PE source in a given period, divided by the demand of the electric power system in that period. For example, a grid with 3 GW of installed wind and a 10 GW peak load has a wind penetration of 30% by the first measure, while 7 TWh of annual wind generation against 60 TWh of annual demand gives a penetration of about 12% by the third. The level of penetration of intermittent variable sources is significant for the following reasons: Power grids with significant amounts of dispatchable pumped storage, hydropower with reservoir or pondage, or other peaking power plants such as natural gas-fired power plants are capable of accommodating fluctuations from intermittent power more easily. Relatively small electric power systems without strong interconnection (such as remote islands) may retain some existing diesel generators, consuming less fuel, for flexibility until cleaner energy sources or storage such as pumped hydro or batteries become cost-effective. In the early 2020s wind and solar produced about 10% of the world's electricity, but supply in the 40–55% penetration range has already been implemented in several systems, with over 65% planned for the UK by 2030. There is no generally accepted maximum level of penetration, as each system's capacity to compensate for intermittency differs, and the systems themselves will change over time. Discussion of acceptable or unacceptable penetration figures should be treated and used with caution, as the relevance or significance will be highly dependent on local factors, grid structure and management, and existing generation capacity. For most systems worldwide, existing penetration levels are significantly lower than practical or theoretical maximums. Maximum penetration limits Maximum penetration of combined wind and solar is estimated at around 70% to 90% without regional aggregation, demand management or storage, and up to 94% with 12 hours of storage. Economic efficiency and cost considerations are more likely to dominate as critical factors; technical solutions may allow higher penetration levels to be considered in future, particularly if cost considerations are secondary. Economic impacts of variability Estimates of the cost of wind and solar energy may include estimates of the "external" costs of wind and solar variability, or be limited to the cost of production. All electrical plants have costs that are separate from the cost of production, including, for example, the cost of any necessary transmission capacity or reserve capacity in case of loss of generating capacity. 
Many types of generation, particularly fossil fuel derived, will have cost externalities such as pollution, greenhouse gas emission, and habitat destruction, which are generally not directly accounted for. The magnitude of the economic impacts is debated and will vary by location, but is expected to rise with higher penetration levels. At low penetration levels, costs such as operating reserve and balancing costs are believed to be insignificant. Intermittency may introduce additional costs that are distinct from or of a different magnitude than for traditional generation types. These may include: Transmission capacity: transmission capacity may be more expensive than for nuclear and coal generating capacity due to lower load factors. Transmission capacity will generally be sized to projected peak output, but average capacity for wind will be significantly lower, raising cost per unit of energy actually transmitted. However, transmission costs are a low fraction of total energy costs. Additional operating reserve: if additional wind and solar does not correspond to demand patterns, additional operating reserve may be required compared to other generating types; however, this does not result in higher capital costs for additional plants, since this is merely existing plants running at low output (spinning reserve). Contrary to statements that all wind must be backed by an equal amount of "back-up capacity", intermittent generators contribute to base capacity "as long as there is some probability of output during peak periods". Back-up capacity is not attributed to individual generators, as back-up or operating reserve "only have meaning at the system level". Balancing costs: to maintain grid stability, some additional costs may be incurred for balancing of load with demand. Although improvements to grid balancing can be costly, they can lead to long-term savings. In many countries, for many types of variable renewable energy, the government from time to time invites companies to tender sealed bids to construct a certain capacity of solar power to connect to certain electricity substations. By accepting the lowest bid the government commits to buy at that price per kWh for a fixed number of years, or up to a certain total amount of power. This provides certainty for investors against highly volatile wholesale electricity prices. However, they may still risk exchange-rate volatility if they borrowed in foreign currency. Examples by country Great Britain The operator of the British electricity system has said that it will be capable of operating zero-carbon by 2025, whenever there is enough renewable generation, and may be carbon negative by 2033. The company, National Grid Electricity System Operator, states that new products and services will help reduce the overall cost of operating the system. Germany In countries with a considerable amount of renewable energy, solar energy causes price drops around noon every day. PV production follows the higher demand during these hours. Data for two weeks in 2022 in Germany, where renewable energy has a share of over 40%, illustrate this pattern. Prices also drop every night and weekend due to low demand. In hours without PV and wind power, electricity prices rise. This can lead to demand-side adjustments. While industry is dependent on the hourly prices, most private households still pay a fixed tariff. With smart meters, private consumers can also be motivated, for example, to charge an electric car when enough renewable energy is available and prices are cheap. 
Steerable flexibility in electricity production is essential to back up variable energy sources. The German example shows that pumped hydro storage, gas plants and hard coal can respond quickly, while lignite varies on a daily basis. Nuclear power and biomass can theoretically adjust to a certain extent; however, in this case the incentives still seem not to be high enough. See also Combined cycle hydrogen power plant Cost of electricity by source Energy security and renewable technology Ground source heat pump List of energy storage power plants Spark spread: calculating the cost of back-up References Further reading External links Grid Integration of Wind Energy Electric power distribution Energy storage Renewable energy
Variable renewable energy
[ "Engineering" ]
5,256
[ "Power engineering", "Electrical engineering", "Energy engineering" ]
34,580,174
https://en.wikipedia.org/wiki/Antisymmetric%20exchange
In physics, antisymmetric exchange, also known as the Dzyaloshinskii–Moriya interaction (DMI), is a contribution to the total magnetic exchange interaction between two neighboring magnetic spins, $\mathbf{S}_i$ and $\mathbf{S}_j$. Quantitatively, it is a term in the Hamiltonian which can be written as $H_{ij}^{\mathrm{DM}} = \mathbf{D}_{ij} \cdot (\mathbf{S}_i \times \mathbf{S}_j)$. In magnetically ordered systems, it favors a spin canting of otherwise parallel or antiparallel aligned magnetic moments and thus is a source of weak ferromagnetic behavior in an antiferromagnet. The interaction is fundamental to the production of magnetic skyrmions and explains the magnetoelectric effects in a class of materials termed multiferroics. History The discovery of antisymmetric exchange originated in the early 20th century from the controversial observation of weak ferromagnetism in typically antiferromagnetic α-Fe2O3 crystals. In 1958, Igor Dzyaloshinskii provided evidence that the interaction was due to the relativistic spin-lattice and magnetic dipole interactions, based on Lev Landau's theory of phase transitions of the second kind. In 1960, Toru Moriya identified the spin-orbit coupling as the microscopic mechanism of the antisymmetric exchange interaction. Moriya referred to this phenomenon specifically as the "antisymmetric part of the anisotropic superexchange interaction." The simplified naming of this phenomenon occurred in 1962, when D. Treves and S. Alexander of Bell Telephone Laboratories simply referred to the interaction as antisymmetric exchange. Because of their seminal contributions to the field, antisymmetric exchange is sometimes referred to as the Dzyaloshinskii–Moriya interaction. Derivation The functional form of the DMI can be obtained through a second-order perturbative analysis of the spin-orbit coupling interaction $\lambda\,\mathbf{L}_i \cdot \mathbf{S}_i$ between ions in Anderson's superexchange formalism; here $\mathbf{L}_i$ is a 3-dimensional vector of angular momentum operators on ion $i$, and $\mathbf{S}_i$ is a 3-dimensional spin operator of the same form. The perturbation produces, in addition to the isotropic exchange $J\,\mathbf{S}_i \cdot \mathbf{S}_j$ (with $J$ the exchange integral built from the ground orbital wavefunctions of the ions), an antisymmetric contribution: if the ground state is non-degenerate, the relevant matrix elements of $\mathbf{L}$ are purely imaginary, and the result can be written out as $\mathbf{D}_{ij} \cdot (\mathbf{S}_i \times \mathbf{S}_j)$. Effects of crystal symmetry In an actual crystal, symmetries of neighboring ions dictate the magnitude and direction of the vector $\mathbf{D}$. Considering the coupling of ions 1 and 2 at locations $A$ and $B$, with the point bisecting the line $AB$ denoted $C$, the following rules may be obtained: When a center of inversion is located at $C$, $\mathbf{D} = 0$. When a mirror plane perpendicular to $AB$ passes through $C$, $\mathbf{D}$ lies in the mirror plane, i.e. $\mathbf{D} \perp AB$. When there is a mirror plane including $A$ and $B$, $\mathbf{D}$ is perpendicular to the mirror plane. When a two-fold rotation axis perpendicular to $AB$ passes through $C$, $\mathbf{D}$ is perpendicular to the two-fold axis. When there is an $n$-fold axis ($n \geq 2$) along $AB$, $\mathbf{D}$ is parallel to $AB$. The orientation of the vector $\mathbf{D}$ is constrained by symmetry, as discussed already in Moriya's original publication. Considering the case that the magnetic interaction between two neighboring ions is transferred via a single third ion (ligand) by the superexchange mechanism, the orientation of $\mathbf{D}$ is obtained by the simple relation $\mathbf{D}_{12} \propto \hat{\mathbf{r}}_1 \times \hat{\mathbf{r}}_2$, where $\hat{\mathbf{r}}_1$ and $\hat{\mathbf{r}}_2$ are the unit vectors pointing from the ligand to the two magnetic ions. This implies that $\mathbf{D}$ is oriented perpendicular to the triangle spanned by the involved three ions, and $\mathbf{D} = 0$ if the three ions are in line. Measurement The Dzyaloshinskii–Moriya interaction has proven difficult to measure experimentally and directly due to its typically weak effects and similarity to other magnetoelectric effects in bulk materials. Attempts to quantify the DMI vector have utilized X-ray diffraction interference, Brillouin scattering, electron spin resonance, and neutron scattering. 
Many of these techniques only measure either the direction or the strength of the interaction and make assumptions on the symmetry or coupling of the spin interaction. A recent advancement in broadband electron spin resonance coupled with optical detection (OD-ESR) allows for characterization of the DMI vector for rare-earth ion materials with no assumptions and across a large spectrum of magnetic field strength. Material examples A coordinated heavy metal-oxide complex in the corundum crystal structure can display ferromagnetic or antiferromagnetic behavior depending on the metal ion. The corundum structure is named after the primary form of aluminum oxide (Al2O3), which displays the R-3c trigonal space group. The structure also contains the same unit cell as α-Fe2O3 and α-Cr2O3, which possess $D_{3d}^6$ space group symmetry. The upper half of the unit cell contains four M3+ ions along the space diagonal of the rhombohedron. In the α-Fe2O3 structure, the spins of the first and last metal ion are positive while the center two are negative. In the α-Cr2O3 structure, the spins of the first and third metal ion are positive while the second and fourth are negative. Both compounds are antiferromagnetic at low temperatures (<250 K); however, above this temperature α-Fe2O3 undergoes a structural change where its total spin vector no longer points along the crystal axis but at a slight angle along the basal (111) plane. This is what causes the iron-containing compound to display an instantaneous ferromagnetic moment above 250 K, while the chromium-containing compound shows no change. It is thus the combination of the distribution of ion spins, the misalignment of the total spin vector, and the resulting antisymmetry of the unit cell that gives rise to the antisymmetric exchange phenomenon seen in these crystal structures. Applications Magnetic skyrmions A magnetic skyrmion is a magnetic texture that occurs in the magnetization field. They exist in spiral or hedgehog configurations that are stabilized by the Dzyaloshinskii–Moriya interaction. Skyrmions are topological in nature, making them promising candidates for future spintronic devices. Multiferroics Antisymmetric exchange is of importance for the understanding of magnetism-induced electric polarization in a recently discovered class of multiferroics. Here, small shifts of the ligand ions can be induced by magnetic ordering, because the systems tend to enhance the magnetic interaction energy at the cost of lattice energy. This mechanism is called the "inverse Dzyaloshinskii–Moriya effect". In certain magnetic structures, all ligand ions are shifted in the same direction, leading to a net electric polarization. Because of their magnetoelectric coupling, multiferroic materials are of interest in applications where there is a need to control magnetism through applied electric fields. Such applications include tunnel magnetoresistance (TMR) sensors, spin valves with electric-field-tunable functions, high-sensitivity alternating magnetic field sensors, and electrically tunable microwave devices. Most multiferroic materials are transition metal oxides due to the magnetization potential of the 3d electrons. Many can also be classified as perovskites and contain the Fe3+ ion alongside a lanthanide ion. For more examples and applications see also multiferroics. 
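The interplay described in this article can be condensed into the standard single-bond Hamiltonian (added here for illustration; sign and factor conventions vary between references):

```latex
% Isotropic Heisenberg exchange plus the antisymmetric DM term:
\mathcal{H}_{ij} = J\,\mathbf{S}_i \cdot \mathbf{S}_j
                 + \mathbf{D}_{ij} \cdot (\mathbf{S}_i \times \mathbf{S}_j)
% For two antiferromagnetically coupled sublattices (J > 0), minimizing the
% classical energy cants the moments away from collinearity by an angle
% \theta satisfying
\tan(2\theta) = \frac{|\mathbf{D}_{ij}|}{J}
% so a small |D|/J produces the weak ferromagnetic moment discussed above.
```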
See also Exchange interaction Spin–orbit coupling Superexchange Landau theory Skyrmions Multiferroics References Magnetic exchange interactions Spintronics
Antisymmetric exchange
[ "Physics", "Materials_science" ]
1,445
[ "Spintronics", "Condensed matter physics" ]
34,582,972
https://en.wikipedia.org/wiki/Quantum%20paraelectricity
Quantum paraelectricity is a type of incipient ferroelectricity where the onset of ferroelectric order is suppressed by quantum fluctuations. In the soft mode theory of ferroelectricity, this occurs when a ferroelectric instability is stabilized by quantum fluctuations, so that the soft-mode frequency never becomes unstable, as opposed to a regular ferroelectric. Experimentally this is associated with an anomalous behaviour of the dielectric susceptibility, for example in SrTiO3. In a normal ferroelectric, close to the onset of the phase transition the dielectric susceptibility diverges as the temperature approaches the Curie temperature. In a quantum paraelectric, by contrast, the dielectric susceptibility grows on cooling as if heading toward a divergence, but saturates once the temperature is low enough for quantum effects to cancel out the ferroelectricity. In the case of SrTiO3 this is around 4 K. Other known quantum paraelectrics are KTaO3 and potentially CaTiO3. References Electric and magnetic fields in matter
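The saturating susceptibility is commonly fitted with Barrett's formula (a standard literature expression, added here for illustration and not cited in this extract; $C$, $T_0$ and $T_1$ are material-specific fit parameters):

```latex
% Barrett's formula for the dielectric response of a quantum paraelectric:
\varepsilon(T) = \varepsilon_{\infty}
  + \frac{C}{\tfrac{T_1}{2}\coth\!\left(\tfrac{T_1}{2T}\right) - T_0}
% For T >> T_1 the denominator tends to T - T_0, recovering Curie-Weiss
% behaviour; for T << T_1 it saturates at T_1/2 - T_0, so the divergence
% at T_0 is cut off by quantum fluctuations.
```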
Quantum paraelectricity
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
235
[ "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
38,555,995
https://en.wikipedia.org/wiki/Argus%20retinal%20prosthesis
Argus retinal prosthesis, also known as a bionic eye, is an electronic retinal implant manufactured by the American company Second Sight Medical Products. It is used as a visual prosthesis to improve the vision of people with severe cases of retinitis pigmentosa. The Argus II version of the system was approved for marketing in the European Union in March 2011, and it received approval in the US in February 2013 under a humanitarian device exemption. The Argus II system costs about US$150,000, excluding the cost of the implantation surgery and training to learn to use the device. Second Sight had its IPO in 2014 and was listed on Nasdaq. Production and development of the prosthesis was discontinued in 2020, but taken over by the company Cortigent in 2023. Medical use The Argus II is specifically designed to treat people with retinitis pigmentosa. The device was approved with data from a single-arm clinical trial that enrolled thirty people with severe retinitis pigmentosa; the longest follow-up on a trial subject was 38.3 months. People in the trial received the implant in only one eye, and tests were conducted with the device switched on, or switched off as a control. With the device switched on, about 23% of the subjects had improvements in their ability to see; all had been at 2.9 or higher on the LogMAR scale, and improvements ranged from just under 2.9 to 1.6 LogMAR, the equivalent of 20/1262 acuity. 96% of the subjects were better able to identify a white square on a black computer screen; 57% were more able to determine the direction in which a white bar moved across a black computer screen. With the device switched on, about 60% were able to accurately walk to a door a set distance away, as opposed to only 5% with the device switched off; 93% had no change in their perception of light. Side effects Among the thirty subjects in the clinical trial, there were nine serious adverse events recorded, including lower than normal intraocular pressure, erosion of the conjunctiva, reopening of the surgical wound, inflammation inside the eye, and retinal detachments. There is also a risk of bacterial infection from the implanted cables that connect the implant to the signal processor. Surgical procedure The implantation procedure takes several hours, with the person receiving the implant under general anaesthesia. The surgeon removes the vitreous humor and any membranes on the retina where the implant will be placed. The implant is attached to the surface of the retina with a tack. The cables connecting the implant to the processor are run through the pars plana, a region near where the iris and sclera touch. Device The Argus implant's primary external element is a digital camera mounted on eyeglass frames, which obtains images of the user's surroundings; signals from the camera are transmitted wirelessly to a computerised image processor. The processor is in turn connected by cables to the implant itself, which is surgically implanted on the surface of the person's retina and tacked into place. The implant consists of 60 electrodes, each 200 microns in diameter. The resolution of the 6-dot-by-10-dot rectangular grid image (produced by the 6-by-10 array of 60 electrodes, of which 55 are enabled) in a person's vision is very low relative to normal visual acuity. This allows visual detection of the edges of large areas of high contrast, such as door frames and sidewalks, giving the individual the capability to navigate their environment more safely. 
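To get a feel for what a 6-by-10 electrode array means for image resolution, here is an illustrative sketch (the grid dimensions come from the text; the code and the camera resolution are my assumptions):

```python
# Reduce a camera frame to 60 brightness levels, one per electrode,
# by block-averaging a grayscale image down to a 6 x 10 grid.
import numpy as np

def to_electrode_grid(frame: np.ndarray, rows: int = 6, cols: int = 10) -> np.ndarray:
    """Downsample a grayscale frame by block-averaging to rows x cols."""
    h, w = frame.shape
    cropped = frame[: h - h % rows, : w - w % cols]  # trim to divisible size
    return cropped.reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))

frame = np.random.rand(480, 640)        # stand-in for a camera image
print(to_electrode_grid(frame).shape)   # (6, 10): 60 stimulation values
```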
History The implant's manufacturer, Second Sight Medical Products, was founded in Sylmar, California, in 1998, by Alfred Mann, Samuel Williams, and Gunnar Bjorg. Williams, an investor in a cochlear implant company operated by Mann, approached Mann about founding a company to develop a similar product for the eye, and Mann called a meeting with the two of them and Robert Greenberg, who worked at Mann's foundation. Greenberg had previously worked on retinal prosthetics as a graduate student at Johns Hopkins University; he wrote the business plan and was appointed as CEO of the new company when it was launched. Greenberg led the company as CEO through 2015 (and was chairman of the board through 2018). The first version of the prosthesis, the Argus I, was clinically tested on six people starting in 2002. The second version, the Argus II, was designed to be smaller and easier to implant, and was co-invented by Mark Humayun of the USC Eye Institute, who had been involved in the clinical testing of the Argus I. The Argus II was first tested in Mexico in 2006, and then a 30-person clinical trial was conducted in 10 medical centers across Europe and the United States. Society and culture Regulatory status The Argus II received approval for commercial use in the European Union in March 2011. In February 2013, the FDA approved the Argus II under a humanitarian device exemption, authorizing its use for up to 4,000 people in the US per year. Pricing and insurance The Argus II was initially available at a limited number of clinics in France, Germany, Italy, the Netherlands, the United Kingdom and Saudi Arabia, at an EU market price of US$115,000. When the Argus II launched in the United States in February 2013, Second Sight announced that it would be priced at around $150,000, excluding the cost of surgery and usage training. In August 2013, Second Sight announced that reimbursement payments had been approved for the Argus II for blind Medicare recipients in the United States. Research A trial in England funded by NHS England for ten patients began in 2017. Aftermath In 2020, Second Sight stopped providing technical support for the Argus, as well as for the successor device, the Argus II, and for the brain implant, Orion; an investigation by IEEE Spectrum revealed that users risk, and in some cases have already experienced, a return to blindness. Second Sight merged with Nano Precision Medical in August 2023 with a commitment to provide technical support for the Argus II. See also Bionic contact lens References External links Second Sight official website Bionics Biomedical engineering Neuroprosthetics Implants (medicine) Medical equipment Blindness American inventions 2011 introductions
Argus retinal prosthesis
[ "Engineering", "Biology" ]
1,280
[ "Biological engineering", "Biomedical engineering", "Bionics", "Medical equipment", "Medical technology" ]
38,556,220
https://en.wikipedia.org/wiki/Q%20Scorpii
Q Scorpii, also designated HD 159433, is an astrometric binary (flagged with 100% probability) located in the southern zodiac constellation Scorpius. It has an apparent magnitude of 4.27, making it readily visible to the naked eye under ideal conditions. It lies in the tail of Scorpius, between the stars λ Scorpii and μ Scorpii, and near the faint globular cluster Tonantzintla 2 on the sky. Based on parallax measurements from Gaia DR3, the system is estimated to be 158 light years distant, but it is approaching the Solar System, as indicated by its negative heliocentric radial velocity. The visible component is a red giant with a stellar classification of K0 IIIb, the IIIb luminosity class indicating a lower-luminosity giant star. Q Scorpii is a red clump star located on the cool end of the horizontal branch, fusing helium at its core. It has 110% of the mass of the Sun but has expanded to 12.4 times the Sun's radius. It radiates 62 times the luminosity of the Sun from its photosphere at a cool effective temperature, giving it an orange hue. Q Scorpii has an iron abundance half that of the Sun, making it metal deficient. Like most giant stars, it spins slowly, with a low projected rotational velocity. References K-type giants Horizontal-branch stars Scorpius
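The effective temperature, missing from this extract, can be back-solved from the quoted radius and luminosity via the Stefan–Boltzmann law (an illustrative consistency check, not a value from the article):

```python
# L/L_sun = (R/R_sun)^2 * (T/T_sun)^4, so T = T_sun * (L / R^2)^(1/4).
T_SUN = 5772.0  # K, IAU nominal solar effective temperature

def t_eff(luminosity_lsun: float, radius_rsun: float) -> float:
    return T_SUN * (luminosity_lsun / radius_rsun**2) ** 0.25

print(round(t_eff(62.0, 12.4)))  # ~4600 K, typical of a K0 III giant
```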
Q Scorpii
[ "Astronomy" ]
333
[ "Scorpius", "Constellations" ]
38,567,601
https://en.wikipedia.org/wiki/Rotational%20viscosity
Viscosity is usually described as the property of a fluid which determines the rate at which local momentum differences are equilibrated. Rotational viscosity is a property of a fluid which determines the rate at which local angular momentum differences are equilibrated. In the classical case, by the equipartition theorem, at equilibrium, if particle collisions can transfer angular momentum as well as linear momentum, then these degrees of freedom will have the same average energy. If there is a lack of equilibrium between these degrees of freedom, then the rate of equilibration will be determined by the rotational viscosity coefficient. Rotational viscosity has traditionally been thought to require rotational degrees of freedom for the fluid particles, such as in liquid crystals. In these fluids, the rotational degrees of freedom allow angular momentum to become a dynamical quantity that can be locally relaxed, leading to rotational viscosity. However, recent theoretical work has predicted that rotational viscosity ought to also be present in viscous electron fluids (see Gurzhi effect) in anisotropic metals. In these cases, the ionic lattice explicitly breaks rotational symmetry and applies torques to the electron fluid, implying non-conservation of angular momentum and hence rotational viscosity. Derivation and Use The angular momentum density of a fluid element is written either as an antisymmetric tensor ($\ell_{ij}$) or, equivalently, as a pseudovector. As a tensor, the equation for the conservation of angular momentum for a simple fluid with no external forces is written: $\frac{\partial \ell_{ij}}{\partial t} + \frac{\partial}{\partial x_k}\left(v_k\,\ell_{ij} + x_i P_{kj} - x_j P_{ki}\right) = 0$, where $v_k$ is the fluid velocity and $P_{ij}$ is the total pressure tensor (or, equivalently, the negative of the total stress tensor). Note that the Einstein summation convention is used, where summation is assumed over pairs of matched indices. The angular momentum of a fluid element can be separated into extrinsic angular momentum density due to the flow ($\Lambda_{ij}$) and intrinsic angular momentum density due to the rotation of the fluid particles about their center of mass ($s_{ij}$): $\ell_{ij} = \Lambda_{ij} + s_{ij}$, where the extrinsic angular momentum density is $\Lambda_{ij} = \rho\,(x_i v_j - x_j v_i)$ and $\rho$ is the mass density of the fluid element. The conservation of linear momentum equation is written: $\frac{\partial (\rho v_i)}{\partial t} + \frac{\partial}{\partial x_j}\left(\rho\,v_i v_j + P_{ji}\right) = 0$, and it can be shown that this implies that: $\frac{\partial \Lambda_{ij}}{\partial t} + \frac{\partial}{\partial x_k}\left(v_k\,\Lambda_{ij} + x_i P_{kj} - x_j P_{ki}\right) = P_{ij} - P_{ji}$. Subtracting this from the equation for the conservation of angular momentum yields: $\frac{\partial s_{ij}}{\partial t} + \frac{\partial}{\partial x_k}\left(v_k\,s_{ij}\right) = -\left(P_{ij} - P_{ji}\right)$. Any situation in which this last term is zero will result in the total pressure tensor being symmetric, and the conservation of angular momentum equation will be redundant with the conservation of linear momentum. If, however, the internal rotational degrees of freedom of the particles are coupled to the flow (via the velocity term in the above equation), then the total pressure tensor will not be symmetric, with its antisymmetric component describing the rate of angular momentum exchange between the flow and the particle rotation. In the linear approximation for this transport of angular momentum, the rate of flow is written: $P_{ij} - P_{ji} = -\eta_r\left(\frac{\partial v_i}{\partial x_j} - \frac{\partial v_j}{\partial x_i} - 2\,\omega_{ij}\right)$, where $\omega_{ij}$ is the average angular velocity of the rotating particles (as an antisymmetric tensor rather than a pseudovector) and $\eta_r$ is the rotational viscosity coefficient. References Fluid dynamics Viscosity
Rotational viscosity
[ "Physics", "Chemistry", "Engineering" ]
615
[ "Physical phenomena", "Physical quantities", "Chemical engineering", "Piping", "Wikipedia categories named after physical quantities", "Viscosity", "Physical properties", "Fluid dynamics" ]
37,116,237
https://en.wikipedia.org/wiki/Petersen%20matrix
The Petersen matrix is a comprehensive description of systems of biochemical reactions used to model reactors for pollution control (engineered decomposition) as well as in environmental systems. It has as many columns as the number of relevant involved components (chemicals, pollutants, biomasses, gases) and as many rows as the number of involved processes (biochemical reactions and physical degradation). One further column is added to host the description of the kinetics of each transformation (rate equation). Matrix structure The mass conservation principle for each process is expressed in the rows of the matrix. If all components are included (none omitted), then the mass conservation principle states that, for each process $i$: $\sum_j \nu_{ij} = 0$, where $\nu_{ij}$ is the stoichiometric coefficient of component $j$ in process $i$, expressed in consistent mass units. This can also be seen as the process stoichiometric relation. Moreover, the rate of variation of each component under the simultaneous effect of all processes can easily be assessed by summing the columns: $r_j = \sum_i \nu_{ij}\,\rho_i$, where $\rho_i$ are the reaction rates of each process. Example Consider a system of a third-order reaction followed by a Michaelis–Menten enzyme reaction: A + 2B → S, then E + S ⇌ ES → E + P (with forward, reverse and catalytic rate constants $k_f$, $k_r$ and $k_{cat}$ for the enzyme steps), where the reagents A and B combine forming the substrate S (S = AB2), which with the help of enzyme E is transformed into the product P. With process rates $\rho_1 = k_1[\mathrm{A}][\mathrm{B}]^2$, $\rho_2 = k_f[\mathrm{E}][\mathrm{S}]$, $\rho_3 = k_r[\mathrm{ES}]$ and $\rho_4 = k_{cat}[\mathrm{ES}]$, the Petersen matrix reads (columns A, B, S, E, ES, P; one row per process):
Process 1 (A + 2B → S): −1, −2, +1, 0, 0, 0; rate $k_1[\mathrm{A}][\mathrm{B}]^2$
Process 2 (E + S → ES): 0, 0, −1, −1, +1, 0; rate $k_f[\mathrm{E}][\mathrm{S}]$
Process 3 (ES → E + S): 0, 0, +1, +1, −1, 0; rate $k_r[\mathrm{ES}]$
Process 4 (ES → E + P): 0, 0, 0, +1, −1, +1; rate $k_{cat}[\mathrm{ES}]$
The Petersen matrix can be used to write the system's rate equations. References Biodegradation Biodegradable waste management Chemical processes
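A minimal sketch of the example above in code (the matrix layout follows the table just given; the rate-constant values are hypothetical):

```python
# Petersen matrix: rows are processes, columns are components; the rate of
# change of each component is the column sum weighted by the process rates.
import numpy as np

components = ["A", "B", "S", "E", "ES", "P"]
#                A    B    S    E   ES    P
nu = np.array([[-1., -2.,  1.,  0.,  0.,  0.],   # A + 2B -> S
               [ 0.,  0., -1., -1.,  1.,  0.],   # E + S -> ES
               [ 0.,  0.,  1.,  1., -1.,  0.],   # ES -> E + S
               [ 0.,  0.,  0.,  1., -1.,  1.]])  # ES -> E + P

def rates(c, k1=1.0, kf=1.0, kr=0.5, kcat=0.1):
    a, b, s, e, es, p = c
    return np.array([k1 * a * b**2, kf * e * s, kr * es, kcat * es])

def dcdt(c):
    """r_j = sum_i nu_ij * rho_i, i.e. summing the columns of the matrix."""
    return nu.T @ rates(c)

print(dict(zip(components, dcdt(np.array([1.0, 1.0, 0.5, 0.2, 0.0, 0.0])))))
```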
Petersen matrix
[ "Chemistry" ]
331
[ "Biodegradable waste management", "Chemical processes", "Biodegradation", "nan", "Chemical process engineering" ]
37,120,076
https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20direct%20thrombin%20inhibitors
Direct thrombin inhibitors (DTIs) are a class of anticoagulant drugs that can be used to prevent and treat embolisms and blood clots caused by various diseases. They inhibit thrombin, a serine protease which affects the coagulation cascade in many ways. DTIs have undergone rapid development since the 1990s. With technological advances in genetic engineering, the production of recombinant hirudin was made possible, which opened the door to this new group of drugs. Before the use of DTIs, the therapy and prophylaxis for anticoagulation had stayed the same for over 50 years with the use of heparin derivatives and warfarin, which have some well-known disadvantages. DTIs are still under development, but the research focus has shifted towards factor Xa inhibitors, or even dual thrombin and fXa inhibitors that have a broader mechanism of action by inhibiting both factor IIa (thrombin) and factor Xa. A recent review of patents and literature on thrombin inhibitors has demonstrated that the development of allosteric and multi-mechanism inhibitors might lead the way to a safer anticoagulant. History Anticoagulation therapy has a long history. In 1884 John Berry Haycraft described a substance found in the saliva of leeches, Hirudo medicinalis, that had anticoagulant effects. He named the substance 'hirudine' from the Latin name. The use of medicinal leeches can be dated back all the way to ancient Egypt. In the early 20th century Jay McLean, L. Emmet Holt Jr. and William Henry Howell discovered the anticoagulant heparin, which they isolated from the liver (hepar). Heparin remains one of the most effective anticoagulants and is still used today, although it has its disadvantages, such as requiring intravenous administration and having a variable dose-response curve due to substantial protein binding. In the 1980s low-molecular-weight heparins (LMWH) were developed. They are derived from heparin by enzymatic or chemical depolymerization and have better pharmacokinetic properties than heparin. In 1955 the first clinical use of warfarin, a vitamin K antagonist, was reported. Warfarin was originally used as a rat poison in 1948 and was thought to be unsafe for humans, but a suicide attempt suggested that it was relatively safe. Vitamin K antagonists are the most commonly used oral anticoagulants today; warfarin was the 11th most prescribed drug in the United States in 1999 and is the most widely prescribed oral anticoagulant worldwide. Warfarin has its disadvantages though, just like heparin, such as a narrow therapeutic index and multiple food and drug interactions, and it requires routine anticoagulation monitoring and dose adjustment. Since both heparin and warfarin have their downsides, the search for alternative anticoagulants has been ongoing, and DTIs are proving to be worthy competitors. The first DTI was actually hirudin, which became more easily available with genetic engineering. It is now available in a recombinant form as lepirudin (Refludan) and desirudin (Revasc, Iprivask). Development of other DTIs followed with the hirudin analog bivalirudin, and then the small-molecule DTIs. However, these DTIs also had side effects such as bleeding complications and liver toxicity, and their long-term effects were in doubt. Mechanism of action Blood clotting cascade When a blood vessel ruptures or gets injured, factor VII comes into contact with tissue factor, which starts a process called the blood coagulation cascade. Its purpose is to stop bleeding and repair tissue damage. 
When this process is too active due to various problems, the risk of blood clots or embolisms increases. As the name indicates, the cascade is a multi-step process in which the main product, thrombin, is made by activating various proenzymes (mainly serine proteases) in each step of the cascade. Thrombin has multiple purposes, but mainly it converts soluble fibrinogen to an insoluble fibrin complex. Furthermore, it activates factors V, VIII and XI, all by cleaving the sequences GlyGlyGlyValArg-GlyPro and PhePheSerAlaArg-GlyHis, selectively between Arginine (Arg) and Glycine (Gly). These factors generate more thrombin. Thrombin also activates factor XIII, which stabilizes the fibrin complex and therefore the clot, and it stimulates platelets, which help with coagulation. Given this broad action, thrombin stands as a good drug target for anticoagulant drugs such as heparin, warfarin and DTIs, and for antiplatelet drugs like aspirin. Binding sites Thrombin is in the serine protease family. It has three binding domains to which thrombin-inhibiting drugs bind. These proteases have a deep, narrow groove as the active binding site, made up of two β-barrel subdomains, which binds substrate peptides. Access of molecules to the groove appears to be limited by steric hindrance; the binding site consists of three amino acids: Asp-102, His-57 and Ser-195. Thrombin also has two exosites (1 and 2). Thrombin differs somewhat from other serine proteases in that exosite 1 is anion-binding and binds fibrin and other similar substrates, while exosite 2 is a heparin-binding domain. DTIs inhibition DTIs inhibit thrombin in two ways: bivalent DTIs simultaneously block the active site and exosite 1 and act as competitive inhibitors of fibrin, while univalent DTIs block only the active site and can therefore inhibit both unbound and fibrin-bound thrombin. In contrast, heparin drugs bind in exosite 2 and form a bridge between thrombin and antithrombin, a natural anticoagulant substrate formed in the body, strongly catalyzing its function. But heparin can also form a bridge between thrombin and fibrin by binding to exosite 1, which protects the thrombin from the inhibiting function of the heparin–antithrombin complex and increases thrombin's affinity for fibrin. DTIs that bind to the anion-binding site have been shown to inactivate thrombin without disconnecting it from fibrin, which points to a separate binding site for fibrin. Unlike heparins, DTIs do not depend on cofactors such as antithrombin to inhibit thrombin, so they can inhibit both free (soluble) thrombin and fibrin-bound thrombin. The inhibition is either irreversible or reversible. Reversible inhibition is often linked to a lower risk of bleeding. Because of this, DTIs can be used both for prophylaxis and for the treatment of embolisms and clots. Active site's pockets DTIs that bind in the active site have to fit in the hydrophobic pocket (S1), which contains an aspartic acid residue at the bottom that recognizes the basic side chain of the substrate. The S2 site has a loop around tryptophan which occludes a hydrophobic pocket that can recognize larger aliphatic residues. The S3 site is flat, and the S4 site is hydrophobic, with tryptophan lined by leucine and isoleucine. Nα-(2-naphthyl-sulphonyl-glycyl)-DL-p-amidinophenylalanyl-piperidine (NAPAP) binds thrombin in the S1, S2 and S4 pockets. 
The amidine group on NAPAP forms a bidentate salt bridge with Asp deep in the S1 pocket, the piperidine group takes the role of a proline residue and binds in the S2 pocket, and the naphthyl rings of the molecule form a hydrophobic interaction with Trp in the S4 pocket. Pharmaceutical companies have used the structural knowledge of NAPAP to develop DTIs. Dabigatran, like NAPAP, binds to the S1, S2 and S4 pockets. The benzamidine group of dabigatran binds deep in the S1 pocket, the methylbenzimidazole fits in the hydrophobic S2 pocket, and the Ile and Leu at the bottom of the S4 pocket bind to the aromatic group of dabigatran. Drug development Hirudin derivatives Hirudin derivatives are all bivalent DTIs: they block both the active site and exosite 1 in an irreversible 1:1 stoichiometric complex. The active site is the binding site for the globular amino-terminal domain and exosite 1 is the binding site for the acidic carboxy-terminal domain of hirudin. Native hirudin, a 65-amino-acid polypeptide, is produced in the parapharyngeal glands of medicinal leeches. Hirudins today are produced by recombinant biotechnology using yeast. These recombinant hirudins lack a sulfate group at Tyr-63 and are therefore called desulfatohirudins. They have a 10-fold lower binding affinity to thrombin compared to native hirudin, but remain highly specific inhibitors of thrombin and have an inhibition constant for thrombin in the picomolar range. Renal clearance and degradation account for most of the systemic clearance of desulfatohirudins, and the drug accumulates in patients with chronic kidney disease. These drugs should not be used in patients with impaired renal function, since there is no specific antidote available to reverse the effects. Hirudins are given parenterally, usually by intravenous injection. 80% of hirudin is distributed in the extravascular compartment and only 20% is found in the plasma. The most common desulfatohirudins today are lepirudin and desirudin. Hirudin In a meta-analysis of 11 randomized trials involving hirudin and other DTIs versus heparin in the treatment of acute coronary syndrome (ACS), it was found that hirudin has a significantly higher incidence of bleeding compared with heparin. Hirudin is therefore not recommended for treatment of ACS and currently it has no clinical indications. Lepirudin Lepirudin is approved for the treatment of heparin-induced thrombocytopenia (HIT) in the USA, Canada, Europe and Australia. HIT is a very serious adverse event related to heparin and occurs with both unfractionated heparin and LMWH, although to a lesser extent with the latter. It is an immune-mediated, prothrombotic complication which results from a platelet-activating immune response triggered by the interaction of heparin with platelet factor 4 (PF4). The PF4-heparin complex can activate platelets and may cause venous and arterial thrombosis. When lepirudin binds to thrombin it hinders its prothrombotic activity. Three prospective studies, called the Heparin-Associated-Thrombocytopenia (HAT) 1, 2 and 3 studies, were performed that compared lepirudin with historical controls in the treatment of HIT. All three studies showed that the risk of new thrombosis was decreased with the use of lepirudin, but the risk for major bleeding was increased. The higher incidence of major bleeding is thought to be the result of an approved dosing regimen that was too high; consequently, the recommended dose was halved from the initial dose. 
As of May 2012, Bayer HealthCare, the only manufacturer of lepirudin injections, had discontinued its production and expected supplies from wholesalers to be depleted by mid-2013. Desirudin Desirudin is approved for treatment of venous thromboembolism (VTE) in Europe, and multiple phase III trials are ongoing in the USA. Two studies comparing desirudin with enoxaparin (an LMWH) or unfractionated heparin have been performed. In both studies desirudin was considered to be superior in preventing VTE. Desirudin also reduced the rate of proximal deep vein thrombosis. Bleeding rates were similar with desirudin and heparin. Bivalirudin Bivalirudin, a 20-amino-acid polypeptide, is a synthetic analog of hirudin. Like the hirudins, it is a bivalent DTI. It has an amino-terminal D-Phe-Pro-Arg-Pro domain that is linked via four Gly residues to a dodecapeptide analog of the carboxy-terminal domain of hirudin. The amino-terminal domain binds to the active site and the carboxy-terminal domain binds to exosite 1 on thrombin. Unlike with the hirudins, once bound, thrombin cleaves the Arg-Pro bond at the amino-terminus of bivalirudin, which restores the function of the enzyme's active site. Even though the carboxy-terminal domain of bivalirudin is still bound to exosite 1 on thrombin, the affinity of the bond is decreased after the amino-terminus is released. This allows substrates to compete with cleaved bivalirudin for access to exosite 1 on thrombin. The use of bivalirudin has mostly been studied in the setting of acute coronary syndrome. A few studies indicate that bivalirudin is non-inferior to heparin and that bivalirudin is associated with a lower rate of bleeding. Unlike the hirudins, bivalirudin is only partially (about 20%) excreted by the kidneys; other routes such as hepatic metabolism and proteolysis also contribute to its clearance, making it safer to use in patients with renal impairment; however, dose adjustments are needed in severe renal impairment. Small molecular direct thrombin inhibitors Small molecular direct thrombin inhibitors (smDTIs) are non-peptide small molecules that specifically and reversibly inhibit both free and clot-bound thrombin by binding to the active site of the thrombin molecule. They prevent VTE in patients undergoing hip- and knee-replacement surgery. The advantages of this type of DTI are that they do not need monitoring, have a wide therapeutic index, and offer the possibility of oral administration. They are theoretically more convenient than both vitamin K antagonists and LMWHs. Research will, however, have to establish their indications and safety. The smDTIs were derived using a peptidomimetic design, with the P1 residue taken either from arginine itself (e.g. argatroban) or from arginine-like substrates such as benzamidine (e.g. NAPAP). Argatroban Argatroban is a small univalent DTI formed from a P1 arginine residue. It binds to the active site on thrombin. The X-ray crystal structure shows that the piperidine ring binds in the S2 pocket and the guanidine group forms hydrogen bonds with Asp189 in the S1 pocket. It is given as an intravenous bolus because the highly basic guanidine group (pKa 13) prevents it from being absorbed from the gastrointestinal tract. The plasma half-life is approximately 45 minutes. As argatroban is metabolized via the hepatic pathway and is mainly excreted through the biliary system, dose adjustments are necessary in patients with hepatic impairment but not in those with renal impairment. 
Argatroban has been approved in the USA since 2000 for the treatment of thrombosis in patients with HIT, and since 2002 for anticoagulation in patients who have a history of HIT, or are at risk of HIT, and are undergoing percutaneous coronary intervention (PCI). It was first introduced in Japan in 1990 for treatment of peripheral vascular disorders. Ximelagatran The publication of the NAPAP-fIIa crystal structure triggered much research on thrombin inhibitors. NAPAP is an active site thrombin inhibitor. It fills the S3 and S2 pockets with its naphthalene and piperidine groups. AstraZeneca used the information to develop melagatran. The compound had poor oral availability, but after further modification a double prodrug, ximelagatran, was obtained, which became the first oral DTI in clinical trials. Ximelagatran was on the European market for approximately 20 months when it was suspended. Studies showed that treatment for over 35 days was linked with a risk of hepatic toxicity. It was never approved by the FDA. Dabigatran etexilate Researchers at Boehringer Ingelheim also used the published information about the NAPAP-fIIa crystal structure, starting from the NAPAP structure, which led to the discovery of dabigatran, a very polar compound and therefore not orally active. By masking the amidinium moiety as a carbamate-ester and turning the carboxylate into an ester they were able to make a prodrug called dabigatran etexilate, a highly lipophilic, gastrointestinally absorbed and orally bioavailable double prodrug, like ximelagatran, with a plasma half-life of approximately 12 hours. Dabigatran etexilate is rapidly absorbed, lacks interactions with cytochrome P450 enzymes and with food and other drugs, requires no routine monitoring, and has a broad therapeutic index and fixed-dose administration, giving it an excellent safety profile compared with warfarin. Unlike ximelagatran, long-term treatment with dabigatran etexilate has not been linked with hepatic toxicity, as the drug is predominantly eliminated (>80%) by the kidneys. Dabigatran etexilate was approved in Canada and Europe in 2008 for the prevention of VTE in patients undergoing hip and knee surgery. In October 2010 the US FDA approved dabigatran etexilate for the prevention of stroke in patients with atrial fibrillation (AF). Many pharmaceutical companies have attempted to develop orally bioavailable DTI drugs, but dabigatran etexilate is the only one to reach the market. In a 2012 meta-analysis dabigatran was associated with increased risk of myocardial infarction (MI) or ACS when tested against different controls in a broad spectrum of patients. Table 1: Summary of characteristics of DTIs iv: intravenous, sc: subcutaneous, HIT: heparin-induced thrombocytopenia, VTE: Venous thromboembolism, DVT: Deep vein thrombosis, PTCA: Percutaneous transluminal coronary angioplasty, PCI: percutaneous coronary intervention, FDA: Food and Drug Administration, AF: Atrial fibrillation, TI: Therapeutic index Status 2014 In 2014 dabigatran remains the only approved oral DTI and is therefore the only DTI alternative to the vitamin K antagonists. There are, however, some novel oral anticoagulant drugs that are currently in early and advanced stages of clinical development. Most of those drugs are in the class of direct factor Xa inhibitors, but there is one DTI called AZD0837, which is a follow-up compound of ximelagatran that is being developed by AstraZeneca. 
It is the prodrug of a potent, competitive, reversible inhibitor of free and fibrin-bound thrombin called ARH0637. The development of AZD0837 has been discontinued. Due to a limitation identified in the long-term stability of the extended-release AZD0837 drug product, a follow-up study from ASSURE on stroke prevention in patients with non-valvular atrial fibrillation was prematurely closed in 2010 after 2 years. There was also numerically higher mortality compared with warfarin. In a Phase 2 trial for AF, the mean serum creatinine concentration increased by about 10% from baseline in patients treated with AZD0837 and returned to baseline after cessation of therapy. Development of other oral DTIs, such as Sofigatran from Mitsubishi Tanabe Pharma, has been discontinued. Another strategy for developing oral anticoagulant drugs is that of dual thrombin and fXa inhibitors, which some pharmaceutical companies, including Boehringer Ingelheim, have reported on. These compounds show favorable anticoagulant activity in vitro. See also Anticoagulation Dabigatran Bivalirudin Warfarin Heparin References direct thrombin inhibitors Anticoagulants Direct thrombin inhibitors
Discovery and development of direct thrombin inhibitors
[ "Chemistry", "Biology" ]
4,490
[ "Drug discovery", "Life sciences industry", "Medicinal chemistry" ]
37,121,257
https://en.wikipedia.org/wiki/CryptoParty
CryptoParty (Crypto-Party) is a grassroots global endeavour to introduce the basics of practical cryptography such as the Tor anonymity network, I2P, Freenet, key signing parties, disk encryption and virtual private networks to the general public. The project primarily consists of a series of free public workshops. History As a successor to the Cypherpunks of the 1990s, CryptoParty was conceived in late August 2012 by the Australian journalist Asher Wolf in a Twitter post following the passing of the Cybercrime Legislation Amendment Bill 2011 and the proposal of a two-year data retention law in that country. The DIY, self-organizing movement immediately went viral, with a dozen autonomous CryptoParties being organized within hours in cities throughout Australia, the US, the UK, and Germany. Many more parties were soon organized or held in Chile, The Netherlands, Hawaii, Asia, etc. Tor usage in Australia itself spiked, and CryptoParty London with 130 attendees—some of whom were veterans of the Occupy London movement—had to be moved from London Hackspace to the Google campus in east London's Tech City. As of mid-October 2012 some 30 CryptoParties had been held globally, some on a continuing basis, and CryptoParties were held on the same day in Reykjavik, Brussels, and Manila. The first draft of the 442-page CryptoParty Handbook (the hard copy of which is available at cost) was pulled together in three days using the book sprint approach, and was released 2012-10-04 under a CC BY-SA license. Edward Snowden involvement In May 2014, Wired reported that Edward Snowden, while employed by Dell as an NSA contractor, organized a local CryptoParty at a small hackerspace in Honolulu, Hawaii on December 11, 2012, six months before becoming well known for leaking tens of thousands of secret U.S. government documents. During the CryptoParty, Snowden taught 20 Hawaii residents how to encrypt their hard drives and use the Internet anonymously. The event was filmed by Snowden's then-girlfriend, but the video has never been released online. In a follow-up post to the CryptoParty wiki, Snowden pronounced the event a "huge success." Media response CryptoParty received early messages of support from the Electronic Frontier Foundation and (purportedly) AnonyOps, as well as the NSA whistleblower Thomas Drake, WikiLeaks central editor Heather Marsh, and Wired reporter Quinn Norton. Eric Hughes, the author of A Cypherpunk's Manifesto nearly two decades before, delivered the keynote address, Putting the Personal Back in Personal Computers, at the Amsterdam CryptoParty on 2012-09-27. Marcin de Kaminski, a founding member of Piratbyrån, which in turn founded The Pirate Bay, regards CryptoParty as the most important civic project in cryptography today, and Cory Doctorow has characterized a CryptoParty as being "like a Tupperware party for learning crypto." A December 2014 article about the NSA mentioned "crypto parties" in the wake of the Edward Snowden leaks. See also Cyber self-defense References External links CryptoParty Wiki An Australian crypto primer preso Beginning of CryptoParty London's slideshow Eric Hughes's keynote address at the Amsterdam CryptoParty Asher Wolf on privacy concerns and the origin and spread of CryptoParty Anarchism Cryptography Crypto-anarchism Cypherpunks Internet privacy 21st-century social movements Internet activism Mass surveillance
CryptoParty
[ "Mathematics", "Engineering" ]
756
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
37,123,010
https://en.wikipedia.org/wiki/Matroid%20rank
In the mathematical theory of matroids, the rank of a matroid is the maximum size of an independent set in the matroid. The rank of a subset S of elements of the matroid is, similarly, the maximum size of an independent subset of S, and the rank function of the matroid maps sets of elements to their ranks. The rank function is one of the fundamental concepts of matroid theory via which matroids may be axiomatized. Matroid rank functions form an important subclass of the submodular set functions. The rank functions of matroids defined from certain other types of mathematical object such as undirected graphs, matrices, and field extensions are important within the study of those objects. Examples In all examples, E is the base set of the matroid, and B is some subset of E. Let M be the free matroid, where the independent sets are all subsets of E. Then the rank function of M is simply: r(B) = |B|. Let M be a uniform matroid, where the independent sets are the subsets of E with at most k elements, for some integer k. Then the rank function of M is: r(B) = min(k, |B|). Let M be a partition matroid: the elements of E are partitioned into categories, each category c has capacity kc, and the independent sets are those containing at most kc elements of category c. Then the rank function of M is: r(B) = Σc min(kc, |Bc|), where Bc is the subset of B contained in category c. Let M be a graphic matroid, where the independent sets are all the acyclic edge-sets (forests) of some fixed undirected graph G. Then the rank function r(B) is the number of vertices in the graph, minus the number of connected components of the subgraph with edge set B (including single-vertex components). Properties and axiomatization The rank function of a matroid obeys the following properties. (R1) The value of the rank function is always a non-negative integer and the rank of the empty set is 0. (R2) For any two subsets A and B of E, r(A ∪ B) + r(A ∩ B) ≤ r(A) + r(B). That is, the rank is a submodular set function. (R3) For any set A and element x, r(A) ≤ r(A ∪ {x}) ≤ r(A) + 1. These properties may be used as axioms to characterize the rank function of matroids: every integer-valued submodular set function r on the subsets of a finite set E with r(∅) = 0 that obeys the inequalities r(A) ≤ r(A ∪ {x}) ≤ r(A) + 1 for all A ⊆ E and x ∈ E is the rank function of a matroid. The above properties imply additional properties: If A ⊆ B ⊆ E, then r(A) ≤ r(B) ≤ r(E). That is, the rank is a monotonic function. r(A) ≤ |A| for every subset A of E. Other matroid properties from rank The rank function may be used to determine the other important properties of a matroid: A set is independent if and only if its rank equals its cardinality, and dependent if and only if it has greater cardinality than rank. A nonempty set is a circuit if its cardinality equals one plus its rank and every subset formed by removing one element from the set has rank equal to that of the whole set. A set is a basis if its rank equals both its cardinality and the rank of the matroid. A set is closed if it is maximal for its rank, in the sense that there does not exist another element that can be added to it while maintaining the same rank. The difference |A| − r(A) is called the nullity of the subset A. It is the minimum number of elements that must be removed from A to obtain an independent set. The corank of a subset A can refer to at least two different quantities: some authors use it to refer to the rank of A in the dual matroid, r*(A), while other authors use corank to refer to the difference r(E) − r(A). 
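As a concrete illustration of the example rank functions above, the following short Python sketch (not part of the original article; the function names and the toy inputs are illustrative assumptions) computes the uniform, partition, and graphic matroid ranks; the graphic case uses a simple union–find to count connected components:

    from collections import defaultdict

    def uniform_rank(B, k):
        # Uniform matroid: r(B) = min(k, |B|)
        return min(k, len(B))

    def partition_rank(B, category, capacity):
        # Partition matroid: r(B) = sum over categories c of min(k_c, |B_c|)
        counts = defaultdict(int)
        for x in B:
            counts[category[x]] += 1
        return sum(min(capacity[c], n) for c, n in counts.items())

    def graphic_rank(edges, num_vertices):
        # Graphic matroid: r(B) = (number of vertices) minus the number of
        # connected components of the subgraph with edge set B
        # (isolated vertices count as components).
        parent = list(range(num_vertices))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        components = num_vertices
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
        return num_vertices - components

    # Example: a triangle on vertices 0, 1, 2 plus an isolated vertex 3.
    print(graphic_rank([(0, 1), (1, 2), (0, 2)], 4))   # 2: a spanning forest of the triangle has 2 edges
    print(uniform_rank({1, 2, 3, 4}, k=2))             # 2
    print(partition_rank({"a", "b", "c"},
                         category={"a": 0, "b": 0, "c": 1},
                         capacity={0: 1, 1: 5}))       # 1 + 1 = 2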
Ranks of special matroids In graph theory, the circuit rank (or cyclomatic number) of a graph is the corank of the associated graphic matroid; it measures the minimum number of edges that must be removed from the graph to make the remaining edges form a forest. Several authors have studied the parameterized complexity of graph algorithms with this number as a parameter. In linear algebra, the rank of a linear matroid defined by linear independence from the columns of a matrix is the rank of the matrix, and it is also the dimension of the vector space spanned by the columns. In abstract algebra, the rank of a matroid defined from sets of elements in a field extension L/K by algebraic independence is known as the transcendence degree. Matroid rank functions as utility functions Matroid rank functions (MRFs) have been used to represent utility functions of agents in problems of fair item allocation. If the utility function of the agent is an MRF, it means that: The agent's utility has diminishing returns (this follows from the fact that the MRF is a submodular function); The agent's marginal utility for each item is dichotomous (binary) - either 0 or 1. That is, adding an item to a bundle either adds no utility, or adds a utility of 1. The following solutions are known for this setting: Babaioff, Ezra and Feige design a deterministic polynomial-time truthful mechanism called Prioritized Egalitarian, that outputs a Lorenz dominating allocation, which is consequently also EFX0, maximizes the product of utilities, attains a 1/2-fraction of the maximin share, and attains the full maximin share when the valuations are additive. With random priorities, this mechanism is also ex-ante envy-free. They also study e-dichotomous valuations, in which the marginal utility is either non-positive or in the range [1,1+e]. Benabbou, Chakraborty, Igarashi and Zick show that, in this setting, every Pareto-optimal allocation maximizes the sum of utilities (the utilitarian welfare), the set of allocations that maximize a symmetric strictly-concave function f over all max-sum allocations does not depend on the choice of f, and all these f-maximizing allocations are EF1. This implies that the max-product allocations are the leximin-optimal allocations, and they are all max-sum and EF1. They also present a polynomial-time algorithm that computes a max-sum and EF1 allocation (which does not necessarily maximize a concave function), and a polynomial-time algorithm that maximizes a concave function for the special case of MRFs based on maximum-cardinality matching in bipartite graphs. The matroid-rank functions are a subclass of the gross substitute valuations. See also Rank oracle References Dimension Rank
Matroid rank
[ "Physics", "Mathematics" ]
1,389
[ "Geometric measurement", "Physical quantities", "Combinatorics", "Theory of relativity", "Dimension", "Matroid theory" ]
1,427,054
https://en.wikipedia.org/wiki/European%20Medicines%20Agency
The European Medicines Agency (EMA) is an agency of the European Union (EU) in charge of the evaluation and supervision of pharmaceutical products. Prior to 2004, it was known as the European Agency for the Evaluation of Medicinal Products or European Medicines Evaluation Agency (EMEA). The EMA was set up in 1995, with funding from the European Union and the pharmaceutical industry, as well as indirect subsidy from member states, its stated intention to harmonise (but not replace) the work of existing national medicine regulatory bodies. The hope was that this plan would not only reduce the €350 million annual cost drug companies incurred by having to win separate approvals from each member state but also that it would eliminate the protectionist tendencies of sovereign states unwilling to approve new drugs that might compete with those already produced by domestic drug companies. The EMA was founded after more than seven years of negotiations among EU governments and replaced the Committee for Proprietary Medicinal Products and the Committee for Veterinary Medicinal Products, though both of these were reborn as the core scientific advisory committees. The agency was located in London prior to the United Kingdom's vote for withdrawal from the European Union, relocating to Amsterdam in March 2019. Operations The European Medicines Agency (EMA) operates as a decentralised scientific agency (as opposed to a regulatory authority) of the European Union (EU) and its main responsibility is the protection and promotion of public and animal health, through the evaluation and supervision of medicines for human and veterinary use. More specifically, it coordinates the evaluation and monitoring of centrally authorised products and national referrals, develops technical guidance and provides scientific advice to sponsors. Its scope of operations is medicinal products for human and veterinary use including biologics and advanced therapies, and herbal medicinal products. The agency is composed of the Secretariat (ca. 600 staff), a management board, seven scientific committees (human, veterinary and herbal medicinal products, orphan drugs, paediatrics, advanced therapies and pharmacovigilance risk assessment) and a number of scientific working parties. The Secretariat is organised into five units: Directorate, Human Medicines Development and Evaluation, Patient Health Protection, Veterinary Medicines and Product Data Management, Information and Communications Technology and Administration. The Management Board provides administrative oversight to the Agency: including approval of budgets and plans, and selection of executive director. The Board includes one representative of each of the 27 Member States, two representatives of the European Commission, two representatives of the European Parliament, two representatives of patients' organisations, one representative of doctors' organisations and one representative of veterinarians' organisations. The Agency decentralises its scientific assessment of medicines by working through a network of about 4500 experts throughout the EU. The EMA draws on resources of over 40 National Competent Authorities (NCAs) of EU Member states. The EMA additionally engages with international agencies and non-governmental organizations on areas of mutual interest, such as its participation on the Coalition for Epidemic Preparedness Innovations' Joint Coordination Group. 
It is also a benefactor of Health Level Seven International, a member of the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use and the International Pharmaceutical Regulators Programme (IPRP), and a partner of the Society for Immunotherapy of Cancer and Vaccine Confidence Project. Committees Medicinal products for human use A single evaluation is carried out through the Committee for Medicinal Products for Human Use (CHMP). If the Committee concludes that the quality, safety and efficacy of the medicinal product is sufficiently proven, it adopts a positive opinion. This is sent to the European Commission to be transformed into a marketing authorisation valid for the whole of the EU. A special type of approval is the paediatric-use marketing authorisation (PUMA), which can be granted for medical products intended exclusively for paediatric use. The CHMP is obliged by the regulation to reach decisions within 210 days, though the clock is stopped if it is necessary to ask the applicant for clarification or further supporting data. The review process of the European Medicines Agency regarding medical issues has been criticized for its lack of transparency. In a rebuttal of an EMS review that included her work, Louise Brinth, a Danish physician, noted that "experts" reviewing data remain unnamed and seem to be bound to secrecy. Minutes are not released and diverging opinions are not reported suggesting that all the "experts" are of the same opinion. In her view the process is unscientific and undemocratic. Medicinal products for veterinary use The Committee for Medicinal Products for Veterinary Use (CVMP) operates in analogy to the CHMP as described above. Orphan medicinal products The Committee on Orphan Medicinal Products (COMP) administers the granting of orphan drug status since 2000. Companies intending to develop medicinal products for the diagnosis, prevention or treatment of life-threatening or very serious conditions that affect not more than five in 10,000 persons in the European Union can apply for 'orphan medicinal product designation'. The COMP evaluates the application and makes a recommendation for the designation which is then granted by the European Commission. Herbal medicinal products The Committee on Herbal Medicinal Products (HMPC) assists the harmonisation of procedures and provisions concerning herbal medicinal products laid down in EU Member States, and further integrating herbal medicinal products in the European regulatory framework since 2004. Paediatry The Paediatric Committee (PDCO) deals with the implementation of the paediatric legislation in Europe Regulation (EC) No 1901/2006 since 2007. Under this legislation, all applications for marketing authorisation of new medicinal products, or variations to existing authorisations, have to either include data from paediatric studies previously agreed with the PDCO, or obtain a PDCO waiver or a deferral of these studies. Advanced therapies The Committee for Advanced Therapies (CAT) was established in accordance with Regulation (EC) No 1394/2007 on advanced-therapy medicinal products (ATMPs) such as gene therapy, somatic cell therapy and tissue engineered products. It assesses the quality, safety and efficacy of ATMPs, and follows scientific developments in the field. 
Pharmacovigilance risk assessment A seventh committee, the Pharmacovigilance Risk Assessment Committee (PRAC) has come into function in 2012 with the implementation of the new EU pharmacovigilance legislation (Directive 2010/84/EU). Other activities The Agency carries out a number of activities, including: Pharmacovigilance: The Agency constantly monitors the safety of medicines through a pharmacovigilance network and EudraVigilance, so that it can take appropriate actions if adverse drug reaction reports suggest that the benefit-risk balance of a medicine has changed since it was authorised. Referrals: The Agency coordinates arbitration procedures relating to medicinal products that are approved or under consideration by Member States in non-centralized authorisation procedures. Scientific Advice: Companies wishing to receive scientific advice from the CHMP or CVMP on the appropriate tests and studies to carry out in the development of a medicinal products can request it prior to or during the development program. Telematics projects: The Agency is responsible for implementing a central set of pan-European systems and databases such as EudraVigilance, EudraCT and EudraPharm. Centralised marketing authorisations The centralised procedure allows companies to submit a single application to the agency to obtain from the European Commission a centralised (or "community") marketing authorisation (MA) valid in all European Union member states and in Iceland, Liechtenstein and Norway. The centralised procedure is compulsory for all medicines derived from biotechnology and other high-tech processes, as well as for human medicines for the treatment of HIV/AIDS, cancer, diabetes, neurodegenerative diseases, auto-immune and other immune dysfunctions, and viral diseases, and for veterinary medicines for use for growth or yield enhancers. It is also compulsory for advanced-therapy medicines such as gene-therapy, somatic cell-therapy or tissue-engineered medicines and for orphan medicines (for rare diseases). The centralised procedure is also open to products that bring a significant therapeutic, scientific or technical innovation, or is in any other respect in the interest of patient or animal health. As a result, the majority of genuinely novel medicines are authorised through the EMA. For products eligible for or requiring centralised approval, a company submits an application for a marketing authorisation to the EMA. History 1995-2004: Inception The EMA was set up in 1995, with funding from the European Union and the pharmaceutical industry, as well as indirect subsidy from member states, its stated intention to harmonise (but not replace) the work of existing national medicine regulatory bodies. The hope was that this plan would not only reduce the €350 million annual cost drug companies incurred by having to win separate approvals from each member state but also that it would eliminate the protectionist tendencies of Sovereign states unwilling to approve new drugs that might compete with those already produced by domestic drug companies. The EMA was founded after more than seven years of negotiations among EU governments and replaced the Committee for Proprietary Medicinal Products and the Committee for Veterinary Medicinal Products, though both of these were reborn as the core scientific advisory committees. The agency was located in London prior to the United Kingdom's vote for withdrawal from the European Union, relocating to Amsterdam in March 2019. 
2004: Renaming Prior to 2004, it was known as the European Agency for the Evaluation of Medicinal Products or European Medicines Evaluation Agency (EMEA). The EMA contributed to the Global Vaccine Action Plan developed by the Decade of Vaccines Collaboration, endorsed by the 194 Member States of the World Health Assembly in May 2012, and published on the World Health Organization's website in February 2013. 2019: Relocation Following the 2016 decision of the United Kingdom to leave the European Union ("Brexit"), the EMA chose to search for another base of operations. According to EU Law the European Commission had to decide on the fate of the EMA's location. The EU ministers met to vote on their preferred successor. The EU's Health Commissioner Vytenis Andriukaitis said that the preferred choice would be a location where an "easy set up and guarantee of smooth operations" would be available. Member states who had expressed their bid for the new EMA location were Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Malta, the Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, and Sweden (or in other words all remaining member countries except for the Baltic States and Luxembourg). It had also been speculated that the Strasbourg-based seat for the European Parliament could be moved to Brussels, in exchange for the city to host the EMA. Others speculated on the merits of Amsterdam, well before the final decision was made. The decision on the relocation was made on 20 November 2017, during the EU General Affairs Council meeting, after three voting rounds and finally drawing of lots. After the first round of voting, Milan (25 votes), Amsterdam (20 votes) and Copenhagen (20 votes) were the only contenders left. After the second voting round, two cities were left: Milan (twelve votes) and Amsterdam (nine votes). These two cities tied in the subsequent vote (thirteen votes each), after which a drawing of lots identified Amsterdam as the host city of EMA. EMA staff left its London premises in March 2019 to relocate to a temporary building in Amsterdam, and by January 2020 the relocation to the permanent building in Amsterdam Zuidas district was finalised. 2020: COVID-19 The EMA played a significant role in the response to the COVID-19 pandemic in the European Union, seeking to expedite the development and approval of COVID-19 vaccines and treatments. It participated in the Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) public–private partnership hosted by the Foundation for the National Institutes of Health, collaborating with international government agencies and corporations to coordinate a research strategy for prioritizing and speeding up development of COVID-19 vaccines and pharmaceutical products. While in the process of evaluating the Pfizer–BioNTech COVID-19 vaccine in December 2020, the EMA suffered a cyberattack, resulting in the leak of classified regulatory documents to journalists, academics and the public via the dark web. The documents revealed internal concerns about low production quality in the mRNA vaccine candidate, and regulators' efforts to have Pfizer and BioNTech rectify these deficiencies. The EMA ultimately authorized the vaccine on 21 December 2020, satisfied that the product quality was "sufficiently consistent and acceptable." Comparison with other medical regulatory agencies As of 2016, the EMA was roughly parallel to the drug part of the U.S. 
Food and Drug Administration (FDA), but without centralisation. The timetable for product approval via the EMA's centralised procedure of 210 days compares well with the average of 500 days taken by the FDA in 2008 to evaluate a product. See also Ethics Committee EudraCT EudraGMP EudraLex EUDRANET EudraPharm EudraVigilance European and Developing Countries Clinical Trials Partnership European Centre for Disease Prevention and Control (ECDC) European Clinical Research Infrastructures Network European Federation of Pharmaceutical Industries and Associations European Forum for Good Clinical Practice (EFGCP) ICH Inverse benefit law Medicines and Healthcare products Regulatory Agency (MHRA, UK) Qualified person Regulation of therapeutic goods Supplementary protection certificate (SPC) References Further reading External links EMA Annual Report 2018 Heads of Medicines Agencies The Rules Governing Medicinal Products in the European Union (EudraLex) Health-EU Portal official public health portal of the European Union 1993 in the European Union Agencies of the European Union European medical and health organizations European Union health policy Government agencies established in 1993 International organisations based in the Netherlands Medical and health organisations based in the Netherlands National agencies for drug regulation Organisations based in Amsterdam Regulators of biotechnology products Regulation in the European Union
European Medicines Agency
[ "Chemistry", "Biology" ]
2,867
[ "Biotechnology products", "Regulation of biotechnologies", "National agencies for drug regulation", "Regulators of biotechnology products", "Drug safety" ]
1,427,634
https://en.wikipedia.org/wiki/Rate-determining%20step
In chemical kinetics, the overall rate of a reaction is often approximately determined by the slowest step, known as the rate-determining step (RDS or RD-step or r/d step) or rate-limiting step. For a given reaction mechanism, the prediction of the corresponding rate equation (for comparison with the experimental rate law) is often simplified by using this approximation of the rate-determining step. In principle, the time evolution of the reactant and product concentrations can be determined from the set of simultaneous rate equations for the individual steps of the mechanism, one for each step. However, the analytical solution of these differential equations is not always easy, and in some cases numerical integration may even be required. The hypothesis of a single rate-determining step can greatly simplify the mathematics. In the simplest case the initial step is the slowest, and the overall rate is just the rate of the first step. Also, the rate equations for mechanisms with a single rate-determining step are usually in a simple mathematical form, whose relation to the mechanism and choice of rate-determining step is clear. The correct rate-determining step can be identified by predicting the rate law for each possible choice and comparing the different predictions with the experimental law, as for the example of NO2 and CO below. The concept of the rate-determining step is very important to the optimization and understanding of many chemical processes such as catalysis and combustion. Example reaction: NO2 + CO As an example, consider the gas-phase reaction NO2 + CO → NO + CO2. If this reaction occurred in a single step, its reaction rate (r) would be proportional to the rate of collisions between NO2 and CO molecules: r = k[NO2][CO], where k is the reaction rate constant, and square brackets indicate a molar concentration. Another typical example is the Zel'dovich mechanism. First step rate-determining In fact, however, the observed reaction rate is second-order in NO2 and zero-order in CO, with rate equation r = k[NO2]². This suggests that the rate is determined by a step in which two NO2 molecules react, with the CO molecule entering at another, faster, step. A possible mechanism in two elementary steps that explains the rate equation is: NO2 + NO2 → NO + NO3 (slow step, rate-determining) NO3 + CO → NO2 + CO2 (fast step) In this mechanism the reactive intermediate species NO3 is formed in the first step with rate r1 and reacts with CO in the second step with rate r2. However, NO3 can also react with NO if the first step occurs in the reverse direction (NO + NO3 → 2 NO2) with rate r−1, where the minus sign indicates the rate of a reverse reaction. The concentration of a reactive intermediate such as [NO3] remains low and almost constant. It may therefore be estimated by the steady-state approximation, which specifies that the rate at which it is formed equals the (total) rate at which it is consumed. In this example NO3 is formed in one step and reacts in two, so that r1 − r−1 − r2 ≈ 0. The statement that the first step is the slow step actually means that the first step in the reverse direction is slower than the second step in the forward direction, so that almost all NO3 is consumed by reaction with CO and not with NO. That is, r−1 ≪ r2, so that r1 − r2 ≈ 0. But the overall rate of reaction is the rate of formation of final product (here CO2), so that r = r2 ≈ r1. That is, the overall rate is determined by the rate of the first step, and (almost) all molecules that react at the first step continue to the fast second step. 
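The rate-determining-step approximation for this mechanism can be checked numerically. The following sketch (not part of the original article; the rate constants are arbitrary illustrative values and SciPy is assumed to be available) integrates the three elementary rate equations and compares the observed rate of CO2 formation with the prediction r ≈ k1[NO2]²:

    from scipy.integrate import solve_ivp

    k1, km1, k2 = 1.0e-3, 1.0, 1.0e3   # arbitrary units; k2 >> km1 makes step 1 rate-determining

    def rhs(t, y):
        no2, no3, no, co, co2 = y
        r1  = k1  * no2**2        # NO2 + NO2 -> NO + NO3
        rm1 = km1 * no * no3      # NO  + NO3 -> 2 NO2
        r2  = k2  * no3 * co      # NO3 + CO  -> NO2 + CO2
        return [-2*r1 + 2*rm1 + r2,   # d[NO2]/dt
                 r1 - rm1 - r2,       # d[NO3]/dt
                 r1 - rm1,            # d[NO]/dt
                -r2,                  # d[CO]/dt
                 r2]                  # d[CO2]/dt

    y0 = [1.0, 0.0, 0.0, 1.0, 0.0]    # initial [NO2], [NO3], [NO], [CO], [CO2]
    sol = solve_ivp(rhs, (0.0, 10.0), y0, method="LSODA", rtol=1e-8, atol=1e-12)

    no2, no3, co = sol.y[0, -1], sol.y[1, -1], sol.y[3, -1]
    observed  = k2 * no3 * co          # actual d[CO2]/dt at the final time
    predicted = k1 * no2**2            # rate-determining-step approximation
    print(f"observed rate {observed:.3e}  vs  k1*[NO2]^2 = {predicted:.3e}")

With these values the two printed rates agree closely, because almost every NO3 formed in the slow first step is swept up by the fast reaction with CO.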
Pre-equilibrium: if the second step were rate-determining The other possible case would be that the second step is slow and rate-determining, meaning that it is slower than the first step in the reverse direction: r2 ≪ r−1. In this hypothesis, r1 − r−1 ≈ 0, so that the first step is (almost) at equilibrium. The overall rate is determined by the second step: r = r2 ≪ r1, as very few molecules that react at the first step continue to the second step, which is much slower. Such a situation in which an intermediate (here NO3) forms an equilibrium with reactants prior to the rate-determining step is described as a pre-equilibrium. For the reaction of NO2 and CO, this hypothesis can be rejected, since it implies a rate equation that disagrees with experiment. NO2 + NO2 → NO + NO3 (fast step) NO3 + CO → NO2 + CO2 (slow step, rate-determining) If the first step were at equilibrium, then its equilibrium constant expression permits calculation of the concentration of the intermediate NO3 in terms of more stable (and more easily measured) reactant and product species: [NO3] = K1[NO2]²/[NO]. The overall reaction rate would then be r = r2 = k2[NO3][CO] = k2K1[NO2]²[CO]/[NO], which disagrees with the experimental rate law given above, and so disproves the hypothesis that the second step is rate-determining for this reaction. However, some other reactions are believed to involve rapid pre-equilibria prior to the rate-determining step, as shown below. Nucleophilic substitution Another example is the unimolecular nucleophilic substitution (SN1) reaction in organic chemistry, where it is the first, rate-determining step that is unimolecular. A specific case is the basic hydrolysis of tert-butyl bromide ((CH3)3CBr) by aqueous sodium hydroxide. The mechanism has two steps (where R denotes the tert-butyl group (CH3)3C): Formation of a carbocation R−Br → R+ + Br−. Nucleophilic attack by hydroxide ion R+ + OH− → ROH. This reaction is found to be first-order with r = k[R−Br], which indicates that the first step is slow and determines the rate. The second step with OH− is much faster, so the overall rate is independent of the concentration of OH−. In contrast, the alkaline hydrolysis of methyl bromide (CH3Br) is a bimolecular nucleophilic substitution (SN2) reaction in a single bimolecular step. Its rate law is second-order: r = k[R−Br][OH−]. Composition of the transition state A useful rule in the determination of mechanism is that the concentration factors in the rate law indicate the composition and charge of the activated complex or transition state. For the NO2–CO reaction above, the rate depends on [NO2]², so that the activated complex has composition N2O4, with 2 NO2 entering the reaction before the transition state, and CO reacting after the transition state. A multistep example is the reaction between oxalic acid and chlorine in aqueous solution: H2C2O4 + Cl2 → 2 CO2 + 2 H+ + 2 Cl−. The observed rate law is r = k[Cl2][H2C2O4]/([H+]²[Cl−]), which implies an activated complex in which the reactants lose 2 H+ + Cl− before the rate-determining step. The formula of the activated complex is Cl2 + H2C2O4 − 2 H+ − Cl− + x H2O, or C2O4Cl−(H2O)x (an unknown number of water molecules are added because the possible dependence of the reaction rate on H2O was not studied, since the data were obtained in water solvent at a large and essentially unvarying concentration). One possible mechanism in which the preliminary steps are assumed to be rapid pre-equilibria occurring prior to the transition state is Cl2 + H2O ⇌ HOCl + Cl− + H+ H2C2O4 ⇌ H+ + HC2O4− HC2O4− + HOCl → H2O + Cl− + 2 CO2 Reaction coordinate diagram In a multistep reaction, the rate-determining step does not necessarily correspond to the highest Gibbs energy on the reaction coordinate diagram. 
If there is a reaction intermediate whose energy is lower than the initial reactants, then the activation energy needed to pass through any subsequent transition state depends on the Gibbs energy of that state relative to the lower-energy intermediate. The rate-determining step is then the step with the largest Gibbs energy difference relative either to the starting material or to any previous intermediate on the diagram. Also, for reaction steps that are not first-order, concentration terms must be considered in choosing the rate-determining step. Chain reactions Not all reactions have a single rate-determining step. In particular, the rate of a chain reaction is usually not controlled by any single step. Diffusion control In the previous examples the rate-determining step was one of the sequential chemical reactions leading to a product. The rate-determining step can also be the transport of reactants to where they can interact and form the product. This case is referred to as diffusion control and, in general, occurs when the formation of product from the activated complex is very rapid and thus the supply of reactants is rate-determining. See also Product-determining step Rate-limiting step (biochemistry) References Chemical kinetics
Rate-determining step
[ "Chemistry" ]
1,760
[ "Chemical reaction engineering", "Chemical kinetics" ]
1,427,685
https://en.wikipedia.org/wiki/Heterogeneous%20catalysis
Heterogeneous catalysis is catalysis where the phase of catalysts differs from that of the reagents or products. The process contrasts with homogeneous catalysis where the reagents, products and catalyst exist in the same phase. Phase distinguishes not only between solid, liquid, and gas components, but also between immiscible mixtures (e.g., oil and water), or anywhere an interface is present. Heterogeneous catalysis typically involves solid-phase catalysts and gas-phase reactants. In this case, there is a cycle of molecular adsorption, reaction, and desorption occurring at the catalyst surface. Thermodynamics, mass transfer, and heat transfer influence the rate (kinetics) of reaction. Heterogeneous catalysis is very important because it enables faster, large-scale production and selective product formation. Approximately 35% of the world's GDP is influenced by catalysis. The production of 90% of chemicals (by volume) is assisted by solid catalysts. The chemical and energy industries rely heavily on heterogeneous catalysis. For example, the Haber–Bosch process uses metal-based catalysts in the synthesis of ammonia, an important component in fertilizer; 144 million tons of ammonia were produced in 2016. Adsorption Adsorption is an essential step in heterogeneous catalysis. Adsorption is the process by which a gas (or solution) phase molecule (the adsorbate) binds to solid (or liquid) surface atoms (the adsorbent). The reverse of adsorption is desorption, the adsorbate splitting from the adsorbent. In a reaction facilitated by heterogeneous catalysis, the catalyst is the adsorbent and the reactants are the adsorbate. Types of adsorption Two types of adsorption are recognized: physisorption, weakly bound adsorption, and chemisorption, strongly bound adsorption. Many processes in heterogeneous catalysis lie between the two extremes. The Lennard-Jones model provides a basic framework for predicting molecular interactions as a function of atomic separation. Physisorption In physisorption, a molecule becomes attracted to the surface atoms via van der Waals forces. These include dipole-dipole interactions, induced dipole interactions, and London dispersion forces. Note that no chemical bonds are formed between adsorbate and adsorbent, and their electronic states remain relatively unperturbed. Typical energies for physisorption are from 3 to 10 kcal/mol. In heterogeneous catalysis, when a reactant molecule physisorbs to a catalyst, it is commonly said to be in a precursor state, an intermediate energy state before chemisorption, a more strongly bound adsorption. From the precursor state, a molecule can either undergo chemisorption, desorption, or migration across the surface. The nature of the precursor state can influence the reaction kinetics. Chemisorption When a molecule approaches close enough to surface atoms such that their electron clouds overlap, chemisorption can occur. In chemisorption, the adsorbate and adsorbent share electrons, signifying the formation of chemical bonds. Typical energies for chemisorption range from 20 to 100 kcal/mol. Two cases of chemisorption are: Molecular adsorption: the adsorbate remains intact. An example is alkene binding by platinum. Dissociative adsorption: one or more bonds break concomitantly with adsorption. In this case, the barrier to dissociation affects the rate of adsorption. An example of this is the binding of H2 to a metal catalyst, where the H-H bond is broken upon adsorption. 
Surface reactions Most metal surface reactions occur by chain propagation in which catalytic intermediates are cyclically produced and consumed. Two main mechanisms for surface reactions can be described for A + B → C. Langmuir–Hinshelwood mechanism: The reactant molecules, A and B, both adsorb to the catalytic surface. While adsorbed to the surface, they combine to form product C, which then desorbs. Eley–Rideal mechanism: One reactant molecule, A, adsorbs to the catalytic surface. Without adsorbing, B reacts with adsorbed A to form C, which then desorbs from the surface. Most heterogeneously catalyzed reactions are described by the Langmuir–Hinshelwood model. In heterogeneous catalysis, reactants diffuse from the bulk fluid phase to adsorb to the catalyst surface. The adsorption site is not always an active catalyst site, so reactant molecules must migrate across the surface to an active site. At the active site, reactant molecules react to form product molecule(s) by following a more energetically facile path through catalytic intermediates. The product molecules then desorb from the surface and diffuse away. The catalyst itself remains intact and free to mediate further reactions. Transport phenomena, such as heat and mass transfer, also play a role in the observed reaction rate. Catalyst design Catalysts are not active towards reactants across their entire surface; only specific locations possess catalytic activity, called active sites. The surface area of a solid catalyst has a strong influence on the number of available active sites. In industrial practice, solid catalysts are often porous to maximize surface area, commonly achieving 50–400 m2/g. Some mesoporous silicates, such as MCM-41, have surface areas greater than 1000 m2/g. Porous materials are cost effective due to their high surface area-to-mass ratio and enhanced catalytic activity. In many cases, a solid catalyst is dispersed on a supporting material to increase surface area (spread the number of active sites) and provide stability. Usually catalyst supports are inert, high-melting-point materials, but they can also be catalytic themselves. Most catalyst supports are porous (frequently carbon, silica, zeolite, or alumina-based) and chosen for their high surface area-to-mass ratio. For a given reaction, porous supports must be selected such that reactants and products can enter and exit the material. Often, substances are intentionally added to the reaction feed or on the catalyst to influence catalytic activity, selectivity, and/or stability. These compounds are called promoters. For example, alumina (Al2O3) is added during ammonia synthesis to provide greater stability by slowing sintering processes on the Fe catalyst. The Sabatier principle can be considered one of the cornerstones of the modern theory of catalysis. It states that the surface–adsorbate interaction has to be of an optimal strength: not so weak that the surface is inert toward the reactants, and not so strong that the surface is poisoned because the products cannot desorb. The statement that the surface–adsorbate interaction has to be an optimum is a qualitative one. Usually the number of adsorbates and transition states associated with a chemical reaction is large, so the optimum has to be found in a many-dimensional space. Catalyst design in such a many-dimensional space is not a computationally viable task. Additionally, such an optimization process would be far from intuitive. 
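As a toy illustration of this optimum (a sketch, not taken from the article; the Langmuir–Hinshelwood rate expression used here is the standard textbook form and the numerical values are arbitrary assumptions), the following Python snippet varies the adsorption equilibrium constant of reactant A in the rate law r = k·θA·θB with competitive Langmuir adsorption; the computed rate rises and then falls as A binds more strongly, the volcano-shaped behaviour discussed below:

    def lh_rate(kA, kB=1.0, pA=1.0, pB=1.0, k=1.0):
        # Langmuir-Hinshelwood rate for A + B -> C on a surface:
        # both reactants compete for the same sites, so the coverages
        # theta_A and theta_B follow competitive Langmuir adsorption.
        theta_A = kA * pA / (1.0 + kA * pA + kB * pB)
        theta_B = kB * pB / (1.0 + kA * pA + kB * pB)
        return k * theta_A * theta_B

    # Sweep the adsorption strength of A over several orders of magnitude.
    for kA in [0.01, 0.1, 1.0, 2.0, 10.0, 100.0]:
        print(f"K_A = {kA:7.2f}   rate = {lh_rate(kA):.4f}")
    # The printed rates rise and then fall: binding that is too weak leaves
    # the surface empty of A, binding that is too strong crowds B off the
    # surface (a toy 'volcano' curve).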
Scaling relations are used to decrease the dimensionality of the space of catalyst design. Such relations are correlations among adsorbates binding energies (or among adsorbate binding energies and transition states also known as BEP relations) that are "similar enough" e.g., OH versus OOH scaling. Applying scaling relations to the catalyst design problems greatly reduces the space dimensionality (sometimes to as small as 1 or 2). One can also use micro-kinetic modeling based on such scaling relations to take into account the kinetics associated with adsorption, reaction and desorption of molecules under specific pressure or temperature conditions. Such modeling then leads to well-known volcano-plots at which the optimum qualitatively described by the Sabatier principle is referred to as the "top of the volcano". Scaling relations can be used not only to connect the energetics of radical surface-adsorbed groups (e.g., O*,OH*), but also to connect the energetics of closed-shell molecules among each other or to the counterpart radical adsorbates. A recent challenge for researchers in catalytic sciences is to "break" the scaling relations. The correlations which are manifested in the scaling relations confine the catalyst design space, preventing one from reaching the "top of the volcano". Breaking scaling relations can refer to either designing surfaces or motifs that do not follow a scaling relation, or ones that follow a different scaling relation (than the usual relation for the associated adsorbates) in the right direction: one that can get us closer to the top of the reactivity volcano. In addition to studying catalytic reactivity, scaling relations can be used to study and screen materials for selectivity toward a special product. There are special combination of binding energies that favor specific products over the others. Sometimes a set of binding energies that can change the selectivity toward a specific product "scale" with each other, thus to improve the selectivity one has to break some scaling relations; an example of this is the scaling between methane and methanol oxidative activation energies that leads to the lack of selectivity in direct conversion of methane to methanol. Catalyst deactivation Catalyst deactivation is defined as a loss in catalytic activity and/or selectivity over time. Substances that decrease reaction rate are called poisons. Poisons chemisorb to catalyst surface and reduce the number of available active sites for reactant molecules to bind to. Common poisons include Group V, VI, and VII elements (e.g. S, O, P, Cl), some toxic metals (e.g. As, Pb), and adsorbing species with multiple bonds (e.g. CO, unsaturated hydrocarbons). For example, sulfur disrupts the production of methanol by poisoning the Cu/ZnO catalyst. Substances that increase reaction rate are called promoters. For example, the presence of alkali metals in ammonia synthesis increases the rate of N2 dissociation. The presence of poisons and promoters can alter the activation energy of the rate-limiting step and affect a catalyst's selectivity for the formation of certain products. Depending on the amount, a substance can be favorable or unfavorable for a chemical process. For example, in the production of ethylene, a small amount of chemisorbed chlorine will act as a promoter by improving Ag-catalyst selectivity towards ethylene over CO2, while too much chlorine will act as a poison. 
Other mechanisms for catalyst deactivation include: Sintering: when heated, dispersed catalytic metal particles can migrate across the support surface and form crystals. This results in a reduction of catalyst surface area. Fouling: the deposition of materials from the fluid phase onto the solid-phase catalyst and/or support surfaces. This results in active site and/or pore blockage. Coking: the deposition of heavy, carbon-rich solids onto surfaces due to the decomposition of hydrocarbons. Vapor-solid reactions: formation of an inactive surface layer and/or formation of a volatile compound that exits the reactor. This results in a loss of surface area and/or catalyst material. Solid-state transformation: solid-state diffusion of catalyst support atoms to the surface followed by a reaction that forms an inactive phase. This results in a loss of catalyst surface area. Erosion: continual attrition of catalyst material, common in fluidized-bed reactors. This results in a loss of catalyst material. In industry, catalyst deactivation costs billions every year due to process shutdown and catalyst replacement. Industrial examples In industry, many design variables must be considered, including reactor and catalyst design across multiple scales ranging from the subnanometer to tens of meters. The conventional heterogeneous catalysis reactors include batch, continuous, and fluidized-bed reactors, while more recent setups include fixed-bed, microchannel, and multi-functional reactors. Other variables to consider are reactor dimensions, surface area, catalyst type, catalyst support, as well as reactor operating conditions such as temperature, pressure, and reactant concentrations. Some large-scale industrial processes incorporating heterogeneous catalysts are listed below. Other examples Reduction of nitriles in the synthesis of phenethylamine with Raney nickel catalyst and hydrogen in ammonia: C6H5CH2CN + 2 H2 → C6H5CH2CH2NH2 The cracking, isomerisation, and reformation of hydrocarbons to form appropriate and useful blends of petrol. In automobiles, catalytic converters are used to catalyze three main reactions: The oxidation of carbon monoxide to carbon dioxide: 2CO(g) + O2(g) → 2CO2(g) The reduction of nitrogen monoxide back to nitrogen: 2NO(g) + 2CO(g) → N2(g) + 2CO2(g) The oxidation of hydrocarbons to water and carbon dioxide: 2 C6H6 + 15 O2 → 12 CO2 + 6 H2O This process can occur with any hydrocarbon, but is most commonly performed with petrol or diesel. Asymmetric heterogeneous catalysis facilitates the production of pure enantiomer compounds using chiral heterogeneous catalysts. The majority of heterogeneous catalysts are based on metals or metal oxides; however, some chemical reactions can be catalyzed by carbon-based materials, e.g., oxidative dehydrogenations or selective oxidations. Ethylbenzene + 1/2 O2 → Styrene + H2O Acrolein + 1/2 O2 → Acrylic acid Solid-Liquid and Liquid-Liquid Catalyzed Reactions Although the majority of heterogeneous catalysts are solids, there are a few variations which are of practical value. For two immiscible solutions (liquids), one carries the catalyst while the other carries the reactant. This setup is the basis of biphasic catalysis as implemented in the industrial production of butyraldehyde by the hydroformylation of propylene. See also Heterogeneous gold catalysis Nanomaterial-based catalysts Platinum nanoparticles Temperature-programmed reduction Thermal desorption spectroscopy References External links Catalysis
Heterogeneous catalysis
[ "Chemistry" ]
3,039
[ "Catalysis", "Chemical kinetics" ]
1,427,715
https://en.wikipedia.org/wiki/Homogeneous%20catalysis
In chemistry, homogeneous catalysis is catalysis where the catalyst is in the same phase as the reactants, principally by a soluble catalyst in a solution. In contrast, heterogeneous catalysis describes processes where the catalyst and substrate are in distinct phases, typically solid and gas, respectively. The term is used almost exclusively to describe solutions and implies catalysis by organometallic compounds. Homogeneous catalysis is an established technology that continues to evolve. An illustrative major application is the production of acetic acid. Enzymes are examples of homogeneous catalysts. Examples Acid catalysis The proton is a pervasive homogeneous catalyst because water is the most common solvent. Water forms protons by the process of self-ionization of water. In an illustrative case, acids accelerate (catalyze) the hydrolysis of esters: CH3CO2CH3 + H2O ⇌ CH3CO2H + CH3OH At neutral pH, aqueous solutions of most esters do not hydrolyze at practical rates. Transition metal-catalysis Hydrogenation and related reactions A prominent class of reductive transformations is hydrogenation. In this process, H2 is added to unsaturated substrates. A related methodology, transfer hydrogenation, involves the transfer of hydrogen from one substrate (the hydrogen donor) to another (the hydrogen acceptor). Related reactions entail "HX additions", where X = silyl (hydrosilylation) and CN (hydrocyanation). Most large-scale industrial hydrogenations – margarine, ammonia, benzene-to-cyclohexane – are conducted with heterogeneous catalysts. Fine chemical syntheses, however, often rely on homogeneous catalysts. Carbonylations Hydroformylation, a prominent form of carbonylation, involves the addition of H and "C(O)H" across a double bond. This process is almost exclusively conducted with soluble rhodium- and cobalt-containing complexes. A related carbonylation is the conversion of alcohols to carboxylic acids. MeOH and CO react in the presence of homogeneous catalysts to give acetic acid, as practiced in the Monsanto and Cativa processes. Related reactions include hydrocarboxylation and hydroesterification. Polymerization and metathesis of alkenes A number of polyolefins, e.g. polyethylene and polypropylene, are produced from ethylene and propylene by Ziegler-Natta catalysis. Heterogeneous catalysts dominate, but many soluble catalysts are employed, especially for stereospecific polymers. Olefin metathesis is usually catalyzed heterogeneously in industry, but homogeneous variants are valuable in fine chemical synthesis. Oxidations Homogeneous catalysts are also used in a variety of oxidations. In the Wacker process, acetaldehyde is produced from ethene and oxygen. Many non-organometallic complexes are also widely used in catalysis, e.g. for the production of terephthalic acid from xylene. Alkenes are epoxidized and dihydroxylated by metal complexes, as illustrated by the Halcon process and the Sharpless dihydroxylation. Enzymes (including metalloenzymes) Enzymes are homogeneous catalysts that are essential for life but are also harnessed for industrial processes. A well-studied example is carbonic anhydrase, which catalyzes the release of CO2 into the lungs from the bloodstream. Enzymes possess properties of both homogeneous and heterogeneous catalysts. As such, they are usually regarded as a third, separate category of catalyst. Water is a common reagent in enzymatic catalysis. 
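Picking up the acid-catalysis example above: for specific acid catalysis under pseudo-first-order conditions, the observed rate constant scales linearly with [H3O+], which is why ester hydrolysis is impractically slow at neutral pH. A minimal sketch follows; the second-order constant k_H is a made-up placeholder, not a measured value for any particular ester.

```python
def k_obs(pH, k_H=1e-4):
    """Specific acid catalysis: k_obs = k_H * [H3O+].
    k_H (M^-1 s^-1) is an illustrative placeholder."""
    h3o = 10.0 ** (-pH)          # [H3O+] in mol/L from the pH
    return k_H * h3o

for pH in (1, 3, 5, 7):
    k = k_obs(pH)
    half_life_h = 0.693 / k / 3600.0   # t1/2 = ln 2 / k, in hours
    print(f"pH {pH}: k_obs = {k:.2e} s^-1, t1/2 ~ {half_life_h:.3g} h")
```

In this model, dropping the pH from 7 to 1 speeds the hydrolysis by six orders of magnitude, consistent with the qualitative statement above that neutral ester solutions do not hydrolyze at practical rates.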
Esters and amides are slow to hydrolyze in neutral water, but the rates are sharply accelerated by metalloenzymes, which can be viewed as large coordination complexes. Acrylamide is prepared by the enzyme-catalyzed hydration of acrylonitrile. US demand for acrylamide was as of 2007. Advantages and disadvantages Advantages Homogeneous catalysts are often more selective than heterogeneous catalysts. For exothermic processes, homogeneous catalysts release heat into the solvent. Homogeneous catalysts are easier to characterize, making their reaction mechanisms amenable to rational manipulation. Disadvantages The separation of homogeneous catalysts from products can be challenging. In some cases involving high-activity catalysts, the catalyst is not removed from the product. In other cases, distillation can extract volatile organic products. Homogeneous catalysts have limited thermal stability compared to heterogeneous catalysts. Many organometallic complexes degrade below 100 °C. Some pincer-based catalysts, however, operate near 200 °C. See also Concurrent tandem catalysis References Catalysis
Homogeneous catalysis
[ "Chemistry" ]
1,012
[ "Catalysis", "Chemical kinetics", "Homogeneous catalysis" ]
1,427,908
https://en.wikipedia.org/wiki/Ossification
Ossification (also called osteogenesis or bone mineralization) in bone remodeling is the process of laying down new bone material by cells named osteoblasts. It is synonymous with bone tissue formation. There are two processes resulting in the formation of normal, healthy bone tissue: Intramembranous ossification is the direct laying down of bone into the primitive connective tissue (mesenchyme), while endochondral ossification involves cartilage as a precursor. In fracture healing, endochondral osteogenesis is the most commonly occurring process, for example in fractures of long bones treated by plaster of Paris, whereas fractures treated by open reduction and internal fixation with metal plates, screws, pins, rods and nails may heal by intramembranous osteogenesis. Heterotopic ossification is a process resulting in the formation of bone tissue that is often atypical, at an extraskeletal location. Calcification is often confused with ossification. Calcification is synonymous with the formation of calcium-based salts and crystals within cells and tissue. It is a process that occurs during ossification, but not necessarily vice versa. The exact mechanisms by which bone development is triggered remain unclear, but growth factors and cytokines appear to play a role. Intramembranous ossification Intramembranous ossification forms the flat bones of the skull, the mandible and the hip bone. Osteoblasts cluster together to create an ossification center. They then start secreting osteoid, an unmineralized collagen-proteoglycan matrix that has the ability to bind calcium. As calcium binds to the osteoid, the matrix hardens, and the osteoblasts become entrapped, transforming into osteocytes. As osteoblasts continue to secrete osteoid, it surrounds blood vessels, leading to the formation of trabecular (cancellous or spongy) bone. These blood vessels will eventually develop into red bone marrow. Mesenchymal cells on the bone surface form a membrane known as the periosteum. Osteoblasts secrete osteoid in parallel with the existing matrix, creating layers of compact (cortical) bone. Endochondral ossification Endochondral ossification is the formation of long bones and other bones. This requires a hyaline cartilage precursor. There are two centers of ossification for endochondral ossification. The primary center In long bones, bone tissue first appears in the diaphysis (the middle of the shaft). Chondrocytes multiply and form trabeculae. Cartilage is progressively eroded and replaced by hardened bone, extending towards the epiphysis. A perichondrium layer surrounding the cartilage forms the periosteum, which generates osteogenic cells that then go on to make a collar that encircles the outside of the bone and remodels the medullary cavity on the inside. The nutrient artery enters via the nutrient foramen from a small opening in the diaphysis. It invades the primary center of ossification, bringing osteogenic cells (osteoblasts on the outside, osteoclasts on the inside). The canal of the nutrient foramen is directed away from the more active end of the bone when one end grows more than the other. When the bone grows at the same rate at both ends, the nutrient artery is perpendicular to the bone. Most other bones (e.g. vertebrae) also have primary ossification centers, and bone is laid down in a similar manner. Secondary centers The secondary centers generally appear at the epiphysis. Secondary ossification mostly occurs after birth (except for the distal femur and proximal tibia, whose centers appear during the ninth month of fetal development). 
The epiphyseal arteries and osteogenic cells invade the epiphysis, depositing osteoclasts and osteoblasts, which erode the cartilage and build bone, respectively. This occurs at both ends of long bones but only at one end of digits and ribs. Evolution Several hypotheses have been proposed for how bone evolved as a structural element in vertebrates. One hypothesis is that bone developed from tissues that evolved to store minerals. Specifically, calcium-based minerals were stored in cartilage, and bone developed as an exaptation from this calcified cartilage. However, other possibilities include bony tissue evolving as an osmotic barrier or as a protective structure. See also Dystrophic calcification Mechanostat, a model describing ossification and bone loss Ossicone, the horn-like (or antler-like) protuberances on the heads of giraffes and related species Osteogenesis imperfecta, a juvenile bone disease Fibrodysplasia ossificans progressiva, an extremely rare genetic disease which causes fibrous tissue (muscle, tendon, ligament etc.) to ossify when damaged Primrose syndrome, a rare genetic disease in which cartilage becomes ossified. References Animal physiology Skeletal system Ossification
Ossification
[ "Biology" ]
1,099
[ "Animals", "Animal physiology" ]
1,427,967
https://en.wikipedia.org/wiki/Four-wave%20mixing
Four-wave mixing (FWM) is an intermodulation phenomenon in nonlinear optics, whereby interactions between two or three wavelengths produce two or one new wavelengths. It is similar to the third-order intercept point in electrical systems. Four-wave mixing can be compared to the intermodulation distortion in standard electrical systems. It is a parametric nonlinear process, in that the energy of the incoming photons is conserved. FWM is a phase-sensitive process, in that the efficiency of the process is strongly affected by phase-matching conditions. Mechanism When three frequencies (f1, f2, and f3) interact in a nonlinear medium, they give rise to a fourth frequency (f4), which is formed by the scattering of the incident photons, producing the fourth photon. Given inputs f1, f2, and f3, the nonlinear system will produce new frequencies of the form f4 = f1 ± f2 ± f3. From calculations with the three input signals, it is found that 12 interfering frequencies are produced, three of which lie on one of the original incoming frequencies. Note that these three frequencies which lie at the original incoming frequencies are typically attributed to self-phase modulation and cross-phase modulation, and are naturally phase-matched, unlike FWM. Sum- and difference-frequency generation Two common forms of four-wave mixing are dubbed sum-frequency generation and difference-frequency generation. In sum-frequency generation three fields are input and the output is a new high-frequency field at the sum of the three input frequencies. In difference-frequency generation, the typical output is the sum of two minus the third. A condition for efficient generation of FWM is phase matching: the associated k-vectors of the four components must add to zero when they are plane waves. This becomes significant since sum- and difference-frequency generation are often enhanced when resonance in the mixing media is exploited. In many configurations the sum of the first two photons will be tuned close to a resonant state. However, close to resonances the index of refraction changes rapidly, which makes it difficult for the four co-linear k-vectors to add exactly to zero—thus long mixing path lengths are not always possible as the four components lose phase lock. Consequently, beams are often focused, both for intensity and to shorten the mixing zone. In gaseous media an often overlooked complication is that light beams are rarely plane waves but are often focused for extra intensity; this can add an additional pi phase shift to each k-vector in the phase-matching condition. It is often very hard to satisfy this in the sum-frequency configuration, but it is more easily satisfied in the difference-frequency configuration (where the pi phase shifts cancel out). As a result, difference-frequency generation is usually more broadly tunable and easier to set up than sum-frequency generation, making it preferable as a light source even though it is less quantum-efficient than sum-frequency generation. The special case of sum-frequency generation where all the input photons have the same frequency (and wavelength) is third-harmonic generation (THG). Degenerate four-wave mixing Four-wave mixing is also present if only two components interact. In this case the nonlinear term couples three field components, thus generating so-called degenerate four-wave mixing, showing identical properties to the case of three interacting waves. 
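The 12-frequency count quoted above can be checked by brute force. The sketch below enumerates all third-order products of the form fi + fj − fk for three generic, non-equally-spaced input frequencies (the numerical values are arbitrary placeholders); the products that coincide with the inputs are the self-/cross-phase-modulation terms.

```python
from itertools import product

f = {1: 100.0, 2: 103.1, 3: 107.7}  # arbitrary, non-equally-spaced inputs (THz)

# All third-order products of the form f_i + f_j - f_k; rounding
# collapses floating-point noise so equal products compare equal.
out = {round(f[i] + f[j] - f[k], 9) for i, j, k in product(f, repeat=3)}

originals = {round(v, 9) for v in f.values()}
on_input = out & originals
print(len(out), "distinct mixing frequencies")              # -> 12
print(len(on_input), "coincide with the inputs (SPM/XPM)")  # -> 3
print(len(out - on_input), "genuinely new frequencies")     # -> 9
```

With equally spaced channels, as in WDM, some of the nine new products land exactly on neighboring channels, which is the interchannel crosstalk discussed in the next section.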
Adverse effects of FWM in fiber-optic communications FWM is a fiber-optic characteristic that affects wavelength-division multiplexing (WDM) systems, where multiple optical wavelengths are spaced at equal intervals or channel spacing. The effects of FWM are pronounced with decreased channel spacing of wavelengths (such as in dense WDM systems) and at high signal power levels. High chromatic dispersion decreases FWM effects, as the signals lose coherence, or in other words, the phase mismatch between the signals increases. The interference that FWM causes in WDM systems is known as interchannel crosstalk. FWM can be mitigated by using uneven channel spacing or fiber that increases dispersion. In the special case where the three frequencies are close to degenerate, optical separation of the difference frequency can be technically challenging. Applications FWM finds applications in optical phase conjugation, parametric amplification, supercontinuum generation, vacuum ultraviolet light generation, and in microresonator-based frequency comb generation. Parametric amplifiers and oscillators based on four-wave mixing use the third-order nonlinearity, as opposed to most typical parametric oscillators, which use the second-order nonlinearity. Apart from these classical applications, four-wave mixing has shown promise in the quantum optical regime for generating single photons, correlated photon pairs, squeezed light and entangled photons. See also Kerr frequency comb Lugiato–Lefever equation Optical Kerr effect Optical phase conjugation, phase conjugate mirror Supercontinuum generation References External links Encyclopedia of Laser Physics and Technology Nonlinear optics Photonics Fiber optics Frequency mixers
Four-wave mixing
[ "Engineering" ]
1,020
[ "Radio electronics", "Frequency mixers" ]
1,428,025
https://en.wikipedia.org/wiki/Terephthalic%20acid
Terephthalic acid is an organic compound with formula C6H4(CO2H)2. This white solid is a commodity chemical, used principally as a precursor to the polyester PET, used to make clothing and plastic bottles. Several million tons are produced annually. The common name is derived from the turpentine-producing tree Pistacia terebinthus and phthalic acid. Terephthalic acid is also used in the production of PBT plastic (polybutylene terephthalate). History Terephthalic acid was first isolated (from turpentine) by the French chemist Amédée Cailliot (1805–1884) in 1846. Terephthalic acid became industrially important after World War II. Terephthalic acid was produced by oxidation of p-xylene with 30–40% nitric acid. Air oxidation of p-xylene gives p-toluic acid, which resists further air oxidation. Esterification of p-toluic acid to methyl p-toluate (CH3C6H4CO2CH3) opens the way for further oxidation to monomethyl terephthalate. In the Dynamit-Nobel process these two oxidations and the esterification were performed in a single reactor. The reaction conditions also lead to a second esterification, producing dimethyl terephthalate, which could be hydrolysed to terephthalic acid. In 1955, Mid-Century Corporation and ICI announced the bromide-catalysed oxidation of p-toluic acid directly to terephthalic acid, without the need to isolate intermediates and still using air as the oxidant. Amoco (as Standard Oil of Indiana) purchased the Mid-Century/ICI technology, and the process is now known by their name. Synthesis Amoco Process In the Amoco process, which is widely adopted worldwide, terephthalic acid is produced by catalytic oxidation of p-xylene: C6H4(CH3)2 + 3 O2 → C6H4(CO2H)2 + 2 H2O The process uses a cobalt–manganese–bromide catalyst. The bromide source can be sodium bromide, hydrogen bromide or tetrabromoethane. Bromine functions as a regenerative source of free radicals. Acetic acid is the solvent and compressed air serves as the oxidant. The combination of bromine and acetic acid is highly corrosive, requiring specialized reactors, such as those lined with titanium. A mixture of p-xylene, acetic acid, the catalyst system, and compressed air is fed to a reactor. Mechanism The oxidation of p-xylene proceeds by a free radical process. Bromine radicals decompose cobalt and manganese hydroperoxides. The resulting oxygen-based radicals abstract hydrogen from a methyl group, which has weaker C–H bonds than does the aromatic ring. Many intermediates have been isolated. p-Xylene is converted to p-toluic acid, which is less reactive than p-xylene owing to the influence of the electron-withdrawing carboxylic acid group. Incomplete oxidation produces 4-carboxybenzaldehyde (4-CBA), which is often a problematic impurity. Challenges Approximately 5% of the acetic acid solvent is lost by decomposition or "burning". Product loss by decarboxylation to benzoic acid is common. The high temperature diminishes oxygen solubility in an already oxygen-starved system. Pure oxygen cannot be used in the traditional system due to hazards of flammable organic–O2 mixtures. Atmospheric air can be used in its place, but once reacted it needs to be purified of toxins and ozone depleters such as methyl bromide before being released. Additionally, the corrosive nature of bromides at high temperatures requires the reaction to be run in expensive titanium reactors. Alternative reaction media The use of carbon dioxide overcomes many of the problems with the original industrial process. 
Because CO2 is a better flame inhibitor than N2, a CO2 environment allows for the use of pure oxygen directly, instead of air, with reduced flammability hazards. The solubility of molecular oxygen in solution is also enhanced in the CO2 environment. Because more oxygen is available to the system, supercritical carbon dioxide (Tc = 31 °C) gives more complete oxidation with fewer byproducts, lower carbon monoxide production, less decarboxylation and higher purity than the commercial process. In supercritical water medium, the oxidation can be effectively catalyzed by MnBr2 with pure O2 at medium-high temperature. Use of supercritical water instead of acetic acid as a solvent diminishes environmental impact and offers a cost advantage. However, the scope of such reaction systems is limited by the even more demanding conditions than the industrial process (300–400 °C, >200 bar). Promoters and additives As with any large-scale process, many additives have been investigated for potential beneficial effects. Promising results have been reported with the following. Ketones act as promoters for formation of the active cobalt(III) catalyst. In particular, ketones with α-methylene groups oxidize to hydroperoxides that are known to oxidize cobalt(II). 2-Butanone is often used. Zirconium salts enhance the activity of Co-Mn-Br catalysts. Selectivity is also improved. N-Hydroxyphthalimide is a potential replacement for bromide, which is highly corrosive. The phthalimide functions by formation of the oxyl radical. Guanidine inhibits the oxidation of the first methyl group but enhances the usually slow oxidation of the toluic acid. Alternative routes Terephthalic acid can also be made from toluene by the Gattermann-Koch reaction, which gives 4-methylbenzaldehyde. Oxidation of the latter gives terephthalic acid. Terephthalic acid can be prepared in the laboratory by oxidizing many para-disubstituted derivatives of benzene, including caraway oil or a mixture of cymene and cuminol, with chromic acid. Although not commercially significant, there is also the so-called "Henkel process" or "Raecke process", named after the company and patent holder, respectively. This route involves the transfer of carboxylate groups. Either potassium benzoate disproportionates to potassium terephthalate and benzene, or potassium phthalate rearranges to the terephthalate. Phthalic anhydride can be used as a raw material and the potassium can then be recycled. Lummus (now a subsidiary of McDermott International) has reported a route from the dinitrile, which can be obtained by ammoxidation of p-xylene. Applications Virtually the entire world's supply of terephthalic acid and dimethyl terephthalate is consumed as precursors to polyethylene terephthalate (PET). World production in 1970 was around 1.75 million tonnes. By 2006, global purified terephthalic acid (PTA) demand had exceeded 30 million tonnes. A smaller, but nevertheless significant, demand for terephthalic acid exists in the production of polybutylene terephthalate and several other engineering polymers. Other uses Polyester fibers based on PTA provide easy fabric care, both alone and in blends with natural and other synthetic fibers. Polyester films are used widely in audio and video recording tapes, data storage tapes, photographic films, labels and other sheet material requiring both dimensional stability and toughness. Terephthalic acid is used in paint as a carrier. 
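As a quick back-of-the-envelope check on the Amoco oxidation described earlier, the 1:1 molar stoichiometry of p-xylene to terephthalic acid fixes the theoretical mass yield. The short sketch below just does the atom bookkeeping with standard molar masses; it assumes no process losses, so it is an upper bound, not a plant figure.

```python
# Atom bookkeeping for the Amoco oxidation described earlier:
#   C8H10 (p-xylene) + 3 O2 -> C8H6O4 (terephthalic acid) + 2 H2O
M = {"C": 12.011, "H": 1.008, "O": 15.999}   # standard atomic masses, g/mol

def molar_mass(c, h, o):
    return c * M["C"] + h * M["H"] + o * M["O"]

m_xylene = molar_mass(8, 10, 0)   # ~106.17 g/mol
m_tpa    = molar_mass(8, 6, 4)    # ~166.13 g/mol

# 1 mol of p-xylene gives at most 1 mol of terephthalic acid:
print(f"theoretical yield: {m_tpa / m_xylene:.3f} g TPA per g p-xylene")
```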
Terephthalic acid is used as a raw material to make terephthalate plasticizers such as dioctyl terephthalate and dibutyl terephthalate. It is used in the pharmaceutical industry as a raw material for certain drugs. In addition to these end uses, terephthalic acid-based polyesters and polyamides are also used in hot melt adhesives. PTA is an important raw material for lower-molecular-weight saturated polyesters for powder and water-soluble coatings. In the research laboratory, terephthalic acid has been popularized as a component for the synthesis of metal-organic frameworks. The analgesic drug oxycodone occasionally comes as a terephthalate salt; however, the more usual salt of oxycodone is the hydrochloride. Pharmacologically, one milligram of hydrochloridum oxycodonae is equivalent to 1.13 mg of terephthalas oxycodonae. Terephthalic acid is used as a filler in some military smoke grenades, most notably the American M83 smoke grenade and M90 vehicle-employed smoke grenade, producing a thick white smoke that acts as an obscurant in the visual and near-infrared spectrum when burned. Solubility Terephthalic acid is poorly soluble in water and alcohols; consequently, until about 1970 terephthalic acid was purified as its dimethyl ester. It sublimes when heated. Toxicity Terephthalic acid and its dimethyl ester have very low toxicity, with lethal doses above 1 g/kg (oral, mouse). Biodegradation In Comamonas thiooxydans strain E6, terephthalic acid is biodegraded to protocatechuic acid, a common natural product, via a reaction pathway initiated by terephthalate 1,2-dioxygenase. Combined with the previously known PETase and MHETase, a full pathway for PET plastic degradation can be engineered. See also Polycyclohexylenedimethylene terephthalate, a thermoplastic polyester formed from terephthalic acid References Cited sources External links and further reading International Chemical Safety Card 0330 Dicarboxylic acids Carboxylic acid-based monomers Benzoic acids Commodity chemicals Substances discovered in the 19th century
Terephthalic acid
[ "Chemistry" ]
2,142
[ "Commodity chemicals", "Products of chemical industry" ]
1,428,123
https://en.wikipedia.org/wiki/BF%20model
The BF model or BF theory is a topological field theory, which when quantized becomes a topological quantum field theory. BF stands for background field. B and F, as can be seen below, are also the variables appearing in the Lagrangian of the theory, which is helpful as a mnemonic device. We have a 4-dimensional differentiable manifold M, a gauge group G, which has as "dynamical" fields a 2-form B taking values in the adjoint representation of G, and a connection form A for G. The action is given by S = ∫_M K(B ∧ F), where K is an invariant nondegenerate bilinear form over the Lie algebra of G (if G is semisimple, the Killing form will do) and F is the curvature form F = dA + A ∧ A. This action is diffeomorphically invariant and gauge invariant. Its Euler–Lagrange equations are F = 0 (no curvature) and d_A B = 0 (the covariant exterior derivative of B is zero). In fact, it is always possible to gauge away any local degrees of freedom, which is why it is called a topological field theory. However, if M is topologically nontrivial, A and B can have nontrivial solutions globally. In fact, BF theory can be used to formulate discrete gauge theory. One can add additional twist terms allowed by group cohomology theory, such as Dijkgraaf–Witten topological gauge theory. There are many kinds of modified BF theories as topological field theories, which give rise to link invariants in 3 dimensions, 4 dimensions, and other general dimensions. See also Background field method Barrett–Crane model Dual graviton Plebanski action Spin foam Tetradic Palatini action References External links http://math.ucr.edu/home/baez/qg-fall2000/qg2.2.html Quantum field theory
BF model
[ "Physics" ]
378
[ "Quantum field theory", "Quantum mechanics", "Quantum physics stubs" ]
1,428,177
https://en.wikipedia.org/wiki/Deicing
De-icing is the process of removing snow, ice or frost from a surface. Anti-icing is the application of chemicals that not only de-ice but also remain on a surface and continue to delay the reformation of ice for a certain period of time, or prevent adhesion of ice to make mechanical removal easier. De-icing can be accomplished by mechanical methods (scraping, pushing); through the application of heat; by use of dry or liquid chemicals designed to lower the freezing point of water (various salts or brines, alcohols, glycols); or by a combination of these different techniques. Application areas Roadways In 2013, an estimated 14 million tons of salt were used for de-icing roads in North America. De-icing of roads has traditionally been done with salt, spread by snowplows or dump trucks designed to spread it, often mixed with sand and gravel, on slick roads. Sodium chloride (rock salt) is normally used, as it is inexpensive and readily available in large quantities. However, since salt water still freezes at , it is of no help when the temperature falls below this point. It also has a tendency to cause corrosion, rusting the steel used in most vehicles and the rebar in concrete bridges. Depending on the concentration, it can be toxic to some plants and animals, and some urban areas have moved away from it as a result. More recent snowmelters use other salts, such as calcium chloride and magnesium chloride, which not only depress the freezing point of water to a much lower temperature, but also dissolve exothermically. They are somewhat safer for sidewalks, but excess should still be removed. More recently, organic compounds have been developed that reduce the environmental issues connected with salts and have longer residual effects when spread on roadways, usually in conjunction with salt brines or solids. These compounds are often generated as byproducts of agricultural operations such as sugar beet refining or the distillation process that produces ethanol. Other organic compounds are wood ash and a de-icing salt called calcium magnesium acetate made from roadside grass or even kitchen waste. Additionally, mixing common rock salt with some of the organic compounds and magnesium chloride results in spreadable materials that are both effective to much colder temperatures () as well as at lower overall rates of spreading per unit area. Solar road systems have been used to maintain the surface of roads above the freezing point of water. An array of pipes embedded in the road surface is used to collect solar energy in summer, transfer the heat to thermal banks and return the heat to the road in winter to maintain the surface above . This automated form of renewable energy collection, storage and delivery avoids the environmental issues of using chemical contaminants. It was suggested in 2012 that superhydrophobic surfaces capable of repelling water can also be used to prevent ice accumulation, an effect termed icephobicity. However, not every superhydrophobic surface is icephobic, and the method is still under development. Trains and rail switches Trains and rail switches in Arctic regions can have significant problems with snow and ice build-up. They need a constant heat source on cold days to ensure functionality. On trains it is primarily the brakes, suspension, and couplers that require heaters for de-icing. On the rails it is primarily track switches that are sensitive to ice. High-powered electrical heaters prevent ice formation and rapidly melt any ice that forms. 
The heaters are preferably made of PTC material, for example PTC rubber, to avoid overheating and potentially destroying the heaters. These heaters are self-limiting and require no regulating electronics; they cannot overheat and require no overheat protection. Aviation Ground de-icing of aircraft On the ground, when there are freezing conditions and precipitation, de-icing an aircraft is commonly practiced. Frozen contaminants interfere with the aerodynamic properties of the vehicle. Furthermore, dislodged ice can damage the engines. Ground de-icing methods include: Spraying on various aircraft deicing fluids to melt ice and prevent reformation Using unheated forced air to blow off loose snow and ice Using infrared heating to melt snow, ice, and frost without using chemicals Mechanical deicing using tools such as brooms, scrapers, and ropes Placing an aircraft in a warm hangar In-flight de-icing Ice can build up on aircraft in flight due to atmospheric conditions, causing potential degradation of flight performance. Large commercial aircraft almost always have in-flight ice protection systems to shed ice buildup and prevent reformation. Ice protection systems are becoming increasingly common in smaller general aviation aircraft as well. Ice protection systems typically use one or more of the following approaches: pneumatic rubber "boots" on leading edges of wings and control surfaces, which expand to break off accumulated ice electrically heated strips on critical surfaces to prevent ice formation and melt accumulated ice bleed air systems which take heated air from the engines and duct it to locations where ice can accumulate fluid systems which "weep" de-icing fluid over wings and control surfaces via tiny holes Airport pavement De-icing operations for airport pavement (runways, taxiways, aprons, taxiway bridges) may involve several types of liquid and solid chemical products, including propylene glycol, ethylene glycol and other organic compounds. Chloride-based compounds (e.g. salt) are not used at airports, due to their corrosive effect on aircraft and other equipment. Urea mixtures have also been used for pavement de-icing, due to their low cost. However, urea is a significant pollutant in waterways and to wildlife, as it degrades to ammonia after application, and it has largely been phased out at U.S. airports. In 2012 the U.S. Environmental Protection Agency (EPA) prohibited use of urea-based de-icers at most commercial airports. Water agitator de-icer Water agitators are electric motors placed under water that propel warmer water upward and agitate the surface with it, de-icing aquatic structures on rivers and lakes in freezing temperatures. There are also agitator bubblers that use compressed air run through a hose and released to agitate the water. De-icing chemicals All chemical de-icers share a common working mechanism: they chemically prevent water molecules from binding above a certain temperature that depends on the concentration. This temperature is below 0 °C, the freezing point of pure water (freezing point depression). Sometimes, there is an exothermic dissolution reaction that provides additional melting power. The following lists contain the most commonly used de-icing chemicals and their typical chemical formulas. 
Salts Sodium chloride (NaCl or table salt; the most common de-icing chemical) Magnesium chloride (MgCl2, often added to salt to lower its working temperature) Calcium chloride (CaCl2, often added to salt to lower its working temperature; attacks concrete) Potassium chloride (KCl) Calcium magnesium acetate Potassium acetate (CH3COOK) Potassium formate (HCOOK) Sodium formate (HCOONa) Calcium formate (Ca(HCOO)2) Organics Urea (CO(NH2)2), a common fertilizer Agricultural by-products, generally used as additives to sodium chloride Methanol (CH3OH), scarcely used on roads Ethylene glycol (C2H6O2), scarcely used on roads Propylene glycol (C3H8O2), scarcely used on roads Glycerol (C3H8O3), scarcely used on roads Environmental impact and mitigation De-icing salts such as sodium chloride or calcium chloride leach into natural waters, strongly affecting their salinity. Ethylene glycol and propylene glycol are known to exert high levels of biochemical oxygen demand (BOD) during degradation in surface waters. This process can adversely affect aquatic life by consuming oxygen needed by aquatic organisms for survival. Large quantities of dissolved oxygen (DO) in the water column are consumed when microbial populations decompose propylene glycol. Some airports recycle used de-icing fluid, separating water and solid contaminants, enabling reuse of the fluid in other applications. Other airports have an on-site wastewater treatment facility, and/or send collected fluid to a municipal sewage treatment plant or a commercial wastewater treatment facility. See also Atmospheric icing Pollution Winter service vehicle References External links Aviation safety Aviation risks Transport safety Chemical processes Ice in transportation NASA spin-off technologies
Deicing
[ "Physics", "Chemistry" ]
1,732
[ "Ice in transportation", "Transport safety", "Physical systems", "Transport", "Chemical processes", "nan", "Chemical process engineering" ]
1,429,237
https://en.wikipedia.org/wiki/Induced%20gravity
Induced gravity (or emergent gravity) is an idea in quantum gravity that spacetime curvature and its dynamics emerge as a mean field approximation of underlying microscopic degrees of freedom, similar to the fluid mechanics approximation of Bose–Einstein condensates. The concept was originally proposed by Andrei Sakharov in 1967. Overview Sakharov observed that many condensed matter systems give rise to emergent phenomena that are analogous to general relativity. For example, crystal defects can look like curvature and torsion in an Einstein–Cartan spacetime. This allows one to create a theory of gravity with torsion from a world-crystal model of spacetime in which the lattice spacing is of the order of the Planck length. Sakharov's idea was to start with an arbitrary background pseudo-Riemannian manifold (in modern treatments, possibly with torsion) and introduce quantum fields (matter) on it, but not introduce any gravitational dynamics explicitly. This gives rise to an effective action which, to one-loop order, contains the Einstein–Hilbert action with a cosmological constant. In other words, general relativity arises as an emergent property of matter fields and is not put in by hand. On the other hand, such models typically predict huge cosmological constants. Some argue that the particular models proposed by Sakharov and others have been proven impossible by the Weinberg–Witten theorem. However, models with emergent gravity are possible as long as other things, such as spacetime dimensions, emerge together with gravity. Developments in the AdS/CFT correspondence after 1997 suggest that the microphysical degrees of freedom in induced gravity might be radically different. The bulk spacetime arises as an emergent phenomenon of the quantum degrees of freedom that are entangled and live on the boundary of the spacetime. According to some prominent researchers in emergent gravity (such as Mark Van Raamsdonk), spacetime is built up of quantum entanglement. This implies that quantum entanglement is the fundamental property that gives rise to spacetime. In 1995, Jacobson showed that the Einstein field equations can be derived from the first law of thermodynamics applied at local Rindler horizons. Thanu Padmanabhan and Erik Verlinde explore links between gravity and entropy, Verlinde being known for an entropic gravity proposal. The Einstein equation for gravity can emerge from the entanglement first law. In the "quantum graphity" proposal of Konopka, Markopoulu-Kalamara, Severini and Smolin, the fundamental degrees of freedom exist on a dynamical graph that is initially complete, and an effective spatial lattice structure emerges in the low-temperature limit. See also Black hole thermodynamics Entropic force Entropic gravity List of quantum gravity researchers Superfluid vacuum theory Einstein–Cartan theory References External links Carlos Barcelo, Stefano Liberati, Matt Visser, Living Rev.Rel. 8:12, 2005. D. Berenstein, Emergent Gravity from CFT, online lecture. C. J. Hogan Quantum Indeterminacy of Emergent Spacetime, preprint A.D. Sakharov, Vacuum Quantum Fluctuations in Curved Space and the Theory of Gravitation, 1967. Matt Visser, Sakharov's induced gravity: a modern perspective, 2002. H. Kleinert, Multivalued Fields in Condensed Matter, Electrodynamics, and Gravitation, 2008. M. Brouwer et al., First test of Verlinde's theory of Emergent Gravity using Weak Gravitational Lensing measurements, 2016. Theories of gravity Emergence
Induced gravity
[ "Physics" ]
743
[ "Theoretical physics", "Theories of gravity" ]
1,430,548
https://en.wikipedia.org/wiki/Preon
In particle physics, preons are hypothetical point particles, conceived of as sub-components of quarks and leptons. The word was coined by Jogesh Pati and Abdus Salam in 1974. Interest in preon models peaked in the 1980s but has slowed, as the Standard Model of particle physics continues to describe physics mostly successfully, and no direct experimental evidence for lepton and quark compositeness has been found. Preons come in four varieties: plus, anti-plus, zero, and anti-zero. W bosons have six preons, and quarks and leptons have only three. In the hadronic sector, some effects are considered anomalies within the Standard Model, for example the proton spin puzzle, the EMC effect, the distributions of electric charges inside the nucleons (as found by Robert Hofstadter in 1956), and the ad hoc CKM matrix elements. When the term "preon" was coined, it was primarily to explain the two families of spin-1/2 fermions: quarks and leptons. More recent preon models also account for spin-1 bosons, and are still called "preons". Each of the preon models postulates a set of fewer fundamental particles than those of the Standard Model, together with the rules governing how those fundamental particles combine and interact. Based on these rules, the preon models try to explain the Standard Model, often predicting small discrepancies with this model and generating new particles and certain phenomena which do not belong to the Standard Model. Goals of preon models Preon research is motivated by the desire to: Reduce the large number of particles, many of which differ only in charge, to a smaller number of more fundamental particles. For example, the down quark and up quark are nearly identical except for charge and a slight mass difference; preon research is motivated by the idea that quarks might be composed of similar preons, with incremental differences accounting for charge. The hope is to reproduce the reductionist strategy that has worked for the periodic table of elements and the quark model of mesons and baryons. Explain the reason for there being exactly three generations of fermions. Calculate parameters that are currently unexplained by the Standard Model, such as the masses of S.M. fundamental fermions, their electric charges, and color charges; in effect, reduce the number of model-required experimental input parameters below the number required by the Standard Model. Provide reasons for the very large range of mass-energy observed in supposedly fundamental particles, from the electron neutrino to the top quark. Provide alternative explanations for the electro-weak symmetry breaking without invoking a Higgs field, which itself possibly needs a supersymmetry to correct the theoretical problems involved with the Higgs field (further, the supersymmetric theories proposed so far have theoretical and observational problems of their own). Account for neutrino oscillation and the apparently unique mechanism behind neutrino mass. Make new, non-repetitive predictions, such as providing cold dark matter candidates. Explain why there exists only the observed variety of particle species, and give a model with reasons for producing only these observed particles (since the prediction of non-observed particles is a problem with many current models, such as supersymmetry). 
Background Before the Standard Model was developed in the 1970s (the key elements of the Standard Model known as quarks were proposed by Murray Gell-Mann and George Zweig in 1964), physicists observed hundreds of different kinds of particles in particle accelerators. These were organized into relationships based on their physical properties in a largely ad hoc system of hierarchies, not entirely unlike the way taxonomy grouped animals based on their physical features. Not surprisingly, the huge number of particles was referred to as the "particle zoo". The Standard Model, which is now the prevailing model of particle physics, dramatically simplified this picture by showing that most of the observed particles were mesons, which are combinations of two quarks, or baryons, which are combinations of three quarks, plus a handful of other particles. The particles being seen in the ever-more-powerful accelerators were, according to the theory, typically nothing more than combinations of these quarks. Comparisons of quarks, leptons, and bosons Within the Standard Model, there are several classes of particles. One of these, the quarks, has six types, each of which comes in three varieties (dubbed "colors": red, green, and blue, giving rise to quantum chromodynamics). Additionally, there are six different types of what are known as leptons. Of these six leptons, there are three charged particles: the electron, muon, and tau. The neutrinos comprise the other three leptons, and each neutrino pairs with one of the three charged leptons. In the Standard Model, there are also bosons, including the photons and gluons; the W+, W−, and Z bosons; and the Higgs boson; and an open space left for the graviton. Almost all of these particles come in "left-handed" and "right-handed" versions (see chirality). The quarks, leptons, and W boson all have antiparticles with opposite electric charge (or in the case of the neutrinos, opposite weak isospin). Unresolved problems with the Standard Model The Standard Model also has a number of problems which have not been entirely solved. In particular, no successful theory of gravitation based on a particle theory has yet been proposed. Although the Model assumes the existence of a graviton, all attempts to produce a consistent theory based on it have failed. Kalman asserts that, according to the concept of atomism, fundamental building blocks of nature are indivisible bits of matter that are ungenerated and indestructible. Neither leptons nor quarks are truly indestructible, since some leptons can decay into other leptons, some quarks into other quarks. Thus, on fundamental grounds, quarks are not themselves fundamental building blocks, but must be composed of other, fundamental quantities—preons. Although the mass of each successive particle follows certain patterns, predictions of the rest mass of most particles cannot be made precisely, except for the masses of almost all baryons, which have been modeled well by de Souza (2010). The Standard Model also has problems predicting the large-scale structure of the universe. For instance, the SM generally predicts equal amounts of matter and antimatter in the universe. A number of attempts have been made to "fix" this through a variety of mechanisms, but to date none have won widespread support. Likewise, basic adaptations of the Model suggest the presence of proton decay, which has not yet been observed. 
Motivation for preon models Several models have been proposed in an attempt to provide a more fundamental explanation of the results in experimental and theoretical particle physics, using names such as "parton" or "preon" for the hypothetical basic particle constituents. Preon theory is motivated by a desire to replicate in particle physics the achievements of the periodic table in chemistry, which reduced 94 naturally occurring elements to combinations of just three building blocks (proton, neutron, electron). Likewise, the Standard Model later organized the "particle zoo" of hadrons by reducing several dozen particles to combinations, at a more fundamental level, of (at first) just three quarks, consequently reducing the huge number of arbitrary constants in mid-twentieth-century particle physics prior to the Standard Model and quantum chromodynamics. However, the particular preon model discussed below has attracted comparatively little interest among the particle physics community to date, in part because no evidence has been obtained so far in collider experiments to show that the fermions of the Standard Model are composite. Attempts A number of physicists have attempted to develop a theory of "pre-quarks" (from which the name preon derives) in an effort to justify theoretically the many parts of the Standard Model that are known only through experimental data. Other names which have been used for these proposed fundamental particles (or particles intermediate between the most fundamental particles and those observed in the Standard Model) include prequarks, subquarks, maons, alphons, quinks, rishons, tweedles, helons, haplons, Y-particles, and primons. Preon is the leading name in the physics community. Efforts to develop a substructure date at least as far back as 1974 with a paper by Pati and Salam in Physical Review. Other attempts include a 1977 paper by Terazawa, Chikashige, and Akama, similar, but independent, 1979 papers by Ne'eman, Harari, and Shupe, a 1981 paper by Fritzsch and Mandelbaum, and a 1992 book by D'Souza and Kalman. None of these have gained wide acceptance in the physics world. However, in a recent work de Souza has shown that his model describes well all weak decays of hadrons according to selection rules dictated by a quantum number derived from his compositeness model. In his model leptons are elementary particles and each quark is composed of two primons; thus, all quarks are described by four primons. Therefore, there is no need for the Standard Model Higgs boson, and each quark mass is derived from the interaction between each pair of primons by means of three Higgs-like bosons. In his 1989 Nobel Prize acceptance lecture, Hans Dehmelt described a most fundamental elementary particle, with definable properties, which he called the cosmon, as the likely result of a long but finite chain of increasingly more elementary particles. Composite Higgs Many preon models either do not account for the Higgs boson or rule it out, and propose that electro-weak symmetry is broken not by a scalar Higgs field but by composite preons. For example, the Fredriksson preon theory does not need the Higgs boson, and explains the electro-weak breaking as the rearrangement of preons, rather than a Higgs-mediated field. In fact, the Fredriksson preon model and the de Souza model predict that the Standard Model Higgs boson does not exist. 
Rishon model The rishon model (RM) is the earliest effort (1979) to develop a preon model to explain the phenomena appearing in the Standard Model (SM) of particle physics. It was first developed by Haim Harari and Michael A. Shupe (independently of each other), and later expanded by Harari and his then-student Nathan Seiberg. The model has two kinds of fundamental particles called rishons (ראשונים) (which means "First" in Hebrew). They are T ("Third", since it has an electric charge of ⅓ e, or Tohu (תוהו), which means "Chaos") and V ("Vanishes", since it is electrically neutral, or Vohu, which means "void"). All leptons and all flavours of quarks are three-rishon ordered triplets. These groups of three rishons have spin-½. The Rishon model illustrates some of the typical efforts in the field. Many of the preon models theorize that the apparent imbalance of matter and antimatter in the universe is in fact illusory, with large quantities of preon-level antimatter confined within more complex structures. Criticisms The mass paradox One preon model started as an internal paper at the Collider Detector at Fermilab (CDF) around 1994. The paper was written after an unexpected and inexplicable excess of jets with energies above 200 GeV was detected in the 1992–1993 running period. However, scattering experiments have shown that quarks and leptons are "point like" down to distance scales of less than 10⁻¹⁸ m (or 1/1000 of a proton diameter). The momentum uncertainty of a preon (of whatever mass) confined to a box of this size is about 200 GeV/c, which is 50,000 times larger than the (model-dependent) rest mass of an up-quark and 400,000 times larger than the rest mass of an electron. Heisenberg's uncertainty principle states that Δx·Δp ≥ ħ/2, and thus anything confined to a box smaller than Δx would have a momentum uncertainty proportionally greater. Thus, the preon model proposes constituents smaller than the elementary particles they make up, whose momentum uncertainty (and hence mass-energy) would be far greater than that of the particles themselves. So the preon model represents a mass paradox: How could quarks or electrons be made of smaller particles that would have many orders of magnitude greater mass-energies arising from their enormous momenta? One way of resolving this paradox is to postulate a large binding force between preons that cancels their mass-energies. Conflicts with observed physics Preon models propose additional unobserved forces or dynamics to account for the observed properties of elementary particles, which may have implications in conflict with observation. For example, now that the LHC's observation of a Higgs boson is confirmed, the observation contradicts the predictions of many preon models that excluded it. Preon theories require quarks and leptons to have a finite size. It is possible that the Large Hadron Collider will observe this after it is upgraded to higher energies. In popular culture In the 1948 reprint/edit of his 1930 novel Skylark Three, E. E. Smith postulated a series of 'subelectrons of the first and second type', with the latter being fundamental particles that were associated with the gravitational force. While this may not have been an element of the original novel (the scientific basis of some of the other novels in the series was revised extensively due to the additional eighteen years of scientific development), even the edited publication may be the first, or one of the first, mentions of the possibility that electrons are not fundamental particles. 
In the novelized version of the 1982 motion picture Star Trek II: The Wrath of Khan, written by Vonda McIntyre, two of Dr. Carol Marcus' Genesis project team, Vance Madison and Delwyn March, have studied sub-elementary particles they've named "boojums" and "snarks", in a field they jokingly call "kindergarten physics" because it is lower than "elementary" (an analogy to school levels). James P. Hogan's 1982 novel Voyage from Yesteryear discussed preons (called tweedles), the physics of which became central to the plot. In the 2018 VR video game Blade and Sorcery, a preon star is revealed to be the energy source that powers magic in the game's world, and its 10,000-year close orbital pass is a key driver of the plot. In the 2020 video game Risk of Rain 2, an item in the game is called the 'preon accumulator', though it is a reference in name only. See also References Further reading — an editorial about preons Hypothetical elementary particles
Preon
[ "Physics" ]
3,157
[ "Hypothetical elementary particles", "Unsolved problems in physics", "Physics beyond the Standard Model" ]
1,430,855
https://en.wikipedia.org/wiki/Phage%20display
Phage display is a laboratory technique for the study of protein–protein, protein–peptide, and protein–DNA interactions that uses bacteriophages (viruses that infect bacteria) to connect proteins with the genetic information that encodes them. In this technique, a gene encoding a protein of interest is inserted into a phage coat protein gene, causing the phage to "display" the protein on its outside while containing the gene for the protein on its inside, resulting in a connection between genotype and phenotype. The proteins that the phages are displaying can then be screened against other proteins, peptides or DNA sequences, in order to detect interaction between the displayed protein and those other molecules. In this way, large libraries of proteins can be screened and amplified in a process called in vitro selection, which is analogous to natural selection. The most common bacteriophages used in phage display are M13 and fd filamentous phage, though T4, T7, and λ phage have also been used. History Phage display was first described by George P. Smith in 1985, when he demonstrated the display of peptides on filamentous phage (long, thin viruses that infect bacteria) by fusing the virus's capsid protein to one peptide out of a collection of peptide sequences. This displayed the different peptides on the outer surfaces of the collection of viral clones, where the screening step of the process isolated the peptides with the highest binding affinity. In 1988, Stephen Parmley and George Smith described biopanning for affinity selection and demonstrated that recursive rounds of selection could enrich for clones present at 1 in a billion or less. In 1990, Jamie Scott and George Smith described the creation of large random peptide libraries displayed on filamentous phage. Phage display technology was further developed and improved by groups at the Laboratory of Molecular Biology with Greg Winter and John McCafferty, The Scripps Research Institute with Richard Lerner and Carlos Barbas, and the German Cancer Research Center with Frank Breitling and Stefan Dübel for display of proteins such as antibodies for therapeutic protein engineering. Smith and Winter were awarded a half share of the 2018 Nobel Prize in Chemistry for their contribution to developing phage display. A patent by George Pieczenik claiming priority from 1985 also describes the generation of peptide libraries. Principle Like the two-hybrid system, phage display is used for the high-throughput screening of protein interactions. In the case of M13 filamentous phage display, the DNA encoding the protein or peptide of interest is ligated into the pIII or pVIII gene, encoding either the minor or major coat protein, respectively. Multiple cloning sites are sometimes used to ensure that the fragments are inserted in all three possible reading frames so that the cDNA fragment is translated in the proper frame. The phage gene and insert DNA hybrid is then inserted (a process known as "transduction") into E. coli bacterial cells such as TG1, SS320, ER2738, or XL1-Blue E. coli. If a "phagemid" vector is used (a simplified display construct vector), phage particles will not be released from the E. coli cells until they are infected with helper phage, which enables packaging of the phage DNA and assembly of the mature virions with the relevant protein fragment as part of their outer coat on either the minor (pIII) or major (pVIII) coat protein. 
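To give a feel for the enrichment arithmetic behind the panning cycles introduced above (and the "1 in a billion" figure from the history section), the sketch below iterates a fixed per-round enrichment factor. The ~1000-fold factor is an assumed order of magnitude used only for illustration; real values vary widely with washing stringency, target density, and background binding. The panning procedure itself is described in detail just after this sketch.

```python
def panning(freq0=1e-9, enrichment=1e3, rounds=4):
    """Toy model of biopanning: each selection round multiplies the
    relative abundance of a binding clone by a fixed enrichment
    factor (assumed ~1000x here, purely illustrative)."""
    freq = freq0
    for r in range(1, rounds + 1):
        freq = min(1.0, freq * enrichment)  # abundance saturates at 1
        print(f"round {r}: binder fraction ~ {freq:.3g}")

panning()  # a 1-in-a-billion clone dominates after ~3 rounds
```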
By immobilizing a relevant DNA or protein target(s) to the surface of a microtiter plate well, a phage displaying a protein that binds to one of those targets will remain attached while others are removed by washing. Those that remain can be eluted and used to produce more phage (by bacterial infection with helper phage), yielding a phage mixture that is enriched with relevant (i.e. binding) phage. The repeated cycling of these steps is referred to as 'panning', in reference to the enrichment of a sample of gold by removing undesirable materials. Phage eluted in the final step can be used to infect a suitable bacterial host, from which the phagemids can be collected and the relevant DNA sequence excised and sequenced to identify the relevant, interacting proteins or protein fragments. The use of a helper phage can be eliminated by using 'bacterial packaging cell line' technology. Elution can be done by combining a low-pH elution buffer with sonication, which, in addition to loosening the peptide-target interaction, also serves to detach the target molecule from the immobilization surface. This ultrasound-based method enables single-step selection of a high-affinity peptide.
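The round-by-round enrichment that panning produces can be illustrated with a toy calculation. The following sketch is purely illustrative (the retention probabilities are hypothetical values, not measured ones); it only shows how a clone present at 1 in a billion, as in Parmley and Smith's experiments, can come to dominate a library within a few rounds when binders are retained preferentially:

    # Toy model of biopanning enrichment. All numbers are hypothetical,
    # chosen only to illustrate the selection arithmetic.
    def pan(frac_binders, retain_binder=0.1, retain_nonbinder=1e-4):
        """Binder fraction after one wash/elute/amplify cycle.

        Amplification by reinfection is assumed to preserve the ratio of
        binders to non-binders, so only the differential retention during
        washing matters here.
        """
        kept_binders = frac_binders * retain_binder
        kept_nonbinders = (1.0 - frac_binders) * retain_nonbinder
        return kept_binders / (kept_binders + kept_nonbinders)

    frac = 1e-9  # one binding clone per billion phage
    for round_no in range(1, 5):
        frac = pan(frac)
        print(f"round {round_no}: binder fraction ~ {frac:.2e}")
    # Climbs roughly 1e-6 -> 1e-3 -> 0.5 -> ~1 over four rounds.

Real experiments differ in library size, affinity spread and amplification bias, but the geometry of the enrichment is the same: each round multiplies the binder-to-background ratio by the ratio of the two retention probabilities.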
Applications Applications of phage display technology include determination of the interaction partners of a protein (which would be used as the immobilised phage "bait" with a DNA library consisting of all coding sequences of a cell, tissue or organism) so that the function or the mechanism of the function of that protein may be determined. Phage display is also a widely used method for in vitro protein evolution (also called protein engineering). As such, phage display is a useful tool in drug discovery. It is used for finding new ligands (enzyme inhibitors, receptor agonists and antagonists) to target proteins. The technique is also used to determine tumour antigens (for use in diagnosis and therapeutic targeting) and in searching for protein–DNA interactions using specially constructed DNA libraries with randomised segments. Recently, phage display has also been used in the context of cancer treatments, such as the adoptive cell transfer approach. In these cases, phage display is used to create and select synthetic antibodies that target tumour surface proteins. These are made into synthetic receptors for T cells collected from the patient, which are then used to combat the disease. Competing methods for in vitro protein evolution include yeast display, bacterial display, ribosome display, and mRNA display. Antibody maturation in vitro The invention of antibody phage display revolutionised antibody drug discovery. Initial work was done by laboratories at the MRC Laboratory of Molecular Biology (Greg Winter and John McCafferty), the Scripps Research Institute (Richard Lerner and Carlos F. Barbas) and the German Cancer Research Centre (Frank Breitling and Stefan Dübel). In 1991, the Scripps group reported the first display and selection of human antibodies on phage. This initial study described the rapid isolation of a human antibody Fab that bound tetanus toxin, and the method was then extended to rapidly clone human anti-HIV-1 antibodies for vaccine design and therapy. Phage display of antibody libraries has become a powerful method both for studying the immune response and for rapidly selecting and evolving human antibodies for therapy. Antibody phage display was later used by Carlos F. Barbas at The Scripps Research Institute to create synthetic human antibody libraries, a principle first patented in 1990 by Breitling and coworkers (Patent CA 2035384), thereby allowing human antibodies to be created in vitro from synthetic diversity elements. Antibody libraries displaying millions of different antibodies on phage are often used in the pharmaceutical industry to isolate highly specific therapeutic antibody leads, for development into antibody drugs primarily as anti-cancer or anti-inflammatory therapeutics. One of the most successful was adalimumab, discovered by Cambridge Antibody Technology as D2E7 and developed and marketed by Abbott Laboratories. Adalimumab, an antibody to TNF alpha, was the world's first fully human antibody to achieve annual sales exceeding $1bn. General protocol Below is the sequence of events that are followed in phage display screening to identify polypeptides that bind with high affinity to a desired target protein or DNA sequence: (1) Target proteins or DNA sequences are immobilized to the wells of a microtiter plate. (2) Many genetic sequences are expressed in a bacteriophage library in the form of fusions with the bacteriophage coat protein, so that they are displayed on the surface of the viral particle; the protein displayed corresponds to the genetic sequence within the phage. (3) This phage-display library is added to the dish and, after allowing the phage time to bind, the dish is washed. (4) Phage displaying proteins that interact with the target molecules remain attached to the dish, while all others are washed away. (5) Attached phage may be eluted and used to create more phage by infection of suitable bacterial hosts; the new phage constitute an enriched mixture, containing considerably fewer irrelevant (i.e. non-binding) phage than were present in the initial mixture. (6) Steps 3 to 5 are optionally repeated one or more times, further enriching the phage library in binding proteins. (7) Following further bacterial-based amplification, the DNA within the interacting phage is sequenced to identify the interacting proteins or protein fragments. Selection of the coat protein Filamentous phages pIII pIII is the protein that determines the infectivity of the virion. pIII is composed of three domains (N1, N2 and CT) connected by glycine-rich linkers. The N2 domain binds to the F pilus during virion infection, freeing the N1 domain, which then interacts with a TolA protein on the surface of the bacterium. Insertions within this protein are usually added at position 249 (within a linker region between CT and N2), at position 198 (within the N2 domain) and at the N-terminus (inserted between the N-terminal secretion sequence and the N-terminus of pIII). However, when using the BamHI site located at position 198, one must be careful of the unpaired cysteine residue (C201), which could cause problems during phage display if one is using a non-truncated version of pIII. An advantage of using pIII rather than pVIII is that pIII allows for monovalent display when a phagemid (a plasmid derived from Ff phages) is combined with a helper phage. Moreover, pIII allows for the insertion of larger protein sequences (>100 amino acids) and is more tolerant of such insertions than pVIII. However, using pIII as the fusion partner can lead to a decrease in phage infectivity, leading to problems such as selection bias caused by differences in phage growth rate or, even worse, the phage's inability to infect its host.
Loss of phage infectivity can be avoided by using a phagemid plasmid and a helper phage so that the resultant phage contains both wild type and fusion pIII. cDNA has also been analyzed using pIII via a system of two complementary leucine zippers (Direct Interaction Rescue), or by adding an 8–10 amino acid linker between the cDNA and pIII at the C-terminus. pVIII pVIII is the main coat protein of Ff phages. Peptides are usually fused to the N-terminus of pVIII. Peptides that can be fused to pVIII are usually 6–8 amino acids long. The size restriction seems to have less to do with structural impediment caused by the added section and more to do with the size exclusion caused by pIV during coat protein export. Since there are around 2700 copies of the protein on a typical phage, it is more likely that the protein of interest will be expressed polyvalently even if a phagemid is used. This makes the use of this protein unfavorable for the discovery of high affinity binding partners. To overcome the size problem of pVIII, artificial coat proteins have been designed. An example is Weiss and Sidhu's inverted artificial coat protein (ACP), which allows the display of large proteins at the C-terminus. The ACP could display a 20 kDa protein, though only at low levels (and mostly only monovalently). pVI pVI has been widely used for the display of cDNA libraries. The display of cDNA libraries via phage display is an attractive alternative to the yeast-2-hybrid method for the discovery of interacting proteins and peptides due to its high throughput capability. pVI has been used in preference to pVIII and pIII for the expression of cDNA libraries because one can add the protein of interest to the C-terminus of pVI without greatly affecting pVI's role in phage assembly. This means that the stop codon in the cDNA is no longer an issue. However, phage display of cDNA is always limited by the inability of most prokaryotes to produce the post-translational modifications present in eukaryotic cells, or by the misfolding of multi-domain proteins. While pVI has been useful for the analysis of cDNA libraries, pIII and pVIII remain the most utilized coat proteins for phage display. pVII and pIX In an experiment in 1995, display of glutathione S-transferase was attempted on both pVII and pIX and failed. However, phage display of this protein was completed successfully after the addition of a periplasmic signal sequence (pelB or ompA) at the N-terminus. In a recent study, it has been shown that AviTag, FLAG and His could be displayed on pVII without the need for a signal sequence. Single-chain Fvs (scFv) and single-chain T cell receptors (scTCR) were then expressed, both with and without the signal sequence. PelB (an amino acid signal sequence that targets the protein to the periplasm, where a signal peptidase then cleaves off PelB) improved the phage display level when compared to pVII and pIX fusions without the signal sequence. However, this led to the incorporation of more helper phage genomes rather than phagemid genomes. In all cases, phage display levels were lower than with pIII fusions. However, lower display might be more favorable for the selection of binders, since lower display is closer to true monovalent display. In five out of six occasions, pVII and pIX fusions without pelB were more efficient than pIII fusions in affinity selection assays. The paper even goes on to state that pVII and pIX display platforms may outperform pIII in the long run.
The use of pVII and pIX instead of pIII might also be an advantage because virion rescue may be undertaken without breaking the virion-antigen bond if the pIII used is wild type. Instead, one can cleave a section between the bead and the antigen to elute. Since the pIII is intact, it does not matter whether the antigen remains bound to the phage. T7 phages The issue with using Ff phages for phage display is that they require the protein of interest to be translocated across the bacterial inner membrane before it is assembled into the phage. Some proteins cannot undergo this process and therefore cannot be displayed on the surface of Ff phages. In these cases, T7 phage display is used instead. In T7 phage display, the protein to be displayed is attached to the C-terminus of the gene 10 capsid protein of T7. The disadvantage of using T7 is that the size of the protein that can be expressed on the surface is limited to shorter peptides, because large changes to the T7 genome cannot be accommodated as they are in M13, where the phage simply makes its coat longer to fit the larger genome within it. However, it can be useful for the production of a large protein library for scFv selection, where the scFv is expressed on an M13 phage and the antigens are expressed on the surface of the T7 phage. Bioinformatics resources and tools Databases and computational tools for mimotopes have been an important part of phage display study. Databases, programs and web servers have been widely used to exclude target-unrelated peptides, characterize small molecule–protein interactions and map protein–protein interactions. Users can combine the three-dimensional structure of a protein with the peptides selected from a phage display experiment to map conformational epitopes. Several fast and efficient computational methods are available online. See also Directed evolution protein–protein interactions PelB leader sequence Competing techniques: Two-hybrid system mRNA display Ribosome display Yeast display References Further reading Selection Versus Design in Chemical Engineering The ETH-2 human antibody phage library Molecular biology Bacteriophages Microbiology Protein–protein interaction assays Display techniques
Phage display
[ "Chemistry", "Biology" ]
3,442
[ "Biochemistry methods", "Protein–protein interaction assays", "Microbiology", "Microscopy", "Biochemistry", "Display techniques", "Molecular biology" ]
1,431,342
https://en.wikipedia.org/wiki/Potential%20theory
In mathematics and mathematical physics, potential theory is the study of harmonic functions. The term "potential theory" was coined in 19th-century physics when it was realized that two fundamental forces of nature known at the time, namely gravity and the electrostatic force, could be modeled using functions called the gravitational potential and electrostatic potential, both of which satisfy Poisson's equation—or in the vacuum, Laplace's equation. There is considerable overlap between potential theory and the theory of Poisson's equation to the extent that it is impossible to draw a distinction between these two fields. The difference is more one of emphasis than subject matter and rests on the following distinction: potential theory focuses on the properties of the functions as opposed to the properties of the equation. For example, a result about the singularities of harmonic functions would be said to belong to potential theory whilst a result on how the solution depends on the boundary data would be said to belong to the theory of Poisson's equation. This is not a hard and fast distinction, and in practice there is considerable overlap between the two fields, with methods and results from one being used in the other. Modern potential theory is also intimately connected with probability and the theory of Markov chains. In the continuous case, this is closely related to analytic theory. In the finite state space case, this connection can be made by introducing an electrical network on the state space, with resistance between points inversely proportional to transition probabilities and densities proportional to potentials. Even in the finite case, the analogue I − K of the Laplacian in potential theory has its own maximum principle, uniqueness principle, balance principle, and others. Symmetry A useful starting point and organizing principle in the study of harmonic functions is a consideration of the symmetries of the Laplace equation. Although it is not a symmetry in the usual sense of the term, we can start with the observation that the Laplace equation is linear. This means that the fundamental object of study in potential theory is a linear space of functions. This observation will prove especially important when we consider function space approaches to the subject in a later section. As for symmetry in the usual sense of the term, we may start with the theorem that the symmetries of the n-dimensional Laplace equation are exactly the conformal symmetries of n-dimensional Euclidean space. This fact has several implications. First of all, one can consider harmonic functions which transform under irreducible representations of the conformal group or of its subgroups (such as the group of rotations or translations). Proceeding in this fashion, one systematically obtains the solutions of the Laplace equation which arise from separation of variables such as spherical harmonic solutions and Fourier series. By taking linear superpositions of these solutions, one can produce large classes of harmonic functions which can be shown to be dense in the space of all harmonic functions under suitable topologies. Second, one can use conformal symmetry to understand such classical tricks and techniques for generating harmonic functions as the Kelvin transform and the method of images. Third, one can use conformal transforms to map harmonic functions in one domain to harmonic functions in another domain.
The most common instance of such a construction is to relate harmonic functions on a disk to harmonic functions on a half-plane. Fourth, one can use conformal symmetry to extend harmonic functions to harmonic functions on conformally flat Riemannian manifolds. Perhaps the simplest such extension is to consider a harmonic function defined on the whole of Rn (with the possible exception of a discrete set of singular points) as a harmonic function on the n-dimensional sphere. More complicated situations can also happen. For instance, one can obtain a higher-dimensional analog of Riemann surface theory by expressing a multi-valued harmonic function as a single-valued function on a branched cover of Rn or one can regard harmonic functions which are invariant under a discrete subgroup of the conformal group as functions on a multiply connected manifold or orbifold. Two dimensions From the fact that the group of conformal transforms is infinite-dimensional in two dimensions and finite-dimensional for more than two dimensions, one can surmise that potential theory in two dimensions is different from potential theory in other dimensions. This is correct and, in fact, when one realizes that any two-dimensional harmonic function is the real part of a complex analytic function, one sees that the subject of two-dimensional potential theory is substantially the same as that of complex analysis. For this reason, when speaking of potential theory, one focuses attention on theorems which hold in three or more dimensions. In this connection, a surprising fact is that many results and concepts originally discovered in complex analysis (such as Schwarz's theorem, Morera's theorem, the Weierstrass-Casorati theorem, Laurent series, and the classification of singularities as removable, poles and essential singularities) generalize to results on harmonic functions in any dimension. By considering which theorems of complex analysis are special cases of theorems of potential theory in any dimension, one can obtain a feel for exactly what is special about complex analysis in two dimensions and what is simply the two-dimensional instance of more general results. Local behavior An important topic in potential theory is the study of the local behavior of harmonic functions. Perhaps the most fundamental theorem about local behavior is the regularity theorem for Laplace's equation, which states that harmonic functions are analytic. There are results which describe the local structure of level sets of harmonic functions. There is Bôcher's theorem, which characterizes the behavior of isolated singularities of positive harmonic functions. As alluded to in the last section, one can classify the isolated singularities of harmonic functions as removable singularities, poles, and essential singularities. Inequalities A fruitful approach to the study of harmonic functions is the consideration of inequalities they satisfy. Perhaps the most basic such inequality, from which most other inequalities may be derived, is the maximum principle. Another important result is Liouville's theorem, which states the only bounded harmonic functions defined on the whole of Rn are, in fact, constant functions. In addition to these basic inequalities, one has Harnack's inequality, which states that positive harmonic functions on bounded domains are roughly constant. One important use of these inequalities is to prove convergence of families of harmonic functions or sub-harmonic functions, see Harnack's theorem.
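To make the inequality quantitative, one common form (a standard statement reproduced here for illustration, not drawn from this article's references) reads: if u is harmonic and positive on the ball B(x_0, R) in Rn, then for |x − x_0| = r < R,

\[
\frac{R^{\,n-2}(R-r)}{(R+r)^{\,n-1}}\, u(x_0) \;\le\; u(x) \;\le\; \frac{R^{\,n-2}(R+r)}{(R-r)^{\,n-1}}\, u(x_0).
\]

Letting R tend to infinity for a function that is positive and harmonic on all of Rn squeezes both bounds to u(x_0), which is one route to Liouville-type statements.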
These convergence theorems are used to prove the existence of harmonic functions with particular properties. Spaces of harmonic functions Since the Laplace equation is linear, the set of harmonic functions defined on a given domain is, in fact, a vector space. By defining suitable norms and/or inner products, one can exhibit sets of harmonic functions which form Hilbert or Banach spaces. In this fashion, one obtains such spaces as the Hardy space, Bloch space, Bergman space and Sobolev space. References S. Axler, P. Bourdon, W. Ramey (2001). Harmonic Function Theory (2nd edition). Springer-Verlag. O. D. Kellogg (1969). Foundations of Potential Theory. Dover Publications. L. L. Helms (1975). Introduction to Potential Theory. R. E. Krieger. J. L. Doob. Classical Potential Theory and Its Probabilistic Counterpart. Springer-Verlag, Berlin Heidelberg New York. Partial differential equations Mathematical physics
Potential theory
[ "Physics", "Mathematics" ]
1,521
[ "Functions and mappings", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Potential theory", "Mathematical relations", "Mathematical physics" ]
29,021,055
https://en.wikipedia.org/wiki/Plasma%20medicine
Plasma medicine is an emerging field that combines plasma physics, life sciences and clinical medicine. It is being studied in disinfection, healing, cancer, and surgery. Most of the research is in vitro and in animal models. It uses ionized gas (physical plasma) for medical or dental applications. Plasma, often called the fourth state of matter, is an ionized gas containing positive ions and negative ions or electrons, but is approximately charge neutral on the whole. The plasma sources used for plasma medicine are generally low temperature plasmas, and they generate ions, chemically reactive atoms and molecules, and UV-photons. These plasma-generated active species are useful for several bio-medical applications such as sterilization of implants and surgical instruments as well as modifying biomaterial surface properties. Sensitive applications of plasma, like subjecting the human body or internal organs to plasma treatment for medical purposes, are also possible. This possibility is being heavily investigated by research groups worldwide in the highly interdisciplinary research field called "plasma medicine". Plasma sources Plasma sources used in plasma medicine are typically "low temperature" plasma sources operated at atmospheric pressure. In this context, low temperature refers to temperatures similar to room temperature, usually slightly above. There is a strict upper limit of 50 °C when treating tissue to avoid burns. The plasmas are only partially ionized, with less than 1 ppm of the gas being charged species, and the rest composed of neutral gas. Dielectric-barrier discharges Dielectric-barrier discharges are a type of plasma source that limits the current using a dielectric that covers one or both electrodes. The DBD was the plasma source used in the mid-1990s in the early groundbreaking work on the biomedical applications of cold plasma. A conventional DBD device comprises two planar electrodes, at least one of them covered with a dielectric material, separated by a small gap called the discharge gap. Research has demonstrated that modifying the configuration of the embedded electrode and altering its distribution within the dielectric medium can significantly affect the performance of dielectric barrier discharge (DBD) plasma actuators, allowing their characteristics to be tuned and optimised. DBDs are usually driven by high AC voltages with frequencies in the kHz range. In order to use DC and 50/60 Hz power sources, investigators developed the Resistive Barrier Discharge (RBD). However, for medical application of DBD devices, the human body itself can serve as one of the two electrodes, making it sufficient to devise plasma sources that consist of only one electrode covered with a dielectric such as alumina or quartz. DBDs for medical applications, such as the inactivation of bacteria, the treatment of skin diseases and wounds, tumor treatment and disinfection of the skin surface, are currently under investigation. The treatment usually takes place in room air. Such devices are generally powered by biases of several kilovolts using either AC or pulsed power supplies.
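For a rough sense of scale of the figures above (near-room-temperature operation, less than 1 ppm ionization), the short sketch below applies the ideal gas law; the temperature and pressure values are illustrative assumptions, not measurements from any particular device:

    # Back-of-the-envelope density of charged species at 1 ppm ionization.
    # Assumptions (illustrative): ideal gas at T = 300 K and one standard
    # atmosphere of pressure.
    K_B = 1.380649e-23   # Boltzmann constant, J/K
    P = 101325.0         # pressure, Pa
    T = 300.0            # temperature, K

    n_total = P / (K_B * T)        # total gas number density, ~2.4e25 m^-3
    n_charged = 1e-6 * n_total     # upper bound at 1 ppm ionization

    print(f"total density  : {n_total:.2e} m^-3")
    print(f"charged density: {n_charged:.2e} m^-3 (at most)")

Even at "only" 1 ppm, this leaves on the order of 10^19 charged particles per cubic metre available to drive the plasma chemistry.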
Atmospheric pressure plasma jets Atmospheric pressure plasma jets (APPJs) are a collection of plasma sources that use a gas flow to deliver the reactive species generated in the plasma to the tissue or sample. The gas used is usually helium or argon, sometimes with a small amount (< 5%) of O2, H2O or N2 mixed in to increase the production of chemically reactive atoms and molecules. The use of a noble gas keeps temperatures low, and makes it simpler to produce a stable discharge. The gas flow also serves to generate a region where room air is in contact with and diffusing into the noble gas, which is where much of the reactive species are produced. There is a large variety in jet designs used in experiments. Many APPJs use a dielectric to limit current, just like in a DBD, but not all do. Those that use a dielectric to limit current usually consist of a tube made of quartz or alumina, with a high voltage electrode wrapped around the outside. There can also be a grounded electrode wrapped around the outside of the dielectric tube. Designs that do not use a dielectric to limit the current use a high voltage pin electrode at the center of the quartz tube. These devices all generate ionization waves that begin inside the jet and propagate out to mix with the ambient air. Even though the plasma may look continuous, it is actually a series of ionization waves or "plasma bullets". This ionization wave may or may not reach the tissue being treated. Direct contact of the plasma with the tissue or sample can result in dramatically larger amounts of reactive species, charged species, and photons being delivered to the sample. One type of design that does not use a dielectric to limit the current is two planar electrodes with a gas flow running between them. In this case, the plasma does not exit the jet, and only the neutral atoms and molecules and photons reach the sample. Most devices of this type produce thin (mm diameter) plasma jets; significantly larger surfaces can be treated simultaneously by joining many such jets or by using multielectrode systems. Further, the distance between the device and the skin is to a certain degree variable, as the skin is not needed as a plasma electrode, significantly simplifying use on the patient. Low temperature plasma jets have been used in various biomedical applications ranging from the inactivation of bacteria to the killing of cancer cells. Applications Plasma medicine can be subdivided into five main fields: (1) non-thermal atmospheric-pressure direct plasma for medical therapy; (2) plasma-assisted modification of bio-relevant surfaces; (3) plasma-based bio-decontamination and sterilization; (4) plasma-assisted modification of biomolecules, e.g., proteins, carbohydrates, lipids, and amino acids; and (5) plasma-assisted prodrug activation. Non-thermal atmospheric-pressure plasma One of the challenges is the application of non-thermal plasmas directly to the surface of the human body or to internal organs. Whereas for surface modification and biological decontamination both low-pressure and atmospheric pressure plasmas can be used, for direct therapeutic applications only atmospheric pressure plasma sources are applicable. The high reactivity of plasma is a result of different plasma components: electromagnetic radiation (UV/VUV, visible light, IR, high-frequency electromagnetic fields, etc.) on the one hand and ions, electrons and reactive chemical species, primarily radicals, on the other. Besides surgical plasma application like argon plasma coagulation (APC), which is based on high-intensity lethal plasma effects, first and sporadic non-thermal therapeutic plasma applications are documented in the literature.
However, the basic understanding of the mechanisms of plasma effects on different components of living systems is still at an early stage. Especially for the field of direct therapeutic plasma application, a fundamental knowledge of the mechanisms of plasma interaction with living cells and tissue is essential as a scientific basis. Plasma Dermatology The skin offers a convenient target for plasma applications, which partly explains the recent boom in plasma dermatology. The first successes were achieved by German scientists using plasma treatment to heal chronic ulcers. These studies resulted in the development of plasma devices now in clinical use in the European Union. In the United States, a collaborative group of academic scientists of the Nyheim Plasma Institute of Drexel University and dermatologist-researcher Dr. Peter C. Friedman pioneered the use of plasma to treat precancerous (actinic) keratosis and warts. The same team was able to show promising results in the treatment of hair loss (androgenetic alopecia) with a modified protocol, called indirect plasma treatment. Successful plasma treatment of actinic keratosis was repeated by a different group in Germany using a different type of plasma device, further demonstrating the value of this technology even when compared to established treatment methods such as topical diclofenac. There are ongoing clinical trials in dermatology for acne, rosacea, hair loss, and other conditions. The understanding gained from studying plasma treatment of skin diseases may also help to develop new plasma medicine strategies to treat internal organs. Cold plasma is used to treat chronic wounds. Preliminary results indicate that cold plasma therapy can be more effective than the gold standard. Mechanisms Though many positive results have been seen in the experiments, it is not clear what the dominant mechanism of action is for any application in plasma medicine. The plasma treatment generates reactive oxygen and nitrogen species, which include free radicals. These species include O, O3, OH, H2O2, HO2, NO, ONOOH and many others. This increases the oxidative stress on cells, which may explain the selective killing of cancer cells, which are already oxidatively stressed. Additionally, prokaryotic cells may be more sensitive to the oxidative stress than eukaryotic cells, allowing for selective killing of bacteria. From studies on electroporation, it is known that electric fields can influence cell membranes. Electric fields on the cells being treated by a plasma jet can be high enough to produce electroporation, which may directly influence the cell behavior, or may simply allow more reactive species to enter the cell. Both physical and chemical properties of plasma are known to induce uptake of nanomaterials in cells. For example, the uptake of 20 nm gold nanoparticles can be stimulated in cancer cells using non-lethal doses of cold plasma. Uptake mechanisms involve both energy-dependent endocytosis and energy-independent transport across cell membranes. The primary route for accelerated endocytosis of nanoparticles after exposure to cold plasma is a clathrin-dependent membrane repair pathway caused by lipid peroxidation and cell membrane damage. Evidence for a role of the immune system in plasma medicine has recently become quite compelling. It is possible that the reactive species introduced by a plasma recruit a systemic immune response. References medicine
Plasma medicine
[ "Physics" ]
2,050
[ "Plasma technology and applications", "Plasma physics" ]
5,212,064
https://en.wikipedia.org/wiki/Vacuum%20coffee%20maker
A vacuum coffee maker brews coffee using two chambers where vapor pressure and gravity produce coffee. This type of coffee maker is also known as vac pot, siphon or syphon coffee maker, and was invented by Loeff of Berlin in the 1830s. These devices have since been used for more than a century in many parts of the world. The design and composition of vacuum coffee makers vary. The chamber material is borosilicate glass, metal, or plastic, and the filter can be either a glass rod or a screen made of metal, cloth, paper, or nylon. The Napier Vacuum Machine by James Robert Napier, presented in 1840, was an early example of this technique. While vacuum coffee makers generally were excessively complex for everyday use, they were prized for producing a clear brew, and were quite popular until the middle of the twentieth century. Vacuum coffee makers remain popular in some parts of Asia, including Japan and Taiwan. The Bauhaus interpretation of this device can be seen in Gerhard Marcks' coffee maker of 1925. Workings A vacuum coffee maker operates as a siphon, where heating and cooling the lower vessel changes the vapor pressure of water in the lower vessel, first pushing the water up into the upper vessel, then allowing the water to fall back down into the lower vessel. Specifically, once the water in the lower chamber is hot enough that its vapor pressure (the pressure exerted by the vapor component of a liquid) exceeds the pressure of a standard atmosphere, some of it begins to boil, turning into water vapor. Since the density of water vapor is about 1/2000 that of liquid water, the mixture of the air and water vapor in the lower chamber quickly expands, and, when the new pressure exceeds atmospheric pressure, pushes the remaining water up the siphon tube into the upper chamber, where it remains so long as the pressure difference between the upper and lower chambers is sufficient to support it (about 1.5 kPa or 0.015 atm). This pressure difference is maintained during brewing through the continuous heating of the lower chamber. Coffee grounds are added to the water in the upper chamber. When the coffee has finished brewing, the heat is removed and the pressure in the bottom vessel drops, so the combined force of gravity and atmospheric pressure overcomes the pressure of the bottom chamber, causing the brewed coffee to be pulled into the bottom chamber of the vacuum coffee maker, leaving the coffee grounds in the top chamber. The iconic Moka pot coffee maker functions on the same principle, but the water is forced up from the bottom chamber through a third, middle chamber containing the coffee grounds to the top chamber, which has an air gap to prevent the brewed coffee from returning downwards. (Additionally, because the water is forced up through packed grounds, the pressures are greater.) The prepared coffee is then poured off from the top.
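The pressure figure quoted above can be sanity-checked with hydrostatics. The short sketch below is only a back-of-the-envelope check; the water density, gravitational acceleration, and atmosphere values are standard assumed constants rather than values from the text:

    # Hydrostatic check of the ~1.5 kPa figure quoted above.
    RHO_WATER = 1000.0   # density of water, kg/m^3 (assumed)
    G = 9.81             # gravitational acceleration, m/s^2 (assumed)
    ATM = 101325.0       # standard atmosphere, Pa (assumed)

    delta_p = 1.5e3      # Pa, the pressure difference quoted in the text
    height = delta_p / (RHO_WATER * G)   # water column this supports

    print(f"{delta_p / ATM:.3f} atm")        # ~0.015 atm, matching the text
    print(f"{height * 100:.0f} cm of water") # ~15 cm of water column

In other words, the quoted 1.5 kPa is just the hydrostatic head of roughly 15 centimetres of water, consistent with the height the brew must be lifted in a typical two-chamber brewer.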
Balance siphon An early variation of this principle is called a balance siphon. This implementation has the two chambers arranged side by side on a balance-like device, with a counterweight attached to the heated chamber. Once the vapor has forced the hot water out, the counterweight activates a spring-loaded snuffer which smothers the flame and allows the initial chamber to cool down, thus lowering pressure (creating a vacuum) and causing the brewed coffee to seep in. Automated version In 2022, the Japanese Tiger Corporation was working on an automated coffee maker based on the vacuum coffee maker principle, the Siphonysta. The Siphonysta's heating is electrical. The chambers are made of plastic ("resin"). See also Minto wheel Bodum, makers of the Santos, Pebo and ePebo vacuum coffee makers References External links Vac Pot How-To pdf Siphon brewer in operation, Sydney Australia Coffee preparation Vacuum Fluid dynamics Coffeeware
Vacuum coffee maker
[ "Physics", "Chemistry", "Engineering" ]
777
[ "Chemical engineering", "Vacuum", "Piping", "Matter", "Fluid dynamics" ]
2,876,834
https://en.wikipedia.org/wiki/Heath%E2%80%93Jarrow%E2%80%93Morton%20framework
The Heath–Jarrow–Morton (HJM) framework is a general framework to model the evolution of interest rate curves – instantaneous forward rate curves in particular (as opposed to simple forward rates). When the volatility and drift of the instantaneous forward rate are assumed to be deterministic, this is known as the Gaussian Heath–Jarrow–Morton (HJM) model of forward rates. For direct modeling of simple forward rates, the Brace–Gatarek–Musiela model represents an example. The HJM framework originates from the work of David Heath, Robert A. Jarrow, and Andrew Morton in the late 1980s, especially Bond pricing and the term structure of interest rates: a new methodology (1987) – working paper, Cornell University, and Bond pricing and the term structure of interest rates: a new methodology (1989) – working paper (revised ed.), Cornell University. It has its critics, however, with Paul Wilmott describing it as "...actually just a big rug for [mistakes] to be swept under". Framework The key to these techniques is the recognition that the drifts of the no-arbitrage evolution of certain variables can be expressed as functions of their volatilities and the correlations among themselves. In other words, no drift estimation is needed. Models developed according to the HJM framework are different from the so-called short-rate models in the sense that HJM-type models capture the full dynamics of the entire forward rate curve, while the short-rate models only capture the dynamics of a point on the curve (the short rate). However, models developed according to the general HJM framework are often non-Markovian and can even have infinite dimensions. A number of researchers have made great contributions to tackling this problem. They show that if the volatility structure of the forward rates satisfies certain conditions, then an HJM model can be expressed entirely by a finite state Markovian system, making it computationally feasible. Examples include a one-factor, two state model (O. Cheyette, "Term Structure Dynamics and Mortgage Valuation", Journal of Fixed Income, 1, 1992; P. Ritchken and L. Sankarasubramanian in "Volatility Structures of Forward Rates and the Dynamics of Term Structure", Mathematical Finance, 5, No. 1, Jan 1995), and later multi-factor versions. Mathematical formulation The class of models developed by Heath, Jarrow and Morton (1992) is based on modelling the forward rates. The model begins by introducing the instantaneous forward rate f(t,T), t ≤ T, which is defined as the continuous compounding rate available at time T as seen from time t. The relation between bond prices and the forward rate is also provided in the following way:

P(t,T) = \exp\left(-\int_t^T f(t,s)\,ds\right)

Here P(t,T) is the price at time t of a zero-coupon bond paying $1 at maturity T ≥ t. The risk-free money market account is also defined as

\beta(t) = \exp\left(\int_0^t f(u,u)\,du\right)

This last equation lets us define r(t) = f(t,t), the risk-free short rate. The HJM framework assumes that the dynamics of f(t,s) under a risk-neutral pricing measure Q are the following:

df(t,s) = \mu(t,s)\,dt + \sigma(t,s)\,dW_t

where W_t is a d-dimensional Wiener process and \mu(t,s), \sigma(t,s) are adapted processes. Now based on these dynamics for f, we'll attempt to find the dynamics for P(t,s) and find the conditions that need to be satisfied under risk-neutral pricing rules. Let's define the following process:

Y_t := \log P(t,s) = -\int_t^s f(t,u)\,du

The dynamics of Y_t can be obtained through Leibniz's rule:

dY_t = f(t,t)\,dt - \int_t^s df(t,u)\,du = r(t)\,dt - \int_t^s \mu(t,u)\,du\,dt - \left(\int_t^s \sigma(t,u)\,du\right) dW_t

If we define \mu^*(t,s) = \int_t^s \mu(t,u)\,du and \sigma^*(t,s) = \int_t^s \sigma(t,u)\,du, and assume that the conditions for Fubini's theorem are satisfied in the formula for the dynamics of Y_t, we get:

dY_t = \left(r(t) - \mu^*(t,s)\right) dt - \sigma^*(t,s)\,dW_t

By Itō's lemma, the dynamics of P(t,s) = e^{Y_t} are then:

\frac{dP(t,s)}{P(t,s)} = \left(r(t) - \mu^*(t,s) + \tfrac{1}{2}\,\sigma^*(t,s)\,\sigma^*(t,s)^{\mathsf T}\right) dt - \sigma^*(t,s)\,dW_t

But the discounted bond price P(t,s)/\beta(t) must be a martingale under the pricing measure Q, so we require that \mu^*(t,s) = \tfrac{1}{2}\,\sigma^*(t,s)\,\sigma^*(t,s)^{\mathsf T}. Differentiating this with respect to s we get:

\mu(t,s) = \sigma(t,s)\int_t^s \sigma(t,u)^{\mathsf T}\,du

which finally tells us that the dynamics of f must be of the following form:

df(t,s) = \left(\sigma(t,s)\int_t^s \sigma(t,u)^{\mathsf T}\,du\right) dt + \sigma(t,s)\,dW_t

which allows us to price bonds and interest rate derivatives based on our choice of \sigma(t,s).
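To make the drift restriction concrete, the following is a minimal one-factor simulation sketch. It is illustrative only: the exponentially decaying volatility, the flat initial curve, and all numerical values are assumptions chosen for the example, not part of the HJM papers.

    import numpy as np

    # Minimal one-factor HJM Euler scheme on a maturity grid. The key line
    # is the drift: mu(t,T) = sigma(t,T) * integral_t^T sigma(t,s) ds,
    # so no drift is estimated -- it is fixed by the chosen volatility.
    dt = dT = 0.01                       # time step and maturity spacing (years)
    n_steps, n_nodes = 100, 200          # simulate one year, curve out to two
    sigma0, lam = 0.01, 1.0              # assumed vol level and decay rate

    rng = np.random.default_rng(0)
    f = np.full(n_nodes, 0.03)           # flat 3% initial forward curve

    for _ in range(n_steps):
        tau = dT * np.arange(len(f))             # time to maturity per node
        sigma = sigma0 * np.exp(-lam * tau)      # sigma(t, t + tau)
        drift = sigma * np.cumsum(sigma) * dT    # HJM no-arbitrage drift
        dW = np.sqrt(dt) * rng.standard_normal()
        f = f + drift * dt + sigma * dW          # evolve the whole curve
        f = f[1:]                                # shortest maturity rolls off

    print(f"short rate after one year: {f[0]:.4%}")

Because each node carries its own volatility, the scheme evolves the entire forward curve at once, which is exactly the respect in which HJM-type models differ from short-rate models.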
See also Black–Derman–Toy model Brace–Gatarek–Musiela model Chen model Cheyette model Ho–Lee model Hull–White model References Notes Sources Heath, D., Jarrow, R. and Morton, A. (1990). Bond Pricing and the Term Structure of Interest Rates: A Discrete Time Approximation. Journal of Financial and Quantitative Analysis, 25:419-440. Heath, D., Jarrow, R. and Morton, A. (1991). Contingent Claims Valuation with a Random Evolution of Interest Rates. Review of Futures Markets, 9:54-76. Heath, D., Jarrow, R. and Morton, A. (1992). Bond Pricing and the Term Structure of Interest Rates: A New Methodology for Contingent Claims Valuation. Econometrica, 60(1):77-105. Robert Jarrow (2002). Modelling Fixed Income Securities and Interest Rate Options (2nd ed.). Stanford Economics and Finance. Further reading Non-Bushy Trees For Gaussian HJM And Lognormal Forward Models, Prof Alan Brace, University of Technology Sydney The Heath-Jarrow-Morton Term Structure Model, Prof. Don Chance, E. J. Ourso College of Business, Louisiana State University Recombining Trees for One-Dimensional Forward Rate Models, Dariusz Gatarek, Wyższa Szkoła Biznesu – National-Louis University, and Jaroslaw Kolakowski Implementing No-Arbitrage Term Structure of Interest Rate Models in Discrete Time When Interest Rates Are Normally Distributed, Dwight M Grant and Gautam Vora. The Journal of Fixed Income March 1999, Vol. 8, No. 4: pp. 85–98 Heath–Jarrow–Morton model and its application, Vladimir I Pozdynyakov, University of Pennsylvania An Empirical Study of the Convergence Properties of the Non-recombining HJM Forward Rate Tree in Pricing Interest Rate Derivatives, A.R. Radhakrishnan, New York University Modeling Interest Rates with Heath, Jarrow and Morton. Dr Donald van Deventer, Kamakura Corporation: With One Factor and Maturity-Dependent Volatility With One Factor and Rate and Maturity-Dependent Volatility With Two Factors and Rate and Maturity-Dependent Volatility With Three Factors and Rate and Maturity-Dependent Volatility Financial models Mathematical finance Fixed income analysis
Heath–Jarrow–Morton framework
[ "Mathematics" ]
1,286
[ "Applied mathematics", "Mathematical finance" ]
2,877,265
https://en.wikipedia.org/wiki/Stuart%20Parkin
Stuart Stephen Papworth Parkin (born 9 December 1955) is an experimental physicist, Managing Director at the Max Planck Institute of Microstructure Physics in Halle and an Alexander von Humboldt Professor at the Institute of Physics of the Martin-Luther-University Halle-Wittenberg. He is a pioneer in the science and application of spintronic materials, and has made discoveries concerning the behaviour of thin-film magnetic structures that were critical in enabling recent increases in the data density and capacity of computer hard-disk drives. For these discoveries, he was awarded the 2014 Millennium Technology Prize. He is commonly referred to as the "spin doctor". Before his current position, Parkin was an IBM Fellow and manager of the magnetoelectronics group at the IBM Almaden Research Center in San Jose, California. He was also a consulting professor in the department of applied physics at Stanford University and director of the IBM-Stanford Spintronic Science and Applications Center, which was formed in 2004. Education and early life A native of Watford, England, Parkin received his B.A. (1977) and was elected a research fellow (1979) at Trinity College, Cambridge, England, and was awarded his PhD (1980) at the Cavendish Laboratory, also in Cambridge. He joined IBM in 1982 as a World Trade Post-doctoral Fellow, becoming a permanent member of the staff the following year. In 1999 he was named an IBM Fellow, IBM's highest technical honour. Research and career In 2007 Parkin was named a distinguished visiting professor at the National University of Singapore, a visiting chair professor at the National Taiwan University, and an honorary visiting professor at University College London. In 2008, he was elected to the National Academy of Sciences. The Materials Research Network Dresden granted him the Dresden Barkhausen Award in 2009. Parkin has been awarded honorary doctorates by the University of Aachen, Germany, and the Eindhoven University of Technology, The Netherlands. In 1989 Stuart Parkin discovered the phenomenon of oscillatory interlayer coupling in magnetic multilayers, by which magnetic layers are magnetically coupled via an intervening non-magnetic metallic spacer layer. Parkin found that the sign of the exchange coupling oscillates from ferromagnetic to antiferromagnetic with an oscillation period of just a few atomic layers. Remarkably, Parkin discovered this phenomenon in thin film magnetic heterostructures that he prepared in a simple home-made sputtering system. Parkin, moreover, showed that this phenomenon is displayed by almost all metallic transition elements. In what is often referred to as "Parkin's Periodic Table", Parkin showed that the strength of this oscillatory interlayer exchange interaction varied systematically across the Periodic Table of the elements. Parkin made numerous other fundamental discoveries which continued the development of the field of "spintronics", a field in which he is recognised as a prolific scientist. Later, Parkin improved magnetic tunnelling junctions, devices first studied in the 1970s by Julliere and revolutionized in 1995 by Jagadeesh Moodera of MIT; such elements can form the basis of high-performance magnetic random-access memory. Magnetoresistive random-access memory (MRAM) promises unique attributes of high speed, high density and non-volatility. The development by Parkin in 2001 of giant tunnelling magnetoresistance in magnetic tunnel junctions using highly textured MgO tunnel barriers has made MRAM even more promising.
IBM developed the first MRAM prototype in 1999 and is currently developing a 16 Mbit chip. Parkin's research interests include organic superconductors, high-temperature superconductors, and, most recently, magnetic thin film structures and spintronic materials and devices for advanced sensor, memory, and logic applications. Most recently, Parkin has proposed and is working on a novel storage class memory device, the magnetic racetrack memory, which could replace both hard disk drives and many forms of conventional solid state memory. His research interests also include spin transistors and spin-logic devices that may enable a new generation of low-power electronics. Parkin has received two ERC Advanced Grants: the first was awarded in 2014 and focused on spin-orbitronics for electronic technologies ("SORBET"); the second was awarded in 2022, focusing on the interplay between chirality, spin textures and superconductivity at manufactured interfaces ("SUPERMINT"). Parkin has authored over 670 papers and has more than 123 issued patents. Clarivate has named Parkin a "Highly Cited Researcher in the field of Physics" for the years 2018–2022. He is also the chief editor of Spin, one of World Scientific's newest journals, which publishes articles in spin electronics. Awards Parkin is the recipient of numerous honours, including the Gutenberg Research Award (2008), a Humboldt Research Award (2004), the 1999–2000 American Institute of Physics Prize for Industrial Applications of Physics, the European Physical Society's Europhysics Prize (1997), the American Physical Society's International New Materials Prize (1994), the MRS Outstanding Young Investigator Award (1991) and the Charles Vernon Boys Prize from the Institute of Physics, London (1991). In 2001, he was named the first "Innovator of the Year" by R&D Magazine and in October 2007 received the "No Boundaries" Award for Innovation from The Economist. In 2009, Parkin received an award from the International Union of Pure and Applied Physics. In 2012, Parkin was awarded the Von Hippel Award of the Materials Research Society. In April 2014, Parkin was awarded the Millennium Technology Prize for his work on spintronic materials, "leading to a prodigious growth in the capacity to store digital information". In 2021 he received the King Faisal Prize in Science. In 2023, Parkin was named a Clarivate Citation Laureate in Physics, an award given out to scientists considered likely to receive a Nobel Prize in the future. Parkin received the 2024 APS Medal for contributions to spintronics and data storage. Parkin was awarded the Charles Stark Draper Prize in 2024 for his "inventions in the field of spintronics". Memberships Parkin is a Fellow of the Royal Society, the American Physical Society, the Materials Research Society, the Institute of Physics (London), the Institute of Electrical and Electronics Engineers, the American Association for the Advancement of Science, and the Gutenberg Research College (GRC). In 2008, Parkin was elected a member of the National Academy of Sciences. In 2009, he was elected into the National Academy of Engineering for contributions to the development of spin-engineered magnetic heterostructures for magnetic sensors and memory devices. In 2012, he was elected into The World Academy of Sciences. The same year, he received an Honorary fellowship of the Indian Academy of Sciences. In 2015, he became a member of the German Academy of Sciences Leopoldina.
In March 2016, Parkin was elected a Corresponding Fellow of the Royal Society of Edinburgh, Scotland's national academy of science and letters. In 2019, he became a fellow of the Royal Academy of Engineering. References External links Video on Stuart Parkin's research (Latest Thinking) Racetrack memory video IBM Researcher Bio for Stuart Parkin Redefining the Architecture of Memory – New York Times article dated 9/11/2007 1955 births Living people People from Watford British physicists Fellows of the Royal Society IBM Fellows Members of the United States National Academy of Sciences Fellows of the American Physical Society Spintronics Members of the German National Academy of Sciences Leopoldina Max Planck Institute directors
Stuart Parkin
[ "Physics", "Materials_science" ]
1,557
[ "Spintronics", "Condensed matter physics" ]
2,877,563
https://en.wikipedia.org/wiki/Front%20end%20of%20line
The front end of line (FEOL) is the first portion of IC fabrication where the individual components (transistors, capacitors, resistors, etc.) are patterned in a semiconductor substrate. FEOL generally covers everything up to (but not including) the deposition of metal interconnect layers. Steps For the CMOS process, FEOL contains all fabrication steps needed to form isolated CMOS elements: selecting the type of wafer to be used; chemical-mechanical planarization (CMP) and cleaning of the wafer; shallow trench isolation (STI) (or LOCOS in early processes with feature size > 0.25 μm); well formation; gate module formation; and source and drain module formation. Finally, the surface is treated to prepare the contacts for the subsequent metallization. This concludes the FEOL process, that is, all devices have been built. Following these steps, the devices must be connected electrically as per the nets to build the electrical circuit. This is done in the back end of line (BEOL). BEOL is thus the second portion of IC fabrication where the individual devices are connected. See also Back end of line (BEOL) Integrated circuit References Further reading "CMOS: Circuit Design, Layout, and Simulation", Wiley-IEEE, 2010, pages 177-178 (Chapter 7.2 CMOS Process Integration) and pages 180-199 (7.2.1 Frontend-of-the-line integration) "Fundamentals of Layout Design for Electronic Circuits", by Lienig, Scheible, Springer, 2020. Chapter 2: Technology Know-How: From Silicon to Devices, pages 78-82 (2.9.3 FEOL: Creating Devices) Electronics manufacturing Semiconductor device fabrication
Front end of line
[ "Materials_science", "Engineering" ]
362
[ "Semiconductor device fabrication", "Electronic engineering", "Electronics manufacturing", "Microtechnology" ]
2,877,631
https://en.wikipedia.org/wiki/Back%20end%20of%20line
Back end of the line or back end of line (BEOL) is a process in semiconductor device fabrication that consists of depositing metal interconnect layers onto a wafer already patterned with devices. It is the second part of IC fabrication, after front end of line (FEOL). In BEOL, the individual devices (transistors, capacitors, resistors, etc.) are connected to each other according to how the metal wiring is deposited. Metallization The individual devices are connected by alternately stacking oxide layers (for insulation purposes) and metal layers (for the interconnect tracks). The vias between layers and the interconnects on the individual layers are thus formed using a structuring process. Common metals are copper and aluminum. BEOL generally begins when the first layer of metal is deposited on the wafer. BEOL includes contacts, insulating layers (dielectrics), metal levels, and bonding sites for chip-to-package connections. For modern IC processes, more than 10 metal layers can be added in the BEOL. Before 1998, practically all chips used aluminium for the metal interconnection layers, whereas copper is mostly used nowadays. Steps The steps of the BEOL are: (1) silicidation of the source and drain regions and the polysilicon region; (2) adding a dielectric (the first, lower layer is the pre-metal dielectric (PMD), which isolates the metal from silicon and polysilicon) and CMP processing it; (3) making holes in the PMD and forming contacts in them; (4) adding metal layer 1; (5) adding a second dielectric, called the inter-metal dielectric (IMD); (6) making vias through the dielectric to connect the lower metal with the higher metal (vias are filled by a metal CVD process); (7) repeating steps 4–6 to add all metal layers; and (8) adding a final passivation layer to protect the microchip. After BEOL there is a "back-end process" (also called post-fab), which is done not in the cleanroom, often by a different company. It includes wafer test, wafer backgrinding, die separation, die tests, IC packaging and final test. See also Front end of line (FEOL) Integrated circuit Phosphosilicate glass References Further reading "Chapter 2: Technology Know-How: From Silicon to Devices". Fundamentals of Layout Design for Electronic Circuits, by Lienig, Scheible, Springer, 2020. p. 82 (2.9.4 BEOL: Connecting Devices) Electronics manufacturing Semiconductor device fabrication
Back end of line
[ "Materials_science", "Engineering" ]
530
[ "Semiconductor device fabrication", "Electronic engineering", "Electronics manufacturing", "Microtechnology" ]
2,877,844
https://en.wikipedia.org/wiki/Icosahedral%20symmetry
In mathematics, and especially in geometry, an object has icosahedral symmetry if it has the same symmetries as a regular icosahedron. Examples of other polyhedra with icosahedral symmetry include the regular dodecahedron (the dual of the icosahedron) and the rhombic triacontahedron. Every polyhedron with icosahedral symmetry has 60 rotational (or orientation-preserving) symmetries and 60 orientation-reversing symmetries (that combine a rotation and a reflection), for a total symmetry order of 120. The full symmetry group is the Coxeter group of type H3. It may be represented by the Coxeter notation [5,3] and the corresponding Coxeter diagram. The set of rotational symmetries forms a subgroup that is isomorphic to the alternating group A5 on 5 letters. As point group Apart from the two infinite series of prismatic and antiprismatic symmetry, rotational icosahedral symmetry or chiral icosahedral symmetry of chiral objects and full icosahedral symmetry or achiral icosahedral symmetry are the discrete point symmetries (or equivalently, symmetries on the sphere) with the largest symmetry groups. Icosahedral symmetry is not compatible with translational symmetry, so there are no associated crystallographic point groups or space groups. Presentations corresponding to the above are ⟨s,t | s^2 = t^3 = (st)^5 = 1⟩ for the rotational group I, and ⟨r0,r1,r2 | r0^2 = r1^2 = r2^2 = (r0 r1)^5 = (r1 r2)^3 = (r0 r2)^2 = 1⟩ for the full group Ih. These correspond to the icosahedral groups (rotational and full) being the (2,3,5) triangle groups. The first presentation was given by William Rowan Hamilton in 1856, in his paper on icosian calculus. Note that other presentations are possible, for instance as an alternating group (for I). Group structure The icosahedral rotation group I is of order 60. The group I is isomorphic to A5, the alternating group of even permutations of five objects. This isomorphism can be realized by I acting on various compounds, notably the compound of five cubes (which inscribe in the dodecahedron), the compound of five octahedra, or either of the two compounds of five tetrahedra (which are enantiomorphs, and inscribe in the dodecahedron). The group contains 5 versions of T with 20 versions of D3 (10 axes, 2 per axis), and 6 versions of D5. The full icosahedral group Ih has order 120. It has I as a normal subgroup of index 2. The group Ih is isomorphic to I × Z2, or A5 × Z2, with the inversion in the center corresponding to element (identity, −1), where Z2 is written multiplicatively. Ih acts on the compound of five cubes and the compound of five octahedra, but −1 acts as the identity (as cubes and octahedra are centrally symmetric). It acts on the compound of ten tetrahedra: I acts on the two chiral halves (compounds of five tetrahedra), and −1 interchanges the two halves. Notably, it does not act as S5, and these groups are not isomorphic; see below for details. The group contains 10 versions of D3d and 6 versions of D5d (symmetries like antiprisms). I is also isomorphic to PSL2(5), but Ih is not isomorphic to SL2(5). Isomorphism of I with A5 It is useful to describe explicitly what the isomorphism between I and A5 looks like.
In the following table, permutations Pi and Qi act on 5 and 12 elements respectively, while the rotation matrices Mi are the elements of I. If Pk is the product of taking the permutation Pi and applying Pj to it, then for the same values of i, j and k, it is also true that Qk is the product of taking Qi and applying Qj, and also that premultiplying a vector by Mk is the same as premultiplying that vector by Mi and then premultiplying that result with Mj, that is Mk = Mj × Mi. Since the permutations Pi are all the 60 even permutations of 12345, the one-to-one correspondence is made explicit, therefore the isomorphism too. Commonly confused groups The following groups all have order 120, but are not isomorphic: S5, the symmetric group on 5 elements Ih, the full icosahedral group (subject of this article, also known as H3) 2I, the binary icosahedral group They correspond to the following short exact sequences (the latter of which does not split) and product In words, is a normal subgroup of is a factor of , which is a direct product is a quotient group of Note that has an exceptional irreducible 3-dimensional representation (as the icosahedral rotation group), but does not have an irreducible 3-dimensional representation, corresponding to the full icosahedral group not being the symmetric group. These can also be related to linear groups over the finite field with five elements, which exhibit the subgroups and covering groups directly; none of these are the full icosahedral group: the projective special linear group, see here for a proof; the projective general linear group; the special linear group. Conjugacy classes The 120 symmetries fall into 10 conjugacy classes. Subgroups of the full icosahedral symmetry group Each line in the following table represents one class of conjugate (i.e., geometrically equivalent) subgroups. The column "Mult." (multiplicity) gives the number of different subgroups in the conjugacy class. Explanation of colors: green = the groups that are generated by reflections, red = the chiral (orientation-preserving) groups, which contain only rotations. The groups are described geometrically in terms of the dodecahedron. The abbreviation "h.t.s.(edge)" means "halfturn swapping this edge with its opposite edge", and similarly for "face" and "vertex". Vertex stabilizers Stabilizers of an opposite pair of vertices can be interpreted as stabilizers of the axis they generate. vertex stabilizers in I give cyclic groups C3 vertex stabilizers in Ih give dihedral groups D3 stabilizers of an opposite pair of vertices in I give dihedral groups D3 stabilizers of an opposite pair of vertices in Ih give Edge stabilizers Stabilizers of an opposite pair of edges can be interpreted as stabilizers of the rectangle they generate. edges stabilizers in I give cyclic groups Z2 edges stabilizers in Ih give Klein four-groups stabilizers of a pair of edges in I give Klein four-groups ; there are 5 of these, given by rotation by 180° in 3 perpendicular axes. stabilizers of a pair of edges in Ih give ; there are 5 of these, given by reflections in 3 perpendicular axes. Face stabilizers Stabilizers of an opposite pair of faces can be interpreted as stabilizers of the antiprism they generate. 
face stabilizers in I give cyclic groups C5 face stabilizers in Ih give dihedral groups D5 stabilizers of an opposite pair of faces in I give dihedral groups D5 stabilizers of an opposite pair of faces in Ih give Polyhedron stabilizers For each of these, there are 5 conjugate copies, and the conjugation action gives a map, indeed an isomorphism, . stabilizers of the inscribed tetrahedra in I are a copy of T stabilizers of the inscribed tetrahedra in Ih are a copy of T stabilizers of the inscribed cubes (or opposite pair of tetrahedra, or octahedra) in I are a copy of T stabilizers of the inscribed cubes (or opposite pair of tetrahedra, or octahedra) in Ih are a copy of Th Coxeter group generators The full icosahedral symmetry group [5,3] () of order 120 has generators represented by the reflection matrices R0, R1, R2 below, with relations R02 = R12 = R22 = (R0×R1)5 = (R1×R2)3 = (R0×R2)2 = Identity. The group [5,3]+ () of order 60 is generated by any two of the rotations S0,1, S1,2, S0,2. A rotoreflection of order 10 is generated by V0,1,2, the product of all 3 reflections. Here denotes the golden ratio. Fundamental domain Fundamental domains for the icosahedral rotation group and the full icosahedral group are given by: In the disdyakis triacontahedron one full face is a fundamental domain; other solids with the same symmetry can be obtained by adjusting the orientation of the faces, e.g. flattening selected subsets of faces to combine each subset into one face, or replacing each face by multiple faces, or a curved surface. Polyhedra with icosahedral symmetry Examples of other polyhedra with icosahedral symmetry include the regular dodecahedron (the dual of the icosahedron) and the rhombic triacontahedron. Chiral polyhedra Full icosahedral symmetry Other objects with icosahedral symmetry Barth surfaces Virus structure, and Capsid In chemistry, the dodecaborate ion ([B12H12]2−) and the dodecahedrane molecule (C20H20) Liquid crystals with icosahedral symmetry For the intermediate material phase called liquid crystals the existence of icosahedral symmetry was proposed by H. Kleinert and K. Maki and its structure was first analyzed in detail in that paper. See the review article here. In aluminum, the icosahedral structure was discovered experimentally three years after this by Dan Shechtman, which earned him the Nobel Prize in 2011. Related geometries Icosahedral symmetry is equivalently the projective special linear group PSL(2,5), and is the symmetry group of the modular curve X(5), and more generally PSL(2,p) is the symmetry group of the modular curve X(p). The modular curve X(5) is geometrically a dodecahedron with a cusp at the center of each polygonal face, which demonstrates the symmetry group. This geometry, and associated symmetry group, was studied by Felix Klein as the monodromy groups of a Belyi surface – a Riemann surface with a holomorphic map to the Riemann sphere, ramified only at 0, 1, and infinity (a Belyi function) – the cusps are the points lying over infinity, while the vertices and the centers of each edge lie over 0 and 1; the degree of the covering (number of sheets) equals 5. This arose from his efforts to give a geometric setting for why icosahedral symmetry arose in the solution of the quintic equation, with the theory given in the famous ; a modern exposition is given in . 
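The identification of the icosahedral rotation group with PSL(2,5) mentioned in the paragraph above can be checked by a direct computation. The following Python sketch is an illustrative addition (not part of the original article): it enumerates SL(2,5) over the field with five elements, passes to the quotient by its centre, and confirms the order 60 and the element orders 1, 2, 3 and 5 expected of the rotations of the icosahedron.

# Illustrative check (not from the article): enumerate PSL(2,5) over F_5
# and confirm it has order 60, matching the icosahedral rotation group I.
from itertools import product

p = 5
# All 2x2 matrices over F_p with determinant 1 form SL(2,p).
sl2 = [(a, b, c, d) for a, b, c, d in product(range(p), repeat=4)
       if (a * d - b * c) % p == 1]
assert len(sl2) == 120  # |SL(2,5)| = 5*(5^2 - 1) = 120

# PSL(2,p) identifies each matrix M with -M (the centre {I, -I}).
def canonical(m):
    a, b, c, d = m
    neg = ((-a) % p, (-b) % p, (-c) % p, (-d) % p)
    return min(m, neg)

psl2 = {canonical(m) for m in sl2}
print(len(psl2))  # 60, the order of the chiral icosahedral group

def mul(m, n):
    a, b, c, d = m; e, f, g, h = n
    return ((a*e + b*g) % p, (a*f + b*h) % p, (c*e + d*g) % p, (c*f + d*h) % p)

def order(m):
    # Order of the image of m in PSL(2,5), i.e. modulo the centre {I, -I}.
    k, x = 1, m
    while canonical(x) != (1, 0, 0, 1):
        x = mul(x, m); k += 1
    return k

# Element orders 1, 2, 3 and 5: the same as the rotation orders (identity,
# half-turns, 3-fold and 5-fold axes) of the icosahedron.
print(sorted({order(m) for m in psl2}))  # [1, 2, 3, 5]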
Klein's investigations continued with his discovery of order 7 and order 11 symmetries in and (and associated coverings of degree 7 and 11) and dessins d'enfants, the first yielding the Klein quartic, whose associated geometry has a tiling by 24 heptagons (with a cusp at the center of each). Similar geometries occur for PSL(2,n) and more general groups for other modular curves. More exotically, there are special connections between the groups PSL(2,5) (order 60), PSL(2,7) (order 168) and PSL(2,11) (order 660), which also admit geometric interpretations – PSL(2,5) is the symmetries of the icosahedron (genus 0), PSL(2,7) of the Klein quartic (genus 3), and PSL(2,11) the buckyball surface (genus 70). These groups form a "trinity" in the sense of Vladimir Arnold, which gives a framework for the various relationships; see trinities for details. There is a close relationship to other Platonic solids. See also Tetrahedral symmetry Octahedral symmetry Binary icosahedral group Icosian calculus References Translated in , collected as pp. 140–165 in Oeuvres, Tome 3 Peter R. Cromwell, Polyhedra (1997), p. 296 The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, N.W. Johnson: Geometries and Transformations, (2018) Chapter 11: Finite symmetry groups, 11.5 Spherical Coxeter groups External links THE SUBGROUPS OF W(H3) (Subgroups of other Coxeter groups ) Gotz Pfeiffer Finite groups Rotational symmetry
Icosahedral symmetry
[ "Physics", "Mathematics" ]
2,811
[ "Mathematical structures", "Finite groups", "Algebraic structures", "Symmetry", "Rotational symmetry" ]
2,878,096
https://en.wikipedia.org/wiki/Longitudinal%20mode
A longitudinal mode of a resonant cavity is a particular standing wave pattern formed by waves confined in the cavity. The longitudinal modes correspond to the wavelengths of the wave which are reinforced by constructive interference after many reflections from the cavity's reflecting surfaces. All other wavelengths are suppressed by destructive interference. A longitudinal mode pattern has its nodes located axially along the length of the cavity. Transverse modes, with nodes located perpendicular to the axis of the cavity, may also exist. Simple cavity A common example of longitudinal modes are the light wavelengths produced by a laser. In the simplest case, the laser's optical cavity is formed by two opposed plane (flat) mirrors surrounding the gain medium (a plane-parallel or Fabry–Pérot cavity). The allowed modes of the cavity are those where the mirror separation distance L is equal to an exact multiple of half the wavelength, λ: where q is an integer known as the mode order. In practice, the separation distance of the mirrors L is usually much greater than the wavelength of light λ, so the relevant values of q are large (around 105 to 106). The frequency separation between any two adjacent modes, q and q+1, in a material that is transparent at the laser wavelength, are given (for an empty linear resonator of length L) by Δν: where c is the speed of light and n is the refractive index of the material (note: n≈1 in air). Composite cavity If the cavity is non-empty (i.e. contains one or more elements with different values of refractive index), the values of L used are the optical path lengths for each element. The frequency spacing of longitudinal modes in the cavity is then given by: where ni is the refractive index of the i'th element of length Li. More generally, the longitudinal modes may be found for any type of wave in a cavity by solving the relevant wave equation with the appropriate boundary conditions. Both transverse and longitudinal waves may have longitudinal modes when confined to a cavity. The analysis of longitudinal modes is especially important in lasers with single transversal mode, for example, in single-mode fiber lasers. The number of longitudinal modes of such a laser can be estimated as ratio of the spectral width of gain to the spectral separation of longitudinal modes. Power per longitudinal mode For lasers with single transversal mode, the power per one longitudinal mode can be significantly increased by the coherent addition of lasers. Such addition allows one to both scale-up the output power of a single-transverse-mode laser and reduce number of longitudinal modes; because the system chooses automatically only the modes which are common for all the combined lasers. The reduction of the number of longitudinal modes determines the limits of the coherent addition. The ability to coherently add one additional laser is exhausted when one longitudinal mode, common for the combined lasers, lies within the spectral width of the gain; a subsequent addition will lead to loss of efficiency of the coherent combination and will not increase the power per longitudinal mode of such a laser. See also Fabry–Pérot interferometer Modelocking Normal mode References Wave mechanics
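As a brief numerical illustration of the mode-spacing relations above (an added sketch, not part of the original article; the cavity length, refractive indices and gain bandwidth below are example values only), the following Python snippet computes the longitudinal-mode spacing of a simple and of a composite linear cavity and estimates the number of modes under a given gain bandwidth.

# Illustrative sketch: longitudinal-mode spacing of a Fabry-Perot cavity.
c = 299_792_458.0          # speed of light, m/s

def mode_spacing(segments):
    """Frequency spacing in Hz for a linear cavity.
    `segments` is a list of (refractive_index, length_in_m) pairs, so a
    composite cavity is handled by summing the optical path lengths."""
    optical_path = sum(n * L for n, L in segments)
    return c / (2.0 * optical_path)

# Empty 30 cm cavity in air (n ~ 1):
dv_empty = mode_spacing([(1.0, 0.30)])
print(f"mode spacing: {dv_empty / 1e6:.1f} MHz")          # ~500 MHz

# Same cavity with a 10 cm gain crystal of index 1.8 inside:
dv_comp = mode_spacing([(1.0, 0.20), (1.8, 0.10)])
print(f"composite-cavity spacing: {dv_comp / 1e6:.1f} MHz")

# Rough number of longitudinal modes under an assumed 1.5 GHz gain bandwidth,
# estimated as the ratio of gain bandwidth to mode spacing, as in the article:
gain_bandwidth = 1.5e9
print("approx. modes under gain:", round(gain_bandwidth / dv_empty))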
Longitudinal mode
[ "Physics" ]
636
[ "Wave mechanics", "Waves", "Physical phenomena", "Classical mechanics" ]
2,878,102
https://en.wikipedia.org/wiki/Smart%20work%20zone
A smart work zone or intelligent work zone refers to a site-specific configuration of traffic control technology deployed within a roadway work zone to increase the safety of construction workers, provide "real-time" travel information, and efficiently route motorists through a work zone. Smart work zones reduce the dependency on human "flaggers" and make the work zone safer for roadway workers. Smart work zones often use radar guns or other non-intrusive sensors to detect the presence and speed of vehicles approaching a work zone, in order to display an appropriate message on one or more variable message signs. In a "dynamic merge" system, for example, vehicles approaching a lane closure are directed to use all available lanes when congestion develops and speeds are low. When speeds are high, motorists are directed to merge early or are left to use their own judgement. Such a system is usually deployed in addition to traditional static messaging. References Road construction
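A minimal sketch of the dynamic-merge logic described above follows (an illustrative addition, not part of the original article; the speed thresholds and sign messages are hypothetical placeholders, not values taken from any deployed system or standard).

# Hypothetical dynamic-merge message selection for a variable message sign.
def dynamic_merge_message(mean_speed_mph: float,
                          congested_threshold: float = 35.0,
                          free_flow_threshold: float = 50.0) -> str:
    """Choose a sign message from the measured approach speed."""
    if mean_speed_mph <= congested_threshold:
        # Congestion: use both lanes up to the merge point ("late merge").
        return "USE BOTH LANES / MERGE AT TAPER"
    if mean_speed_mph >= free_flow_threshold:
        # Free flow: encourage an early merge.
        return "LANE CLOSED AHEAD / MERGE NOW"
    # In between, fall back to the static advance warning only.
    return "ROAD WORK AHEAD"

print(dynamic_merge_message(28.0))
print(dynamic_merge_message(62.0))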
Smart work zone
[ "Engineering" ]
200
[ "Construction", "Road construction" ]
2,878,201
https://en.wikipedia.org/wiki/Tetrapropylammonium%20perruthenate
Tetrapropylammonium perruthenate (TPAP or TPAPR) is the chemical compound described by the formula N(C3H7)4RuO4. Sometimes known as the Ley–Griffith reagent, this ruthenium compound is used as a reagent in organic synthesis. This salt consists of the tetrapropylammonium cation and the perruthenate anion, . Uses Ruthenium tetroxide is a highly aggressive oxidant, but TPAP, which is its one-electron reduced derivative, is a mild oxidizing agent for the conversion of primary alcohols to aldehydes (the Ley oxidation). Secondary alcohols are similarly oxidized to ketones. It can also be used to oxidize primary alcohols all the way to the carboxylic acid with a higher catalyst loading, larger amount of the cooxidant, and addition of two equivalents of water. In this situation, the aldehyde reacts with water to form the geminal diol hydrate, which is then oxidized again. The oxidation generates water that can be removed by adding molecular sieves. TPAP is expensive, but it can be used in catalytic amounts. The catalytic cycle is maintained by adding a stoichiometric amount of a co-oxidant such as N-methylmorpholine N-oxide or molecular oxygen. TPAP is also used to cleave vicinal diols to form aldehydes. References Ruthenium compounds Quaternary ammonium compounds Oxidizing agents Deliquescent materials
Tetrapropylammonium perruthenate
[ "Chemistry" ]
337
[ "Redox", "Oxidizing agents", "Deliquescent materials" ]
2,878,431
https://en.wikipedia.org/wiki/N-Methylmorpholine%20N-oxide
N-Methylmorpholine N-oxide (more correctly 4-methylmorpholine 4-oxide), NMO or NMMO is an organic compound. This heterocyclic amine oxide and morpholine derivative is used in organic chemistry as a co-oxidant and sacrificial catalyst in oxidation reactions for instance in osmium tetroxide oxidations and the Sharpless asymmetric dihydroxylation or oxidations with TPAP. NMO is commercially supplied both as a monohydrate C5H11NO2·H2O and as the anhydrous compound. The monohydrate is used as a solvent for cellulose in the lyocell process to produce cellulose fibers. Uses Solvent of cellulose NMMO monohydrate is used as a solvent in the lyocell process to produce lyocell fiber. It dissolves cellulose to form a solution called dope, and the cellulose is reprecipitated in a water bath to produce a fiber. The process is similar but not analogous to the viscose process. In the viscose process, cellulose is made soluble by conversion to its xanthate derivatives. With NMMO, cellulose is not derivatized but dissolves to give a homogeneous polymer solution. The resulting fiber is similar to viscose; this was observed, for example, for Valonia cellulose microfibrils. Dilution with water causes the cellulose to reprecipitate, i.e. the solvation of cellulose with NMMO is a water sensitive process. Cellulose remains insoluble in most solvents because it has a strong and highly structured intermolecular hydrogen bonding network, which resists common solvents. NMMO breaks the hydrogen bonding network that keeps cellulose insoluble in water and other solvents. Similar solubility has been obtained in a few solvents, particularly a mix of lithium chloride in dimethyl acetamide and some hydrophilic ionic liquids. Dissolution of scleroproteins Another use of NMMO is in the dissolution of scleroprotein (found in animal tissue). This dissolution occurs in the crystal areas which are more homogeneous and contain glycine and alanine residues with a small number of other residues. How NMMO dissolves these proteins is scarcely studied. Other studies, however, have been done in similar amide systems (i.e. hexapeptide). The hydrogen bonds of the amides can be broken by NMMO. Oxidant NMO, as an N-oxide, is an oxidant in the Upjohn dihydroxylation. It is generally used in stoichiometric amounts as a secondary oxidant (a cooxidant) to regenerate a primary (catalytic) oxidant after the latter has been reduced by the substrate. Vicinal syn-dihydroxylation reactions for example, would, in theory, require stoichiometric amounts of toxic, volatile and expensive osmium tetroxide, but if continuously regenerated with NMO, the amount required can be reduced to catalytic quantities. References Amine oxides Morpholines Reagents for organic chemistry Solvents
N-Methylmorpholine N-oxide
[ "Chemistry" ]
697
[ "Amine oxides", "Functional groups", "Reagents for organic chemistry" ]
2,879,691
https://en.wikipedia.org/wiki/CHMOS
CHMOS refers to one of a series of Intel CMOS processes developed from their HMOS process. CHMOS stands for "complementary high-performance metal-oxide-silicon". It was first developed in 1981. CHMOS was used in the Intel 80C51BH, a new version of their standard MCS-51 microcontroller. The process was also used in later versions of the Intel 8086 and in the 80C88, a fully static version of the Intel 8088. The Intel 80386 was made in 1.5 μm CHMOS III, and later in 1.0 μm CHMOS IV. CHMOS III used 1.5 micron lithography, p-well processing, n-well processing, and two layers of metal. CHMOS III-E was used for the 12.5 MHz Intel 80C186 microprocessor. This technology uses a 1 μm process for the EPROM. CHMOS IV (H stands for High Speed) used 1.0 μm lithography. Many versions of the Intel 80486 were made in 1.0 μm CHMOS IV. Intel used this technology in its 80C186EB and 80C188EB embedded processors. CHMOS V used 0.8 μm lithography and 3 metal layers, and was used in later versions of the 80386, 80486, and i860. See also Depletion-load NMOS logic#Further development References Electronic design Digital electronics Integrated circuits MOSFETs Intel
CHMOS
[ "Technology", "Engineering" ]
318
[ "Computer engineering", "Digital electronics", "Electronic design", "Electronic engineering", "Design", "Integrated circuits" ]
22,947,275
https://en.wikipedia.org/wiki/Microrheology
Microrheology is a technique used to measure the rheological properties of a medium, such as microviscosity, via the measurement of the trajectory of a flow tracer (a micrometre-sized particle). It is a new way of doing rheology, traditionally done using a rheometer. There are two types of microrheology: passive microrheology and active microrheology. Passive microrheology uses inherent thermal energy to move the tracers, whereas active microrheology uses externally applied forces, such as from a magnetic field or an optical tweezer, to do so. Microrheology can be further differentiated into 1- and 2-particle methods. Passive microrheology Passive microrheology uses the thermal energy (kT) to move the tracers, although recent evidence suggests that active random forces inside cells may instead move the tracers in a diffusive-like manner. The trajectories of the tracers are measured optically either by microscopy, or alternatively by light scattering techniques. Diffusing-wave spectroscopy (DWS) is a common choice that extends light scattering measurement techniques to account for multiple scattering events. From the mean squared displacement with respect to time (noted MSD or <Δr2> ), one can calculate the visco-elastic moduli G′(ω) and G″(ω) using the generalized Stokes–Einstein relation (GSER). Here is a view of the trajectory of a particle of micrometer size. In a standard passive microrheology test, the movement of dozens of tracers is tracked in a single video frame. The motivation is to average the movements of the tracers and calculate a robust MSD profile. Observing the MSD for a wide range of integration time scales (or frequencies) gives information on the microstructure of the medium where are diffusing the tracers. If the tracers are experiencing free diffusion in a purely viscous material, the MSD should grow linearly with sampling integration time: . If the tracers are moving in a spring-like fashion within a purely elastic material, the MSD should have no time dependence: In most cases the tracers are presenting a sub-linear integration-time dependence, indicating the medium has intermediate viscoelastic properties. Of course, the slope changes in different time scales, as the nature of the response from the material is frequency dependent. Microrheology is another way to do linear rheology. Since the force involved is very weak (order of 10−15 N), microrheology is guaranteed to be in the so-called linear region of the strain/stress relationship. It is also able to measure very small volumes (biological cell). Given the complex viscoelastic modulus with G′(ω) the elastic (conservative) part and G″(ω) the viscous (dissipative) part and ω=2πf the pulsation. The GSER is as follows: with : Laplace transform of G kB: Boltzmann constant T: temperature in kelvins s: the Laplace frequency a: the radius of the tracer : the Laplace transform of the mean squared displacement A related method of passive microrheology involves the tracking positions of a particle at high frequency, often with a quadrant photodiode. From the position, , the power spectrum, can be found, and then related to the real and imaginary parts of the response function, . The response function leads directly to a calculation of the complex shear modulus, via: Two Point Microrheology There could be many artifacts that change the values measured by the passive microrheology tests, resulting in a disagreement between microrheology and normal rheology. 
These artifacts include tracer-matrix interactions, tracer-matrix size mismatch and more. A different microrheological approach studies the cross-correlation of two tracers in the same sample. In practice, instead of measuring the MSD , movements of two distinct particles are measured - . Calculating the G(ω) of the medium between the tracers follows: Notice this equation does not depend on a, but instead in depends on R - the distance between the tracers (assuming R>>a). Some studies has shown that this method is better in coming to agreement with standard rheological measurements (in the relevant frequencies and materials) Active microrheology Active microrheology may use a magnetic field , optical tweezers or an atomic force microscope to apply a force on the tracer and then find the stress/strain relation. The force applied is a sinusoidal force with amplitude A and frequency ω - The response of the tracer is a factor of the matrix visco-elastic nature. If a matrix is totally elastic (a solid), the response to the acting force should be immediate and the tracers should be observed moving by- . with . On the other hand, if the matrix is totally viscous (a liquid), there should be a phase shift of between the strain and the stress - in reality, as most materials are visco-elastic, the phase shift observed is . When φ>45 the matrix is considered mostly in its "viscous domain" and when φ<45 the matrix is considered mostly in its "elastic domain". Given a measured response phase shift φ (sometimes noted as δ), this ratio applies: Similar response phase analysis is used in regular rheology testing. More recently, it has been developed into Force spectrum microscopy to measure contributions of random active motor proteins to diffusive motion in the cytoskeleton. References External links Harvard Weitz Lab page Review of microrheology in optical tweezers Review on microrheology Illustrated description of microrheology and a microrheology analysis, with movies DWS Microrheology Overview Rheology Soft matter
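The generalized Stokes–Einstein relation referred to above did not survive extraction here; in its standard textbook form, using the variables listed in the article, it reads G(s) = kB·T / (π·a·s·⟨Δr²(s)⟩) in the Laplace domain. The following Python sketch is an illustrative addition with simulated data (not part of the original article): it builds the mean squared displacement of a tracer trajectory and estimates its log-log slope, which is close to 1 for a purely viscous medium, close to 0 for a purely elastic one, and in between for a viscoelastic material.

# Illustrative passive-microrheology sketch with simulated data.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                      # frame interval, s (example value)
n_frames = 5000
# Simulated free diffusion in 2D (a purely viscous medium), D = 0.1 um^2/s:
D = 0.1
steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(n_frames, 2))
traj = np.cumsum(steps, axis=0)          # tracer positions in micrometres

def msd(trajectory, max_lag):
    """Time-averaged mean squared displacement for lags 1..max_lag frames."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = trajectory[lag:] - trajectory[:-lag]
        out[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return out

lags = np.arange(1, 201) * dt
m = msd(traj, 200)

# Log-log slope of the MSD: ~1 signals viscous, ~0 elastic behaviour.
slope = np.polyfit(np.log(lags), np.log(m), 1)[0]
print(f"MSD power-law exponent: {slope:.2f}")   # close to 1 for free diffusion

# From the MSD, the generalized Stokes-Einstein relation
#   G(s) = kB*T / (pi * a * s * <dr^2(s)>)      (standard form, see lead-in)
# would yield the viscoelastic moduli; for this purely viscous example it
# reduces to an ordinary Stokes-Einstein estimate of the viscosity.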
Microrheology
[ "Physics", "Chemistry", "Materials_science" ]
1,223
[ "Soft matter", "Condensed matter physics", "Rheology", "Fluid dynamics" ]
22,948,543
https://en.wikipedia.org/wiki/Tetrafluoroammonium
The tetrafluoroammonium cation (also known as perfluoroammonium) is a positively charged polyatomic ion with chemical formula . It is equivalent to the ammonium ion where the hydrogen atoms surrounding the central nitrogen atom have been replaced by fluorine. Tetrafluoroammonium ion is isoelectronic with tetrafluoromethane , trifluoramine oxide , tetrafluoroborate anion and the tetrafluoroberyllate anion. The tetrafluoroammonium ion forms salts with a large variety of fluorine-bearing anions. These include the bifluoride anion (), tetrafluorobromate (), metal pentafluorides ( where M is Ge, Sn, or Ti), hexafluorides ( where M is P, As, Sb, Bi, or Pt), heptafluorides ( where M is W, U, or Xe), octafluorides (), various oxyfluorides ( where M is W or U; , ), and perchlorate (). Attempts to make the nitrate salt, , were unsuccessful because of quick fluorination: + → + . Structure The geometry of the tetrafluoroammonium ion is tetrahedral, with an estimated nitrogen-fluorine bond length of 124 pm. All fluorine atoms are in equivalent positions. Synthesis Tetrafluoroammonium salts are prepared by oxidising nitrogen trifluoride with fluorine in the presence of a strong Lewis acid which acts as a fluoride ion acceptor. The original synthesis by Tolberg, Rewick, Stringham, and Hill in 1966 employs antimony pentafluoride as the Lewis acid: + + → The hexafluoroarsenate salt was also prepared by a similar reaction with arsenic pentafluoride at 120 °C: + + → The reaction of nitrogen trifluoride with fluorine and boron trifluoride at 800 °C yields the tetrafluoroborate salt: + + → salts can also be prepared by fluorination of with krypton difluoride () and fluorides of the form , where M is Sb, Nb, Pt, Ti, or B. For example, reaction of with and yields . Many tetrafluoroammonium salts can be prepared with metathesis reactions. Reactions Tetrafluoroammonium salts are extremely hygroscopic. The ion, when dissolved in water, readily decomposes into , , and oxygen gas. Some hydrogen peroxide () is also formed during this process: + → + + + 2 → + + Reaction of with alkali metal nitrates yields fluorine nitrate, . Properties Because salts are destroyed by water, water cannot be used as a solvent. Instead, bromine trifluoride, bromine pentafluoride, iodine pentafluoride, or anhydrous hydrogen fluoride can be used. Tetrafluoroammonium salts usually have no colour. However, some are coloured due to other elements in them. , and have a red colour, while , , and are yellow. Applications salts are important for solid propellant gas generators. They are also used as reagents for electrophilic fluorination of aromatic compounds in organic chemistry. As fluorinating agents, they are also strong enough to react with methane. See also Trifluorooxonium Nitrogen pentafluoride References Cations Nitrogen fluorides Fluorinating agents Nitrogen(V) compounds Nitrogen–halogen compounds Quaternary ammonium compounds
Tetrafluoroammonium
[ "Physics", "Chemistry" ]
784
[ "Matter", "Fluorinating agents", "Reagents for organic chemistry", "Cations", "Ions" ]
35,770,676
https://en.wikipedia.org/wiki/Quaternary%20cubic
In mathematics, a quaternary cubic form is a degree 3 homogeneous polynomial in four variables. The zeros form a cubic surface in 3-dimensional projective space. Invariants and studied the ring of invariants of a quaternary cubic, which is a ring generated by invariants of degrees 8, 16, 24, 32, 40, 100. The generators of degrees 8, 16, 24, 32, 40 generate a polynomial ring. The generator of degree 100 is a skew invariant, whose square is a polynomial in the other generators given explicitly by Salmon. Salmon also gave an explicit formula for the discriminant as a polynomial in the generators, though pointed out that the formula has a widely copied misprint in it. Sylvester pentahedron A generic quaternary cubic can be written as a sum of 5 cubes of linear forms, unique up to multiplication by cube roots of unity. This was conjectured by Sylvester in 1851, and proven 10 years later by Clebsch. The union of the 5 planes where these 5 linear forms vanish is called the Sylvester pentahedron. See also Ternary cubic Ternary quartic Invariants of a binary form References Invariant theory Algebraic surfaces
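As an illustration of the Sylvester pentahedron statement (this display is an added gloss, not text from the original article), a generic quaternary cubic can be written as
\[
F(x_0,x_1,x_2,x_3)=\sum_{i=1}^{5} \ell_i(x_0,x_1,x_2,x_3)^3, \qquad \ell_i = a_{i0}x_0 + a_{i1}x_1 + a_{i2}x_2 + a_{i3}x_3 ,
\]
and a naive dimension count is consistent with this (a heuristic, not a proof): the space of quaternary cubics has dimension \(\binom{6}{3}=20\), while five linear forms in four variables likewise carry \(5\times 4 = 20\) coefficients.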
Quaternary cubic
[ "Physics" ]
244
[ "Invariant theory", "Group actions", "Symmetry" ]
35,775,305
https://en.wikipedia.org/wiki/Steam%20turbine%20governing
Steam turbine governing is the procedure of controlling the flow rate of steam to a steam turbine so as to maintain its speed of rotation as constant. The variation in load during the operation of a steam turbine can have a significant impact on its performance. In a practical situation the load frequently varies from the designed or economic load and thus there always exists a considerable deviation from the desired performance of the turbine. The primary objective in the steam turbine operation is to maintain a constant speed of rotation irrespective of the varying load. This can be achieved by means of governing in a steam turbine. There are many types of governors. Overview Steam turbine governing is the procedure of monitoring and controlling the flow rate of steam into the turbine with the objective of maintaining its speed of rotation as constant. The flow rate of steam is monitored and controlled by interposing valves between the boiler and the turbine. Depending upon the particular method adopted for control of steam flow rate, different types of governing methods are being practiced. The principal methods used for governing are described below. Throttle governing In throttle governing the pressure of steam is reduced at the turbine entry thereby decreasing the availability of energy. In this method steam is passed through a restricted passage thereby reducing its pressure across the governing valve. The flow rate is controlled using a partially opened steam control valve. The reduction in pressure leads to a throttling process in which the enthalpy of steam remains constant. Throttle governing – Small turbines Low initial cost and simple mechanism makes throttle governing the most apt method for small steam turbines. The mechanism is illustrated in figure 1. The valve is actuated by using a centrifugal governor which consists of flying balls attached to the arm of the sleeve. A geared mechanism connects the turbine shaft to the rotating shaft on which the sleeve reciprocates axially. With a reduction in the load the turbine shaft speed increases and brings about the movement of the flying balls away from the sleeve axis. This results in an axial movement of the sleeve followed by the activation of a lever, which in turn actuates the main stop valve to a partially opened position to control the flow rate. Throttle governing – Big turbines In larger steam turbines an oil operated servo mechanism is used in order to enhance the lever sensitivity. The use of a relay system magnifies the small deflections of the lever connected to the governor sleeve. The differential lever is connected at both the ends to the governor sleeve and the throttle valve spindle respectively. The pilot valves spindle is also connected to the same lever at some intermediate position. Both the pilot valves cover one port each in the oil chamber. The outlets of the oil chamber are connected to an oil drain tank through pipes. The decrease in load during operation of the turbine will bring about increase in the shaft speed thereby lifting the governor sleeve. Deflection occurs in the lever and due to this the pilot valve spindle raises up opening the upper port for oil entry and lower port for oil exit. Pressurized oil from the oil tank enters the cylinder and pushes the relay piston downwards. As the relay piston moves the throttle valve spindle attached to it also descends and partially closes the valve. Thus the steam flow rates can be controlled. 
When the load on the turbine increases, the deflections in the lever are such that the lower port is opened for oil entry and the upper port for oil exit. The relay piston moves upwards and the throttle valve spindle ascends, opening the valve. The variation of the steam consumption rate ṁ (kg/h) with the turbine load during throttle governing is linear and is given by the "Willans line". The equation for the Willans line is: ṁ = aL + C, where a is the steam rate in kg/kWh, L is the load on the turbine in kW and C is the no-load steam consumption (a brief numerical illustration is given below). Nozzle governing In nozzle governing the flow rate of steam is regulated by opening and shutting sets of nozzles rather than by regulating the steam pressure. In this method groups of two, three or more nozzles form a set and each set is controlled by a separate valve. The actuation of an individual valve closes the corresponding set of nozzles, thereby controlling the flow rate. In an actual turbine, nozzle governing is applied only to the first stage, whereas the subsequent stages remain unaffected. Since no regulation of the pressure is applied, the advantage of this method lies in the exploitation of the full boiler pressure and temperature. Figure 2 shows the mechanism of nozzle governing applied to steam turbines. As shown in the figure, the three sets of nozzles are controlled by means of three separate valves. Bypass governing Occasionally the turbine is overloaded for short durations. During such operation, bypass valves are opened and fresh steam is introduced into the later stages of the turbine. This generates more energy to satisfy the increased load. The schematic of bypass governing is shown in figure 3. This form of governing is used to maintain a stable steam velocity and flow rate of steam over the turbine blades. Combination governing Combination governing employs any two of the above-mentioned methods of governing. Generally bypass and nozzle governing are used simultaneously to match the load on the turbine, as shown in figure 3. Emergency governing Every steam turbine is also provided with emergency governors which come into action under the following conditions: Increase of the mechanical speed of the shaft beyond 110% Disturbed balancing of the turbine Failure of the lubrication system Insufficient vacuum in the condenser Inadequate supply of coolant to the condenser References Steam turbines Mechanical power control
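The brief numerical illustration of the Willans line mentioned in the throttle-governing section follows (an added sketch, not part of the original article; the coefficients are made-up example values, not data for any particular turbine).

# Willans line m_dot = a*L + C with hypothetical example coefficients.
def steam_consumption(load_kw: float, a_kg_per_kwh: float = 4.5,
                      no_load_kg_per_h: float = 1500.0) -> float:
    """Steam consumption rate in kg/h at a given load in kW."""
    return a_kg_per_kwh * load_kw + no_load_kg_per_h

for load in (0.0, 2500.0, 5000.0, 10000.0):
    m_dot = steam_consumption(load)
    # Specific steam consumption (kg/kWh) worsens at part load because the
    # fixed no-load consumption C is spread over fewer kilowatt-hours.
    ssc = m_dot / load if load else float("inf")
    print(f"{load:8.0f} kW -> {m_dot:9.0f} kg/h  (specific: {ssc:5.2f} kg/kWh)")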
Steam turbine governing
[ "Physics" ]
1,114
[ "Mechanics", "Mechanical power control" ]
35,775,998
https://en.wikipedia.org/wiki/White%20Rabbit%20Project
White Rabbit is the name of a collaborative project including CERN, GSI Helmholtz Centre for Heavy Ion Research and other partners from universities and industry to develop a fully deterministic Ethernet-based network for general purpose data transfer and sub-nanosecond accuracy time transfer. Its initial use was as a timing distribution network for control and data acquisition timing of the accelerator sites at CERN as well as in GSI's Facility for Antiproton and Ion Research (FAIR) project. The hardware designs as well as the source code are publicly available. The name of the project is a reference to the White Rabbit appearing in Lewis Carroll's novel Alice's Adventures in Wonderland. Focus and goals White Rabbit provides sub-nanosecond synchronization accuracy, which formerly required dedicated hard-wired timing systems, with the flexibility and modularity of real-time Ethernet networks. A White Rabbit network may be used solely to provide timing and synchronization to a distributed electronic system, or be used to provide both timing and real-time data transfer. The White Rabbit Project focuses on: Sub-nanosecond accuracy: synchronization of more than 1000 nodes via fibre or copper connections of up to 10 km of length. Flexibility: creates a scalable and modular platform with simple configuration and low maintenance requirements. Predictability and Reliability: allows the deterministic delivery of highest priority messages by using Class of service. Robustness: no losses of high-priority system device control messages. Open source hardware and software: to avoid vendor lock-in. Another characteristic of this project is that it operates completely on open source with both the hardware and software sources available. Technologies To achieve sub-nanosecond synchronization White Rabbit utilizes Synchronous Ethernet (SyncE) to achieve syntonization and IEEE 1588 (1588) Precision Time Protocol (PTP) to communicate time and a module for precise phase difference measurement between the master reference clock and the local clock based on phase frequency detectors. White Rabbit uses the Precision Time Protocol to achieve sub-nanosecond accuracy. A two-way exchange of the Precision Time Protocol synchronization messages allows precise adjustment of clock phase and offset. The link delay is known precisely via accurate hardware timestamps and the calculation of delay asymmetry. White Rabbit applications At CERN White Rabbit was used for the new control system of the injector chain. At GSI White Rabbit will become the timing system of the FAIR complex. The KM3NeT neutrino telescope uses White Rabbit to synchronise the detector units. The EISCAT 3D radar will utilise White Rabbit for synchronization in the beam-forming network. About 6000 detector nodes for the LHAASO (Large High Altitude Air Shower Observatory) experiment are synchronized by a White Rabbit network. At least two Cosmic Microwave Background research programs (Simons Observatory, and CMB-S4) are considering White Rabbit for the timing of their data acquisition and control systems. Several companies have begun to commercialize White Rabbit for commercial applications by developing their own White Rabbit hardware and software. The first white rabbit element on the white rabbit project was the "white rabbit switch", financed by the government of Spain and CERN, and produced by Seven Solutions. 
In years 2015-2016 White Rabbit was successfully deployed by Horizon 2020 Project DEMETRA service #3 and tested for distribution Galileo precise UTC using ground fiber service. A White Rabbit timing network A white rabbit timing network consists of three important parts. Precision Time Protocol - The IEEE1588 or the Precision Time Protocol is a time protocol designed to provide synchronization accuracy of 1 microsecond or even less, particularly for use in Industrial networks and Research labs where accurate synchronization is necessary. An accuracy of sub nanoseconds is ideally possible in PTP networks but, in practice, the master-slave and the slave-master links may be asymmetric and the resolution of the PTP time stamps is limited. Hence, the obtained synchronization accuracy in PTP networks is limited. Layer I syntonization using SyncE - Just like the SyncE standard, the mechanism to operate all the nodes at the same frequency works at the physical layer level. Hence, there is no effect on data transfers because of this. The main idea of layer I syntonization is that the clocks in the network are not free running at a frequency, but instead, should be locked to a reference standard and be traceable. So, a network using Layer I syntonization has a hierarchy in the network: there is a master node which sends the frequency information in data streams and all other nodes in the system extract this information from the data stream and have a phase-locked loop which makes them run at exactly this frequency. This removes the jitter and frequency drift in the clocks that is responsible for the offset. Phase measurement - As explained before, the frequency of the local node is disciplined using the clock signal extracted from the data stream sent by the master node. Then, the local node sends back its local clock signal to the master. As the local and master clock frequencies are locked, this clock signal is just a delayed version of the master clock signal. By calculating the phase offset between these two signals, a very accurate measurement for the particular link delay could be made. After finding the link delay, this could be used in the conventional PTP algorithm to achieve a very high accuracy. Components of a White Rabbit network are multi-port White Rabbit Switches and single or dual-port White Rabbit nodes. Both components may be added dynamically to the network. Cable length and other delay factors are automatically compensated by the Precision Time Protocol algorithms. Though conventional Gigabit Ethernet devices may be connected as well, only White Rabbit devices take part in network timing and synchronization. References External links Official website White Rabbit Tutorial ION/PTTI 2011 first paper by ELPROMA and CERN White Rabbit Solution Ethernet Synchronization CERN
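The two-way timestamp exchange underlying the synchronization described above can be sketched numerically. The following Python snippet is an illustrative addition (not part of the original article): it applies the generic IEEE 1588 offset and delay calculation to four exchange timestamps; White Rabbit's additional refinements (SyncE syntonization, hardware timestamping, phase measurement) are not modelled, and the example numbers are made up.

# Generic two-way PTP calculation from one Sync / Delay_Req exchange.
# t1: master sends Sync, t2: slave receives it, t3: slave sends Delay_Req,
# t4: master receives it (t1, t4 in master time; t2, t3 in slave time).
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float,
                         asymmetry: float = 0.0):
    """Return (slave clock offset, master->slave one-way delay) in seconds.

    `asymmetry` is the known difference between the master->slave and
    slave->master path delays; with 0.0 the link is assumed symmetric.
    """
    round_trip = (t4 - t1) - (t3 - t2)          # total path delay, both ways
    delay_ms = (round_trip + asymmetry) / 2.0   # master -> slave delay
    offset = (t2 - t1) - delay_ms               # slave clock minus master clock
    return offset, delay_ms

# Example with made-up numbers: ~5 km of fibre (~25 us one way) and a slave
# clock running 300 ns ahead of the master.
t1 = 1_000.000_000_000
t2 = t1 + 25e-6 + 300e-9      # propagation delay plus clock offset
t3 = t2 + 1e-3                # slave replies 1 ms later
t4 = t3 - 300e-9 + 25e-6      # return trip, offset removed in master time
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(f"offset = {offset * 1e9:.1f} ns, one-way delay = {delay * 1e6:.3f} us")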
White Rabbit Project
[ "Engineering" ]
1,229
[ "Telecommunications engineering", "Synchronization" ]
35,778,656
https://en.wikipedia.org/wiki/Gendered%20sexuality
Gendered sexuality is the way in which gender and sexuality are often viewed as likened constructs, whereby the role of gender in an individual's life is informed by and impacts others' perceptions of their sexuality. For example, both the male and female genders are subject to assumptions of heterosexuality. If a man were to behave in feminine ways, his heterosexuality would be doubted, and individuals may assume that he is gay. Two main theoretical perspectives dominate discussions of gendered sexuality: that of an evolutionary perspective, and that of a sociocultural perspective. Although these two are typically separate, Eagly & Wood believe that these two theories could potentially be reconcilable. Gender and sex in gendered sexuality Both the terms gender and sex have been historically interchangeable, but it was not until the late 1960s and early 70s that the term gender began to be more thoroughly defined and spread throughout the literature within the field of psychology. Although the term has undergone some changes since then, today it represents how an individual feels and expresses their gender, typically through masculinity or femininity. Through this definition, gender has often been used as a variable to study how particular parts of people, (i.e. one's sexuality), can ultimately be informed by gender. Psychological research in this area has tended to follow these three modes of looking at gender: Looking at gender through difference in presentation, actions, and traits Looking at gender vs. individual difference in individuals who identify as male and individuals who identify as female, and Looking at how gender influences how both men and women operate in society Human sexuality, unlike gender, has kept a relatively stable definition by which it refers to all sexual attitudes and behaviours in an erotic, or lack of erotic, nature. The relationship between gender and sexuality is not static, it is fluid and changing. In light of this, gendered sexuality does not necessarily follow predictable patterns. Typically, however, gendered sexuality has often followed a heteronormative path, whereby heterosexuality is seen as what Vanwesenbeeck calls a "key-site" for the intersection between gender and sexuality. Historically, however, these interpretations of sexuality have been riddled with gendered stereotypes, such as men holding more permissive attitudes towards frequent sex and multiple sexual partners, whereas women are more conservative. A study by McCabe, Tanner & Heiman illustrates that gender, at least in the Western world, informs how we understand and conceive of the construct of sexuality. Their study was aimed to discover how men and women gender their meanings of sex and sexuality, if at all, and their results suggest that men and women do talk about sex and sexuality in gendered terms. The most frequent categories of gendering sex/sexuality conversations were: Sex is only physical for men, and only emotional for women Sex is more important for men than women Women's physical appearance is important Sexual desire and/or pleasure does not significantly apply to women The researchers also commented that these four areas of gendering sexuality occurred among the participants without any suggestions or hints towards these particular subject areas. The researchers conclusions stated that gender, in some way, dictates how we learn and what we know about sex and sexuality. 
Sexual orientation and gendered sexuality Although gendered sexuality is often viewed through the constructs of male, female and heterosexuality, it can also be used in regard to other gender and sexual variant individuals such as gender dysphoria or those who identify as transgender, transsexual, intersex, homosexual or bisexual. Sociocultural perspective The sociocultural perspective of gendered sexuality holds emphasis on the idea that men and women are social beings informed by the social group of which they are a part, and that the social and cultural aspects of these groups influence the traits prescribed to males and females. The sociocultural perspective deems these traits as performative, in opposition to an evolutionary perspective that describes them through notions of essentialism and innateness. When looking at gendered sexuality through a sociocultural lens, behaviour that is considered appropriate will be influenced by four areas of social interactions: behaviour-related aspects, situation-related aspects, partner(s)-related aspects and subject-related aspects. Behaviour-related aspects The sexual behaviour that is evaluated most positively will determine what sexual behaviours are most acceptable in relation to gender. These behaviours apply to specific groups, whereby positive evaluations drive what is socially acceptable and therefore, which behaviours drive overall behaviour. In regard to gendered sexuality, Vanwesenbeck suggests that gendered sexual behaviour, if positively accepted by a social group, is more likely to occur within that social group in comparison to if it was negatively evaluated. In regard to a Western context, this can be seen within heterosexuality in males and females. Gendered behavior is also influenced by family units and consumerism. For example, parents may shop for clothing for their son in the "boys" department. By marketing clothing in this way, the individual's interpretation of sexuality can be externally controlled at an early age. Situation-related aspects This refers to how gendered behaviour is driven and/or encouraged by the sexual situation within one's direct social community. This sexual situation is referred to by Vanwesenbeeck (2009) as the sexual arena of the individual. Some examples of this could be: a gay bar, a sex club (See Ping pong show), or hip-hop culture. These experiences are all situation-specific in relation to gender and sexuality, and have a different meaning of what is considered as "normal" depending on the situational construct. Another factor that contributes to situational gendered sexuality is culture and custom. For some nations, it is customary for men and women to behave in certain ways that are considered unacceptable elsewhere. Men holding hands in India is much more acceptable than in the West, and due to these cultural differences, the perception and reaction to sexuality amongst gender varies. Partner(s)-related aspects Different sexual interactions will determine how much an individual is concerned with conforming to positive societal influences of gender. Studies suggest that increased interactions and strength of gender performativity enacted by one's partner(s) will more strongly influence one's own adherence to gender expectations. The adherence to these gender norms leaves room for unspoken expectations that may create controversy and tension. As an example, it is commonly expected for men to propose marriage to women—not the other way around. 
This societal expectation influences the behaviors of men and women seeking marital status. Subject-related aspects This final postulate rests on the individual, or the subject, and how much a person strives to meet societal gender norms. There are several theories under the label of sociocultural perspectives which have been theorized to influence gendered sexuality. Social role theory Social Role Theory dictates that people are a product of societal social roles set in place via cultural traditions, whereby society instructs all individuals what roles are appropriate for which individuals under particular circumstances. Social role theory can dictate many different types of social roles, in particular, gender roles. These gender roles imply that men and women have their own particular roles assigned to them via their sex, and that these roles are typical and desirable of their particular sex. Gender roles are both restrictive and opportunistic, whereby they dictate an individual's potential through their identification as male or female. In the Western context, this can be seen particularly through the historic gendered division of labour where men and women are fit into different professional roles dictated by their physical capabilities, typically via sex. Vanwesenbeeck suggests that: "... It's not the biological potential, or sex, per se that causes gender (role) differences to emerge, but the way society differentially treats these potentials" (p. 888). Conformity to these beliefs occurs when others both encourage and accept these behaviours, which in turn, internalizes these gender roles within the minds of men and women throughout a particular group. In a Western context, Eagly & Wood suggest that there are two particular guiding principles of gender role behaviour: Male-typical gender roles are often given a higher status of power, which labels these types of gender roles as dominant, and all others as marginal (e.g. female-typical gender roles). All individuals of a particular society will attempt to both obtain and perform the specific components which correspond with their accepted gender role (e.g. women will attempt to perform the roles dictated by female gender roles). Again, in a Western context, these gender roles also carry over to dictate sexual actions and behaviours. For example, a male gender role suggests dominance and aggression, which also carries over into a male sexual role, whereby the male is expected to be sexually dominant and aggressive. These ideologies were inherent within both male and female gendered sexual roles of the 1950s and 60s, whereby a husband was expected to sexually dominate his wife. These roles, however, have changed; there is also strong evidence to suggest that they will continue to change over time. This being said, social role theory, then, seems to also suggests that any non-heterosexual identity does not properly align with these gendered sexual roles and is not as accepted. This is also known as heteronormativity, which can be defined as "...the normalising of heterosexual structures and relationships and the marginalisation of everything that doesn't conform" (p. 142). Having to maintain an identity that conforms to these gendered sexual roles, however, has not necessarily suggested positive outcomes. Vanwesenbeeck suggests: "... 
restrictive gender norms, which undermine women's power, competence, and agency, help account for women's higher rates of depression, poorer standardized scores on a variety of psychological outcomes, and higher discontent with sex" (p. 888). Sexual double standard The sexual double standard is suggested to be a product of social role theory, whereby gendered sex roles are a part of this sexual double standard. Historically, the sexual double standard has suggested that it is both acceptable and even encouraged for men to have sex outside of wedlock, but the same concept does not apply to women. Today, sex outside of wedlock is accepted for both men and women in the majority of the Western world, but for women, this idea is restricted to the spheres of love or engagement. The sexual double standard extends itself further to undermine women, whereby gender roles dictate that all women should be sexual, but sexually humble. It influences female sexual roles in that it suggests that women can never be sexual without being sexually promiscuous. Vanwesenbeeck calls this the whore-madonna distinction. Naomi Wolf, in The Beauty Myth explains "Beauty today is what the female orgasm used to be: something given to women by men, if they submitted to their feminine role and were lucky." Research In regard to researching gendered sexuality, self-reporting data can often be confounded by social roles, whereby individuals' responses to questions about sexuality will be influenced by one's ability to want to conform to their appropriate social role. Sexuality, in particular, will inform an individual's responses because the area of sexuality is heavily monitored by what are considered normative social roles. Alexander & Fisher conducted a study to determine whether or not men and women's self-reported sexual behaviours and attitudes are influenced by expected gender roles. The self-reported sex differences were mostly found where there was the greatest risk of participants' answers being read by others, and were smallest in the condition where it was believed that participants would most likely tell the truth in order to save themselves from the embarrassment of detected false answers. The results of the study suggest that men and women are influenced by expected gender roles when it comes to sexual behaviours, particularly those considered less acceptable for women than for men, and that they could actually be more similar than previously thought in regard to these behaviours. Kennair et al. (2023) found no signs on a sexual double standard except against men. Generally there was no signs of a double standard in short-term or long-term mating contexts, nor in choosing a friend. The only exception was that women's self-stimulation was more acceptable than men's. Women assessed much more negatively their suitors with a larger sexual history than similar suitors of their friends, whereas for men the effect was smaller, suggesting women being more hypocrite. Maryanne L. Fisher et al. showed how women's intrasexual competition causes derogatory gossip, also on sexuality. They did not find a single case where a woman would have been derogated of lack of sexual experiences or partners. Instead, sexuality, gold-digging, mate poaching, substance use and mothering qualities were used as subjects. Social constructionism Social constructionism suggests that what we know to be reality is constructed by social realities that are derived from the history of humankind. 
Inherent within it is the constructionist paradigm, which has four main points: One's experiences with the world are ordered in such a way that we can make sense of them Language provides us with a classification system by which we can understand the world around us All individuals have what is known as a shared reality of life, whereby we understand how reality differs from dreams by how people, places and other things are organized. We all know and understand this is how people operate. We understand that the most beneficial way of going about doing something becomes habituated into the human psyche and ultimately becomes a part of our societal institutions. These ways that social lives are constructed influences both gender and sex. Gender is socially constructed by the ways in which one's various everyday interactions with people in a particular culture influenced the external presentation and construction of gender. The social construction of sexuality, on the other hand, is specifically dictated through societal ideologies that limit and restrict what is constructed as appropriate sexual functioning. From this standpoint, sex differences are simply byproducts of men and women attempting to adhere to their prescribed gender construction given to them by their society. Additionally, adhering to these constructions are complicated by a society's technological and situational conditions particular to each culture. It is also important to point out that gendered differences in regard to social construction are also said to be driven by relations of power, typically through patriarchal ideologies which privilege men over women. These power relations influence differences between the genders, which additionally influences variables of sexuality, such as sexual attitudes and behaviours. Similar to social role theory, these constructions are often influenced by physical traits. Research The social construction of gendered sexuality is said to be influenced by culture. Petersen & Hyde suggest that there should be a smaller gender difference in regard to attitudes on sexual behaviours in cultures that have smaller gender differences in regard to power (e.g. division of labour between the sexes). They examined their claim by using nationality as a control for gender differences in sexual attitudes and behaviours. Results supported their constructionist claims: the majority of gender differences in sexual behaviours were smaller in Europe, Australia and the USA than in countries with large gender inequalities in Asia, Africa, Latin America, and the Middle East. They concluded that these differences in behaviour can be attributed to the way in which the positions of men and women are constructed within society. Baumeister completed a study that looked at female erotic plasticity, suggesting that women are more susceptible to influence by social and cultural factors in regard to sexuality than men are. His results showed that women had greater sexual variability, lower correlations between sexual attitudes and sexual behaviour for women, and greater influence of social factors on these measures as well. Although Baumeister used an evolutionary approach to explain his findings, Hyde & Durik suggest that a sociocultural approach related to social constructionism is more appropriate. 
Hyde & Durik pointed out that in Baumeister's Western sample: men have many more levels of power over women than women have over men; groups with less power often attempt to acculturate their behaviour to that of the more powerful; and both gender roles and social constructions influence men's and women's behaviour, particularly in the area of sexuality, where heterosexuality is expected of both. Although other studies have attempted to replicate Baumeister's findings, no successful replication has yet been reported. Objectification theory Objectification theory focuses on how the body is treated in society, particularly how female bodies are treated like objects. Fredrickson & Roberts, who coined the term, initially constructed objectification theory to show how sexual objectification affects women's psychological well-being (Hill & Fischer, 2008). Sexual objectification can be seen particularly through the media, via sexual inspection or even sexual violence. This objectification can lead women to regard their bodies as objects to be 'toyed' with, rather than as entities that keep an individual alive and functioning optimally. Vanwesenbeeck suggests that this "...makes women take distance from their bodies, doubt their bodies' capacities, and results in a lack of experience in using the body effectively" (p. 890). The experience of objectification can vary greatly from woman to woman, but has been suggested to considerably affect how a woman experiences her own sexuality. When women's bodies are more frequently subjected to the male gaze, particularly in regard to sexualization, women may come to continually police their body image. This creates what Masters and Johnson called spectatoring, whereby women are continuously conscious of their outer body experience and, in doing so, are completely unaware of their inner body experience. Spectatoring is said to decrease women's overall sexual satisfaction. Mass media The majority of sexual objectification comes from the media, be it TV shows, magazines, movies or music videos. Brown suggests that the media impacts the sexual behavior of individuals in three key ways. The first way: the media takes on the responsibility of keeping sexuality, sexual attitudes, and sexual behaviors at the forefront of the public eye. Take, for example, magazines such as Cosmopolitan or Glamour. Most of these magazines intertwine images and headlines with themes of sexuality, telling women what they should be doing to stay sexy in order to keep their partners sexually interested. These forms of media in themselves enforce compulsory heterosexuality, as well as gendered sexuality. The second way: the media serves as an enforcer of gendered sexual norms. Consider, for example, the cultural importance placed on heteronormativity. As proposed by Gayle Rubin, "heteronormativity in mainstream society creates a "sex hierarchy" that graduates sex practices from morally "good sex" to "bad sex." This hierarchy places reproductive monogamous sex between committed heterosexuals as "good" and places any sexual acts and individuals who fall short of this standard lower until they fall into "bad sex."" The third way: the media promotes and encourages disregard of the sexually responsible model. Tying back to the previous examples, the media plays on the assumption that an individual desires acceptance from others. 
If enough promiscuity and sexuality is displayed on, say, the covers of magazines, then eventually people will see it as the norm and will ignore their social and moral obligations to be responsible with their sexuality. These forms of media information have also been suggested to educate the public about sexual roles and portrayals of women, with effects that differ by subgroup: this type of 'sexual education' is said to influence some audiences more than others. For example, there is evidence to suggest that teenage girls are most susceptible to these forms of knowledge, with consequences for female adolescent sexuality. All in all, the structure and foundations of American culture allow mass media to heavily influence the many different aspects of individual and gendered sexuality. Health consequences Sexual objectification is said to primarily impact the psychological health of women. It is said to negatively affect young women by instilling shame, doubt and anxiety in them through body spectatoring and policing. These effects are said to potentially lead to more serious mental health problems, such as depression and sexual dysfunction. Gender inequalities can create health inequalities. For instance, women live longer than men but are considered to be sick five times as frequently as men, while men experience higher rates of fatal illness and are more frequently injured. The construction of gendered sexuality also has health consequences in the medical community, in regard to both mental and physical health. Genital surgery for purely aesthetic reasons was introduced in 1984, but it was only in 1998 that it gained recognition on a wider scale. Two such surgeries are vaginoplasty and labiaplasty. Vaginoplasty is used to "tighten" the vagina to improve function, and labiaplasty is done to "'enhance' vulval appearance." Over time, and through these surgeries, the vagina and female genitals have come to be regarded as a problem to be solved if society does not view them as "perfect." These surgeries cause insecurities among women, objectifying them and creating a normalized view of their genitals. Women are seen to "suffer from comparable feelings of genital anxiety," and will undergo these surgeries, which are both expensive and dangerous, in order to conform to social norms and suppress their anxieties. The pursuit of the "optimal vagina" consequently damages women's health as they attempt to conform to idealized sexual function and appearance. See also Gender studies Sex and gender distinction Queer heterosexuality References Gender and society Sexuality Sexual orientation and psychology Social constructionism
Gendered sexuality
[ "Biology" ]
4,465
[ "Behavior", "Sexuality", "Sex" ]
35,779,834
https://en.wikipedia.org/wiki/Carnitine%20biosynthesis
Carnitine biosynthesis is a method for the endogenous production of L-carnitine, a molecule that is essential for energy metabolism. In humans and many other animals, L-carnitine is obtained both from the diet and by biosynthesis. The carnitine biosynthesis pathway is highly conserved among many eukaryotes and some prokaryotes. L-Carnitine is biosynthesized from Nε-trimethyllysine. At least four enzymes are involved in the overall biosynthetic pathway. They are Nε-trimethyllysine hydroxylase, 3-hydroxy-Nε-trimethyllysine aldolase, 4-N-trimethylaminobutyraldehyde dehydrogenase and γ-butyrobetaine hydroxylase. Nε-Trimethyllysine hydroxylase The first enzyme of the L-carnitine biosynthetic pathway is Nε-trimethyllysine hydroxylase, an iron- and 2-oxoglutarate (2OG)-dependent oxygenase that also requires ascorbate. Nε-Trimethyllysine hydroxylase catalyses the hydroxylation of Nε-trimethyllysine to 3-hydroxy-Nε-trimethyllysine. The current consensus theory about the origin of Nε-trimethyllysine in mammals is that mammals use lysosomal or proteasomal degradation of proteins containing Nε-trimethyllysine residues as the starting point for carnitine biosynthesis. An alternative theory involving endogenous non-peptidyl biosynthesis was also proposed, based on evidence gathered from a study in which normal and undernourished human subjects were fed the amino acid lysine. Although the Nε-trimethyllysine biosynthetic pathway involving Nε-trimethyllysine methyltransferase has been fully characterised in fungi, including Neurospora crassa, no such pathway has been properly characterised in mammals or humans. A third theory about the origin of Nε-trimethyllysine in mammals does not involve biosynthesis at all, but rather direct dietary intake from vegetable foods. High-performance liquid chromatography (HPLC) analysis has confirmed that vegetables contain a significant amount of Nε-trimethyllysine. 3-Hydroxy-Nε-trimethyllysine aldolase The second step of L-carnitine biosynthesis requires the enzyme 3-hydroxy-Nε-trimethyllysine aldolase, a pyridoxal phosphate-dependent aldolase that catalyses the cleavage of 3-hydroxy-Nε-trimethyllysine into 4-N-trimethylaminobutyraldehyde and glycine. The true identity of 3-hydroxy-Nε-trimethyllysine aldolase remains elusive, and the mammalian gene encoding it has not been identified. 3-Hydroxy-Nε-trimethyllysine aldolase activity has been demonstrated in both L-threonine aldolase and serine hydroxymethyltransferase, although whether this is the main catalytic activity of these enzymes remains to be established. 4-N-Trimethylaminobutyraldehyde dehydrogenase The third enzyme of L-carnitine biosynthesis is 4-N-trimethylaminobutyraldehyde dehydrogenase, an NAD+-dependent enzyme that catalyses the dehydrogenation of 4-N-trimethylaminobutyraldehyde into gamma-butyrobetaine. Unlike 3-hydroxy-Nε-trimethyllysine aldolase, 4-N-trimethylaminobutyraldehyde dehydrogenase has been identified and purified from many sources, including rat and Pseudomonas. However, the human 4-N-trimethylaminobutyraldehyde dehydrogenase has so far not been identified. There is considerable sequence similarity between rat 4-N-trimethylaminobutyraldehyde dehydrogenase and human aldehyde dehydrogenase 9, but the true identity of 4-N-trimethylaminobutyraldehyde dehydrogenase remains to be established. 
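For orientation, the overall route from Nε-trimethyllysine (TML) to L-carnitine, covering the three steps described above together with the final hydroxylation described in the following section, can be summarized as a reaction sequence (a sketch assembled from the text above; "TMABA" is used here only as shorthand for 4-N-trimethylaminobutyraldehyde, and cofactors are abbreviated):

```latex
\begin{align*}
\text{TML} &\xrightarrow[\text{(Fe(II), 2OG, ascorbate)}]{\text{TML hydroxylase}} \text{3-hydroxy-TML}\\
&\xrightarrow[\text{(PLP)}]{\text{3-hydroxy-TML aldolase}} \text{TMABA} + \text{glycine}\\
&\xrightarrow[\text{(NAD}^{+}\text{)}]{\text{TMABA dehydrogenase}} \gamma\text{-butyrobetaine}\\
&\xrightarrow[\text{(Fe(II), 2OG)}]{\gamma\text{-butyrobetaine hydroxylase}} \text{L-carnitine}
\end{align*}
```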
γ-Butyrobetaine hydroxylase The final step of L-carnitine biosynthesis is catalysed by γ-butyrobetaine hydroxylase, a zinc-binding enzyme. Like Nε-trimethyllysine hydroxylase, γ-butyrobetaine hydroxylase is a 2-oxoglutarate- and iron(II)-dependent oxygenase. γ-Butyrobetaine hydroxylase catalyses the stereospecific hydroxylation of γ-butyrobetaine to L-carnitine. γ-Butyrobetaine hydroxylase is the most studied of the four enzymes in the biosynthetic pathway. It has been purified from many sources, such as Pseudomonas, rat, cow, guinea pig and human. Recombinant human γ-butyrobetaine hydroxylase has also been produced in Escherichia coli and baculovirus expression systems. References Biosynthesis
Carnitine biosynthesis
[ "Chemistry" ]
1,234
[ "Biosynthesis", "Metabolism", "Chemical synthesis" ]
35,781,493
https://en.wikipedia.org/wiki/Tetrahydrocorticosterone
3α,5α-Tetrahydrocorticosterone (3α,5α-THB), or simply tetrahydrocorticosterone (THB or THCC), is an endogenous glucocorticoid hormone. See also 5α-Dihydrocorticosterone Tetrahydrodeoxycorticosterone Dihydrodeoxycorticosterone Allopregnanolone Tetrahydrocortisone Tetrahydrocortisol References Glucocorticoids Ketones Pregnanes Steroid hormones
Tetrahydrocorticosterone
[ "Chemistry" ]
131
[ "Ketones", "Functional groups" ]
35,781,701
https://en.wikipedia.org/wiki/Mozingo%20reduction
The Mozingo reduction, also known as the Mozingo reaction or thioketal reduction, is a chemical reaction capable of fully reducing a ketone or aldehyde to the corresponding alkane via a dithioacetal. The overall transformation, in outline, is R2C=O → cyclic dithioacetal → R2CH2, and it proceeds in two stages. First, the ketone or aldehyde is activated by conversion to a cyclic dithioacetal through reaction with a dithiol (nucleophilic substitution) in the presence of an acid (H+ donor). The cyclic dithioacetal is then hydrogenolyzed using Raney nickel; in the process, the Raney nickel is converted irreversibly to nickel sulfide. This method is milder than either the Clemmensen or Wolff-Kishner reductions, which employ strongly acidic or basic conditions, respectively, that might interfere with other functional groups. History The reaction is named after Ralph Mozingo, who reported the cleavage of thioethers with Raney nickel in 1942. However, the modern iteration of the reaction, involving the cyclic dithioacetal, was developed by Melville Wolfrom. References Organic reduction reactions Organic redox reactions Name reactions
Mozingo reduction
[ "Chemistry" ]
233
[ "Name reactions", "Organic redox reactions", "Organic reactions" ]
24,457,573
https://en.wikipedia.org/wiki/Morphism%20of%20algebraic%20varieties
In algebraic geometry, a morphism between algebraic varieties is a function between the varieties that is given locally by polynomials. It is also called a regular map. A morphism from an algebraic variety to the affine line is also called a regular function. A regular map whose inverse is also regular is called biregular, and the biregular maps are the isomorphisms of algebraic varieties. Because regular and biregular are very restrictive conditions – there are no non-constant regular functions on projective varieties – the concepts of rational and birational maps are widely used as well; they are partial functions that are defined locally by rational fractions instead of polynomials. An algebraic variety has naturally the structure of a locally ringed space; a morphism between algebraic varieties is precisely a morphism of the underlying locally ringed spaces. Definition If X and Y are closed subvarieties of A^n and A^m (so they are affine varieties), then a regular map f: X → Y is the restriction of a polynomial map A^n → A^m. Explicitly, it has the form f = (f_1, ..., f_m), where the f_i are in the coordinate ring of X: k[X] = k[x_1, ..., x_n]/I, where I is the ideal defining X (note: two polynomials f and g define the same function on X if and only if f − g is in I). The image f(X) lies in Y, and hence satisfies the defining equations of Y. That is, a regular map f: X → Y is the same as the restriction of a polynomial map A^n → A^m whose components satisfy the defining equations of Y. More generally, a map f: X → Y between two varieties is regular at a point x if there is a neighbourhood U of x and a neighbourhood V of f(x) such that f(U) ⊂ V and the restricted function f: U → V is regular as a function on some affine charts of U and V. Then f is called regular if it is regular at all points of X. Note: It is not immediately obvious that the two definitions coincide: if X and Y are affine varieties, then a map f: X → Y is regular in the first sense if and only if it is so in the second sense. Also, it is not immediately clear whether regularity depends on a choice of affine charts (it does not.) This kind of consistency issue, however, disappears if one adopts the formal definition. Formally, an (abstract) algebraic variety is defined to be a particular kind of locally ringed space. When this definition is used, a morphism of varieties is just a morphism of locally ringed spaces. The composition of regular maps is again regular; thus, algebraic varieties form the category of algebraic varieties where the morphisms are the regular maps. Regular maps between affine varieties correspond contravariantly, in one-to-one fashion, to algebra homomorphisms between the coordinate rings: if f: X → Y is a morphism of affine varieties, then it defines the algebra homomorphism f^#: k[Y] → k[X], g ↦ g ∘ f, where k[X] and k[Y] are the coordinate rings of X and Y; it is well-defined since g ∘ f = g(f_1, ..., f_m) is a polynomial in elements of k[X]. Conversely, if φ: k[Y] → k[X] is an algebra homomorphism, then it induces the morphism φ^a: X → Y given by: writing k[Y] = k[y_1, ..., y_m]/J, φ^a = (φ(ȳ_1), ..., φ(ȳ_m)), where the ȳ_i are the images of the y_i. Note (φ^a)^# = φ as well as (f^#)^a = f. In particular, f is an isomorphism of affine varieties if and only if f^# is an isomorphism of the coordinate rings. For example, if X is a closed subvariety of an affine variety Y and f is the inclusion, then f^# is the restriction of regular functions on Y to X. See #Examples below for more examples. Regular functions In the particular case that Y equals A^1, the regular maps f: X → A^1 are called regular functions, and are algebraic analogs of smooth functions studied in differential geometry. 
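As a concrete illustration of the correspondence between morphisms and algebra homomorphisms described above (a sketch; the parabola is chosen only because its coordinate ring is easy to write down), consider the projection from the parabola X = V(y − x^2) ⊂ A^2 to the affine line:

```latex
% The regular map f and its pullback f^# on coordinate rings,
% for the parabola X = V(y - x^2) in the affine plane.
\begin{align*}
f &\colon X \to \mathbb{A}^1, & (x, y) &\mapsto x, \\
f^{\#} &\colon k[t] \to k[X] = k[x, y]/(y - x^2), & t &\mapsto \bar{x}.
\end{align*}
```

Here f^# is an isomorphism of k-algebras, with inverse sending x̄ ↦ t and ȳ ↦ t^2, so f is biregular, with inverse g: A^1 → X, t ↦ (t, t^2).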
The ring of regular functions (that is, the coordinate ring or, more abstractly, the ring of global sections of the structure sheaf) is a fundamental object in affine algebraic geometry. The only regular function on a projective variety is constant (this can be viewed as an algebraic analogue of Liouville's theorem in complex analysis). A scalar function f: X → A^1 is regular at a point x if, in some open affine neighborhood of x, it is a rational function that is regular at x; i.e., there are regular functions g, h near x such that f = g/h and h does not vanish at x. Caution: the condition is for some pair (g, h), not for all pairs (g, h); see Examples. If X is a quasi-projective variety, i.e., an open subvariety of a projective variety, then the function field k(X) is the same as that of the closure X̄ of X, and thus a rational function on X is of the form g/h for some homogeneous elements g, h of the same degree in the homogeneous coordinate ring k[X̄] of X̄ (cf. Projective variety#Variety structure.) Then a rational function f on X is regular at a point x if and only if there are some homogeneous elements g, h of the same degree in k[X̄] such that f = g/h and h does not vanish at x. This characterization is sometimes taken as the definition of a regular function. Comparison with a morphism of schemes If X = Spec A and Y = Spec B are affine schemes, then each ring homomorphism φ: B → A determines a morphism φ^a: X → Y by taking the pre-images of prime ideals. All morphisms between affine schemes are of this type, and gluing such morphisms gives a morphism of schemes in general. Now, if X, Y are affine varieties, i.e., A, B are integral domains that are finitely generated algebras over an algebraically closed field k, then, working with only the closed points, the above coincides with the definition given at #Definition. (Proof: If f: X → Y is a morphism, then writing φ = f^#, we need to show m_{f(x)} = φ^{-1}(m_x), where m_x and m_{f(x)} are the maximal ideals corresponding to the points x and f(x); i.e., m_x = {g ∈ k[X] : g(x) = 0}. This is immediate.) This fact means that the category of affine varieties can be identified with a full subcategory of affine schemes over k. Since morphisms of varieties are obtained by gluing morphisms of affine varieties in the same way morphisms of schemes are obtained by gluing morphisms of affine schemes, it follows that the category of varieties is a full subcategory of the category of schemes over k. Examples The regular functions on A^n are exactly the polynomials in n variables, and the regular functions on P^n are exactly the constants. Let X be the affine curve y = x^2. Then f: X → A^1, (x, y) ↦ x is a morphism; it is bijective with the inverse g: A^1 → X, x ↦ (x, x^2). Since g is also a morphism, f is an isomorphism of varieties. Let X be the affine curve y^2 = x^3 + x^2. Then f: A^1 → X, t ↦ (t^2 − 1, t^3 − t) is a morphism. It corresponds to the ring homomorphism f^#: k[X] → k[t], g ↦ g(t^2 − 1, t^3 − t), which is seen to be injective (since f is surjective). Continuing the preceding example, let U = A^1 − {1}. Since U is the complement of the hyperplane t = 1, U is affine. The restriction f|_U: U → X is bijective. But the corresponding ring homomorphism is the inclusion k[X] ↪ k[U], which is not an isomorphism, and so the restriction f|_U is not an isomorphism. Let X be the affine curve x^2 + y^2 = 1 and let f = (1 − y)/x. Then f is a rational function on X. It is regular at (0, 1) despite the expression (1 − y)/x since, as a rational function on X, f can also be written as f = x/(1 + y). Let X = A^2 − {(0, 0)}. Then X is an algebraic variety since it is an open subset of a variety. If f is a regular function on X, then f is regular on the open set where x ≠ 0 and so lies in k[x, y, x^{-1}]. Similarly, it lies in k[x, y, y^{-1}]. Thus, we can write f = g/x^n = h/y^m, where g, h are polynomials in k[x, y]. 
But this implies that g is divisible by x^n, and so f is in fact a polynomial. Hence, the ring of regular functions on X is just k[x, y]. (This also shows that X cannot be affine, since if it were, X would be determined by its coordinate ring and thus X = A^2.) Suppose P^1 = A^1 ∪ {∞} by identifying the points (x : 1) with the points x on A^1 and ∞ = (1 : 0). There is an automorphism σ of P^1 given by σ(x : y) = (y : x); in particular, σ exchanges 0 and ∞. If f is a rational function on P^1, then σ^#(f) = f(1/z), and f is regular at ∞ if and only if f(1/z) is regular at zero. Taking the function field k(V) of an irreducible algebraic curve V, the functions F in the function field may all be realised as morphisms from V to the projective line over k. (cf. #Properties) The image will either be a single point, or the whole projective line (this is a consequence of the completeness of projective varieties). That is, unless F is actually constant, we have to attribute to F the value ∞ at some points of V. For any algebraic varieties X, Y, the projection p: X × Y → X, (x, y) ↦ x is a morphism of varieties. If X and Y are affine, then the corresponding ring homomorphism is p^#: k[X] → k[X × Y] = k[X] ⊗_k k[Y], f ↦ f ⊗ 1, where (f ⊗ 1)(x, y) = f(x). Properties A morphism between varieties is continuous with respect to the Zariski topologies on the source and the target. The image of a morphism of varieties need not be open nor closed (for example, the image of A^2 → A^2, (x, y) ↦ (x, xy) is neither open nor closed). However, one can still say: if f is a morphism between varieties, then the image of f contains an open dense subset of its closure (cf. constructible set). A morphism f: X → Y of algebraic varieties is said to be dominant if it has dense image. For such an f, if V is a nonempty open affine subset of Y, then there is a nonempty open affine subset U of X such that f(U) ⊂ V, and then f^#: k[V] → k[U] is injective. Thus, the dominant map f induces an injection on the level of function fields: k(Y) ↪ k(X), g ↦ g ∘ f, where the source is the direct limit of the rings k[V] and the direct limit runs over all nonempty open affine subsets V of Y. (More abstractly, this is the induced map from the residue field of the generic point of Y to that of X.) Conversely, every inclusion of fields k(Y) ↪ k(X) is induced by a dominant rational map from X to Y. Hence, the above construction determines a contravariant equivalence between the category of algebraic varieties over a field k, with dominant rational maps as morphisms, and the category of finitely generated field extensions of k. If X is a smooth complete curve (for example, P^1) and if f is a rational map from X to a projective space P^m, then f is a regular map X → P^m. In particular, when X is a smooth complete curve, any rational function on X may be viewed as a morphism X → P^1 and, conversely, such a morphism as a rational function on X. On a normal variety (in particular, a smooth variety), a rational function is regular if and only if it has no poles of codimension one. This is an algebraic analog of Hartogs' extension theorem; there is also a relative version of this fact. A morphism between algebraic varieties that is a homeomorphism between the underlying topological spaces need not be an isomorphism (a counterexample is given by a Frobenius morphism t ↦ t^p.) On the other hand, if f is bijective birational and the target space of f is a normal variety, then f is biregular. (cf. Zariski's main theorem.) A regular map between complex algebraic varieties is a holomorphic map. (There is actually a slight technical difference: a regular map is a meromorphic map whose singular points are removable, but the distinction is usually ignored in practice.) 
In particular, a regular map into the complex numbers is just a usual holomorphic function (a complex-analytic function). Morphisms to a projective space Let f: X → P^m be a morphism from a projective variety to a projective space. Let x be a point of X. Then some i-th homogeneous coordinate of f(x) is nonzero; say, i = 0 for simplicity. Then, by continuity, there is an open affine neighborhood U of x such that the composition of f with the chart (y_0 : y_1 : ... : y_m) ↦ (y_1/y_0, ..., y_m/y_0) is a morphism on U, where the y_i are the homogeneous coordinates. Note the target space here is the affine space A^m, through the identification (a_0 : a_1 : ... : a_m) = (1 : a_1/a_0 : ... : a_m/a_0) ~ (a_1/a_0, ..., a_m/a_0). Thus, by definition, the restriction f|_U is given by f|_U(x) = (g_1(x), ..., g_m(x)), where the g_i are regular functions on U. Since X is projective, each g_i is a fraction of homogeneous elements of the same degree in the homogeneous coordinate ring k[X] of X. We can arrange the fractions so that they all have the same homogeneous denominator, say f_0. Then we can write g_i = f_i/f_0 for some homogeneous elements f_i in k[X]. Hence, going back to the homogeneous coordinates, f(x) = (f_0(x) : f_1(x) : ... : f_m(x)) for all x in U, and by continuity for all x in X as long as the f_i do not vanish at x simultaneously. If they vanish simultaneously at a point x of X, then, by the above procedure, one can pick a different set of f_i that do not vanish at x simultaneously (see Note at the end of the section.) In fact, the above description is valid for any quasi-projective variety X, an open subvariety of a projective variety X̄; the difference being that the f_i are in the homogeneous coordinate ring of X̄. Note: The above does not say a morphism from a projective variety to a projective space is given by a single set of polynomials (unlike the affine case). For example, let X be the conic y^2 = xz in P^2. Then the two maps (x : y : z) ↦ (x : y) and (x : y : z) ↦ (y : z) agree on the open subset of X where both are defined (since (x : y) = (xy : y^2) = (xy : xz) = (y : z)) and so together define a morphism X → P^1. Fibers of a morphism The important fact is: Theorem. Let f: X → Y be a dominant (i.e., having dense image) morphism of algebraic varieties, and let r = dim X − dim Y. Then 1. for every irreducible closed subset W of Y and every irreducible component Z of f^{-1}(W) dominating W, dim Z ≥ dim W + r; and 2. there exists a nonempty open subset U of Y, contained in the image of f, such that dim f^{-1}(y) = r for every y in U. In Mumford's red book, the theorem is proved by means of Noether's normalization lemma. For an algebraic approach where the generic freeness plays a main role and the notion of "universally catenary ring" is a key in the proof, see Eisenbud, Ch. 14 of "Commutative algebra with a view toward algebraic geometry." In fact, the proof there shows that if f is flat, then the dimension equality in 2. of the theorem holds in general (not just generically). Degree of a finite morphism Let f: X → Y be a finite surjective morphism between algebraic varieties over a field k. Then, by definition, the degree of f is the degree of the finite field extension of the function field k(X) over f*k(Y). By generic freeness, there is some nonempty open subset U in Y such that the restriction of the structure sheaf O_X to f^{-1}(U) is free as an O_Y|_U-module. The degree of f is then also the rank of this free module. If f is étale and if X, Y are complete, then for any coherent sheaf F on Y, writing χ for the Euler characteristic, χ(X, f^*F) = deg(f) · χ(Y, F). (The Riemann–Hurwitz formula for a ramified covering shows the "étale" here cannot be omitted.) In general, if f is a finite surjective morphism, if X, Y are complete and F a coherent sheaf on Y, then from the Leray spectral sequence H^p(Y, R^q f_* f^* F) ⇒ H^{p+q}(X, f^* F), one gets: χ(X, f^* F) = Σ_q (−1)^q χ(Y, R^q f_* f^* F). In particular, if F is a tensor power L^{⊗n} of a line bundle, then R^q f_*(f^* F) = R^q f_*(O_X) ⊗ F and, since the support of R^q f_*(O_X) has positive codimension if q is positive, comparing the leading terms, one has: χ(X, f^* L^{⊗n}) = deg(f) · χ(Y, L^{⊗n}) + O(n^{dim Y − 1}) (since the generic rank of f_* O_X is the degree of f.) If f is étale and k is algebraically closed, then each geometric fiber f^{-1}(y) consists exactly of deg(f) points. See also Algebraic function Smooth morphism Étale morphisms – The algebraic analogue of local diffeomorphisms. 
Resolution of singularities contraction morphism Notes Citations References James Milne, Algebraic geometry, old version v. 5.xx. Algebraic varieties Types of functions Functions and mappings
Morphism of algebraic varieties
[ "Mathematics" ]
3,373
[ "Mathematical analysis", "Functions and mappings", "Mathematical objects", "Mathematical relations", "Types of functions" ]
24,458,945
https://en.wikipedia.org/wiki/C38H30
{{DISPLAYTITLE:C38H30}} The molecular formula C38H30 (molar mass: 486.64 g/mol, exact mass: 486.2348 u) may refer to: Hexaphenylethane Gomberg's dimer Molecular formulas
C38H30
[ "Physics", "Chemistry" ]
64
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,458,988
https://en.wikipedia.org/wiki/C40H82
{{DISPLAYTITLE:C40H82}} The molecular formula C40H82 may refer to: Lycopane, an alkane isoprenoid Tetracontane, an alkane 15,19,23-trimethylheptatriacontane, an alkane pheromone Molecular formulas
C40H82
[ "Physics", "Chemistry" ]
77
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,461,363
https://en.wikipedia.org/wiki/Animal%20efficacy%20rule
The FDA animal efficacy rule (also known as animal rule) applies to development and testing of drugs and biologicals to reduce or prevent serious or life-threatening conditions caused by exposure to lethal or permanently disabling toxic agents (chemical, biological, radiological, or nuclear substances), where human efficacy trials are not feasible or ethical. The animal efficacy rule was finalized by the FDA and authorized by the United States Congress in 2002, following the September 11 attacks and concerns regarding bioterrorism. Summary The FDA can rely on evidence from animal studies to provide substantial evidence of product effectiveness if: There is a reasonably well-understood mechanism for the toxicity of the agent and its amelioration or prevention by the product; The effect is demonstrated in either: More than one animal species expected to react with a response predictive for humans; or One well-characterized animal species model (adequately evaluated for its responsiveness in humans) for predicting the response in humans. The animal study endpoint is clearly related to the desired benefit in humans; and Data or information on the pharmacokinetics and pharmacodynamics of the product or other relevant data or information in animals or humans is sufficiently well understood to allow selection of an effective dose in humans, and it is, therefore, reasonable to expect the effectiveness of the product in animals to be a reliable indicator of its effectiveness in humans. FDA published a Guidance for Industry on the Animal Rule in October 2015. References External links 21 CFR Parts 314 and 601, Docket No. 98N-0237 New Drug and Biological Drug Products; Evidence Needed to Demonstrate Effectiveness of New Drugs When Human Efficacy Studies Are Not Ethical or Feasible. What is meant by "Required Under Animal Efficacy Rule" in the search results display? FDA.gov Postmarketing Requirements and Commitments: Frequently Asked Questions (FAQ) Animal testing Biological warfare Biological contamination Bioethics Biotechnology products Biotechnology Life sciences industry Pharmaceutical industry Pharmacy Specialty drugs
Animal efficacy rule
[ "Chemistry", "Technology", "Biology" ]
395
[ "Bioethics", "Animal testing", "Pharmacology", "Specialty drugs", "Biotechnology products", "Life sciences industry", "Pharmacy", "Pharmaceutical industry", "Biotechnology", "Biological warfare", "nan", "Ethics of science and technology", "Biopharmaceuticals" ]
24,465,207
https://en.wikipedia.org/wiki/Bent%20function
In the mathematical field of combinatorics, a bent function is a Boolean function that is maximally non-linear; it is as different as possible from the set of all linear and affine functions when measured by Hamming distance between truth tables. Concretely, this means the maximum correlation between the output of the function and a linear function is minimal. In addition, the derivatives of a bent function are balanced Boolean functions, so for any change in the input variables there is a 50 percent chance that the output value will change. The maximal nonlinearity means approximating a bent function by an affine (linear) function is hard, a useful property in the defence against linear cryptanalysis. In addition, detecting a change in the output of the function yields no information about what change occurred in the inputs, making the function immune to differential cryptanalysis. Bent functions were defined and named in the 1960s by Oscar Rothaus in research not published until 1976. They have been extensively studied for their applications in cryptography, but have also been applied to spread spectrum, coding theory, and combinatorial design. The definition can be extended in several ways, leading to different classes of generalized bent functions that share many of the useful properties of the original. It is known that V. A. Eliseev and O. P. Stepchenkov studied bent functions, which they called minimal functions, in the USSR in 1962. However, their results have still not been declassified. Bent functions are also known as perfectly nonlinear (PN) Boolean functions. Certain functions that are as close as possible to perfect nonlinearity (e.g. for functions of an odd number of bits, or vectorial functions) are known as almost perfectly nonlinear (APN). Walsh transform Bent functions are defined in terms of the Walsh transform. The Walsh transform of a Boolean function f: Z_2^n → Z_2 is the function f̂: Z_2^n → Z given by f̂(a) = Σ_{x ∈ Z_2^n} (−1)^(f(x) + a·x), where a · x = a_1x_1 + a_2x_2 + ... + a_nx_n is the dot product in Z_2^n. Alternatively, let S_0(a) = {x ∈ Z_2^n : f(x) = a · x} and S_1(a) = {x ∈ Z_2^n : f(x) ≠ a · x}. Then |S_0(a)| + |S_1(a)| = 2^n and hence f̂(a) = |S_0(a)| − |S_1(a)| = 2|S_0(a)| − 2^n. For any Boolean function f and a ∈ Z_2^n, the transform lies in the range −2^n ≤ f̂(a) ≤ 2^n. Moreover, the linear function f_0(x) = a · x and the affine function f_1(x) = a · x + 1 correspond to the two extreme cases, since f̂_0(a) = 2^n and f̂_1(a) = −2^n. Thus, for each a ∈ Z_2^n the value of f̂(a) characterizes where the function f(x) lies in the range from f_0(x) to f_1(x). Definition and properties Rothaus defined a bent function as a Boolean function f: Z_2^n → Z_2 whose Walsh transform has constant absolute value. Bent functions are in a sense equidistant from all the affine functions, so they are equally hard to approximate with any affine function. The simplest examples of bent functions, written in algebraic normal form, are f(x_1, x_2) = x_1x_2 and f(x_1, x_2, x_3, x_4) = x_1x_2 + x_3x_4. This pattern continues: x_1x_2 + x_3x_4 + ... + x_{n−1}x_n is a bent function for every even n, but there is a wide variety of other bent functions as n increases. The sequence of values (−1)^(f(x)), with x ∈ Z_2^n taken in lexicographical order, is called a bent sequence; bent functions and bent sequences have equivalent properties. In this ±1 form, the Walsh transform is easily computed as the matrix-vector product W(2^n) · ((−1)^(f(x)))_x, where W(2^n) is the natural-ordered Walsh matrix and the sequence is treated as a column vector. Rothaus proved that bent functions exist only for even n, and that for a bent function f, |f̂(a)| = 2^(n/2) for all a ∈ Z_2^n. In fact, f̂(a) = 2^(n/2)·(−1)^(g(a)), where g is also bent. In this case, ĝ(a) = 2^(n/2)·(−1)^(f(a)), so f and g are considered dual functions. Every bent function has a Hamming weight (number of times it takes the value 1) of 2^(n−1) ± 2^(n/2−1), and in fact agrees with any affine function at one of those two numbers of points. So the nonlinearity of f (the minimum Hamming distance from f to any affine function) is 2^(n−1) − 2^(n/2−1), the maximum possible. 
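To make the definitions concrete, here is a small brute-force check (an illustrative sketch, not drawn from the sources above): it computes the Walsh transform by direct summation and verifies that f(x_1, x_2, x_3, x_4) = x_1x_2 + x_3x_4 is bent, i.e. that |f̂(a)| = 2^(n/2) = 4 for every a, while a linear function is not.

```python
from itertools import product

def walsh_transform(f, n):
    """Walsh transform of a Boolean function f: {0,1}^n -> {0,1}.

    Returns a dict mapping each a to the sum over x of (-1)^(f(x) + a.x).
    """
    points = list(product((0, 1), repeat=n))

    def dot(a, x):
        return sum(ai * xi for ai, xi in zip(a, x)) % 2

    return {a: sum((-1) ** ((f(x) + dot(a, x)) % 2) for x in points)
            for a in points}

def is_bent(f, n):
    """Bent iff |f_hat(a)| = 2^(n/2) for all a (so n must be even)."""
    if n % 2:
        return False  # bent functions exist only for even n
    target = 2 ** (n // 2)
    return all(abs(v) == target for v in walsh_transform(f, n).values())

f = lambda x: (x[0] * x[1] + x[2] * x[3]) % 2  # the standard small example
print(is_bent(f, 4))                # True
print(is_bent(lambda x: x[0], 4))   # False: linear, hence not bent
```

For n much beyond 20 one would replace this direct O(4^n) summation with the fast Walsh-Hadamard transform, but at this size the brute-force version keeps the definition visible.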
Conversely, any Boolean function with nonlinearity 2^(n−1) − 2^(n/2−1) is bent. The degree of f in algebraic normal form (called the nonlinear order of f) is at most n/2 (for n > 2). Although bent functions are vanishingly rare among Boolean functions of many variables, they come in many different kinds. There has been detailed research into special classes of bent functions, such as the homogeneous ones or those arising from a monomial over a finite field, but so far the bent functions have defied all attempts at a complete enumeration or classification. Constructions There are several types of constructions for bent functions. Combinatorial constructions: iterative constructions, Maiorana–McFarland construction, partial spreads, Dillon's and Dobbertin's bent functions, minterm bent functions, bent iterative functions Algebraic constructions: monomial bent functions with exponents of Gold, Dillon, Kasami, Canteaut–Leander and Canteaut–Charpin–Kyureghyan; Niho bent functions, etc. Applications As early as 1982 it was discovered that maximum length sequences based on bent functions have cross-correlation and autocorrelation properties rivalling those of the Gold codes and Kasami codes for use in CDMA. These sequences have several applications in spread spectrum techniques. The properties of bent functions are naturally of interest in modern digital cryptography, which seeks to obscure relationships between input and output. By 1988 Forré recognized that the Walsh transform of a function can be used to show that it satisfies the strict avalanche criterion (SAC) and higher-order generalizations, and recommended this tool to select candidates for good S-boxes achieving near-perfect diffusion. Indeed, the functions satisfying the SAC to the highest possible order are always bent. Furthermore, the bent functions are as far as possible from having what are called linear structures, nonzero vectors a such that f(x + a) + f(x) is a constant. In the language of differential cryptanalysis (introduced after this property was discovered) the derivative of a bent function f at every nonzero point a (that is, f_a(x) = f(x + a) + f(x)) is a balanced Boolean function, taking on each value exactly half of the time. This property is called perfect nonlinearity. Given such good diffusion properties, apparently perfect resistance to differential cryptanalysis, and resistance by definition to linear cryptanalysis, bent functions might at first seem the ideal choice for secure cryptographic functions such as S-boxes. Their fatal flaw is that they fail to be balanced. In particular, an invertible S-box cannot be constructed directly from bent functions, and a stream cipher using a bent combining function is vulnerable to a correlation attack. Instead, one might start with a bent function and randomly complement appropriate values until the result is balanced. The modified function still has high nonlinearity, and as such functions are very rare the process should be much faster than a brute-force search. But functions produced in this way may lose other desirable properties, even failing to satisfy the SAC – so careful testing is necessary. A number of cryptographers have worked on techniques for generating balanced functions that preserve as many of the good cryptographic qualities of bent functions as possible. Some of this theoretical research has been incorporated into real cryptographic algorithms. The CAST design procedure, used by Carlisle Adams and Stafford Tavares to construct the S-boxes for the block ciphers CAST-128 and CAST-256, makes use of bent functions. 
The cryptographic hash function HAVAL uses Boolean functions built from representatives of all four of the equivalence classes of bent functions on six variables. The stream cipher Grain uses an NLFSR whose nonlinear feedback polynomial is, by design, the sum of a bent function and a linear function. Generalizations More than 25 different generalizations of bent functions are described in Tokareva's 2015 monograph. There are algebraic generalizations (q-valued bent functions, p-ary bent functions, bent functions over a finite field, generalized Boolean bent functions of Schmidt, bent functions from a finite Abelian group into the set of complex numbers on the unit circle, bent functions from a finite Abelian group into a finite Abelian group, non-Abelian bent functions, vectorial G-bent functions, multidimensional bent functions on a finite Abelian group), combinatorial generalizations (symmetric bent functions, homogeneous bent functions, rotation symmetric bent functions, normal bent functions, self-dual and anti-self-dual bent functions, partially defined bent functions, plateaued functions, Z-bent functions and quantum bent functions) and cryptographic generalizations (semi-bent functions, balanced bent functions, partially bent functions, hyper-bent functions, bent functions of higher order, k-bent functions). The most common class of generalized bent functions is the mod m type, f: Z_m^n → Z_m, such that the generalized Walsh transform f̂(a) = Σ_x ζ^(f(x) − a·x), with ζ = e^(2πi/m), has constant absolute value m^(n/2). Perfect nonlinear functions f: Z_m^n → Z_m, those such that for all nonzero a, f(x + a) − f(x) takes on each value m^(n−1) times, are generalized bent. If m is prime, the converse is true. In most cases only prime m are considered. For odd prime m, there are generalized bent functions for every positive n, even and odd. They have many of the same good cryptographic properties as the binary bent functions. Semi-bent functions are an odd-order counterpart to bent functions. A semi-bent function is an f: Z_m^n → Z_m, with n odd, such that |f̂(a)| takes only the values 0 and m^((n+1)/2). They also have good cryptographic characteristics, and some of them are balanced, taking on all possible values equally often. The partially bent functions form a large class defined by a condition on the Walsh transform and autocorrelation functions. All affine and bent functions are partially bent. This is in turn a proper subclass of the plateaued functions. The idea behind the hyper-bent functions is to maximize the minimum distance to all Boolean functions coming from bijective monomials on the finite field GF(2^n), not just the affine functions. For these functions this distance is constant, which may make them resistant to an interpolation attack. Other related names have been given to cryptographically important classes of vectorial functions Z_2^n → Z_2^n, such as almost bent functions and crooked functions. While not bent functions themselves (these are not even Boolean functions), they are closely related to the bent functions and have good nonlinearity properties. See also Correlation immunity References Further reading Boolean algebra Combinatorics Symmetric-key cryptography Theory of cryptography
Bent function
[ "Mathematics" ]
2,046
[ "Boolean algebra", "Discrete mathematics", "Mathematical logic", "Combinatorics", "Fields of abstract algebra" ]
24,465,401
https://en.wikipedia.org/wiki/Nanolaser
A nanolaser is a laser with nanoscale dimensions: a micro- or nanoscale device that emits light upon optical or electrical excitation of nanowires or other nanomaterials serving as resonators. A defining feature of nanolasers is their confinement of light on a scale approaching or surpassing the diffraction limit of light. These tiny lasers can be modulated quickly and, combined with their small footprint, this makes them ideal candidates for on-chip optical computing. History Albert Einstein proposed stimulated emission in 1916, which contributed to the first demonstration of the laser in 1960. From then on, researchers have pursued the miniaturization of lasers for more compact size and lower energy consumption. After it was recognized in the 1990s that light interacts differently with matter at the nanoscale, significant progress was made in miniaturizing lasers and increasing power conversion efficiency. Various types of nanolasers have been developed over the past decades. In the 1990s, intriguing designs of microdisk lasers and photonic crystal lasers were demonstrated with cavity sizes or energy volumes of micro- to nanoscale diameter, approaching the diffraction limit of light. The photoluminescence behavior of bulk ZnO nanowires was first reported in 2001 by Peidong Yang of the University of California, Berkeley, and it opened the door to the study of nanowire nanolasers. These designs still did not beat the diffraction limit until the demonstration of plasmonic lasers, or spasers. In 2003, David J. Bergman and Mark Stockman first proposed amplifying surface plasmon waves by stimulated emission and coined the term spaser, for "surface plasmon amplification by stimulated emission of radiation". It was not until 2009 that plasmonic nanolasers, or spasers, were first achieved experimentally; they were regarded as the smallest nanolasers at that time. Since roughly 2010, nanolaser technology has continued to progress, and new types of nanolasers have been developed, such as parity-time symmetry lasers, bound-states-in-the-continuum lasers and photonic topological insulator lasers. Comparison with conventional lasers While sharing many similarities with standard lasers, nanolasers have many unique features that distinguish them from conventional lasers, owing to the fact that light interacts differently with matter at the nanoscale. Mechanism Like conventional lasers, nanolasers are based on stimulated emission, as proposed by Einstein; the main mechanistic difference between nanolasers and conventional lasers is light confinement. The resonator, or cavity, plays an important role: it selects light of a certain frequency and direction for preferential amplification and suppresses the rest, thereby confining the light. Conventional lasers use a Fabry–Pérot cavity with two parallel mirrors. In the case of nanowires, it was shown that the two ends of a nanowire, acting as scatterers rather than as the two parallel mirrors of a Fabry–Pérot cavity, provide the feedback mechanism for nanowire lasers. In such cavities, light can be confined at best to about half its wavelength; this limit is known as the diffraction limit of light. One way to approach or beat the diffraction limit is to improve the reflectivity of the gain medium, for example by using a photonic bandgap or nanowires. 
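For a rough sense of scale, the sketch below (illustrative numbers, not values taken from the sources above) evaluates the half-wavelength confinement length λ/(2n) and the corresponding diffraction-limited mode volume (λ/2n)^3 for emission inside a high-index gain medium:

```python
# Order-of-magnitude estimate of the diffraction-limited confinement
# scale for light inside a dielectric of refractive index n_r.
# Both input values below are illustrative assumptions, not measured data.

wavelength_nm = 500.0   # assumed free-space emission wavelength
n_r = 2.5               # assumed refractive index of the gain medium

limit_nm = wavelength_nm / (2 * n_r)   # half a wavelength in the medium
mode_volume_nm3 = limit_nm ** 3        # diffraction-limited mode volume

print(f"confinement scale ~ {limit_nm:.0f} nm")            # ~100 nm
print(f"mode volume       ~ {mode_volume_nm3:.1e} nm^3")   # ~1.0e+06 nm^3
```

Plasmonic cavities, discussed below, aim to push the mode volume well below this figure.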
Another effective way to beat the diffraction limit is to convert light into surface plasmons in nanostructured metals for amplification in a cavity. Recently, new mechanisms of strong light confinement for nanolasers, including parity-time symmetry, photonic topological insulators, and bound states in the continuum, have been proposed. Properties Compared with conventional lasers, nanolasers show distinct properties and capabilities. Their biggest advantages are their ultra-small physical volumes, which improve energy efficiency, decrease lasing thresholds, and enable high modulation speeds. Types Microdisk laser A microdisk laser is a very small laser consisting of a disk with quantum well structures built into it. Its dimensions can be on the micro- or nanoscale. Microdisk lasers use a whispering-gallery mode resonant cavity. Light in the cavity travels around the perimeter of the disk, and the total internal reflection of photons can result in strong light confinement and a high quality factor, meaning the microcavity has a powerful ability to store the energy of photons coupled into it. Photonic crystal laser Photonic crystal lasers utilize periodic dielectric structures with different refractive indices; light can be confined with the use of a photonic crystal microcavity. In such dielectric materials the spatial distribution is orderly. When there is a defect in the periodic structure, the two-dimensional or three-dimensional photonic crystal structure confines light to a volume near the diffraction limit and produces the Fano resonance phenomenon, which yields a high quality factor together with strong light confinement. The fundamental feature of photonic crystals is the photonic bandgap: light whose frequency falls within the photonic bandgap cannot propagate in the crystal structure, resulting in high reflectivity for incident light and strong confinement of light to a small, wavelength-scale volume. Within the photonic bandgap, spontaneous emission is completely suppressed. However, the high cost of photonic crystals impedes the development and wider application of photonic crystal lasers. Nanowire laser Semiconductor nanowire lasers have a quasi-one-dimensional structure with diameters ranging from a few nanometers to a few hundred nanometers and lengths ranging from hundreds of nanometers to a few microns. The width of a nanowire is large enough that the quantum size effect can be ignored, yet nanowires are high-quality one-dimensional waveguides with cylindrical, rectangular, trigonal, or hexagonal cross-sections. The quasi-one-dimensional structure, together with the strong feedback provided by scattering of light at the nanowire ends, makes nanowires good optical waveguides capable of confining light. Nanowire lasers are similar to Fabry–Pérot cavities in mechanism but differ quantitatively in their reflection coefficients. The high reflectivity of the nanowire and the flat end facets of the wire constitute a good resonant cavity, in which photons can be bound between the two ends of the nanowire, limiting the light energy to the axial direction of the nanowire and thus meeting the conditions for laser formation. Polygonal nanowires can form a nearly circular cavity in cross-section that supports whispering-gallery modes. Plasmonic nanolaser Nanolasers based on surface plasmons are known as plasmonic nanolasers, with sizes that beat the diffraction limit of light. 
If a plasmonic nanolaser is nanoscopic in all three dimensions, it is also called a spaser, which has the smallest known cavity and mode sizes. Plasmonic nanolaser design has become one of the most effective routes to laser miniaturization at present. Somewhat differently from conventional lasers, a typical plasmonic nanolaser configuration includes an energy-transfer process that converts photons into surface plasmons. In a plasmonic nanolaser, or spaser, the quantum that is amplified is no longer the photon but the surface plasmon polariton. Surface plasmons are collective oscillations of free electrons on metal surfaces under the action of external electromagnetic fields. According to their manifestations, the cavity modes in plasmonic nanolasers can be divided into propagating surface plasmon polaritons (SPPs) and non-propagating localized surface plasmons (LSPs). SPPs are electromagnetic waves that propagate along the interface between a metal and a dielectric medium, with intensity decaying gradually in the direction perpendicular to the interface. In 2008, Oulton introduced a plasmonic nanowire laser design consisting of a thin, low-refractive-index dielectric layer grown on a metal surface with a high-refractive-index semiconductor nanowire as the gain layer. In this structure, the electromagnetic field can be transferred from the metal layer to the intermediate gap layer, so that the mode energy is highly concentrated, greatly reducing the energy loss in the metal. The LSP mode exists in a variety of metal nanostructures, such as metal nanoparticles (nanospheres, nanorods, nanocubes, etc.) and arrays of nanoparticles. Unlike the propagating surface plasmon polariton, the localized surface plasmon does not propagate along the surface but oscillates back and forth in the nanostructure in the form of a standing wave. When light is incident on the surface of a metal nanoparticle, it causes a real displacement of the surface charge relative to the ions. The attraction between electrons and ions drives the oscillation of the electron cloud and the formation of localized surface plasmons. The oscillation of the electrons is determined by the geometric boundaries of the nanoparticle; when its resonance frequency matches that of the incident electromagnetic field, localized surface plasmon resonance is formed. In 2009, Mikhail A. Noginov of Norfolk State University in the United States demonstrated the first LSP-based nanolaser. The nanolaser in this work was composed of a gold (Au) core providing the plasmon mode and a silica shell doped with OG-488 dye providing the gain medium. The diameter of the Au core was 14 nm, the thickness of the silica layer was 15 nm, and the diameter of the whole device was only 44 nm, making it the smallest nanolaser at that time. New types of nanolasers In addition, new types of nanolasers have been developed in recent years to approach the diffraction limit. Parity-time symmetry is related to a balance of optical gain and loss in a coupled-cavity system. When the gain-loss contrast and the coupling constant between two identical, closely spaced cavities are controlled, a phase transition of the lasing modes occurs at an exceptional point. A bound-states-in-the-continuum laser confines light in an open system by eliminating radiation states through destructive interference between resonant modes. 
A photonic topological insulator laser is based on topologically protected optical modes, in which the topological states are confined within the cavity boundaries and can be used for the formation of a laser. All of these new types of nanolasers have high quality factors and can achieve cavity sizes and mode sizes approaching the diffraction limit of light. Applications Owing to their unique capabilities, including low lasing thresholds, high energy efficiency and high modulation speeds, nanolasers show great potential for practical applications in the fields of materials characterization, integrated optical interconnects, and sensing. Nanolasers for material characterization The intense optical fields of such lasers also enable enhancement effects in nonlinear optics and surface-enhanced Raman scattering (SERS). Nanowire nanolasers are capable of optical detection at the scale of a single molecule, with high resolution and ultrafast modulation. Nanolasers for integrated optical interconnects The Internet is developing at an extremely high speed, and data communication consumes large amounts of energy. The high energy efficiency of nanolasers could play an important role in decreasing energy consumption for future society. Nanolasers for sensing Plasmonic nanolaser sensors that can detect specific molecules in air and serve as optical biosensors have recently been demonstrated. Molecules can modify the surface of the metal nanoparticles and affect the surface recombination velocity of the gain medium of a plasmonic nanolaser, which underlies the sensing mechanism of plasmonic nanolasers. Challenges Although nanolasers have shown great potential, some challenges remain on the way to their large-scale use, for example electrically injected operation, cavity configuration engineering and metal quality improvement. For nanolasers, the realization of electrically injected (electrically pumped) operation at room temperature is a key step towards practical application. However, most nanolasers are optically pumped, and the realization of electrically injected nanolasers remains a major technical challenge; only a few studies have reported electrically injected nanolasers. Moreover, cavity configuration engineering and metal quality improvement also remain challenging; both are crucial for satisfying the high-performance requirements of nanolasers and achieving their applications. Recently, nanolaser arrays have shown great potential to increase power efficiency and accelerate modulation speed. See also Laser List of laser articles Nanowire laser Polariton laser Spaser, plasmonic laser References External links "Spaser": The future of nanolaser technology Breakthrough in the Creation of Electrically Driven Nanolasers for Integrated Circuits Quantum optics Photonics American inventions Laser types
Nanolaser
[ "Physics" ]
2,689
[ "Quantum optics", "Quantum mechanics" ]
24,465,715
https://en.wikipedia.org/wiki/Protective%20index
The protective index (PI) is a comparison of the amount of a therapeutic agent that causes the therapeutic effect to the amount that causes toxicity. Quantitatively, it is the ratio given by the toxic dose divided by the therapeutic dose. A protective index is the toxic dose of a drug for 50% of the population (TD50) divided by the minimum effective dose for 50% of the population (ED50). A high protective index is preferable to a low one: this corresponds to a situation in which one would have to take a much higher dose of a drug to reach the toxic threshold than the dose taken to elicit the therapeutic effect. A drug should ordinarily only be administered if the protective index is greater than one, indicating that the benefit outweighs the risk. The protective index is similar to the therapeutic index, but concerns toxicity (TD50) rather than lethality (LD50); thus, the protective index is a smaller ratio. Toxicity can take many forms, as drugs typically have multiple side effects of varying severity, so a specific criterion of toxicity must be specified for the protective index to be meaningful. Ideally a choice is made such that the harm caused by the toxicity just outweighs the benefit of the drug's effect. Thus, the protective index is a more accurate measure of the benefit-to-risk ratio than the therapeutic index, but is less objectively defined. Nevertheless, the therapeutic index can be viewed as an upper bound to the protective index for a given substance. The protective index can also be defined as the factor by which the dose of a toxicant must be multiplied to produce a defined level of toxicity in the presence of a nontoxic dose of another chemical. The higher the protective index, the better the antidotal value of a given substance. Sometimes the protective index is higher in the presence of two or more substances than in the presence of either of the substances alone. For example, the LD50 of potassium cyanide alone is 11 mg/kg, whereas it is 21 mg/kg in the presence of sodium nitrite, giving a protective index of 1.91. The LD50 of potassium cyanide in the presence of sodium thiosulfate is 35 mg/kg, giving a protective index of 3.2. The LD50 of potassium cyanide in the presence of both nitrite and thiosulfate is 52 mg/kg with a protective index of 4.73. Since the protective index is higher for the simultaneous use of nitrite and thiosulfate, the two chemicals together constitute the antidote against cyanide intoxication. References Clinical pharmacology
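As a quick numerical check of the cyanide figures quoted above (a sketch; the LD50 values are exactly the ones stated in the text), the antidotal protective index is simply a ratio of LD50 values:

```python
# Antidotal protective index: LD50 of the toxicant with the antidote
# present, divided by the LD50 of the toxicant alone.
def protective_index(ld50_with_antidote, ld50_alone):
    return ld50_with_antidote / ld50_alone

LD50_KCN = 11.0  # mg/kg, potassium cyanide alone (figure from the text)

for antidote, ld50 in [("sodium nitrite", 21.0),
                       ("sodium thiosulfate", 35.0),
                       ("nitrite + thiosulfate", 52.0)]:
    print(f"{antidote}: PI = {protective_index(ld50, LD50_KCN):.2f}")

# sodium nitrite: PI = 1.91
# sodium thiosulfate: PI = 3.18  (quoted as 3.2 in the text)
# nitrite + thiosulfate: PI = 4.73
```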
Protective index
[ "Chemistry" ]
539
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs", "Clinical pharmacology" ]
24,465,778
https://en.wikipedia.org/wiki/Median%20toxic%20dose
In toxicology, the median toxic dose (TD50) of a drug or toxin is the dose at which toxicity occurs in 50% of cases. The type of toxicity should be specified for this value to have meaning for practical purposes. The median toxic dose covers the category of toxicity that is greater than the median effective dose (ED50) but less than the median lethal dose (LD50). However, for some highly potent toxins (e.g. lofentanil, botulinum toxin) the difference between the ED50 and TD50 is so small that the two values may be treated as approximately equal. Since toxicity need not be lethal, the TD50 is generally lower than the median lethal dose (LD50), and the latter can be considered an upper bound for the former. However, since the toxicity is above the effective limit, the TD50 is generally greater than the ED50. If the result of a study is a toxic effect that does not result in death, the effect is classified under this measure. Toxic effects can be defined in different ways; sometimes the therapeutic effect of a substance is itself considered toxic (as with chemotherapeutics), which can lead to confusion and contention regarding a substance's TD50. Examples of these toxic endpoints include cancer, blindness, anemia, and birth defects. References Toxicology
Median toxic dose
[ "Chemistry", "Environmental_science" ]
278
[ "Pharmacology", "Toxicology", "Toxicology stubs", "Medicinal chemistry stubs", "Pharmacology stubs" ]
24,467,573
https://en.wikipedia.org/wiki/Black%20star%20%28semiclassical%20gravity%29
A black star is a gravitational object composed of matter. It is a theoretical alternative to the black hole concept from general relativity. The theoretical construct was created through the use of semiclassical gravity theory. A similar structure should also exist for the Einstein–Maxwell–Dirac equations system, which is the (super) classical limit of quantum electrodynamics, and for the Einstein–Yang–Mills–Dirac system, which is the (super) classical limit of the Standard Model. A black star does not require an event horizon, and may or may not be a transitional phase between a collapsing star and a singularity. A black star is created when matter collapses significantly more slowly than the free-fall velocity of a hypothetical particle falling to the center of its star. Quantum processes create vacuum polarization, producing a form of degeneracy pressure that prevents spacetime (and the particles held within it) from occupying the same space at the same time. This vacuum energy is theoretically unlimited and, if built up quickly enough, will stop gravitational collapse from creating a singularity. This may entail an ever-decreasing rate of collapse, leading to an infinite collapse time or an asymptotic approach to a radius greater than zero. A black star with a radius slightly greater than the predicted event horizon for an equivalent-mass black hole will appear very dark, because almost all light produced will be drawn back to the star, and any escaping light will be severely gravitationally redshifted. It will appear almost exactly like a black hole. It will feature Hawking radiation, as virtual particle pairs created in its vicinity may still be split, with one particle escaping and the other being trapped. Additionally, it will create thermal Planckian radiation that will closely resemble the expected Hawking radiation of an equivalent black hole. The predicted interior of a black star will be composed of this strange state of spacetime, with each depth heading inward appearing the same as a black star of equivalent mass and radius with the layers above stripped off. Temperatures increase with depth towards the center. See also Gravastar Dark-energy star Black hole Fuzzball (string theory) Black holes Quantum gravity Stellar black holes Hypothetical stars
Black star (semiclassical gravity)
[ "Physics", "Astronomy" ]
447
[ "Black holes", "Physical phenomena", "Physical quantities", "Stellar black holes", "Unsolved problems in physics", "Astrophysics", "Quantum gravity", "Density", "Stellar phenomena", "Astronomical objects", "Physics beyond the Standard Model" ]
33,083,988
https://en.wikipedia.org/wiki/C6H11N3
The molecular formula C6H11N3 (molar mass: 125.17 g/mol, exact mass: 125.0953 u) may refer to: α-Methylhistamine 1-Methylhistamine 4-Methylhistamine Molecular formulas
C6H11N3
[ "Physics", "Chemistry" ]
71
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
33,086,678
https://en.wikipedia.org/wiki/Product%20numerical%20range
Given a Hilbert space with a tensor product structure, a product numerical range is defined as a numerical range with respect to the subset of product vectors. In some situations, especially in the context of quantum mechanics, the product numerical range is known as the local numerical range. Introduction Let $A$ be an operator acting on an $N$-dimensional Hilbert space $\mathcal{H}_N$. Let $W(A)$ denote its numerical range, i.e. the set of all $\lambda$ such that there exists a normalized state $|\psi\rangle \in \mathcal{H}_N$, $\langle\psi|\psi\rangle = 1$, which satisfies $\langle\psi|A|\psi\rangle = \lambda$. An analogous notion can be defined for operators acting on a composite Hilbert space with a tensor product structure. Consider first a bi-partite Hilbert space, $\mathcal{H}_N = \mathcal{H}_K \otimes \mathcal{H}_M$, of composite dimension $N = KM$. Let $A$ be an operator acting on the composite Hilbert space. We define the product numerical range $\Lambda^{\otimes}(A)$ of $A$, with respect to the tensor product structure of $\mathcal{H}_N$, as $\Lambda^{\otimes}(A) = \{\langle\psi_K \otimes \psi_M | A | \psi_K \otimes \psi_M\rangle\}$, where $|\psi_K\rangle \in \mathcal{H}_K$ and $|\psi_M\rangle \in \mathcal{H}_M$ are normalized. Product numerical radius Let $\mathcal{H}_N = \mathcal{H}_K \otimes \mathcal{H}_M$ be a tensor product Hilbert space. We define the product numerical radius $r^{\otimes}(A)$ of $A$, with respect to this tensor product structure, as $r^{\otimes}(A) = \max\{|z| : z \in \Lambda^{\otimes}(A)\}$. Notation The notion of the numerical range of a given operator, also called the "field of values", has been extensively studied during the last few decades and its usefulness in quantum theory has been emphasized. Several generalizations of the numerical range are known. In particular, Marcus introduced the notion of the decomposable numerical range, the properties of which are a subject of considerable interest. The product numerical range can be considered as a particular case of the decomposable numerical range defined for operators acting on a tensor product Hilbert space. This notion may also be considered as a numerical range relative to the proper subgroup $U(K) \times U(M)$ of the full unitary group $U(KM)$. General case It is not difficult to establish the basic properties of the product numerical range which are independent of the partition of the Hilbert space and of the structure of the operator. We list them below, leaving some simple items without a proof. Basic properties The product numerical range forms a connected set in the complex plane, because it is a continuous image of a connected set. The product numerical range is subadditive: for all $A$ and $B$, $\Lambda^{\otimes}(A + B) \subset \Lambda^{\otimes}(A) + \Lambda^{\otimes}(B)$. For all $A$ and $\alpha \in \mathbb{C}$, $\Lambda^{\otimes}(A + \alpha\,\mathbb{1}) = \Lambda^{\otimes}(A) + \alpha$ and $\Lambda^{\otimes}(\alpha A) = \alpha\,\Lambda^{\otimes}(A)$. For unitary $U$ and $V$, $\Lambda^{\otimes}\bigl((U \otimes V) A (U \otimes V)^{\dagger}\bigr) = \Lambda^{\otimes}(A)$. If one of the factors $A$ or $B$ is normal, then the numerical range of their tensor product coincides with the convex hull of the product numerical range, $W(A \otimes B) = \mathrm{Conv}\bigl(\Lambda^{\otimes}(A \otimes B)\bigr)$. Convexity The product numerical range does not need to be convex: there are simple examples of matrices for which two points of the product numerical range have a midpoint lying outside it. The product numerical range forms a nonempty set for a general operator; in particular, it contains the barycenter of the spectrum. Barycenter The product numerical range of $A$ includes the barycenter of the spectrum, $\frac{1}{N}\,\mathrm{Tr}\,A \in \Lambda^{\otimes}(A)$. The product numerical radius is a vector norm on matrices, but it is not a matrix norm. The product numerical radius is invariant with respect to local unitaries, which have the tensor product structure.
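Since explicit descriptions are rarely available, the set can be probed numerically. Below is a rough Monte Carlo sketch that samples Haar-random product states; this is only an illustrative probe, not the method of the papers cited below, and the example matrix and sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(dim):
    """Haar-random normalized state vector of the given dimension."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def sample_product_numerical_range(A, K, M, n_samples=20000):
    """Sample points <chi|A|chi> with |chi> = |psi_K> (x) |psi_M>."""
    points = np.empty(n_samples, dtype=complex)
    for i in range(n_samples):
        chi = np.kron(random_state(K), random_state(M))
        points[i] = np.vdot(chi, A @ chi)  # <chi| A |chi>
    return points

K = M = 2
A = np.diag([1, 1j, 1j, 1])  # an arbitrary 4x4 example
pts = sample_product_numerical_range(A, K, M)
# The average over Haar-random product states equals Tr(A)/N exactly in
# expectation, consistent with the barycenter property stated above.
print("sample mean:", pts.mean(), " Tr(A)/4:", np.trace(A) / 4)
```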
Życzkowski, "Restricted numerical range: a versatile tool in the theory of quantum information", J. Math. Phys. 51, 102204 (2010). . Quantum mechanics Operator theory
Product numerical range
[ "Physics" ]
759
[ "Theoretical physics", "Quantum mechanics" ]
33,087,757
https://en.wikipedia.org/wiki/Health%20hazards%20of%20air%20travel
A number of possible health hazards of air travel have been investigated. Infection On an airplane, people sit in a confined space for extended periods of time, which increases the risk of transmission of airborne infections. For this reason, airlines place restrictions on the travel of passengers with known airborne contagious diseases (e.g. tuberculosis). During the severe acute respiratory syndrome (SARS) epidemic of 2003, awareness of the possibility of acquiring infection on a commercial aircraft reached its zenith when, on one flight from Hong Kong to Beijing, 16 of 120 people on the flight developed proven SARS from a single index case. There is very limited research on contagious diseases on aircraft. The two most common respiratory pathogens to which air passengers are exposed are parainfluenza and influenza. In one study, the flight ban imposed following the attacks of September 11, 2001 was found to have restricted the global spread of seasonal influenza, resulting in a much milder influenza season that year, and the ability of influenza to spread on aircraft has been well documented. There are no data on the relative contributions of large droplets, small particles, close contact and surface contamination, nor on the relative importance of any of these methods of transmission for specific diseases, and therefore very little information on how to control the risk of infection. There is no standardisation of air handling by aircraft, of the installation of HEPA filters, or of hand washing by air crew, and no published information on the relative efficacy of any of these interventions in reducing the spread of infection. Air travel, like other forms of travel, radically increases the speed at which infections spread around the world, as viruses rapidly reach large numbers of people living across the world. Human and cargo traffic greatly facilitates the spread of pathogens across the world, as for example during the COVID-19 pandemic. Deep vein thrombosis Deep vein thrombosis (DVT) is the third most common vascular disease, after stroke and heart attack. It is estimated that DVT affects one in 5,000 travellers on long flights. Risk increases with exposure to more flights within a short time frame and with increasing duration of flights. According to a health expert in Canada, even though the risk of a blood clot is low, given the number of people who fly, it is a public health risk. It was reported in 2016 that the average distance between seat rows, as well as the average seat size, had shrunk over the previous two decades. Radiation exposure Flying high, passengers and crews of jet airliners are exposed to at least 10 times the cosmic ray dose that people at sea level receive. Every few years, a geomagnetic storm permits a solar particle event to penetrate down to jetliner altitudes; aircraft flying polar routes near the geomagnetic poles are at particular risk from this increased radiation from space. Other possible health hazards Other possible hazards of air travel that have been investigated include airsickness and chemical contamination of cabin air. In pregnancy In low-risk pregnancies, most health care providers approve flying until about 36 weeks of gestational age. Most airlines allow pregnant women to fly short distances at less than 36 weeks, and long distances at less than 32 weeks. Many airlines require a doctor's note approving flying, especially after 28 weeks. 
See also Air safety Aviation medicine Fear of flying Jet lag Shame of flying Travel medicine References Aviation medicine Technology hazards
Health hazards of air travel
[ "Technology" ]
704
[ "nan" ]
33,093,589
https://en.wikipedia.org/wiki/Carbfix
Carbfix is an Icelandic company founded in 2007. It has developed an approach to permanently store CO2 by dissolving it in water and injecting it into basaltic rocks. Once in the subsurface, the injected CO2 reacts with the host rock, forming stable carbonate minerals and thus providing permanent storage of the injected CO2. Approximately 200 tons of CO2 were injected into subsurface basalts in a first-of-a-kind pilot injection in SW-Iceland in 2012. Research results published in 2016 showed that 95% of the injected CO2 was solidified into calcite within 2 years, using 25 tons of water per ton of CO2. Since 2014, this technology has been applied to the emissions of the Hellisheiði Geothermal Power Plant. H2S and CO2 are co-captured from the emission stream of the power station and permanently and safely stored via in-situ carbon mineralization at the Húsmúli reinjection site. The process captures approximately one-third of the CO2 emissions (12,000 tCO2/y) and 60% of the H2S emissions (6,000 tH2S/y) from the power plant. The Silverstone project aims to deploy full-scale CO2 capture, injection, and mineral storage at the Hellisheiði Geothermal Power Plant from 2025 onwards. Carbfix currently operates four injection sites in Iceland: at the Hellisheiði Geothermal Power Plant, at the Nesjavellir Geothermal Power Plant, at the Orca direct air capture plant near Hellisheiði, and within the CO2Seastone project in Helguvík (see the section "Current status"). Background Carbfix was founded by the then Icelandic President, Dr Ólafur Ragnar Grímsson, Einar Gunnlaugsson at Reykjavík Energy, Wallace S. Broecker at Columbia University, Eric H. Oelkers at CNRS Toulouse (France), and Sigurður Reynir Gíslason at the University of Iceland to limit the greenhouse gas emissions in Iceland. Reykjavik Energy supplied the initial funding for Carbfix. Further funding has been supplied by the European Commission and the Department of Energy of the United States. In addition to finding a new method for permanent carbon dioxide storage, another objective of the project was to train scientists. Method Captured CO2 is dissolved in water, either prior to or during injection, into mafic or ultramafic formations such as basalts. The dissolution of CO2 in water can be expressed as: CO2 (g) + H2O(l) ⇌ H2CO3 (aq) ↔ H+(aq) + HCO3- (aq) ↔ 2H+(aq) + CO32-(aq) By dissolving the CO2 in water, instant solubility trapping is achieved, which is the second most secure trapping mechanism of CO2 storage: no CO2 bubbles are present in the CO2-charged water, which is furthermore denser than the water present in the formation, so that the CO2-charged water tends to sink rather than migrate upwards towards the surface. The CO2-charged water is acidic, typically having a pH of 3-5 depending on the partial pressure of CO2, the water composition, and the temperature of the system. The CO2-charged water reacts with the subsurface rocks and dissolves cations such as calcium, magnesium, and iron. 
The dissolution of cation-bearing silicate minerals (for example, pyroxene, a common mineral in basalt and peridotite) can be expressed as: 2H+ + H2O + (Ca,Mg,Fe)SiO3 = Ca2+, Mg2+, Fe2+ + H4SiO4 The cations can react with the dissolved CO2 to form stable carbonate minerals, such as calcite (CaCO3), magnesite (MgCO3), and siderite (FeCO3), a reaction that can be expressed as: Ca2+, Mg2+, Fe2+ (aq) + CO32- (aq) → CaCO3 (s), MgCO3 (s), FeCO3 (s) Ultramafic and mafic rock formations are most efficient due to their high reactivity and their abundance of divalent metal cations. The degree to which the released cations form minerals depends on the element, the pH and the temperature. Practicalities Drilling and injecting carbonated water at high pressure into basaltic rocks at Hellisheiði has been estimated to cost less than US$25 per tonne of CO2. This project commenced carbon injection in 2012. The funding was supplied by the University of Iceland, Columbia University, France's National Centre of Scientific Research, the United States Department of Energy, the EU, Nordic funds and Reykjavik Energy. These funding sources include the European Union's Horizon 2020 research and innovation programme under grant agreements No. 764760 and 764810; the European Commission through the projects CarbFix (EC coordinated action 283148), Min-GRO (MC-RTN-35488), Delta-Min (PITN-GA-2008-215360), and CO2-REACT (EC Project 317235); the Nordic fund 11029-NORDICCS; the Icelandic GEORG Geothermal Research fund (09-02-001) to S.R.G. and Reykjavik Energy; and the U.S. Department of Energy under award number DE-FE0004847. Challenges Reinjection of geothermal fluid from the Hellisheiði Geothermal Power Plant started in the Húsmúli reinjection field in September 2011. Commissioning of the reinjection site caused significant induced seismicity that was felt in nearby communities. This problem was addressed by introducing a new workflow in which preventive steps are taken to minimize this risk, including the adjustment of injection rates. The implementation of the workflow resulted in a decrease in the annual number of seismic events greater than magnitude 2 in the area, from 96 in 2011 to one in 2018, which is considered satisfactory and demonstrates that current operations are within regulatory boundaries. Carbfix started injection of CO2 captured from the Hellisheiði Geothermal Power Plant, dissolved in condensate from the plant's turbines, into one of the existing reinjection wells in the Húsmúli reinjection field in April 2014. No increased seismicity was noted after the injection of CO2 started, implying that seismicity is not induced by the injection of the condensate-dissolved CO2. Current status Carbfix currently operates four injection sites in Iceland, with emphasis on injection of CO2 captured from point sources, CO2 that is captured and transported to an injection site, and CO2 that is captured directly from the atmosphere using direct air capture (DAC) technology. Point source capture and mineral storage of CO2 Carbfix has since June 2014 captured and injected CO2 and hydrogen sulfide (H2S) from the Hellisheiði Geothermal Power Plant. The geothermal gases are dissolved in condensate from the power plant's turbines in a specially designed scrubbing tower and injected to a depth of 750 m underground into basaltic rocks. 
Currently about 68% of the H2S and 34% of the CO2 from the plant's emissions are captured and injected, which amounts to about 12,000 tons of CO2 per year and about 5,000 tons of H2S per year. Results show that over 60% of the injected CO2 was mineralized within 4 months of injection, and over 85% of the injected H2S within 4 months of injection. Carbfix is currently working on scaling up the operations at the Hellisheiði Geothermal Power Plant through the EU Innovation Fund project Silverstone, aiming for near-zero-emission geothermal power production from 2025 by capturing over 95% of the CO2 and 99% of the H2S from the plant's emissions. This accounts for up to 40,000 tons of CO2 and up to 12,000 tons of H2S per year. Carbfix has since early 2023 captured and injected CO2 and H2S from the Nesjavellir Geothermal Power Plant in SW-Iceland as part of the Europe Horizon 2020 funded GECO project. The same approach is used as at the Hellisheiði Geothermal Power Plant, but with optimized capture efficiency of the scrubbing tower. The gases are dissolved in condensate from the plant's turbines and injected into the basaltic subsurface below 900 m. Injection and mineral storage of CO2 captured from the atmosphere using direct air capture technologies The world's first injection of CO2 captured from the atmosphere was carried out in Hellisheiði in SW-Iceland in 2017, as part of the Europe H2020 funded project CarbFix2. The CO2 was captured using a direct air capture (DAC) unit developed by the Swiss green-tech company Climeworks. The CO2 was then dissolved in water and injected into the basaltic subsurface. In 2021, the world's first commercial combined DAC and storage plant, Orca, was commissioned in Hellisheiði in collaboration between Climeworks and Carbfix. The plant has the capacity to capture up to 3,600 tons of CO2 per year directly from the atmosphere, which are injected into basalts for permanent mineral storage. In 2024, Climeworks and Carbfix are commissioning the Mammoth DAC plant, with the capacity to capture up to 36,000 tons per year, which will be injected into the basalt for permanent mineral storage at the Geothermal Park in Hellisheiði. CO2 capture, transport and storage Cross-border transport of CO2 was first demonstrated as part of the DemoUpCarma project in August 2022. The project was funded by the Swiss Federal Offices and led by ETH. The CO2 was captured from a biogas upgrading plant in Bern, Switzerland, and transported to Iceland, where it was first injected at the Hellisheiði site. The current injection site of the DemoUpCarma project is in Helguvík, Iceland, where the CO2 is co-injected with seawater as part of the R&D project CO2Seastone. In July 2021, Carbfix was awarded the largest research grant ever granted to an Icelandic company, when it was nominated for an EU Innovation Fund grant of €15 million for the Coda Terminal project. The Coda Terminal will be developed in Straumsvík, SW-Iceland as the first cross-border carbon transport and storage hub in Iceland. CO₂ will be captured at industrial sites in N-Europe, focusing on the hard-to-abate sector, and shipped to the Terminal, where it will be unloaded into onshore tanks for temporary storage. The CO₂ will then be pumped into a network of nearby injection wells, where it will be dissolved in water during injection into the basaltic bedrock. The operations will be scaled up in steps, reaching up to 3 million tons of CO₂ per year from 2031. 
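For a rough sense of scale, the sketch below combines the roughly 25 tonnes of water per tonne of CO2 reported for the 2012 pilot with the annual capacities quoted in this article; actual operational water demand depends on injection pressure and gas composition, so these are only order-of-magnitude illustrations:

```python
# Back-of-envelope water demand for the dissolution step, using the
# pilot-injection figure of ~25 t water per t CO2 quoted in this article.
WATER_PER_TONNE_CO2 = 25.0  # tonnes of water per tonne of CO2 (pilot figure)

projects = {
    "Hellisheidi capture (current)": 12_000,   # tCO2/year
    "Orca DAC plant": 3_600,                   # tCO2/year
    "Mammoth DAC plant": 36_000,               # tCO2/year
    "Coda Terminal (from 2031)": 3_000_000,    # tCO2/year
}

for name, tonnes_co2 in projects.items():
    water = tonnes_co2 * WATER_PER_TONNE_CO2
    print(f"{name}: ~{water:,.0f} t water/year")
```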
References External links Carbfix.com – Website of the project , Aug 23, 2016 PBS NewsHour Emissions reduction Environment of Iceland Climate engineering
Carbfix
[ "Chemistry", "Engineering" ]
2,363
[ "Greenhouse gases", "Geoengineering", "Planetary engineering", "Emissions reduction" ]
33,095,314
https://en.wikipedia.org/wiki/Sexual%20bimaturism
Sexual bimaturism describes a difference in developmental timing between males and females of the same species. Sexual bimaturism can result in sexual dimorphism, but sexual dimorphism could also develop through differential rates of development. In many insects, the larval period of females is longer than that of males, and as a result of this extended growth period, these female insects are larger than their male conspecifics. Male simian primates are generally larger than females of the same species due in part to extended growth periods. Gorillas demonstrate a particularly high degree of sexual bimaturism. Bimaturism can refer to developmental differences within a sex related to secondary sex characteristics. For example, male orangutans reach sexual maturity around age 15 but undergo an additional period of development later in life before they exhibit cheek flanges. Flanged males are generally preferred by females so that unflanged males need different mating strategies to compete with flanged males. The onset of this second developmental phase varies greatly and may be influenced by the proximity of other flanged males. In humans, sexual bimaturism is evident in that males begin puberty later than females. This may be related to selection for later maturation in males in a polygynous mating system. References Human sexuality
Sexual bimaturism
[ "Biology" ]
272
[ "Human sexuality", "Behavior", "Human behavior", "Sexuality" ]
25,881,839
https://en.wikipedia.org/wiki/Entropic%20gravity
Entropic gravity, also known as emergent gravity, is a theory in modern physics that describes gravity as an entropic force—a force with macro-scale homogeneity but which is subject to quantum-level disorder—and not a fundamental interaction. The theory, based on string theory, black hole physics, and quantum information theory, describes gravity as an emergent phenomenon that springs from the quantum entanglement of small bits of spacetime information. As such, entropic gravity is said to abide by the second law of thermodynamics, under which the entropy of a physical system tends to increase over time. The theory has been controversial within the physics community but has sparked research and experiments to test its validity. Significance At its simplest, the theory holds that when gravity becomes vanishingly weak—levels seen only at interstellar distances—it diverges from its classically understood nature and its strength begins to decay linearly with distance from a mass. Entropic gravity provides an underlying framework to explain Modified Newtonian Dynamics, or MOND, which holds that at a gravitational acceleration threshold of approximately 1.2 × 10⁻¹⁰ m/s², gravitational strength begins to vary inversely linearly with distance from a mass rather than following the normal inverse-square law of the distance. This is an exceedingly low threshold, measuring only about 12 trillionths of gravity's strength at Earth's surface; an object dropped from a height of one meter would fall for 36 hours were Earth's gravity this weak. It is also about 3,000 times less than the remnant of the Sun's gravitational field that exists at the point where Voyager 1 crossed the solar system's heliopause and entered interstellar space. The theory claims to be consistent with both the macro-level observations of Newtonian gravity as well as Einstein's theory of general relativity and its gravitational distortion of spacetime. Importantly, the theory also explains (without invoking the existence of dark matter and the tweaking of its new free parameters) why galactic rotation curves differ from the profile expected with visible matter. The theory of entropic gravity posits that what has been interpreted as unobserved dark matter is the product of quantum effects that can be regarded as a form of positive dark energy that lifts the vacuum energy of space from its ground state value. A central tenet of the theory is that the positive dark energy leads to a thermal-volume law contribution to entropy that overtakes the area law of anti-de Sitter space precisely at the cosmological horizon. Thus this theory provides an alternative explanation for what mainstream physics currently attributes to dark matter. Since dark matter is believed to compose the vast majority of the universe's mass, a theory in which it is absent has huge implications for cosmology. In addition to continuing theoretical work in various directions, there are many experiments planned or in progress to actually detect or better determine the properties of dark matter (beyond its gravitational attraction), all of which would be undermined by an alternative explanation for the gravitational effects currently attributed to this elusive entity. 
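As a quick numerical check of the figures quoted in the Significance discussion above, assuming the commonly cited MOND acceleration scale of about 1.2 × 10⁻¹⁰ m/s² (a minimal sketch with rounded constants):

```python
import math

a0 = 1.2e-10   # m/s^2, assumed MOND acceleration threshold
g = 9.81       # m/s^2, gravity at Earth's surface

# "12 trillionths of gravity's strength at Earth's surface"
print(f"a0 / g = {a0 / g:.1e}")                    # ~1.2e-11

# Fall time from one metre: h = (1/2) a t^2  =>  t = sqrt(2 h / a0)
t_seconds = math.sqrt(2 * 1.0 / a0)
print(f"1 m fall time: {t_seconds / 3600:.1f} h")  # ~36 hours
```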
Origin The thermodynamic description of gravity has a history that goes back at least to research on black hole thermodynamics by Bekenstein and Hawking in the mid-1970s. These studies suggest a deep connection between gravity and thermodynamics, which describes the behavior of heat. In 1995, Jacobson demonstrated that the Einstein field equations describing relativistic gravitation can be derived by combining general thermodynamic considerations with the equivalence principle. Subsequently, other physicists, most notably Thanu Padmanabhan, began to explore links between gravity and entropy. Erik Verlinde's theory In 2009, Erik Verlinde proposed a conceptual model that describes gravity as an entropic force. He argues (similarly to Jacobson's result) that gravity is a consequence of the "information associated with the positions of material bodies". This model combines the thermodynamic approach to gravity with Gerard 't Hooft's holographic principle. It implies that gravity is not a fundamental interaction, but an emergent phenomenon which arises from the statistical behavior of microscopic degrees of freedom encoded on a holographic screen. The paper drew a variety of responses from the scientific community. Andrew Strominger, a string theorist at Harvard, said "Some people have said it can't be right, others that it's right and we already knew it – that it's right and profound, right and trivial." In July 2011, Verlinde presented the further development of his ideas in a contribution to the Strings 2011 conference, including an explanation for the origin of dark matter. Verlinde's article also attracted a large amount of media exposure, and led to immediate follow-up work in cosmology, the dark energy hypothesis, cosmological acceleration, cosmological inflation, and loop quantum gravity. Also, a specific microscopic model has been proposed that indeed leads to entropic gravity emerging at large scales. Entropic gravity can emerge from quantum entanglement of local Rindler horizons. Derivation of the law of gravitation The law of gravitation is derived from classical statistical mechanics applied to the holographic principle, which states that the description of a volume of space can be thought of as $N$ bits of binary information, encoded on a boundary to that region, a closed surface of area $A$. The information is evenly distributed on the surface, with each bit requiring an area equal to $\ell_P^2$, the so-called Planck area, from which $N$ can thus be computed: $N = \frac{A}{\ell_P^2}$, where $\ell_P$ is the Planck length, defined as $\ell_P = \sqrt{\hbar G / c^3}$, where $G$ is the universal gravitational constant, $c$ is the speed of light, and $\hbar$ is the reduced Planck constant. Substituting into the equation for $N$ gives $N = \frac{A c^3}{\hbar G}$. The statistical equipartition theorem defines the temperature $T$ of a system with $N$ degrees of freedom in terms of its energy $E$ such that $E = \frac{1}{2} N k_B T$, where $k_B$ is the Boltzmann constant. This is the equivalent energy for a mass $M$ according to $E = M c^2$. The effective temperature experienced due to a uniform acceleration in a vacuum field, according to the Unruh effect, is $T = \frac{\hbar a}{2 \pi c k_B}$, where $a$ is that acceleration, which for a mass $m$ would be attributed to a force $F$ according to Newton's second law of motion, $F = m a$. Taking the holographic screen to be a sphere of radius $r$, the surface area is given by $A = 4 \pi r^2$. Algebraic substitution of these into the above relations derives Newton's law of universal gravitation: $F = \frac{G M m}{r^2}$. Note that this derivation assumes that the number of binary bits of information is equal to the number of degrees of freedom. 
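The substitutions above can be checked numerically. The following sketch, using rounded physical constants and Earth as an illustrative enclosed mass, confirms that the entropic chain reproduces the Newtonian acceleration; it verifies only the algebra, not the physics:

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
hbar = 1.055e-34  # J s
k_B = 1.381e-23   # J/K

M = 5.972e24      # kg, mass enclosed by the screen (Earth, illustrative)
r = 6.371e6       # m, radius of the spherical holographic screen

A = 4 * math.pi * r**2                 # screen area
N = A * c**3 / (hbar * G)              # bits on the screen
T = 2 * M * c**2 / (N * k_B)           # equipartition with E = M c^2
a = 2 * math.pi * c * k_B * T / hbar   # acceleration from the Unruh relation

print(f"entropic  a = {a:.4f} m/s^2")
print(f"Newtonian a = {G * M / r**2:.4f} m/s^2")  # identical by algebra
```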
Criticism and experimental tests Entropic gravity, as proposed by Verlinde in his original article, reproduces the Einstein field equations and, in a Newtonian approximation, a 1/r potential for gravitational forces. Since its results do not differ from Newtonian gravity except in regions of extremely weak gravitational fields, testing the theory with earth-based laboratory experiments does not appear feasible. Spacecraft-based experiments performed at Lagrangian points within our solar system would be expensive and challenging. Even so, entropic gravity in its current form has been severely challenged on formal grounds. Matt Visser has shown that the attempt to model conservative forces in the general Newtonian case (i.e. for arbitrary potentials and an unlimited number of discrete masses) leads to unphysical requirements for the required entropy and involves an unnatural number of temperature baths of differing temperatures. For the derivation of Einstein's equations from an entropic gravity perspective, Tower Wang showed that the inclusion of energy-momentum conservation and cosmological homogeneity and isotropy requirements severely restricts a wide class of potential modifications of entropic gravity, some of which have been used to generalize entropic gravity beyond the singular case of an entropic model of Einstein's equations. Cosmological observations using available technology can be used to test the theory. On the basis of lensing by the galaxy cluster Abell 1689, Nieuwenhuizen concludes that entropic gravity is strongly ruled out unless additional dark-matter-like eV neutrinos are added. A team from Leiden Observatory, statistically observing the lensing effect of gravitational fields at large distances from the centers of more than 33,000 galaxies, found that those gravitational fields were consistent with Verlinde's theory. Using conventional gravitational theory, the fields implied by these observations (as well as from measured galaxy rotation curves) could only be ascribed to a particular distribution of dark matter. In June 2017, a study by Princeton University researcher Kris Pardo asserted that Verlinde's theory is inconsistent with the observed rotation velocities of dwarf galaxies. Another theory of entropy based on geometric considerations (Quantitative Geometrical Thermodynamics, QGT) provides an entropic basis for the holographic principle and also offers another explanation for galaxy rotation curves, attributing them to the entropic influence of the supermassive black hole found at the center of a spiral galaxy. In 2018, Zhi-Wei Wang and Samuel L. Braunstein showed that, while spacetime surfaces near black holes (called stretched horizons) do obey an analog of the first law of thermodynamics, ordinary spacetime surfaces, including holographic screens, generally do not, thus undermining the key thermodynamic assumption of the emergent gravity program. In his 1964 lecture on the Relation of Mathematics and Physics, Richard Feynman describes a related theory of gravity in which the gravitational force is explained as an entropic force arising from unspecified microscopic degrees of freedom. However, he immediately points out that the resulting theory cannot be correct, as the fluctuation-dissipation theorem would also lead to friction, which would slow down the motion of the planets, contradicting observations. Entropic gravity and quantum coherence Another criticism of entropic gravity is that entropic processes should, as critics argue, break quantum coherence. There is, however, no theoretical framework that quantitatively describes the strength of such decoherence effects. 
The temperature of the gravitational field in Earth's gravity well is very small (on the order of 10⁻¹⁹ K). Experiments with ultra-cold neutrons in the gravitational field of Earth are claimed to show that neutrons lie on discrete levels exactly as predicted by the Schrödinger equation, treating gravitation as a conservative potential field without any decoherent factors. Archil Kobakhidze argues that this result disproves entropic gravity, while Chaichian et al. suggest a potential loophole in the argument in weak gravitational fields, such as those affecting Earth-bound experiments. See also Footnotes References Further reading It from bit – Entropic gravity for pedestrians, J. Koelman Gravity: the inside story, T. Padmanabhan Experiments Show Gravity Is Not an Emergent Phenomenon Gravity As An Entropic Force Theories of gravity Emergence
Entropic gravity
[ "Physics", "Chemistry", "Mathematics", "Technology", "Engineering" ]
2,282
[ "Telecommunications engineering", "Applied mathematics", "Theoretical physics", "Computer science", "Information theory", "Thermodynamics", "Theories of gravity", "Dynamical systems" ]
25,884,081
https://en.wikipedia.org/wiki/Qubit%20fluorometer
The Qubit fluorometer is a laboratory instrument developed and distributed by Invitrogen, which is now a part of Thermo Fisher. It is used for the quantification of DNA, RNA, and protein. Method The Qubit fluorometer uses fluorescent dyes to determine the concentration of nucleic acids or proteins in a sample. Specialized fluorescent dyes bind specifically to the substances of interest. This is in contrast to UV-absorbance methods, in which a spectrophotometer measures the natural absorbance of light at 260 nm (for DNA and RNA) or 280 nm (for proteins). Fluorescent dyes The Qubit assays (formerly known as Quant-iT) were previously developed and manufactured by Molecular Probes (now part of Life Technologies). Each dye is specialized for one type of molecule (DNA, RNA, or protein). These dyes exhibit extremely low fluorescence until bound to their target molecule. Upon binding to DNA, the dye molecules assume a more rigid shape and increase in fluorescence by several orders of magnitude, most likely due to intercalation between the bases. The Qubit fluorometer measures the fluorescence signal from a sample and converts it into a concentration by comparison against standards of known concentration. A specific instance of this technology is the Qubit 2.0 fluorometer, which is often used in conjunction with the "dsDNA BR Assay Kit". This kit, along with others in the Qubit quantification system, incorporates dyes that are sensitive to different biomolecules and concentration ranges. In this context, "ds" denotes double-stranded and "ss" signifies single-stranded DNA, indicating the specific types of DNA that the dyes can detect. Versions The second generation, the Qubit 2.0 Fluorometer, was released in 2010, and the third generation, the Qubit 3.0, in 2014. The newest version is the fourth-generation Qubit 4, introduced in 2017. References External links Official Qubit Fluorometric Quantitation web site A review of the Qubit fluorometer Laboratory equipment Spectroscopy Fluorescence
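The calibration step can be illustrated with a minimal two-point linear model of the kind described above; this is a sketch of the general idea, not Thermo Fisher's actual firmware algorithm, and all readings and concentrations below are hypothetical:

```python
# Two-point linear calibration: map a fluorescence reading to a
# concentration, assuming fluorescence is linear in concentration
# between two standards of known concentration.

def calibrate(reading_std1, conc_std1, reading_std2, conc_std2):
    """Return a function mapping a fluorescence reading to concentration."""
    slope = (conc_std2 - conc_std1) / (reading_std2 - reading_std1)
    return lambda reading: conc_std1 + slope * (reading - reading_std1)

# Hypothetical standards: a blank (0 ng/uL) and a 10 ng/uL dsDNA standard
to_conc = calibrate(reading_std1=120.0, conc_std1=0.0,
                    reading_std2=48_000.0, conc_std2=10.0)

sample_reading = 21_500.0
print(f"sample: {to_conc(sample_reading):.2f} ng/uL")  # ~4.47 ng/uL
```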
Qubit fluorometer
[ "Physics", "Chemistry" ]
471
[ "Luminescence", "Fluorescence", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Molecular biology laboratory equipment", "Molecular biology techniques", "Spectroscopy" ]
27,738,864
https://en.wikipedia.org/wiki/Acta%20Materialia
Acta Materialia is a peer-reviewed scientific journal published twenty times per year by Elsevier on behalf of Acta Materialia Inc. The editor-in-chief is Gregory S. Rohrer. The journal covers research on all aspects of materials science and publishes original papers and commissioned reviews called Overviews. History The journal was established in 1953 as Acta Metallurgica and renamed Acta Metallurgica et Materialia in 1990, before obtaining its current name in 1996. Since 1956, it was published by Pergamon Press, with the imprint being retained for some time after the acquisition by Elsevier. It incorporates Nanostructured Materials, which was published independently from 1992 to 1999. Scripta Materialia was established in 1967 as a companion journal, publishing rapid communications as well as opinion articles called Viewpoints. Abstracting and indexing The journal is abstracted and indexed in major bibliographic databases. According to the Journal Citation Reports, the journal has a 2023 impact factor of 8.3. See also Physical metallurgy References External links Biweekly journals Elsevier academic journals English-language journals Materials science journals Academic journals established in 1953 Academic journals established in 1992 Publications disestablished in 1999
Acta Materialia
[ "Materials_science", "Engineering" ]
242
[ "Materials science journals", "Materials science" ]
27,739,443
https://en.wikipedia.org/wiki/Nuclear%20transmutation
Nuclear transmutation is the conversion of one chemical element or an isotope into another chemical element. Nuclear transmutation occurs in any process where the number of protons or neutrons in the nucleus of an atom is changed. A transmutation can be achieved either by nuclear reactions (in which an outside particle reacts with a nucleus) or by radioactive decay, where no outside cause is needed. Natural transmutation by stellar nucleosynthesis in the past created most of the heavier chemical elements in the known existing universe, and continues to take place to this day, creating the vast majority of the most common elements in the universe, including helium, oxygen and carbon. Most stars carry out transmutation through fusion reactions involving hydrogen and helium, while much larger stars are also capable of fusing heavier elements, up to iron, late in their evolution. Elements heavier than iron, such as gold or lead, are created through elemental transmutations that can naturally occur in supernovae. One goal of alchemy, the transmutation of base substances into gold, is now known to be impossible by chemical means but possible by physical means. As stars begin to fuse heavier elements, substantially less energy is released from each fusion reaction. This continues up to iron; fusing elements heavier than iron is endothermic and consumes energy, so no heavier element can be produced in such conditions. One type of natural transmutation observable in the present occurs when certain radioactive elements present in nature spontaneously decay by a process that causes transmutation, such as alpha or beta decay. An example is the natural decay of potassium-40 to argon-40, which forms most of the argon in the air. Also on Earth, natural transmutations from the different mechanisms of natural nuclear reactions occur, due to cosmic ray bombardment of elements (for example, to form carbon-14), and also occasionally from natural neutron bombardment (for example, see natural nuclear fission reactor). Artificial transmutation may occur in machinery that has enough energy to cause changes in the nuclear structure of the elements. Such machines include particle accelerators and tokamak reactors. Conventional fission power reactors also cause artificial transmutation, not from the power of the machine, but by exposing elements to neutrons produced by fission from an artificially produced nuclear chain reaction. For instance, when a uranium atom is bombarded with slow neutrons, fission takes place. This releases, on average, two to three neutrons and a large amount of energy. The released neutrons can then cause fission of other uranium atoms, continuing the process as long as fissile uranium remains available. This self-sustaining process is called a chain reaction. Artificial nuclear transmutation has been considered as a possible mechanism for reducing the volume and hazard of radioactive waste. History Alchemy The term transmutation dates back to alchemy. Alchemists pursued the philosopher's stone, capable of chrysopoeia – the transformation of base metals into gold. While alchemists often understood chrysopoeia as a metaphor for a mystical or religious process, some practitioners adopted a literal interpretation and tried to make gold through physical experimentation. The impossibility of the metallic transmutation had been debated amongst alchemists, philosophers and scientists since the Middle Ages. Pseudo-alchemical transmutation was outlawed and publicly mocked beginning in the fourteenth century. 
Alchemists like Michael Maier and Heinrich Khunrath wrote tracts exposing fraudulent claims of gold making. By the 1720s, there were no longer any respectable figures pursuing the physical transmutation of substances into gold. Antoine Lavoisier, in the 18th century, replaced the alchemical theory of elements with the modern theory of chemical elements, and John Dalton further developed the notion of atoms (from the alchemical theory of corpuscles) to explain various chemical processes. The disintegration of atoms is a distinct process involving much greater energies than could be achieved by alchemists. Modern physics The term was first consciously applied to modern physics by Frederick Soddy when he, along with Ernest Rutherford, discovered in 1901 that radioactive thorium was converting itself into radium. At the moment of realization, Soddy later recalled, he shouted out: "Rutherford, this is transmutation!" Rutherford snapped back, "For Christ's sake, Soddy, don't call it transmutation. They'll have our heads off as alchemists." Rutherford and Soddy were observing natural transmutation as a part of radioactive decay of the alpha decay type. The first artificial transmutation was accomplished in 1925 by Patrick Blackett, a research fellow working under Rutherford, with the transmutation of nitrogen into oxygen, using alpha particles directed at nitrogen (14N + α → 17O + p). Rutherford had shown in 1919 that a proton (he called it a hydrogen atom) was emitted from alpha bombardment experiments, but he had no information about the residual nucleus. Blackett's 1921–1924 experiments provided the first experimental evidence of an artificial nuclear transmutation reaction. Blackett correctly identified the underlying integration process and the identity of the residual nucleus. In 1932, a fully artificial nuclear reaction and nuclear transmutation was achieved by Rutherford's colleagues John Cockcroft and Ernest Walton, who used artificially accelerated protons against lithium-7 to split the nucleus into two alpha particles. The feat was popularly known as "splitting the atom", although it was not the modern nuclear fission reaction discovered in 1938 by Otto Hahn, Lise Meitner and their assistant Fritz Strassmann in heavy elements. In 1941, Rubby Sherr, Kenneth Bainbridge and Herbert Lawrence Anderson reported the nuclear transmutation of mercury into gold. Later in the twentieth century the transmutation of elements within stars was elaborated, accounting for the relative abundance of heavier elements in the universe. Save for the first five elements, which were produced in the Big Bang and in cosmic ray processes, stellar nucleosynthesis accounted for the abundance of all elements heavier than boron. In their 1957 paper Synthesis of the Elements in Stars, William Alfred Fowler, Margaret Burbidge, Geoffrey Burbidge, and Fred Hoyle explained how the abundances of essentially all but the lightest chemical elements could be explained by the process of nucleosynthesis in stars. Transmutation of other elements into gold The alchemical tradition sought to turn the "base metal", lead, into gold. As a nuclear transmutation, it requires far less energy to turn gold into lead; for example, this would occur via neutron capture and beta decay if gold were left in a nuclear reactor for a sufficiently long period of time. Glenn Seaborg succeeded in producing a minuscule amount of gold from bismuth, at a net energy loss. 
Transmutation in the universe The Big Bang is thought to be the origin of the hydrogen (including all deuterium) and helium in the universe. Hydrogen and helium together account for 98% of the mass of ordinary matter in the universe, while the other 2% makes up everything else. The Big Bang also produced small amounts of lithium, beryllium and perhaps boron. More lithium, beryllium and boron were produced later, in a natural nuclear reaction, cosmic ray spallation. Stellar nucleosynthesis is responsible for all of the other elements occurring naturally in the universe as stable isotopes and primordial nuclides, from carbon to uranium. These occurred after the Big Bang, during star formation. Some lighter elements from carbon to iron were formed in stars and released into space by asymptotic giant branch (AGB) stars. These are a type of red giant that "puffs" off its outer atmosphere, containing some elements from carbon to nickel and iron. Nuclides with mass number greater than 64 are predominantly produced by neutron capture processes (the s-process and the r-process) in supernova explosions and neutron star mergers. The Solar System is thought to have condensed approximately 4.6 billion years before the present, from a cloud of hydrogen and helium containing heavier elements in dust grains formed previously by a large number of such stars. These grains contained the heavier elements formed by transmutation earlier in the history of the universe. All of these natural processes of transmutation in stars are continuing today, in our own galaxy and in others. Stars fuse hydrogen and helium into heavier and heavier elements (up to iron), producing energy. For example, the observed light curves of supernova stars such as SN 1987A show them blasting large amounts (comparable to the mass of Earth) of radioactive nickel and cobalt into space. However, little of this material reaches Earth. Most natural transmutation on the Earth today is mediated by cosmic rays (such as production of carbon-14) and by the radioactive decay of radioactive primordial nuclides left over from the initial formation of the Solar System (such as potassium-40, uranium and thorium), plus the radioactive decay of products of these nuclides (radium, radon, polonium, etc.). See decay chain. Artificial transmutation of nuclear waste Overview Transmutation of transuranium elements (the actinides other than actinium through uranium), such as the isotopes of plutonium (about 1 wt% in light water reactors' used nuclear fuel) or the minor actinides (MAs, i.e. neptunium, americium, and curium; about 0.1 wt% each in light water reactors' used nuclear fuel), has the potential to help solve some problems posed by the management of radioactive waste by reducing the proportion of long-lived isotopes it contains. (This does not rule out the need for a deep geological repository for high level radioactive waste.) When irradiated with fast neutrons in a nuclear reactor, these isotopes can undergo nuclear fission, destroying the original actinide isotope and producing a spectrum of radioactive and nonradioactive fission products. Ceramic targets containing actinides can be bombarded with neutrons to induce transmutation reactions to remove the most difficult long-lived species. These can consist of actinide-containing solid solutions, or of pure actinide phases mixed with chemically inert phases. 
The role of the non-radioactive inert phases is mainly to provide stable mechanical behaviour to the target under neutron irradiation. There are, however, issues with this P&T (partitioning and transmutation) strategy: it is limited by the costly and cumbersome need to separate long-lived fission product isotopes before they can undergo transmutation, and some long-lived fission products, including the nuclear waste product caesium-137, are unable to capture enough neutrons for effective transmutation to occur, due to their small neutron cross sections and the resulting low capture rate. A study led by Satoshi Chiba at Tokyo Tech (called "Method to Reduce Long-lived Fission Products by Nuclear Transmutations with Fast Spectrum Reactors") shows that effective transmutation of long-lived fission products can be achieved in fast spectrum reactors without the need for isotope separation. This can be achieved by adding a yttrium deuteride moderator. Reactor types For instance, plutonium can be reprocessed into mixed oxide fuels and transmuted in standard reactors. However, this is limited by the accumulation of plutonium-240 in spent MOX fuel, which is neither particularly fertile (transmutation to fissile plutonium-241 does occur, but at lower rates than production of more plutonium-240 from neutron capture by plutonium-239) nor fissile with thermal neutrons. Even countries such as France, which practices nuclear reprocessing extensively, usually do not reuse the plutonium content of used MOX fuel. The heavier elements could be transmuted in fast reactors, but probably more effectively in a subcritical reactor, sometimes known as an energy amplifier, a concept devised by Carlo Rubbia. Fusion neutron sources have also been proposed as well suited to the task. Fuel types There are several fuels that can incorporate plutonium in their initial composition at the beginning of cycle and have a smaller amount of this element at the end of cycle. During the cycle, plutonium can be burnt in a power reactor, generating electricity. This process is not only interesting from a power generation standpoint, but also because of its capability of consuming the surplus weapons-grade plutonium from the weapons program and the plutonium resulting from reprocessing used nuclear fuel. Mixed oxide fuel is one of these. Its blend of oxides of plutonium and uranium constitutes an alternative to the low enriched uranium fuel predominantly used in light water reactors. Since uranium is present in mixed oxide fuel, although plutonium will be burnt, second generation plutonium will be produced through the radiative capture of uranium-238 and the two subsequent beta minus decays. Fuels with plutonium and thorium are also an option. In these, the neutrons released in the fission of plutonium are captured by thorium-232. After this radiative capture, thorium-232 becomes thorium-233, which undergoes two beta minus decays resulting in the production of the fissile isotope uranium-233. The radiative capture cross section of thorium-232 is more than three times that of uranium-238, yielding a higher conversion to fissile fuel than that from uranium-238. Due to the absence of uranium in the fuel, there is no second generation plutonium produced, and the amount of plutonium burnt will be higher than in mixed oxide fuels. However, uranium-233, which is fissile, will be present in the used nuclear fuel. 
Weapons-grade and reactor-grade plutonium can be used in plutonium–thorium fuels, with weapons-grade plutonium being the one that shows a bigger reduction in the amount of plutonium-239. Long-lived fission products Some radioactive fission products can be converted into shorter-lived radioisotopes by transmutation. Transmutation of all fission products with half-life greater than one year is studied in Grenoble, with varying results. Strontium-90 and caesium-137, with half-lives of about 30 years, are the largest radiation (including heat) emitters in used nuclear fuel on a scale of decades to ~305 years (tin-121m is insignificant because of the low yield), and are not easily transmuted because they have low neutron absorption cross sections. Instead, they should simply be stored until they decay. Given that this length of storage is necessary, the fission products with shorter half-lives can also be stored until they decay. The next longer-lived fission product is samarium-151, which has a half-life of 90 years, and is such a good neutron absorber that most of it is transmuted while the nuclear fuel is still being used; however, effectively transmuting the remaining samarium-151 in nuclear waste would require separation from other isotopes of samarium. Given the smaller quantities and its low-energy radioactivity, samarium-151 is less dangerous than strontium-90 and caesium-137 and can also be left to decay for ~970 years. Finally, there are seven long-lived fission products. They have much longer half-lives, in the range of 211,000 years to 15.7 million years. Two of them, technetium-99 and iodine-129, are mobile enough in the environment to be potential dangers, are free (technetium has no known stable isotopes) or mostly free of mixture with stable isotopes of the same element, and have neutron cross sections that are small but adequate to support transmutation. Additionally, technetium-99 can substitute for uranium-238 in supplying Doppler broadening for negative feedback for reactor stability. Most studies of proposed transmutation schemes have assumed technetium-99, iodine-129, and transuranium elements as the targets for transmutation, with other fission products, activation products, and possibly reprocessed uranium remaining as waste. Technetium-99 is also produced as a waste product in nuclear medicine from technetium-99m, a nuclear isomer that decays to its ground state, which has no further use. Because technetium-100 (the result of technetium-99 capturing a neutron) decays with a relatively short half-life to a stable isotope of ruthenium, a precious metal, there might also be some economic incentive to transmutation, if costs can be brought low enough. Of the remaining five long-lived fission products, selenium-79, tin-126 and palladium-107 are produced only in small quantities (at least in today's thermal-neutron, uranium-235-burning light water reactors) and the last two should be relatively inert. The other two, zirconium-93 and caesium-135, are produced in larger quantities, but are also not highly mobile in the environment. They are also mixed with larger quantities of other isotopes of the same element. Zirconium is used as cladding in fuel rods due to being virtually "transparent" to neutrons, but a small amount of zirconium-93 is produced by neutron absorption in the regular zircaloy without much ill effect. Whether this zirconium could be reused for new cladding material has not been the subject of much study thus far. 
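The competition between spontaneous decay and neutron capture described above can be made concrete: under a neutron flux, the effective removal rate is the sum of the decay rate and the capture rate. The sketch below uses a rough thermal capture cross section for technetium-99 and a generic reactor flux; both numbers are illustrative assumptions, not design values:

```python
import math

# Effective half-life of a long-lived fission product in a neutron flux:
# lambda_eff = ln(2) / t_half + sigma * phi
SECONDS_PER_YEAR = 3.156e7
BARN = 1e-24  # cm^2

t_half = 211_000 * SECONDS_PER_YEAR  # s, technetium-99 (~211,000 years)
sigma = 20 * BARN                    # cm^2, assumed thermal capture cross section
phi = 1e14                           # n cm^-2 s^-1, assumed reactor flux

lam_eff = math.log(2) / t_half + sigma * phi
t_half_eff = math.log(2) / lam_eff

print(f"natural half-life : {t_half / SECONDS_PER_YEAR:,.0f} years")
print(f"in-flux half-life : {t_half_eff / SECONDS_PER_YEAR:.1f} years")  # ~11 years
```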
See also Neutron activation Nuclear power List of nuclear waste treatment technologies Synthesis of precious metals Fertile material References External links "Radioactive change", Rutherford & Soddy article (1903), online and analyzed on Bibnum [click 'à télécharger' for English version]. Nuclear physics Nuclear chemistry Radioactivity
Nuclear transmutation
[ "Physics", "Chemistry" ]
3,602
[ "Nuclear chemistry", "nan", "Radioactivity", "Nuclear physics" ]
27,739,767
https://en.wikipedia.org/wiki/Forces%20on%20sails
Forces on sails result from movement of air that interacts with sails and gives them motive power for sailing craft, including sailing ships, sailboats, windsurfers, ice boats, and sail-powered land vehicles. Similar principles in a rotating frame of reference apply to windmill sails and wind turbine blades, which are also wind-driven. They are differentiated from forces on wings and propeller blades, the actions of which are not adjusted to the wind. Kites also power certain sailing craft, but do not employ a mast to support the airfoil and are beyond the scope of this article. Forces on sails depend on wind speed and direction and the speed and direction of the craft. The direction that the craft is traveling with respect to the "true wind" (the wind direction and speed over the surface) is called the point of sail. The speed of the craft at a given point of sail contributes to the "apparent wind"—the wind speed and direction as measured on the moving craft. The apparent wind on the sail creates a total aerodynamic force, which may be resolved into drag—the force component in the direction of the apparent wind—and lift—the force component normal (90°) to the apparent wind. Depending on the alignment of the sail with the apparent wind, lift or drag may be the predominant propulsive component. Total aerodynamic force also resolves into a forward, propulsive, driving force—resisted by the medium through or over which the craft is passing (e.g. through water, air, or over ice, sand)—and a lateral force, resisted by the underwater foils, ice runners, or wheels of the sailing craft. For apparent wind angles aligned with the entry point of the sail, the sail acts as an airfoil and lift is the predominant component of propulsion. For apparent wind angles behind the sail, lift diminishes and drag increases as the predominant component of propulsion. For a given true wind velocity over the surface, a sail can propel a craft to a higher speed, on points of sail when the entry point of the sail is aligned with the apparent wind, than it can with the entry point not aligned, because of a combination of the diminished force from airflow around the sail and the diminished apparent wind from the velocity of the craft. Because of limitations on speed through the water, displacement sailboats generally derive power from sails generating lift on points of sail that include close-hauled through broad reach (approximately 40° to 135° off the wind). Because of low friction over the surface and high speeds over the ice that create high apparent wind speeds for most points of sail, iceboats can derive power from lift further off the wind than displacement boats. Various mathematical models address lift and drag by taking into account the density of air, coefficients of lift and drag that result from the shape and area of the sail, and the speed and direction of the apparent wind, among other factors. This knowledge is applied to the design of sails in such a manner that sailors can adjust sails to the strength and direction of the apparent wind in order to provide motive power to sailing craft. Overview The combination of a sailing craft's speed and direction with respect to the wind, together with wind strength, generates an apparent wind velocity. When the craft is aligned in a direction where the sail can be adjusted to align with its leading edge parallel to the apparent wind, the sail acts as an airfoil to generate lift in a direction perpendicular to the apparent wind. 
A component of this lift pushes the craft crosswise to its course, which is resisted by a sailboat's keel, an ice boat's blades or a land-sailing craft's wheels. An important component of lift is directed forward in the direction of travel and propels the craft. Language of velocity and force To understand the forces and velocities discussed here, one must understand what is meant by a "vector" and a "scalar." Velocity (V), denoted in boldface in this article, is an example of a vector, because it implies both direction and speed. The corresponding speed (V ), denoted in italics in this article, is a scalar value. Likewise, a force vector, F, denotes direction and strength, whereas its corresponding scalar (F ) denotes strength alone. Graphically, each vector is represented with an arrow that shows direction and a length that shows speed or strength. Vectors of consistent units (e.g. V in m/s or F in N) may be added and subtracted graphically by positioning the tips and tails of the arrows that represent the input variables and drawing the resulting derived vector. Components of force: lift vs. drag and driving vs. lateral force Lift on a sail (L), acting as an airfoil, occurs in a direction perpendicular to the incident airstream (the apparent wind velocity, VA, for the head sail) and is a result of pressure differences between the windward and leeward surfaces; it depends on angle of attack, sail shape, air density, and speed of the apparent wind. Pressure differences result from the normal force per unit area on the sail from the air passing around it. The lift force results from the average pressure on the windward surface of the sail being higher than the average pressure on the leeward side. These pressure differences arise in conjunction with the curved air flow. As air follows a curved path along the windward side of a sail, there is a pressure gradient perpendicular to the flow direction with lower pressure on the outside of the curve and higher pressure on the inside. To generate lift, a sail must present an "angle of attack" (α) between the chord line of the sail and the apparent wind velocity (VA). Angle of attack is a function of both the craft's point of sail and how the sail is adjusted with respect to the apparent wind. As the lift generated by a sail increases, so does lift-induced drag, which together with parasitic drag constitutes total drag (D): as the angle of attack increases with sail trim or a change of course, the lift coefficient increases up to the point of aerodynamic stall, and the lift-induced drag coefficient increases with it. At the onset of stall, lift is abruptly decreased, as is lift-induced drag, but viscous pressure drag, a component of parasitic drag, increases due to the formation of separated flow on the surface of the sail. Sails with the apparent wind behind them (especially going downwind) operate in a stalled condition. Lift and drag are components of the total aerodynamic force on sail (FT). Since the forces on the sail are resisted by forces in the water (for a boat) or on the traveled surface (for an ice boat or land sailing craft), the total aerodynamic force may also be decomposed into driving force (FR) and lateral force (FLAT). Driving force overcomes resistance to forward motion. Lateral force is met by lateral resistance from a keel, blade or wheel, but also creates a heeling force. 
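Returning to the vector–scalar language above, the distinction can be made concrete in a few lines of code. The following minimal Python sketch is illustrative only; the names and numbers are invented, not drawn from any sailing reference:

import math

# A velocity vector V as (east, north) components in m/s
v = (3.0, 4.0)

def magnitude(vec):
    # The scalar speed V is the length of the vector V
    return math.hypot(vec[0], vec[1])

def add(a, b):
    # Tip-to-tail addition of two vectors of consistent units
    return (a[0] + b[0], a[1] + b[1])

print(magnitude(v))         # 5.0 -- the scalar speed of the vector (3, 4)
print(add(v, (-1.0, 2.0)))  # (2.0, 6.0) -- a derived vector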
Effect of points of sail on forces Apparent wind (VA) is the air velocity acting upon the leading edge of the most forward sail or as experienced by instrumentation or crew on a moving sailing craft. It is the vector sum of true wind velocity and the apparent wind component resulting from boat velocity (VA = −VB + VT). In nautical terminology, wind speeds are normally expressed in knots and wind angles in degrees. The craft's point of sail affects its velocity (VB) for a given true wind velocity (VT). Conventional sailing craft cannot derive power from the wind in a "no-go" zone that is approximately 40° to 50° away from the true wind, depending on the craft. Likewise, the directly downwind speed of all conventional sailing craft is limited to the true wind speed. Effect of apparent wind on sailing craft at three points of sail Boat velocity (in black) generates an equal and opposite apparent wind component (not shown), which adds to the true wind to become apparent wind. Sailing craft A is close-hauled. Sailing craft B is on a beam reach. Sailing craft C is on a broad reach. A sailboat's speed through the water is limited by the resistance that results from hull drag in the water. Sailboats on foils are much less limited. Ice boats typically have the least resistance to forward motion of any sailing craft. Craft with higher forward resistance achieve lower forward velocities for a given wind velocity than ice boats, which can travel at speeds several multiples of the true wind speed. Consequently, a sailboat experiences a wider range of apparent wind angles than does an ice boat, whose speed is typically great enough to have the apparent wind coming from a few degrees to one side of its course, necessitating sailing with the sail sheeted in for most points of sail. On conventional sailboats, the sails are set to create lift for those points of sail where it is possible to align the leading edge of the sail with the apparent wind. For a sailboat, point of sail affects lateral force significantly. The higher the boat points to the wind under sail, the stronger the lateral force, which requires resistance from a keel or other underwater foils, including daggerboard, centerboard, skeg and rudder. Lateral force also induces heeling in a sailboat, which requires resistance by weight of ballast from the crew or the boat itself and by the shape of the boat, especially with a catamaran. As the boat points off the wind, lateral force and the forces required to resist it become less important. On ice boats, lateral forces are countered by the lateral resistance of the blades on ice and their distance apart, which generally prevents heeling. Forces on sailing craft Each sailing craft is a system that mobilizes wind force through its sails—supported by spars and rigging—which provide motive power, and reactive force from the underbody of a sailboat—including the keel, centerboard, rudder or other underwater foils—or the running gear of an ice boat or land craft, which allows it to be kept on a course. Without the ability to mobilize reactive forces in directions different from the wind direction, a craft would simply be adrift before the wind. Accordingly, motive and heeling forces on sailing craft are either components of or reactions to the total aerodynamic force (FT) on sails, which is a function of apparent wind velocity (VA) and varies with point of sail. The forward driving force (FR) component contributes to boat velocity (VB), which is, itself, a determinant of apparent wind velocity. 
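The relationship VA = −VB + VT introduced above can be evaluated numerically. Below is a minimal Python sketch, assuming a hypothetical 10-knot true wind and 5-knot boat speed (illustrative values, not measurements), for true wind angles roughly corresponding to craft A, B and C above:

import math

def apparent_wind(vt, vb, gamma_deg):
    # Apparent wind speed and angle off the bow, from the law of cosines,
    # where gamma is the true wind angle off the bow (0 deg = bow-on).
    g = math.radians(gamma_deg)
    speed = math.sqrt(vt**2 + vb**2 + 2 * vt * vb * math.cos(g))
    angle = math.degrees(math.atan2(vt * math.sin(g), vb + vt * math.cos(g)))
    return speed, angle

for gamma in (45, 90, 135):  # close-hauled, beam reach, broad reach
    speed, angle = apparent_wind(10.0, 5.0, gamma)
    print(gamma, round(speed, 1), round(angle, 1))
# 45 -> 14.0 knots at 30.4 deg; 90 -> 11.2 at 63.4; 135 -> 7.4 at 106.3:
# the apparent wind is strongest, and furthest forward, when sailing upwind.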
Absent lateral reactive forces to FT from a keel (in water), a skate runner (on ice) or a wheel (on land), a craft would only be able to move downwind and the sail would not be able to develop lift. At a stable angle of heel (for a sailboat) and a steady speed, aerodynamic and hydrodynamic forces are in balance. Integrated over the sailing craft, the total aerodynamic force (FT) is located at the centre of effort (CE), which is a function of the design and adjustment of the sails on a sailing craft. Similarly, the total hydrodynamic force (Fl) is located at the centre of lateral resistance (CLR), which is a function of the design of the hull and its underwater appendages (keel, rudder, foils, etc.). These two forces act in opposition to one another, with Fl a reaction to FT. Whereas ice boats and land-sailing craft resist lateral forces with their wide stance and high-friction contact with the surface, sailboats travel through water, which provides limited resistance to side forces. In a sailboat, side forces are resisted in two ways: Leeway: Leeway is the rate of travel perpendicular to the course. It is constant when the lateral force on the sail (FLAT) equals the lateral force on the boat's keel and other underwater appendages (PLAT). This causes the boat to travel through the water on a course that differs from the direction in which the boat is pointed by an angle (λ), called the "leeway angle." Heeling: The heeling angle (θ) is constant when the torque between the centre of effort (CE) on the sail and the centre of resistance on the hull (CR) over moment arm (h)—the heeling moment—equals the torque between the boat's centre of buoyancy (CB) and its centre of gravity (CG) over moment arm (b)—the righting moment. All sailing craft reach a constant forward speed (VB) for a given wind speed (VT) and point of sail, when the forward driving force (FR) equals the forward resisting force (Rl). For an ice boat, the dominant forward resisting force is aerodynamic, since the coefficient of friction on smooth ice is as low as 0.02. Accordingly, high-performance ice boats are streamlined to minimize aerodynamic drag. Aerodynamic forces in balance with hydrodynamic forces on a close-hauled sailboat Force components on sails The approximate locus of net aerodynamic force on a craft with a single sail is the centre of effort (CE ) at the geometric centre of the sail. Filled with wind, the sail has a roughly spherical polygon shape and, if the shape is stable, the location of the centre of effort is stable. On sailing craft with multiple sails, the position of centre of effort varies with the sail plan. Sail trim or airfoil profile, boat trim and point of sail also affect CE. On a given sail, the net aerodynamic force on the sail is located approximately at the maximum draught intersecting the camber of the sail and passing through a plane intersecting the centre of effort, normal to the leading edge (luff), roughly perpendicular to the chord of the sail (a straight line between the leading edge (luff) and the trailing edge (leech)). Net aerodynamic force with respect to the air stream is usually considered in reference to the direction of the apparent wind (VA) over the surface plane (ocean, land or ice) and is decomposed into lift (L), perpendicular to VA, and drag (D), in line with VA. 
For windsurfers, the lift component vertical to the surface plane is important, because in strong winds windsurfer sails are leaned into the wind to create a vertical lifting component (FVERT) that reduces drag on the board (hull) through the water. Note that FVERT acts downwards for boats heeling away from the wind, but is negligible under normal conditions. The three-dimensional vector relationship for net aerodynamic force with respect to apparent wind (VA) is: FT = L + D + FVERT Likewise, net aerodynamic force may be decomposed into the three translational directions with respect to a boat's course over the surface: surge (forward/astern), sway (starboard/port—relevant to leeway), and heave (up/down). The scalar values and direction of these components can be dynamic, depending on wind and waves (for a boat). In this case, FT is considered in reference to the direction of the boat's course and is decomposed into driving force (FR), in line with the boat's course, and lateral force (FLAT), perpendicular to the boat's course. Again for windsurfers, the lift component vertical to the surface plane (FVERT) is important. The three-dimensional vector relationship for net aerodynamic force with respect to the course over the surface is: FT = FR + FLAT + FVERT The values of driving force (FR ) and lateral force (FLAT ) with apparent wind angle (α), assuming no heeling, relate to the values of lift (L ) and drag (D ) as follows: FR = L sin(α) − D cos(α) and FLAT = L cos(α) + D sin(α). Reactive forces on sailing craft Reactive forces on sailing craft include forward resistance—a sailboat's hydrodynamic resistance (Rl), an ice boat's sliding resistance or a land-sailing craft's rolling resistance in the direction of travel—which is to be minimized in order to increase speed, and lateral force, perpendicular to the direction of travel, which is to be made sufficiently strong to minimize sideways motion and to guide the craft on course. Forward resistance comprises the types of drag that impede a sailboat's speed through water (or an ice boat's speed over the surface): components of parasitic drag, consisting primarily of form drag, which arises because of the shape of the hull, and skin friction, which arises from the friction of the water (for boats) or air (for ice boats and land-sailing craft) against the "skin" of the hull that is moving through it. Displacement vessels are also subject to wave resistance from the energy that goes into displacing water into waves, which is limited by hull speed, a function of waterline length. Wheeled vehicles' forward speed is subject to rolling friction and ice boats are subject to kinetic or sliding friction. Parasitic drag in water or air increases with the square of speed (VB2 or VA2, respectively); rolling friction increases linearly with velocity; whereas kinetic friction is normally a constant, but on ice may become reduced with speed as it transitions to lubricated friction with melting. Ways to reduce wave-making resistance used on sailing vessels include reduced displacement—through planing or (as with a windsurfer) offsetting vessel weight with a lifting sail—and fine entry, as with catamarans, where a narrow hull minimizes the water displaced into a bow wave. Sailing hydrofoils also substantially reduce forward friction with an underwater foil that lifts the vessel free of the water. Sailing craft with low forward resistance and high lateral resistance. 
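The lift/drag decomposition given above (FR = L sin(α) − D cos(α), FLAT = L cos(α) + D sin(α)) can be checked numerically. A minimal Python sketch, using assumed (not measured) values of lift, drag and apparent wind angle:

import math

def drive_and_lateral(lift, drag, alpha_deg):
    # FR = L*sin(alpha) - D*cos(alpha); FLAT = L*cos(alpha) + D*sin(alpha),
    # where alpha is the apparent wind angle relative to the course
    a = math.radians(alpha_deg)
    driving = lift * math.sin(a) - drag * math.cos(a)
    lateral = lift * math.cos(a) + drag * math.sin(a)
    return driving, lateral

# Hypothetical 1000 N of lift and 150 N of drag at a 30 deg apparent wind angle:
fr, flat = drive_and_lateral(1000.0, 150.0, 30.0)
print(round(fr), round(flat))  # ~370 N driving force, ~941 N lateral force,
# illustrating why close-hauled sailing demands so much lateral resistance.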
Sailing craft with low forward resistance can achieve high velocities with respect to the wind velocity: High-performance catamarans, including the Extreme 40 catamaran and International C-class catamaran, can sail at speeds up to twice the speed of the wind. Sailing hydrofoils achieve boat speeds up to twice the speed of the wind, as did the AC72 catamarans used for the 2013 America's Cup. Ice boats can sail up to five times the speed of the wind. Lateral force is a reaction supplied by the underwater shape of a sailboat, the blades of an ice boat and the wheels of a land sailing craft. Sailboats rely on keels, centerboards, and other underwater foils, including rudders, that provide lift in the lateral direction, to provide hydrodynamic lateral force (PLAT) to offset the lateral force component acting on the sail (FLAT) and minimize leeway. Such foils provide hydrodynamic lift and, for keels, ballast to offset heeling. They incorporate a wide variety of design considerations. Rotational forces on sailing craft The forces on sails that contribute to torque and cause rotation with respect to the boat's longitudinal (fore and aft), horizontal (abeam) and vertical (aloft) rotational axes result in: roll (e.g. heeling), pitch (e.g. pitch-poling), and yaw (e.g. broaching). Heeling, which results from the lateral force component (FLAT), is the most significant rotational effect of total aerodynamic force (FT). In stasis, heeling moment from the wind and righting moment from the boat's heel force (FH ) and its opposing hydrodynamic lift force on hull (Fl ), separated by a distance (h = "heeling arm"), versus its hydrostatic displacement weight (W ) and its opposing buoyancy force (Δ), separated by a distance (b = "righting arm"), are in balance: h × FH = b × Δ = h × Fl = b × W (heeling arm × heeling force = righting arm × buoyancy force = heeling arm × hydrodynamic lift force on hull = righting arm × displacement weight) Sails come in a wide variety of configurations that are designed to match the capabilities of the sailing craft to be powered by them. They are designed to stay within the limitations of a craft's stability and power requirements, which are functions of hull (for boats) or chassis (for land craft) design. Sails derive power from wind that varies in time and with height above the surface. In order to do so, they are designed to adjust to the wind force for various points of sail. Both their design and method for control include means to match their lift and drag capabilities to the available apparent wind, by changing surface area, angle of attack, and curvature. Wind variation with elevation Wind speed increases with height above the surface; at the same time, wind speed may vary over short periods of time as gusts. These considerations may be described empirically. Measurements show that wind speed (V (h )) varies according to a power law with height (h ) above a non-zero measurement height datum (h0 —e.g. at the height of the foot of a sail), using a reference wind speed measured at the datum height (V (h0 )), as follows: V (h ) = V (h0 ) × (h /h0 )^p, where the power law exponent (p) has values that have been empirically determined to range from 0.11 over the ocean to 0.31 over the land. This means that a V (3 m) = 5-m/s (≈10-knot) wind at 3 m above the water would be approximately V (15 m) = 6 m/s (≈12 knots) at 15 m above the water. In hurricane-force winds with V (3 m) = 40 m/s (≈78 knots), the speed at 15 m would be V (15 m) = 49 m/s (≈95 knots) with p = 0.128. 
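The power-law profile above is easy to evaluate. A short Python sketch reproducing the worked figures (the exponents are the empirical values quoted above, not universal constants):

def wind_at_height(v_ref, h, h_ref, p):
    # V(h) = V(h0) * (h / h0)^p -- empirical power-law wind profile
    return v_ref * (h / h_ref) ** p

print(wind_at_height(5.0, 15.0, 3.0, 0.11))    # ~6.0 m/s: the 10-knot breeze over the ocean
print(wind_at_height(40.0, 15.0, 3.0, 0.128))  # ~49 m/s: the hurricane-force case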
This increase of wind speed with height suggests that sails that reach higher above the surface can be subject to stronger wind forces that move the centre of effort (CE ) higher above the surface and increase the heeling moment. Additionally, apparent wind direction moves aft with height above water, which may necessitate a corresponding twist in the shape of the sail to achieve attached flow with height. Wind variation with time Hsu gives a simple formula for a gust factor (G ) for winds as a function of the exponent (p ), above, where G is the ratio of the wind gust speed to the baseline wind speed at a given height. So, for a given windspeed and Hsu's recommended value of p = 0.126, one can expect G = 1.5 (a 10-knot wind might gust up to 15 knots). This, combined with changes in wind direction, suggests the degree to which a sailing craft must adjust to wind gusts on a given course. Forces on sails A sailing craft's motive system comprises one or more sails, supported by spars and rigging, that derive power from the wind and induce reactive force from the underbody of a sailboat or the running gear of an ice boat or land craft. Depending on the angle of attack of a set of sails with respect to the apparent wind, each sail provides motive force to the sailing craft either from lift-dominant attached flow or drag-dominant separated flow. Additionally, sails may interact with one another to create forces that are different from the sum of the individual contributions of each sail when used alone. Lift predominant (attached flow) Sails allow progress of a sailing craft to windward, thanks to their ability to generate lift (and the craft's ability to resist the lateral forces that result). Each sail configuration has a characteristic coefficient of lift and attendant coefficient of drag, which can be determined experimentally and calculated theoretically. Sailing craft orient their sails with a favorable angle of attack between the entry point of the sail and the apparent wind as their course changes. The ability to generate lift is limited by sailing too close to the wind, when no effective angle of attack is available to generate lift (luffing), and sailing sufficiently off the wind that the sail cannot be oriented at a favorable angle of attack (running downwind); in the latter case, past a critical angle of attack, the sail stalls and promotes flow separation. Effect of angle of attack on coefficients of lift and drag Each type of sail, acting as an airfoil, has characteristic coefficients of lift (CL ) and lift-induced drag (CD ) at a given angle of attack, which follow the same basic form: C = F / (½ × ρ × VA² × A), where force (F) equals lift (L) for forces measured perpendicular to the airstream to determine C = CL, or force (F) equals drag (D) for forces measured in line with the airstream to determine C = CD, ρ is the air density, and A is the area of a sail with a given aspect ratio (length to average chord width). These coefficients vary with angle of attack (αj for a headsail) with respect to the incident wind (VA for a headsail). This formulation allows determination of CL and CD experimentally for a given sail shape by varying angle of attack at an experimental wind velocity and measuring force on the sail in the direction of the incident wind (D—drag) and perpendicular to it (L—lift). 
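As a sketch of that experimental reduction in Python, with invented wind-tunnel readings standing in for real measurements (the sail area, airspeed, and forces are hypothetical):

RHO_AIR = 1.225  # kg/m^3, an assumed sea-level air density

def coefficient(force, area, wind_speed, rho=RHO_AIR):
    # C = F / (0.5 * rho * VA^2 * A); F is lift for CL, drag for CD
    return force / (0.5 * rho * wind_speed**2 * area)

# Hypothetical readings on a 10 m^2 test sail in a 10 m/s airstream:
lift_n, drag_n = 735.0, 92.0
print(round(coefficient(lift_n, 10.0, 10.0), 2))  # CL ~ 1.2
print(round(coefficient(drag_n, 10.0, 10.0), 2))  # CD ~ 0.15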
As the angle of attack grows larger, the lift reaches a maximum at some angle; increasing the angle of attack beyond this critical angle of attack causes the upper-surface flow to separate from the convex surface of the sail; there is less deflection of air to windward, so the sail as airfoil generates less lift. The sail is said to be stalled. At the same time, induced drag increases with angle of attack (for the headsail: αj ). Determination of coefficients of lift (CL ) and drag (CD ) for angle of attack and aspect ratio Fossati presents polar diagrams that relate coefficients of lift and drag for different angles of attack, based on the work of Gustave Eiffel, who pioneered wind tunnel experiments on airfoils, which he published in 1910. Among them were studies of cambered plates. The results shown are for plates of varying camber and aspect ratios, as shown. They show that, as aspect ratio decreases, maximum lift shifts further towards increased drag (rightwards in the diagram). They also show that, for lower angles of attack, a higher aspect ratio generates more lift and less drag than a lower aspect ratio does. Effect of coefficients of lift and drag on forces If the lift and drag coefficients (CL and CD) for a sail at a specified angle of attack are known, then the lift (L) and drag (D) forces produced can be determined, using the following equations, which vary as the square of apparent wind speed (VA ): L = ½ × ρ × VA² × A × CL and D = ½ × ρ × VA² × A × CD. Garrett demonstrates how those diagrams translate into lift and drag, for a given sail, on different points of sail, in diagrams similar to these: Polar diagrams, showing lift (L), drag (D), total aerodynamic force (FT), forward driving force (FR), and lateral force (FLAT) for upwind points of sail In these diagrams the direction of travel changes with respect to the apparent wind (VA), which is constant for the purpose of illustration. In reality, for a constant true wind, apparent wind would vary with point of sail. Constant VA in these examples means that either VT or VB varies with point of sail; this allows the same polar diagram to be used for comparison, with the same conversion of coefficients into units of force (in this case newtons). In the examples for close-hauled and reach (left and right), the sail's angle of attack (α ) is essentially constant, although the boom angle over the boat changes with point of sail to trim the sail close to the highest lift force on the polar curve. In these cases, lift and drag are the same, but the decomposition of total aerodynamic force (FT) into forward driving force (FR) and lateral force (FLAT) varies with point of sail. Forward driving force (FR) increases as the direction of travel becomes more aligned with the wind, and lateral force (FLAT) decreases. In reference to the above diagrams relating lift and drag, Garrett explains that, for maximum speed made good to windward, the sail must be trimmed to an angle of attack beyond that of the maximum lift/drag ratio (more lift), while the hull is operated in a manner that is below that of its maximum lift/drag ratio (more drag). Drag predominant (separated flow) When sailing craft are on a course where the angle of attack between the sail and the apparent wind (α ) exceeds the point of maximum lift on the CL–CD polar diagram, separation of flow occurs. The separation becomes more pronounced until, at α = 90°, lift becomes small and drag predominates. 
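Going the other way—from coefficients to forces—this Python sketch contrasts an attached-flow case with a stalled one. The coefficient values, sail area and wind speed are assumed for illustration; they are not taken from Eiffel's or Garrett's data:

RHO_AIR = 1.225  # kg/m^3, assumed sea-level air density

def aero_force(coeff, area, va, rho=RHO_AIR):
    # F = 0.5 * rho * VA^2 * A * C -- force scales with the square of VA
    return 0.5 * rho * va**2 * area * coeff

area_m2, va_ms = 20.0, 8.0
# Assumed coefficients: attached flow (modest alpha) vs. stalled (alpha near 90 deg)
for label, cl, cd in (("attached", 1.4, 0.2), ("stalled", 0.3, 1.2)):
    lift = aero_force(cl, area_m2, va_ms)
    drag = aero_force(cd, area_m2, va_ms)
    print(label, round(lift), "N lift,", round(drag), "N drag")
# attached: ~1098 N lift, ~157 N drag; stalled: ~235 N lift, ~941 N drag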
In addition to the sails used upwind, spinnakers provide area and curvature appropriate for sailing with separated flow on downwind points of sail. Polar diagrams, showing lift (L), drag (D), total aerodynamic force (FT), forward driving force (FR), and lateral force (FLAT) for downwind points of sail Again, in these diagrams the direction of travel changes with respect to the apparent wind (VA), which is constant for the sake of illustration, but would in reality vary with point of sail for a constant true wind. In the left-hand diagram (broad reach), the boat is on a point of sail where the sail can no longer be aligned into the apparent wind to create an optimum angle of attack. Instead, the sail is in a stalled condition, creating about 80% of the lift of the upwind examples, and drag has doubled. Total aerodynamic force (FT) has moved away from the maximum lift value. In the right-hand diagram (running before the wind), lift is one-fifth that of the upwind cases (for the same strength apparent wind) and drag has almost quadrupled. Downwind sailing with a spinnaker A velocity prediction program can translate sail performance and hull characteristics into a polar diagram, depicting boat speed for various windspeeds at each point of sail. Displacement sailboats exhibit a change in which course has the best velocity made good (VMG), depending on windspeed. For the example given, the sailboat achieves best downwind VMG for windspeeds of 10 knots and less at a course about 150° off the wind. For higher windspeeds the optimum downwind VMG occurs at more than 170° off the wind. This "downwind cliff" (abrupt change in optimum downwind course) results from the change of balance in drag forces on the hull with speed. Sail interactions Sailboats often have a jib that overlaps the mainsail, called a genoa. Arvel Gentry demonstrated in his series of articles, published in "Best of sail trim" in 1977 (and later reported and republished in summary in 1981), that the genoa and the mainsail interact in a symbiotic manner, owing to the circulation of air between them slowing down in the gap between the two sails (contrary to traditional explanations), which prevents separation of flow along the mainsail. The presence of a jib causes the stagnation line on the mainsail to move forward, which reduces the suction velocities on the main and reduces the potential for boundary layer separation and stalling. This allows higher angles of attack. Likewise, the presence of the mainsail causes the stagnation line on the jib to be shifted aft and allows the boat to point closer to the wind, owing to higher leeward velocities of the air over both sails. The two sails cause an overall larger displacement of air perpendicular to the direction of flow when compared to one sail. They act to form a larger wing, or airfoil, around which the wind must pass. The total length around the outside has also increased, and the difference in air speed between the windward and leeward sides of the two sails is greater, resulting in more lift. The jib experiences the greater increase in lift from the two-sail combination. Sail performance design variables Sails characteristically have a coefficient of lift (CL) and coefficient of drag (CD) for each apparent wind angle. The planform, curvature and area of a given sail are dominant determinants of each coefficient. Sail terminology Sails are classified as "triangular sails", "quadrilateral fore-and-aft sails" (gaff-rigged, etc.), and "square sails". 
The top of a triangular sail, the head, is raised by a halyard. The forward lower corner of the sail, the tack, is shackled to a fixed point on the boat in a manner to allow pivoting about that point—either on a mast, e.g. for a mainsail, or on the deck, e.g. for a jib or staysail. The trailing lower corner, the clew, is positioned with an outhaul on a boom or directly with a sheet, absent a boom. Symmetrical sails have two clews, which may be adjusted forward or back. The windward edge of a sail is called the luff, the trailing edge the leech, and the bottom edge the foot. On symmetrical sails, either vertical edge may be presented to windward and, therefore, there are two leeches. On sails attached to a mast and boom, these edges may be curved, when laid on a flat surface, to promote both horizontal and vertical curvature in the cross-section of the sail, once attached. The use of battens allows a sail to have an arc of material on the leech, beyond a line drawn from the head to the clew, called the roach. Lift variables As with aircraft wings, the two dominant factors affecting sail efficiency are its planform—primarily sail width versus sail height, expressed as an aspect ratio—and cross-sectional curvature or draft. Aspect ratio In aerodynamics, the aspect ratio of a sail is the ratio of its length to its breadth (chord). A high aspect ratio indicates a long, narrow sail, whereas a low aspect ratio indicates a short, wide sail. For most sails, the length of the chord is not a constant but varies along the wing, so the aspect ratio AR is defined as the square of the sail height b divided by the area A of the sail planform: AR = b² / A Aspect ratio and planform can be used to predict the aerodynamic performance of a sail. For a given sail area, the aspect ratio, which is proportional to the square of the sail height, is of particular significance in determining lift-induced drag, and is used to calculate the induced drag coefficient of a sail: CDi = CL² / (π × e × AR), where e is the Oswald efficiency number that accounts for the variable sail shapes. This formula demonstrates that a sail's induced drag coefficient decreases with increased aspect ratio (see the numerical sketch under Analysis below). Sail curvature The horizontal curvature of a sail is termed "draft" and corresponds to the camber of an airfoil. Increasing the draft generally increases the sail's lift force. The Royal Yachting Association categorizes draft by depth and by the placement of the maximum depth as a percentage of the distance from the luff to the leech. Sail draft is adjusted for wind speed to achieve a flatter sail (less draft) in stronger winds and a fuller sail (more draft) in lighter winds. Staysails and sails attached to a mast (e.g. a mainsail) have different, but similar, controls to achieve draft depth and position. On a staysail, tightening the luff with the halyard helps flatten the sail and adjusts the position of maximum draft. On a mainsail, curving the mast to fit the curvature of the luff helps flatten the sail. Depending on wind strength, Dellenbaugh offers the following advice on setting the draft of a sailboat mainsail: For light air (less than 8 knots), the sail is at its fullest, with the depth of draft between 13-16% of the chord and maximum fullness 50% aft from the luff. For medium air (8-15 knots), the mainsail has minimal twist, with a depth of draft set between 11-13% of the chord and maximum fullness 45% aft from the luff. 
For heavy air (greater than 15 knots), the sail is flattened and allowed to twist in a manner that dumps lift, with a depth of draft set between 9-12% of the chord and maximum fullness 45% aft of the luff. Plots by Larsson et al. show that draft is a much more significant factor affecting sail propulsive force than the position of maximum draft. Coefficients of propulsive forces and heeling forces as a function of draft (camber) depth or position. The primary tool for adjusting mainsail shape is mast bend; a straight mast increases draft and lift; a curved mast decreases draft and lift—the backstay tensioner is a primary tool for bending the mast. Secondary tools for sail shape adjustment are the mainsheet, traveler, outhaul, and Cunningham. Drag variables Spinnakers have traditionally been optimized to mobilize drag as a more important propulsive component than lift. As sailing craft are able to achieve higher speeds, whether on water, ice or land, the velocity made good (VMG) at a given course off the wind occurs at apparent wind angles that are increasingly further forward with speed. This suggests that the optimum VMG for a given course may be in a regime where a spinnaker is providing significant lift. Traditional displacement sailboats may at times have optimum VMG courses close to downwind; for these, the dominant force on sails is from drag. According to Kimball, CD ≈ 4/3 for most sails with the apparent wind angle astern, so the drag force on a downwind sail becomes substantially a function of area and wind speed, approximated as follows: D = ½ × ρ × VA² × A × CD ≈ ⅔ × ρ × VA² × A. Measurement and computation tools Sail design relies on empirical measurements of pressures and their resulting forces on sails, which validate modern analysis tools, including computational fluid dynamics. Measurement of pressure on the sail Modern sail design and manufacture employs wind tunnel studies, full-scale experiments, and computer models as a basis for efficiently harnessing forces on sails. Instruments for measuring air pressure effects in wind tunnel studies of sails include pitot tubes, which measure air speed, and manometers, which measure static pressures and atmospheric pressure (static pressure in undisturbed flow). Researchers plot pressure across the windward and leeward sides of test sails along the chord and calculate pressure coefficients (static pressure difference over wind-induced dynamic pressure). Research results describe airflow around the sail and in the boundary layer. Wilkinson, modelling the boundary layer in two dimensions, described nine regions around the sail: Upper mast attached airflow. Upper separation bubble. Upper reattachment region. Upper aerofoil attached flow region. Trailing edge separation region. Lower mast attached flow region. Lower separation bubble. Lower reattachment region. Lower aerofoil attached flow region. Analysis Sail design differs from wing design in several respects, especially since on a sail air flow varies with wind and boat motion, and sails are usually deformable airfoils, sometimes with a mast for a leading edge. Simplifying assumptions are often employed when making design calculations, including a flat travel surface (water, ice or land), constant wind velocity and unchanging sail adjustment. The analysis of the forces on sails takes into account the aerodynamic surface force, its centre of effort on a sail, its direction, and its variable distribution over the sail. 
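Returning to the aspect-ratio relations under "Lift variables" above, the following Python sketch evaluates the induced-drag formula for two hypothetical rigs of equal area; the Oswald number e = 0.8 and the sail dimensions are assumed values, chosen only for illustration:

import math

def induced_drag_coefficient(cl, height, area, e=0.8):
    # AR = b^2 / A; CDi = CL^2 / (pi * e * AR)
    ar = height**2 / area
    return cl**2 / (math.pi * e * ar)

# Two hypothetical 20 m^2 sails at CL = 1.0: a tall rig and a short one
print(round(induced_drag_coefficient(1.0, 12.0, 20.0), 3))  # AR = 7.2  -> CDi ~ 0.055
print(round(induced_drag_coefficient(1.0, 7.0, 20.0), 3))   # AR = 2.45 -> CDi ~ 0.162
# The taller (higher aspect ratio) rig pays far less induced drag for the same lift.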
Modern analysis employs fluid mechanics and aerodynamics airflow calculations for sail design and manufacture, using aeroelasticity models, which combine computational fluid dynamics and structural analysis. Effects pertaining to turbulence and separation of the boundary layer are treated as secondary factors. Computational limitations persist. Theoretical results require empirical confirmation with wind tunnel tests on scale models and full-scale testing of sails. Velocity prediction programs combine elements of hydrodynamic forces (mainly drag) and aerodynamic forces (lift and drag) to predict sailboat performance at various windspeeds for all points of sail. See also Sail Sailing Sailcloth Point of sail Polar diagram (sailing) Sail-plan Rigging Wing Sail twist High-performance sailing Stays (nautical) Sheet (sailing) References Aerodynamics Naval architecture Sailing Marine propulsion
Forces on sails
[ "Chemistry", "Engineering" ]
8,247
[ "Naval architecture", "Aerodynamics", "Aerospace engineering", "Marine engineering", "Marine propulsion", "Fluid dynamics" ]
27,740,507
https://en.wikipedia.org/wiki/C3H6Cl2
The molecular formula C3H6Cl2 (molar mass: 112.98 g/mol, exact mass: 111.9847 u) may refer to: 1,2-Dichloropropane 1,3-Dichloropropane
C3H6Cl2
[ "Chemistry" ]
71
[ "Isomerism", "Set index articles on molecular formulas" ]
931,064
https://en.wikipedia.org/wiki/YORP%20effect
The Yarkovsky–O'Keefe–Radzievskii–Paddack effect, or YORP effect for short, changes the rotation state of a small astronomical body – that is, the body's spin rate and the obliquity of its pole(s) – due to the scattering of solar radiation off its surface and the emission of its own thermal radiation. The YORP effect is typically considered for asteroids with their heliocentric orbit in the Solar System. The effect is responsible for the creation of binary and tumbling asteroids as well as for changing an asteroid's pole towards 0°, 90°, or 180° relative to the ecliptic plane and so modifying its heliocentric radial drift rate due to the Yarkovsky effect. Term The term was coined by David P. Rubincam in 2000 to honor four important contributors to the concepts behind the so-named YORP effect. In the 19th century, Ivan Yarkovsky realized that the thermal radiation escaping from a body warmed by the Sun carries off momentum as well as heat. Translated into modern physics, each emitted photon possesses a momentum p = E/c, where E is its energy and c is the speed of light. Vladimir Radzievskii applied the idea to rotation based on changes in albedo, and Stephen Paddack realized that shape was a much more effective means of altering a body's spin rate. Stephen Paddack and John O'Keefe suggested that the YORP effect leads to rotational bursting and that, by repeatedly undergoing this process, small asymmetric bodies are eventually reduced to dust. Physical mechanism In principle, electromagnetic radiation interacts with the surface of an asteroid in three significant ways: radiation from the Sun is (1) absorbed and (2) diffusively reflected by the surface of the body, and the body's internal energy is (3) emitted as thermal radiation. Since photons possess momentum, each of these interactions leads to changes in the angular momentum of the body relative to its center of mass. If considered for only a short period of time, these changes are very small, but over longer periods of time, these changes may integrate to significant changes in the angular momentum of the body. For bodies in a heliocentric orbit, the relevant long period of time is the orbital period (i.e. a year), since most asteroids have rotation periods (i.e. days) shorter than their orbital periods. Thus, for most asteroids, the YORP effect is the secular change in the rotation state of the asteroid after averaging the solar radiation torques over first the rotational period and then the orbital period. Observations In 2007 there was direct observational confirmation of the YORP effect on the small asteroids 54509 YORP (then designated 2000 PH5) and 1862 Apollo. The spin rate of 54509 YORP will double in just 600,000 years, and the YORP effect can also alter the axial tilt and precession rate, so that the entire suite of YORP phenomena can send asteroids into interesting resonant spin states, and helps explain the existence of binary asteroids. Observations show that asteroids larger than 125 km in diameter have rotation rates that follow a Maxwellian frequency distribution, while smaller asteroids (in the 50 to 125 km size range) show a small excess of fast rotators. The smallest asteroids (size less than 50 km) show a clear excess of very fast and slow rotators, and this becomes even more pronounced as smaller-sized populations are measured. These results suggest that one or more size-dependent mechanisms are depopulating the centre of the spin rate distribution in favour of the extremes. The YORP effect is a prime candidate. 
It is not capable of significantly modifying the spin rates of large asteroids by itself, so a different explanation must be sought for objects such as 253 Mathilde. In late 2013, asteroid P/2013 R3 was observed breaking apart, likely because of a high rotation speed from the YORP effect. Examples Assume a rotating spherical asteroid has two wedge-shaped fins attached to its equator, irradiated by parallel rays of sunlight. The reaction force from photons departing from any given surface element of the spherical core will be normal to the surface, such that no torque is produced (the force vectors all pass through the centre of mass). Thermally-emitted photons reradiated from the sides of the wedges, however, can produce a torque, as the normal vectors do not pass through the centre of mass. Both fins present the same cross section to the incoming light (they have the same height and width), and so absorb and reflect the same amount of energy each and produce an equal force. Due to the fin surfaces being oblique, however, the normal forces from the reradiated photons do not cancel out. In the diagram, fin A's outgoing radiation produces an equatorial force parallel to the incoming light and no vertical force, but fin B's force has a smaller equatorial component and a vertical component. The unbalanced forces on the two fins lead to torque and the object spins. The torque from the outgoing light does not average out, even over a full rotation, so the spin accelerates over time. An object with some "windmill" asymmetry can therefore be subjected to minuscule torque forces that will tend to spin it up or down as well as make its axis of rotation precess. The YORP effect is zero for a rotating ellipsoid if there are no irregularities in surface temperature or albedo. In the long term, the object's changing obliquity and rotation rate may wander randomly, chaotically or regularly, depending on several factors. For example, assuming the Sun remains on its equator, asteroid 951 Gaspra, with a radius of 6 km and a semi-major axis of 2.21 AU, would in 240 Ma (240 million years) go from a rotation period of 12 h to 6 h and vice versa. If 243 Ida were given the same radius and orbit values as Gaspra, it would spin up or down twice as fast, while a body with Phobos' shape would take several billion years to change its spin by the same amount. Size as well as shape affects the amount of the effect. Smaller objects will spin up or down much more quickly. If Gaspra were smaller by a factor of 10 (to a radius of 600 m), its spin would halve or double in just a few million years. Similarly, the YORP effect intensifies for objects closer to the Sun. At 1 AU, Gaspra would double/halve its spin rate in a mere 100,000 years. After one million years, its period may shrink to ~2 h, at which point it could start to break apart. According to a 2019 model, the YORP effect is likely to cause "widespread fragmentation of asteroids" as the Sun expands into a luminous red giant, and may explain the dust disks and apparent infalling matter observed at many white dwarfs. This is one mechanism through which binary asteroids may form, and it may be more common than collisions and planetary near-encounter tidal disruption as the primary means of binary formation. Asteroid 2000 PH5 was later named 54509 YORP to honor its part in the confirmation of this phenomenon. See also 25143 Itokawa—Smallest asteroid to be visited by a spacecraft Citations General and cited references 
Further reading External links Asteroid rotation discovery reported Asteroids Orbital perturbations Radiation effects Rotation
YORP effect
[ "Physics", "Materials_science", "Engineering" ]
1,521
[ "Physical phenomena", "Classical mechanics", "Rotation", "Materials science", "Motion (physics)", "Radiation", "Condensed matter physics", "Radiation effects" ]
931,250
https://en.wikipedia.org/wiki/Screened%20Coulomb%20potentials%20implicit%20solvent%20model
SCP-ISM, or screened Coulomb potentials implicit solvent model, is a continuum approximation of solvent effects for use in computer simulations of biological macromolecules, such as proteins and nucleic acids, usually within the framework of molecular dynamics. It is based on the classic theory of polar liquids, as developed by Peter Debye and corrected by Lars Onsager to incorporate reaction field effects. The model can be combined with quantum chemical calculations to formally derive a continuum model of solvent effects suitable for computer simulations of small and large molecular systems. External links An essay on SCP-ISM CHARMM website Molecular dynamics
Screened Coulomb potentials implicit solvent model
[ "Physics", "Chemistry" ]
130
[ "Molecular physics", "Theoretical chemistry stubs", "Computational physics", "Molecular dynamics", "Computational chemistry stubs", "Computational chemistry", "Physical chemistry stubs" ]
932,217
https://en.wikipedia.org/wiki/Classical%20unified%20field%20theories
Since the 19th century, some physicists, notably Albert Einstein, have attempted to develop a single theoretical framework that can account for all the fundamental forces of nature – a unified field theory. Classical unified field theories are attempts to create a unified field theory based on classical physics. In particular, unification of gravitation and electromagnetism was actively pursued by several physicists and mathematicians in the years between the two World Wars. This work spurred the purely mathematical development of differential geometry. This article describes various attempts at formulating a classical (non-quantum), relativistic unified field theory. For a survey of classical relativistic field theories of gravitation that have been motivated by theoretical concerns other than unification, see Classical theories of gravitation. For a survey of current work toward creating a quantum theory of gravitation, see quantum gravity. Overview The early attempts at creating a unified field theory began with the Riemannian geometry of general relativity, and attempted to incorporate electromagnetic fields into a more general geometry, since ordinary Riemannian geometry seemed incapable of expressing the properties of the electromagnetic field. Einstein was not alone in his attempts to unify electromagnetism and gravity; a large number of mathematicians and physicists, including Hermann Weyl, Arthur Eddington, and Theodor Kaluza, also attempted to develop approaches that could unify these interactions. These scientists pursued several avenues of generalization, including extending the foundations of geometry and adding an extra spatial dimension. Early work The first attempts to provide a unified theory were by Gustav Mie in 1912 and Ernst Reichenbächer in 1916. However, these theories were unsatisfactory, as they did not incorporate general relativity, which had yet to be formulated. These efforts, along with those of Rudolf Förster, involved making the metric tensor (which had previously been assumed to be symmetric and real-valued) into an asymmetric and/or complex-valued tensor, and they also attempted to create a field theory for matter as well. Differential geometry and field theory From 1918 until 1923, there were three distinct approaches to field theory: the gauge theory of Weyl, Kaluza's five-dimensional theory, and Eddington's development of affine geometry. Einstein corresponded with these researchers, and collaborated with Kaluza, but was not yet fully involved in the unification effort. Weyl's infinitesimal geometry In order to include electromagnetism into the geometry of general relativity, Hermann Weyl worked to generalize the Riemannian geometry upon which general relativity is based. His idea was to create a more general infinitesimal geometry. He noted that in addition to a metric field there could be additional degrees of freedom along a path between two points in a manifold, and he tried to exploit this by introducing a basic method for comparison of local size measures along such a path, in terms of a gauge field. This geometry generalized Riemannian geometry in that there was a vector field Q, in addition to the metric g, which together gave rise to both the electromagnetic and gravitational fields. This theory was mathematically sound, albeit complicated, resulting in difficult and high-order field equations. The critical mathematical ingredients in this theory, the Lagrangians and curvature tensor, were worked out by Weyl and colleagues. 
Then Weyl carried out an extensive correspondence with Einstein and others as to its physical validity, and the theory was ultimately found to be physically unreasonable. However, Weyl's principle of gauge invariance was later applied in a modified form to quantum field theory. Kaluza's fifth dimension Kaluza's approach to unification was to embed space-time into a five-dimensional cylindrical world, consisting of four space dimensions and one time dimension. Unlike Weyl's approach, Riemannian geometry was maintained, and the extra dimension allowed for the incorporation of the electromagnetic field vector into the geometry. Despite the relative mathematical elegance of this approach, in collaboration with Einstein and Einstein's aide Grommer it was determined that this theory did not admit a non-singular, static, spherically symmetric solution. This theory did have some influence on Einstein's later work and was further developed later by Klein in an attempt to incorporate relativity into quantum theory, in what is now known as Kaluza–Klein theory. Eddington's affine geometry Sir Arthur Stanley Eddington was a noted astronomer who became an enthusiastic and influential promoter of Einstein's general theory of relativity. He was among the first to propose an extension of the gravitational theory based on the affine connection as the fundamental structure field rather than the metric tensor which was the original focus of general relativity. Affine connection is the basis for parallel transport of vectors from one space-time point to another; Eddington assumed the affine connection to be symmetric in its covariant indices, because it seemed plausible that the result of parallel-transporting one infinitesimal vector along another should produce the same result as transporting the second along the first. (Later workers revisited this assumption.) Eddington emphasized what he considered to be epistemological considerations; for example, he thought that the cosmological constant version of the general-relativistic field equation expressed the property that the universe was "self-gauging". Since the simplest cosmological model (the De Sitter universe) that solves that equation is a spherically symmetric, stationary, closed universe (exhibiting a cosmological red shift, which is more conventionally interpreted as due to expansion), it seemed to explain the overall form of the universe. Like many other classical unified field theorists, Eddington considered that in the Einstein field equations for general relativity the stress–energy tensor , which represents matter/energy, was merely provisional, and that in a truly unified theory the source term would automatically arise as some aspect of the free-space field equations. He also shared the hope that an improved fundamental theory would explain why the two elementary particles then known (proton and electron) have quite different masses. The Dirac equation for the relativistic quantum electron caused Eddington to rethink his previous conviction that fundamental physical theory had to be based on tensors. He subsequently devoted his efforts into development of a "Fundamental Theory" based largely on algebraic notions (which he called "E-frames"). Unfortunately his descriptions of this theory were sketchy and difficult to understand, so very few physicists followed up on his work. 
Einstein's geometric approaches When the equivalent of Maxwell's equations for electromagnetism is formulated within the framework of Einstein's theory of general relativity, the electromagnetic field energy (being equivalent to mass as defined by Einstein's equation E=mc2) contributes to the stress tensor and thus to the curvature of space-time, which is the general-relativistic representation of the gravitational field; or putting it another way, certain configurations of curved space-time incorporate effects of an electromagnetic field. This suggests that a purely geometric theory ought to treat these two fields as different aspects of the same basic phenomenon. However, ordinary Riemannian geometry is unable to describe the properties of the electromagnetic field as a purely geometric phenomenon. Einstein tried to form a generalized theory of gravitation that would unify the gravitational and electromagnetic forces (and perhaps others), guided by a belief in a single origin for the entire set of physical laws. These attempts initially concentrated on additional geometric notions such as vierbeins and "distant parallelism", but eventually centered around treating both the metric tensor and the affine connection as fundamental fields. (Because they are not independent, the metric-affine theory was somewhat complicated.) In general relativity, these fields are symmetric (in the matrix sense), but since antisymmetry seemed essential for electromagnetism, the symmetry requirement was relaxed for one or both fields. Einstein's proposed unified-field equations (fundamental laws of physics) were generally derived from a variational principle expressed in terms of the Riemann curvature tensor for the presumed space-time manifold. In field theories of this kind, particles appear as limited regions in space-time in which the field strength or the energy density is particularly high. Einstein and coworker Leopold Infeld managed to demonstrate that, in Einstein's final theory of the unified field, true singularities of the field did have trajectories resembling point particles. However, singularities are places where the equations break down, and Einstein believed that in an ultimate theory the laws should apply everywhere, with particles being soliton-like solutions to the (highly nonlinear) field equations. Further, the large-scale topology of the universe should impose restrictions on the solutions, such as quantization or discrete symmetries. The degree of abstraction, combined with a relative lack of good mathematical tools for analyzing nonlinear equation systems, make it hard to connect such theories with the physical phenomena that they might describe. For example, it has been suggested that the torsion (antisymmetric part of the affine connection) might be related to isospin rather than electromagnetism; this is related to a discrete (or "internal") symmetry known to Einstein as "displacement field duality". Einstein became increasingly isolated in his research on a generalized theory of gravitation, and most physicists consider his attempts ultimately unsuccessful. In particular, his pursuit of a unification of the fundamental forces ignored developments in quantum physics (and vice versa), most notably the discovery of the strong nuclear force and weak nuclear force. 
Schrödinger's pure-affine theory Inspired by Einstein's approach to a unified field theory and Eddington's idea of the affine connection as the sole basis for differential geometric structure for space-time, Erwin Schrödinger from 1940 to 1951 thoroughly investigated pure-affine formulations of generalized gravitational theory. Although he initially assumed a symmetric affine connection, like Einstein he later considered the nonsymmetric field. Schrödinger's most striking discovery during this work was that the metric tensor was induced upon the manifold via a simple construction from the Riemann curvature tensor, which was in turn formed entirely from the affine connection. Further, taking this approach with the simplest feasible basis for the variational principle resulted in a field equation having the form of Einstein's general-relativistic field equation with a cosmological term arising automatically. Skepticism from Einstein and published criticisms from other physicists discouraged Schrödinger, and his work in this area has been largely ignored. Later work After the 1930s, progressively fewer scientists worked on classical unification, due to the continued development of quantum-theoretical descriptions of the non-gravitational fundamental forces of nature and the difficulties encountered in developing a quantum theory of gravity. Einstein pressed on with his attempts to theoretically unify gravity and electromagnetism, but he became increasingly isolated in this research, which he pursued until his death. Einstein's celebrity status brought much attention to his final quest, which ultimately saw limited success. Most physicists, on the other hand, eventually abandoned classical unified theories. Current mainstream research on unified field theories focuses on the problem of creating a quantum theory of gravity and unifying with the other fundamental theories in physics, all of which are quantum field theories. (Some programs, such as string theory, attempt to solve both of these problems at once.) Of the four known fundamental forces, gravity remains the one force for which unification with the others proves problematic. Although new "classical" unified field theories continue to be proposed from time to time, often involving non-traditional elements such as spinors or relating gravitation to an electromagnetic force, none have been generally accepted by physicists yet. See also Affine gauge theory Classical field theory Gauge gravitation theory Metric-affine gravitation theory References History of physics Theoretical physics
Classical unified field theories
[ "Physics" ]
2,412
[ "Theoretical physics" ]
932,460
https://en.wikipedia.org/wiki/Myers%27s%20theorem
Myers's theorem, also known as the Bonnet–Myers theorem, is a celebrated, fundamental theorem in the mathematical field of Riemannian geometry. It was discovered by Sumner Byron Myers in 1941. It asserts the following: if a complete Riemannian manifold of dimension n has Ricci curvature bounded below by (n − 1)k for some constant k > 0, then its diameter is at most π/√k. In the special case of surfaces, this result was proved by Ossian Bonnet in 1855. For a surface, the Gauss, sectional, and Ricci curvatures are all the same, but Bonnet's proof easily generalizes to higher dimensions if one assumes a positive lower bound on the sectional curvature. Myers' key contribution was therefore to show that a Ricci lower bound is all that is needed to reach the same conclusion. Corollaries The conclusion of the theorem says, in particular, that the diameter of the manifold is finite. Therefore the manifold must be compact, as a closed (and hence compact) ball of finite radius in any tangent space is carried onto all of the manifold by the exponential map. As a very particular case, this shows that any complete and noncompact smooth Einstein manifold must have nonpositive Einstein constant. Since the manifold is connected, there exists a smooth universal covering map, and one may consider the pull-back metric on the universal cover. Since the covering map is a local isometry, Myers' theorem applies to the universal cover as well, and hence the universal cover is compact and the covering map is finite. This implies that the fundamental group of the manifold is finite. Cheng's diameter rigidity theorem The conclusion of Myers' theorem says that for any two points of the manifold, the distance between them is at most π/√k. In 1975, Shiu-Yuen Cheng proved that if a complete Riemannian manifold of dimension n with Ricci curvature bounded below by (n − 1)k, k > 0, has diameter exactly π/√k, then it is isometric to the standard n-sphere of constant sectional curvature k. See also References Ambrose, W. A theorem of Myers. Duke Math. J. 24 (1957), 345–348. Differential geometry Geometric inequalities Theorems in Riemannian geometry
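For readers who prefer a symbolic statement, the two results above can be written as follows. This is a standard modern formulation rather than a quotation from the original papers, and the notation (M, g, n, k) is chosen here for illustration.

```latex
% Standard modern statements (notation chosen for illustration)
\textbf{Myers's theorem.} Let $(M,g)$ be a complete Riemannian manifold of
dimension $n$ whose Ricci curvature satisfies
$\operatorname{Ric} \ge (n-1)\,k\,g$ for some constant $k>0$. Then
\[
  \operatorname{diam}(M,g) \;\le\; \frac{\pi}{\sqrt{k}} ,
\]
so $M$ is compact and its fundamental group $\pi_1(M)$ is finite.

\textbf{Cheng's rigidity theorem (1975).} If, in addition,
$\operatorname{diam}(M,g) = \pi/\sqrt{k}$, then $(M,g)$ is isometric to the
round $n$-sphere of constant sectional curvature $k$.
```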
Myers's theorem
[ "Mathematics" ]
342
[ "Geometric inequalities", "Inequalities (mathematics)", "Theorems in geometry" ]
932,711
https://en.wikipedia.org/wiki/Carmichael%27s%20theorem
In number theory, Carmichael's theorem, named after the American mathematician R. D. Carmichael, states that, for any nondegenerate Lucas sequence of the first kind Un(P, Q) with relatively prime parameters P, Q and positive discriminant, an element Un with n ≠ 1, 2, 6 has at least one prime divisor that does not divide any earlier one, except for the 12th Fibonacci number F(12) = U12(1, −1) = 144 and its equivalent U12(−1, −1) = −144. In particular, for n greater than 12, the nth Fibonacci number F(n) has at least one prime divisor that does not divide any earlier Fibonacci number. Carmichael (1913, Theorem 21) proved this theorem. Yabuta (2001) later gave a simple proof. Bilu, Hanrot, Voutier and Mignotte (2001) extended it to the case of negative discriminants (where it is true for all n > 30). Statement Given two relatively prime integers P and Q such that the discriminant D = P2 − 4Q is positive, let Un(P, Q) be the Lucas sequence of the first kind defined by U0 = 0, U1 = 1 and Un = P·Un−1 − Q·Un−2 for n ≥ 2. Then, for n ≠ 1, 2, 6, Un(P, Q) has at least one prime divisor that does not divide any Um(P, Q) with m < n, except U12(±1, −1) = ±F(12) = ±144. Such a prime p is called a characteristic factor or a primitive prime divisor of Un(P, Q). Indeed, Carmichael showed a slightly stronger theorem: For n ≠ 1, 2, 6, Un(P, Q) has at least one primitive prime divisor not dividing D except U3(±1, −2) = 3, U5(±1, −1) = F(5) = 5, or U12(1, −1) = −U12(−1, −1) = F(12) = 144. In Carmichael's theorem, D should be greater than 0; thus the cases U13(1, 2), U18(1, 2) and U30(1, 2), etc. are not included, since in this case D = −7 < 0. Fibonacci and Pell cases The only exceptions in the Fibonacci case for n up to 12 are: F(1) = 1 and F(2) = 1, which have no prime divisors F(6) = 8, whose only prime divisor is 2 (which is F(3)) F(12) = 144, whose only prime divisors are 2 (which is F(3)) and 3 (which is F(4)) The smallest primitive prime divisors of F(n), for n = 1, 2, 3, ..., are (with 1 marking the exceptional indices that have no primitive prime divisor) 1, 1, 2, 3, 5, 1, 13, 7, 17, 11, 89, 1, 233, 29, 61, 47, 1597, 19, 37, 41, 421, 199, 28657, 23, 3001, 521, 53, 281, 514229, 31, 557, 2207, 19801, 3571, 141961, 107, 73, 9349, 135721, 2161, 2789, 211, 433494437, 43, 109441, ... Carmichael's theorem says that every Fibonacci number, apart from the exceptions listed above, has at least one primitive prime divisor. If n > 1, then the nth Pell number has at least one prime divisor that does not divide any earlier Pell number. The smallest primitive prime divisors of the nth Pell number, for n = 1, 2, 3, ..., are 1, 2, 5, 3, 29, 7, 13, 17, 197, 41, 5741, 11, 33461, 239, 269, 577, 137, 199, 37, 19, 45697, 23, 229, 1153, 1549, 79, 53, 113, 44560482149, 31, 61, 665857, 52734529, 103, 1800193921, 73, 593, 9369319, 389, 241, ... See also Zsigmondy's theorem References . Fibonacci numbers Theorems in number theory
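A small computational check of the Fibonacci list above makes the notion of a primitive prime divisor concrete. The sketch below is not taken from the references; the function names are chosen for illustration, and it uses sympy's factorint for integer factorization (any factorization routine would do). For each n it prints the smallest prime dividing F(n) but no earlier Fibonacci number, reporting None for the exceptional indices n = 1, 2, 6, 12.

```python
from sympy import factorint  # integer factorization; any factorizer would work

def fibonacci_up_to(n_max):
    fibs = [0, 1]
    while len(fibs) <= n_max:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs

def smallest_primitive_prime_divisor(n, fibs):
    # Primes dividing some earlier Fibonacci number F(1), ..., F(n-1).
    earlier = set()
    for m in range(1, n):
        earlier.update(factorint(fibs[m]))
    # Primes dividing F(n) that are "new", i.e. primitive for index n.
    new_primes = [p for p in factorint(fibs[n]) if p not in earlier]
    return min(new_primes) if new_primes else None  # None marks an exception

fibs = fibonacci_up_to(30)
for n in range(1, 31):
    print(n, fibs[n], smallest_primitive_prime_divisor(n, fibs))
```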
Carmichael's theorem
[ "Mathematics" ]
930
[ "Mathematical theorems", "Recurrence relations", "Fibonacci numbers", "Golden ratio", "Theorems in number theory", "Mathematical relations", "Mathematical problems", "Number theory" ]
932,823
https://en.wikipedia.org/wiki/Calcineurin
Calcineurin (CaN) is a calcium and calmodulin dependent serine/threonine protein phosphatase (also known as protein phosphatase 3, and calcium-dependent serine-threonine phosphatase). It activates the T cells of the immune system and can be blocked by drugs. Calcineurin activates nuclear factor of activated T cell cytoplasmic (NFATc), a transcription factor, by dephosphorylating it. The activated NFATc is then translocated into the nucleus, where it upregulates the expression of interleukin 2 (IL-2), which, in turn, stimulates the growth and differentiation of the T cell response. Calcineurin is the target of a class of drugs called calcineurin inhibitors, which include ciclosporin, voclosporin, pimecrolimus and tacrolimus. Structure Calcineurin is a heterodimer of a 61-kD calmodulin-binding catalytic subunit, calcineurin A and a 19-kD Ca2+-binding regulatory subunit, calcineurin B. There are three isozymes of the catalytic subunit, each encoded by a separate gene (PPP3CA, PPP3CB, and PPP3CC) and two isoforms of the regulatory subunit, also encoded by separate genes (PPP3R1, PPP3R2). Mechanism of action When an antigen-presenting cell interacts with a T cell receptor on T cells, there is an increase in the cytoplasmic level of calcium, which activates calcineurin by binding to its regulatory subunit and promoting calmodulin binding. Calcineurin induces transcription factors (NFATs) that are important in the transcription of IL-2 genes. IL-2 activates T-helper lymphocytes and induces the production of other cytokines. In this way, it governs the action of cytotoxic lymphocytes. The amount of IL-2 being produced by the T-helper cells is believed to influence the extent of the immune response significantly. Clinical relevance Rheumatic diseases Calcineurin inhibitors are prescribed for adult rheumatoid arthritis (RA) as a single drug or in combination with methotrexate. The microemulsion formulation is approved by the U.S. Food and Drug Administration for treatment of severely active RA. It is also prescribed for: psoriatic arthritis, psoriasis, acute ocular Behçet's disease, juvenile idiopathic arthritis, adult and juvenile polymyositis and dermatomyositis, adult and juvenile systemic lupus erythematosus, adult lupus membranous nephritis, systemic sclerosis, aplastic anemia, steroid-resistant nephrotic syndrome, atopic dermatitis, severe corticosteroid-dependent asthma, severe ulcerative colitis, pemphigus vulgaris, myasthenia gravis, and dry eye disease, with or without Sjögren's syndrome (administered as ophthalmic emulsion). Schizophrenia Calcineurin is linked to receptors for several brain chemicals including glutamate, dopamine and GABA. An experiment with genetically-altered mice that could not produce calcineurin showed similar symptoms as in humans with schizophrenia: impairment in working memory, attention deficits, aberrant social behavior, and several other abnormalities characteristic of schizophrenia. Diabetes Calcineurin, along with NFAT, may improve the function of diabetics' pancreatic beta cells. Thus the calcineurin inhibitor tacrolimus contributes to the frequent development of new diabetes following renal transplantation. Calcineurin/NFAT signaling is required for perinatal lung maturation and function. Organ transplantation Calcineurin inhibitors such as tacrolimus and ciclosporin are used to suppress the immune system in organ allotransplant recipients to prevent rejection of the transplanted tissue.
References Further reading External links Signal transduction EC 3.1.3
Calcineurin
[ "Chemistry", "Biology" ]
893
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
31,580,571
https://en.wikipedia.org/wiki/Liquid-crystal%20laser
A liquid-crystal laser is a laser that uses a liquid crystal as the resonator cavity, allowing selection of emission wavelength and polarization from the active laser medium. The lasing medium is usually a dye doped into the liquid crystal. Liquid-crystal lasers are comparable in size to diode lasers, but provide the continuous wide spectrum tunability of dye lasers while maintaining a large coherence area. The tuning range is typically several tens of nanometers. Self-organization at micrometer scales reduces manufacturing complexity compared to using layered photonic metamaterials. Operation may be either in continuous wave mode or in pulsed mode. History Distributed feedback lasing using Bragg reflection of a periodic structure instead of external mirrors was first proposed in 1971, predicted theoretically with cholesteric liquid crystals in 1978, achieved experimentally in 1980, and explained in terms of a photonic band gap in 1998. A United States Patent issued in 1973 described a liquid-crystal laser that uses "a liquid lasing medium having internal distributed feedback by virtue of the molecular structure of a cholesteric liquid-crystal material." Mechanism Starting with a liquid crystal in the nematic phase, the desired helical pitch (the distance along the helical axis for one complete rotation of the nematic plane subunits) can be achieved by doping the liquid crystal with a chiral molecule. For light circularly polarized with the same handedness, this regular modulation of the refractive index yields selective reflection of the wavelength given by the helical pitch, allowing the liquid-crystal laser to serve as its own resonator cavity. Photonic crystals are amenable to band theory methods, with the periodic dielectric structure playing the role of the periodic electric potential and a photonic band gap (reflection notch) corresponding to forbidden frequencies. The lower photon group velocity and higher density of states near the photonic bandgap suppress spontaneous emission and enhance stimulated emission, providing favorable conditions for lasing. If the electronic band edge falls in the photonic bandgap, electron-hole recombination is strictly suppressed. This allows for devices with high lasing efficiency, low lasing threshold, and stable frequency, where the liquid-crystal laser acts as its own waveguide. "Colossal" nonlinear change in refractive index is achievable in doped nematic-phase liquid crystals; that is, the refractive index can change with illumination intensity at a rate of about 10³ cm²/W. Most systems use a semiconductor pumping laser to achieve population inversion, though flash lamp and electrical pumping systems are possible. Tuning of the output wavelength is achieved by smoothly varying the helical pitch: as the winding changes, so does the length scale of the crystal. This in turn shifts the band edge and changes the optical path length in the lasing cavity. Applying a static electric field perpendicular to the dipole moment of the local nematic phase rotates the rod-like subunits in the hexagonal plane and reorders the chiral phase, winding or unwinding the helical pitch. Similarly, optical tuning of the output wavelength is available using laser light far from the pick-up frequency of the gain medium, with degree of rotation governed by intensity and the angle between the polarization of the incident light and the dipole moment. Reorientation is stable and reversible.
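The relation between helical pitch and emission wavelength mentioned above can be illustrated with a few lines of arithmetic. For a cholesteric helix, light of the matching circular handedness is selectively reflected in a band spanning roughly n_o·p to n_e·p, where p is the pitch and n_o, n_e are the ordinary and extraordinary refractive indices, and band-edge lasing occurs near those edges. The sketch below is illustrative only; the numerical values are typical of nematic mixtures and are not taken from this article.

```python
def cholesteric_reflection_band(pitch_nm, n_ordinary, n_extraordinary):
    """Selective-reflection band of a cholesteric helix for co-handed light.

    Returns (short edge, centre, long edge) in the same units as the pitch.
    """
    n_avg = 0.5 * (n_ordinary + n_extraordinary)
    return (n_ordinary * pitch_nm, n_avg * pitch_nm, n_extraordinary * pitch_nm)

# Illustrative numbers: a 350 nm pitch and typical nematic indices.
low, centre, high = cholesteric_reflection_band(350.0, n_ordinary=1.52, n_extraordinary=1.70)
print(f"reflection band: {low:.0f}-{high:.0f} nm (centre {centre:.0f} nm)")
# Winding or unwinding the pitch (electrically, optically or thermally)
# shifts this band, and with it the band-edge lasing wavelength.
```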
The chiral pitch of a cholesteric phase tends to unwind with increasing temperature, with a disorder-order transition to the higher symmetry nematic phase at the high end. By applying a temperature gradient perpendicular to the direction of emission and varying the location of stimulation, the frequency may be selected across a continuous spectrum. Similarly, a quasi-continuous doping gradient yields multiple laser lines from different locations on the same sample. Spatial tuning may also be accomplished using a wedge cell. The boundary conditions of the narrower cell squeeze the helical pitch by requiring a particular orientation at the edge, with discrete jumps where the outer cells rotate to the next stable orientation; frequency variation between jumps is continuous. If a defect is introduced into the liquid crystal to disturb the periodicity, a single allowed mode may be created inside of the photonic bandgap, reducing power leeching by spontaneous emission at adjacent frequencies. Defect mode lasing was first predicted in 1987, and was demonstrated in 2003. While most such thin films lase on the axis normal to the film's surface, some will lase on a conic angle around that axis. Applications Biomedical sensing: small size, low cost, and low power consumption offer a variety of advantages in biomedical sensing applications. Potentially, liquid-crystal lasers could form the basis for "lab on a chip" devices that provide immediate readings without sending a sample away to a separate lab. Medical: low emission power limits such medical procedures as cutting during surgeries, but liquid-crystal lasers show potential to be used in microscopy techniques and in vivo techniques such as photodynamic therapy. Display screens: liquid-crystal-laser-based displays offer most of the advantages of standard liquid-crystal displays, but the low spectral spread gives more precise control over color. Individual elements are small enough to act as single pixels while retaining high brightness and color definition. A system in which each pixel is a single spatially tuned device could avoid the sometimes long relaxation times of dynamic tuning, and could emit any color using spatial addressing and the same monochromatic pumping source. Environmental sensing: using a material with a helical pitch highly sensitive to temperature, electric field, magnetic field, or mechanical strain, color shift of the output laser provides a simple, direct measurement of environmental conditions. References Bibliography Further reading External links a list of papers related to photonic properties of chiral liquid crystals Laser types Liquid crystals Optical materials Photonics
Liquid-crystal laser
[ "Physics" ]
1,179
[ "Materials", "Optical materials", "Matter" ]
31,583,410
https://en.wikipedia.org/wiki/Elitzur%27s%20theorem
In quantum field theory and statistical field theory, Elitzur's theorem states that in gauge theories, the only operators that can have non-vanishing expectation values are ones that are invariant under local gauge transformations. An important implication is that gauge symmetry cannot be spontaneously broken. The theorem was first proved in 1975 by Shmuel Elitzur in lattice field theory, although the same result is expected to hold in the continuum limit. The theorem shows that the naive interpretation of the Higgs mechanism as the spontaneous symmetry breaking of a gauge symmetry is incorrect, although the phenomenon can be reformulated entirely in terms of gauge invariant quantities in what is known as the Fröhlich–Morchio–Strocchi mechanism. Theory A field theory admits different types of symmetries, with the two most common ones being global and local symmetries. Global symmetries are field transformations acting the same way everywhere while local symmetries act on fields in a position dependent way. The latter correspond to redundancies in the description of the system. This is a consequence of Noether's second theorem which states that each local symmetry degree of freedom corresponds to a relation among the Euler–Lagrange equations, making the system underdetermined. Underdeterminacy requires gauge fixing of the non-propagating degrees of freedom so that the equations of motion admit a unique solution. Spontaneous symmetry breaking occurs when the action of a theory has a symmetry but the vacuum state violates this symmetry. In that case there will exist a local operator that is non-invariant under the symmetry and that has a nonzero vacuum expectation value. Such non-invariant local operators always have vanishing vacuum expectation values for finite size systems, prohibiting spontaneous symmetry breaking. This occurs because over large timescales, finite systems always transition between all possible ground states, averaging away the expectation value of the operator. While spontaneous symmetry breaking can occur for global symmetries, Elitzur's theorem states that the same is not the case for gauge symmetries; all vacuum expectation values of gauge non-invariant operators are vanishing, even in systems of infinite size. On the lattice this follows from the fact that integrating gauge non-invariant observables over a group measure always yields zero for compact gauge groups. Positivity of the measure and gauge invariance are sufficient to prove the theorem. This is also an explanation for why gauge symmetries are mere redundancies in lattice field theories, where the equations of motion need not define a well-posed problem as they do not need to be solved. Instead, Elitzur's theorem shows that any observable that is not invariant under the symmetry has a vanishing expectation value, making it unobservable and therefore redundant. Showing that a system admits spontaneous symmetry breaking requires introducing a weak external source field that breaks the symmetry and gives rise to a preferred ground state. The system is then taken to the thermodynamic limit after which the external source field is switched off. If the vacuum expectation value of symmetry non-invariant operators is nonzero in this limit then there is spontaneous symmetry breaking. Physically it means that the system never leaves the original ground state into which it was placed by the external field.
For global symmetries this occurs because the energy barrier between the various ground states is proportional to the volume, so in the thermodynamic limit this diverges, locking the system into the ground state. Local symmetries get around this construction because the energy barrier between two ground states depends only on local features so transitions to different gauge related ground states can occur locally and do not require the field to change everywhere at the same time as it does for global symmetries. Limitations and implications There are a number of limitations to the theorem. In particular, spontaneous symmetry breaking of a gauge symmetry is allowed in a system with infinite spatial dimensions or a symmetry with an infinite number of variables, since in these cases there are infinite energy barriers between gauge related configurations. The theorem also does not apply to residual gauge degrees of freedom nor large gauge transformations, which can in principle be spontaneously broken. Furthermore, all current proofs rely on a lattice field theory formulation so they may be invalid in a genuine continuum field theory. It is therefore in principle plausible that there may exist exotic continuum theories for which gauge symmetries can be spontaneously broken, although such a scenario remains unlikely due to the absence of any known examples. Landau's classification of phases uses expectation values of local operators to determine the phase of the system. However, Elitzur's theorem shows that this approach is inadmissible in certain systems such as Yang–Mills theories for which no local operator can act as an order operator for confinement. Instead, to get around the theorem requires constructing nonlocal gauge invariant operators, whose expectation values need not be zero. The most common ones are Wilson loops and their thermal equivalents, Polyakov loops. Another nonlocal operator that acts as an order operator is the 't Hooft loop. Since gauge symmetries cannot be spontaneously broken, this calls into question the validity of the Higgs mechanism. In the usual presentation, the Higgs field has a potential that appears to give the Higgs field a non-vanishing vacuum expectation value. However, this is merely a consequence of imposing a gauge fixing, usually the unitary gauge. Any value of the vacuum expectation value can be acquired by an appropriate gauge fixing choice. Calculating the expectation value in a gauge invariant way always gives zero, in agreement with Elitzur's theorem. The Higgs mechanism can however be reformulated entirely in a gauge invariant way in what is known as the Fröhlich–Morchio–Strocchi mechanism which does not involve spontaneous symmetry breaking of any symmetry. For non-abelian gauge groups that have a subgroup, this mechanism agrees with the Higgs mechanism, but for other gauge groups there can appear discrepancies between the two approaches. Elitzur's theorem can also be generalized to a larger notion of local symmetries where in a D-dimensional space, there can be symmetries that act uniformly on d-dimensional hyperplanes. In this view, global symmetries act on D-dimensional hyperplanes while local symmetries act on 0-dimensional ones. The generalized Elitzur's theorem then provides bounds on the vacuum expectation values of operators that are non-invariant under such d-dimensional symmetries. This theorem has numerous applications in condensed matter systems where such symmetries appear.
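The lattice statement quoted above, that integrating a gauge non-invariant observable over the group measure gives zero, can be verified directly by brute force on a tiny lattice. The sketch below is an illustration and is not taken from any reference: it enumerates all configurations of Z2 gauge theory on a 2×2 periodic lattice and shows that the expectation value of a single link (gauge non-invariant) vanishes while that of a plaquette (gauge invariant) does not.

```python
import itertools
import math

L = 2        # 2x2 periodic lattice: 8 link variables, 2**8 = 256 configurations
beta = 1.0   # gauge coupling (illustrative value)

links = [(x, y, mu) for x in range(L) for y in range(L) for mu in (0, 1)]

def plaquette(cfg, x, y):
    # Product of the four Z2 links around the elementary square at (x, y).
    return (cfg[(x, y, 0)] * cfg[((x + 1) % L, y, 1)]
            * cfg[(x, (y + 1) % L, 0)] * cfg[(x, y, 1)])

Z = link_sum = plaq_sum = 0.0
for values in itertools.product((+1, -1), repeat=len(links)):
    cfg = dict(zip(links, values))
    action = sum(plaquette(cfg, x, y) for x in range(L) for y in range(L))
    weight = math.exp(beta * action)           # Wilson-type Z2 gauge action
    Z += weight
    link_sum += weight * cfg[(0, 0, 0)]        # gauge NON-invariant observable
    plaq_sum += weight * plaquette(cfg, 0, 0)  # gauge invariant observable

print("<single link> =", link_sum / Z)   # zero up to rounding, as the theorem requires
print("<plaquette>   =", plaq_sum / Z)   # strictly positive for beta > 0
```

The single-link average vanishes identically because a gauge transformation at either endpoint flips that link while leaving the action unchanged, pairing every configuration with one of equal weight and opposite link value; no such cancellation affects the plaquette.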
See also Mermin–Wagner theorem References External links Notes on lattice gauge theory by A. Muramatsu Gauge theories Lattice field theory Symmetry Theorems in quantum mechanics Statistical mechanics theorems
Elitzur's theorem
[ "Physics", "Mathematics" ]
1,366
[ "Theorems in dynamical systems", "Theorems in quantum mechanics", "Equations of physics", "Quantum mechanics", "Statistical mechanics theorems", "Theorems in mathematical physics", "Geometry", "Statistical mechanics", "Symmetry", "Physics theorems" ]
31,585,964
https://en.wikipedia.org/wiki/Industrial%20catalysts
The first time a catalyst was used in industry was in 1746 by J. Roebuck in the manufacture of lead chamber sulfuric acid. Since then catalysts have been in use in a large portion of the chemical industry. At first only pure components were used as catalysts, but after the year 1900 multicomponent catalysts were studied and are now commonly used in industry. In the chemical industry and industrial research, catalysis plays an important role. Different catalysts are in constant development to fulfil economic, political and environmental demands. When using a catalyst, it is possible to replace a polluting chemical reaction with a more environmentally friendly alternative. Today, and in the future, this can be vital for the chemical industry. In addition, it is important for a company or researcher to pay attention to market development. If a company's catalyst is not continually improved, another company can make progress in research on that particular catalyst and gain market share. For a company, a new and improved catalyst can be a huge advantage for a competitive manufacturing cost. It is extremely expensive for a company to shut down the plant because of an error in the catalyst, so the correct selection of a catalyst or a new improvement can be key to industrial success. To achieve the best understanding and development of a catalyst it is important that different special fields work together. These fields include organic chemistry, analytical chemistry, inorganic chemistry, chemical engineering and surface chemistry. The economics must also be taken into account. One of the issues that must be considered is whether the company should spend money on doing the catalyst research itself or buy the technology from someone else. As the analytical tools are becoming more advanced, the catalysts used in the industry are improving. One example of an improvement can be to develop a catalyst with a longer lifetime than the previous version. Some of the advantages an improved catalyst gives, and that affect people's lives, are: cheaper and more effective fuel, new drugs and medications, and new polymers. Some of the large chemical processes that use catalysis today are the production of methanol and ammonia. Both methanol and ammonia synthesis take advantage of the water-gas shift reaction and heterogeneous catalysis, while other chemical industries use homogeneous catalysis. If the catalyst exists in the same phase as the reactants it is said to be homogeneous; otherwise it is heterogeneous. Water gas shift reaction The water gas shift reaction was first used industrially at the beginning of the 20th century. Today the WGS reaction is used primarily to produce hydrogen that can be used for further production of methanol and ammonia. WGS reaction In the reaction, carbon monoxide (CO) reacts with water (H2O) to form carbon dioxide (CO2) and hydrogen (H2). The reaction is exothermic, with ΔH = −41.1 kJ/mol, and has an adiabatic temperature rise of 8–10 °C per percent CO converted to CO2 and H2. The most common catalysts used in the water-gas shift reaction are the high temperature shift (HTS) catalyst and the low temperature shift (LTS) catalyst. The HTS catalyst consists of iron oxide stabilized by chromium oxide, while the LTS catalyst is based on copper. The main purpose of the LTS catalyst is to reduce the CO content in the reformate, which is especially important in ammonia production for a high yield of H2.
Both catalysts are necessary for thermal stability, since using the LTS reactor alone increases exit-stream temperatures to unacceptable levels. The equilibrium constant for the reaction is given as Kp = (pCO2 · pH2) / (pCO · pH2O). Low temperatures will therefore shift the reaction to the right, and more products will be produced. The equilibrium constant is extremely dependent on the reaction temperature; for example, Kp is equal to 228 at 200 °C but only 11.8 at 400 °C. The WGS reaction can be performed both homogeneously and heterogeneously, but only the heterogeneous method is used commercially. High temperature shift (HTS) catalyst The first step in the WGS reaction is the high temperature shift, which is carried out at temperatures between 320 °C and 450 °C. As mentioned before, the catalyst is a composition of iron oxide, Fe2O3 (90-95%), and chromium oxides, Cr2O3 (5-10%), which have an ideal activity and selectivity at these temperatures. When preparing this catalyst, one of the most important steps is washing to remove sulfate that can turn into hydrogen sulfide and poison the LTS catalyst later in the process. Chromium is added to the catalyst to stabilize the catalyst activity over time and to delay sintering of the iron oxide. Sintering will decrease the active catalyst area, so by decreasing the sintering rate the lifetime of the catalyst will be extended. The catalyst is usually used in pellet form, and the pellet size plays an important role. Large pellets will be strong, but the reaction rate will be limited. In the end, the dominant phase in the catalyst consists of Cr3+ in α-Fe2O3, but the catalyst is still not active. To be active, α-Fe2O3 must be reduced to Fe and CrO3 must be reduced to Cr in the presence of H2. This usually happens in the reactor start-up phase, and because the reduction reactions are exothermic the reduction should happen under controlled circumstances. The lifetime of the iron-chrome catalyst is approximately 3–5 years, depending on how the catalyst is handled. Even though a great deal of research has been done on the mechanism of the HTS catalyst, there is no final agreement on the kinetics/mechanism. Research has narrowed it down to two possible mechanisms: a regenerative redox mechanism and an adsorptive (associative) mechanism. The redox mechanism is given below: First a CO molecule reduces a surface oxygen center, yielding CO2 and a vacant surface center: The vacant site is then reoxidized by water, and the oxide center is regenerated: The adsorptive mechanism assumes that a formate species is produced when an adsorbed CO molecule reacts with a surface hydroxyl group: The formate then decomposes in the presence of steam: Low temperature shift (LTS) catalyst The low temperature shift is the second stage in the process, and is designed to take advantage of the higher hydrogen equilibrium at low temperatures. The reaction is carried out between 200 °C and 250 °C, and the most commonly used catalyst is based on copper. While the HTS reactor uses an iron-chrome based catalyst, the copper catalyst is more active at lower temperatures, thereby yielding a lower equilibrium concentration of CO and a higher equilibrium concentration of H2. The disadvantage of a copper catalyst is that it is very sensitive to sulfide poisoning; the future use of, for example, a cobalt–molybdenum catalyst could solve this problem. The catalyst mainly used in the industry today is a copper-zinc-alumina (Cu/ZnO/Al2O3) based catalyst. The LTS catalyst also has to be activated by reduction before it can be used.
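To make the temperature sensitivity of the equilibrium constant concrete, the short sketch below evaluates a commonly quoted empirical correlation for the WGS equilibrium constant, Kp ≈ exp(4577.8/T − 4.33) with T in kelvin (often attributed to Moe, 1962). This correlation is an assumption of the example rather than a formula taken from this article; it reproduces values close to, though not identical with, the figures quoted above (about 210 at 200 °C and about 12 at 400 °C).

```python
import math

def wgs_equilibrium_constant(temperature_celsius):
    """Water-gas shift Kp from a Moe-type correlation (an assumed formula)."""
    T = temperature_celsius + 273.15          # kelvin
    return math.exp(4577.8 / T - 4.33)

for t in (200, 250, 300, 350, 400):
    print(f"{t:3d} degC  Kp ~ {wgs_equilibrium_constant(t):6.1f}")
# Kp drops by more than an order of magnitude between 200 degC and 400 degC,
# which is why the low-temperature shift stage gives the lower residual CO.
```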
The reduction reaction CuO + H2 → Cu + H2O is highly exothermic and should be conducted in dry gas for an optimal result. As for the HTS catalyst mechanism, two similar reaction mechanisms are suggested. The first mechanism that was proposed for the LTS reaction was a redox mechanism, but later evidence showed that the reaction can proceed via associated intermediates. The different intermediates that are suggested are HOCO, HCO and HCOO. As of 2009, three mechanisms in total have been proposed for the water-gas shift reaction over Cu(111), given below. Intermediate mechanism (usually called associative mechanism): An intermediate is first formed and then decomposes into the final products: Associative mechanism: CO2 is produced from the reaction of CO with OH without the formation of an intermediate: Redox mechanism: Water dissociation yields surface oxygen atoms which react with CO to produce CO2: It is not said that just one of these mechanisms controls the reaction; it is possible that several of them are active. Q.-L. Tang et al. have suggested that the most favorable mechanism is the intermediate mechanism (with HOCO as intermediate), followed by the redox mechanism, with the rate-determining step being the water dissociation. For both the HTS catalyst and the LTS catalyst the redox mechanism is the oldest theory and most published articles support this theory, but as technology has developed the adsorptive mechanism has attracted more interest. One reason the literature does not agree on a single mechanism may be that experiments are carried out under different assumptions. Carbon Monoxide CO must be produced for the WGS reaction to take place. This can be done in different ways from a variety of carbon sources, such as passing steam over coal (C + H2O → CO + H2), steam reforming of methane over a nickel catalyst (CH4 + H2O → CO + 3 H2), or by using biomass. Both the reactions shown above are highly endothermic and can be coupled to an exothermic partial oxidation. The products of CO and H2 are known as syngas. When dealing with a catalyst and CO, it is common to assume that the intermediate CO-Metal is formed before the intermediate reacts further into the products. When designing a catalyst this is important to remember. The strength of interaction between the CO molecule and the metal should be strong enough to provide a sufficient concentration of the intermediate, but not so strong that the reaction will not continue. CO is a common molecule to use in a catalytic reaction, and when it interacts with a metal surface it is actually the molecular orbitals of CO that interact with the d-band of the metal surface. When considering a molecular orbital (MO) diagram, CO can act as a σ-donor via the lone pair of electrons on C, and as a π-acceptor ligand in transition metal complexes. When a CO molecule is adsorbed on a metal surface, the d-band of the metal will interact with the molecular orbitals of CO. It is possible to look at a simplified picture, and only consider the LUMO (2π*) and HOMO (5σ) of CO. The overall effect of the σ-donation and the π-back donation is that a strong bond between C and the metal is formed and, in addition, the bond between C and O is weakened. The latter effect is due to charge depletion of the CO 5σ bonding orbital and a charge increase in the CO 2π* antibonding orbital. Many researchers seem to agree that the surface of Cu/Al2O3/ZnO is most similar to the Cu(111) surface.
Since copper is the main catalyst and the active phase in the LTS catalyst, many experiments have been done with copper. Experiments have been carried out on Cu(110) and Cu(111), and Arrhenius plots derived from the reaction rates show that Cu(110) has a faster reaction rate and a lower activation energy. This can be due to the fact that Cu(111) is more closely packed than Cu(110). Methanol production Production of methanol is an important industry today and methanol is one of the largest volume carbonylation products. The process uses syngas as feedstock and for that reason the water gas shift reaction is important for this synthesis. The most important reaction based on methanol is the decomposition of methanol to yield carbon monoxide and hydrogen. Methanol is therefore an important raw material for production of CO and H2 that can be used in generation of fuel. BASF was the first company (in 1923) to produce methanol on a large scale, at that time using a sulfur-resistant ZnO/Cr2O3 catalyst. The feed gas was produced by gasification of coal. Today the synthesis gas is usually manufactured via steam reforming of natural gas. The most effective catalysts for methanol synthesis are Cu, Ni, Pd and Pt, while the most common metals used for support are Al and Si. In 1966 ICI (Imperial Chemical Industries) developed a process that is still in use today. The process is a low-pressure process that uses a Cu/ZnO/Al2O3 catalyst in which copper is the active material. This catalyst is actually the same as the one used for the low-temperature shift in the WGS reaction. The reactions described below, the hydrogenation of CO (CO + 2 H2 → CH3OH) and of CO2 (CO2 + 3 H2 → CH3OH + H2O), are carried out at 250 °C and 5-10 MPa. Both of these reactions are exothermic and proceed with volume contraction. Maximum yield of methanol is therefore obtained at low temperatures and high pressure and with use of a catalyst that has a high activity at these conditions. A catalyst with sufficiently high activity at low temperature still does not exist, and this is one of the main reasons that companies keep doing research and catalyst development. A reaction mechanism for methanol synthesis has been suggested by Chinchen et al. Today there are four different ways to catalytically produce hydrogen from methanol, and all reactions can be carried out using a transition metal catalyst (Cu, Pd): Steam reforming The reaction is given as CH3OH + H2O → CO2 + 3 H2. Steam reforming is a good source for production of hydrogen, but the reaction is endothermic. The reaction can be carried out over a copper-based catalyst, but the reaction mechanism is dependent on the catalyst. For a copper-based catalyst two different reaction mechanisms have been proposed, a decomposition-water-gas shift sequence and a mechanism that proceeds via methanol dehydrogenation to methyl formate. The first mechanism aims at methanol decomposition followed by the WGS reaction and has been proposed for the Cu/ZnO/Al2O3 catalyst. The mechanism of the methyl formate route can depend on the composition of the catalyst, and a corresponding mechanism has been proposed over Cu/ZnO/Al2O3. When methanol is almost completely converted, CO is produced as a secondary product via the reverse water-gas shift reaction. Methanol decomposition The second way to produce hydrogen from methanol is by methanol decomposition: CH3OH → CO + 2 H2. As the enthalpy shows, the reaction is endothermic and this can be taken further advantage of in the industry.
This reaction is the opposite of the methanol synthesis from syngas, and the most effective catalysts seem to be Cu, Ni, Pd and Pt, as mentioned before. Often, a Cu/ZnO-based catalyst is used at temperatures between 200 and 300 °C, but by-products like dimethyl ether, methyl formate, methane and water are common. The reaction mechanism is not fully understood, and two possible mechanisms have been proposed (as of 2002): one producing CO2 and H2 by decomposition of formate intermediates and the other producing CO and H2 via a methyl formate intermediate. Partial oxidation Partial oxidation is a third way of producing hydrogen from methanol. The reaction, CH3OH + ½ O2 → CO2 + 2 H2, is often carried out with air or oxygen as the oxidant. The reaction is exothermic and has, under favorable conditions, a higher reaction rate than steam reforming. The catalyst used is often Cu (Cu/ZnO) or Pd, and they differ in qualities such as by-product formation, product distribution and the effect of oxygen partial pressure. Combined reforming Combined reforming is a combination of partial oxidation and steam reforming and is the last reaction that is used for hydrogen production. The general equation is a weighted sum of the steam reforming and partial oxidation reactions, with stoichiometric coefficients for steam reforming and partial oxidation, respectively. The reaction can be either endothermic or exothermic depending on the conditions, and it combines the advantages of steam reforming and partial oxidation. Ammonia synthesis Ammonia synthesis was discovered by Fritz Haber, using iron catalysts. The ammonia synthesis advanced between 1909 and 1913, and two important concepts were developed: the benefits of a promoter and the poisoning effect (see catalysis for more details). Ammonia production was one of the first commercial processes that required the production of hydrogen, and the cheapest and best way to obtain hydrogen was via the water-gas shift reaction. The Haber–Bosch process is the most common process used in the ammonia industry. A lot of research has been done on the catalyst used in the ammonia process, but the main catalyst that is used today is not that dissimilar to the one that was first developed. The catalyst the industry uses is a promoted iron catalyst, where the promoters can be K2O (potassium oxide), Al2O3 (aluminium oxide) and CaO (calcium oxide) and the basic catalytic material is iron. It is most common to use fixed-bed reactors for the synthesis. The main ammonia reaction is N2 + 3 H2 → 2 NH3. The produced ammonia can be used further in production of nitric acid via the Ostwald process. See also Ammonia Chemical plant Chemical industry References Catalysis
Industrial catalysts
[ "Chemistry" ]
3,501
[ "Catalysis", "Chemical kinetics" ]
31,586,191
https://en.wikipedia.org/wiki/Dana%2030
The Dana/Spicer Model 30 is an automotive axle manufactured by Dana Holding Corporation. It has been manufactured as a beam axle and as an independent suspension axle, with several versions. General specifications Ring Gear measures OEM Inner axle shaft spline count: 27 GAWR up to 2770 lbs. Dana 30 solid axles Dana 23 The Dana Spicer 23 is an axle on which the Dana 30 is loosely based, with improvements made over time. This axle was only made for the rear of vehicles. Full floating and semi floating variations were produced. Dana 25 The Dana Spicer 25 was based on the Dana 23 and was made only as a front axle for four-wheel drive vehicles. This was the company's first front drive axle. Dana 27 The Dana Spicer 27 unit phased out the Dana 23 and Dana 25 units in the 1960s. Independent front suspension Dana 30 axle Jeep Liberty 4x4 models use the Dana 30 in the form of independent front suspension (IFS). The AMC Eagle front axle is also a Dana 30 IFS. References Automotive engineering Automobile axles
Dana 30
[ "Engineering" ]
212
[ "Automotive engineering", "Mechanical engineering by discipline" ]
31,587,252
https://en.wikipedia.org/wiki/Information%20projection
In information theory, the information projection or I-projection of a probability distribution q onto a set of distributions P is the distribution p* in P that minimizes the Kullback–Leibler divergence DKL(p ‖ q) over all p in P, where DKL(p ‖ q) is the Kullback–Leibler divergence from q to p. Viewing the Kullback–Leibler divergence as a measure of distance, the I-projection p* is the "closest" distribution to q of all the distributions in P. The I-projection is useful in setting up information geometry, notably because of the following inequality, valid when P is convex: DKL(p ‖ q) ≥ DKL(p ‖ p*) + DKL(p* ‖ q) for all p in P. This inequality can be interpreted as an information-geometric version of Pythagoras' triangle-inequality theorem, where KL divergence is viewed as squared distance in a Euclidean space. It is worthwhile to note that since DKL(p ‖ q) ≥ 0 and is continuous in p, if P is closed and non-empty, then there exists at least one minimizer to the optimization problem framed above. Furthermore, if P is convex, then the optimum distribution is unique. The reverse I-projection, also known as moment projection or M-projection, is the distribution p* in P that minimizes DKL(q ‖ p) over all p in P. Since the KL divergence is not symmetric in its arguments, the I-projection and the M-projection will exhibit different behavior. For the I-projection, p* will typically under-estimate the support of q and will lock onto one of its modes. This is due to the requirement that p*(x) = 0 whenever q(x) = 0, to make sure the KL divergence stays finite. For the M-projection, p* will typically over-estimate the support of q. This is due to the requirement that p*(x) > 0 whenever q(x) > 0, to make sure the KL divergence stays finite. The reverse I-projection plays a fundamental role in the construction of optimal e-variables. The concept of information projection can be extended to arbitrary f-divergences and other divergences. See also Sanov's theorem References K. Murphy, "Machine Learning: a Probabilistic Perspective", The MIT Press, 2012. Information theory
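As a concrete illustration, the I-projection onto a set defined by a linear constraint (here, distributions over die faces with a prescribed mean) is an exponential tilting of q; this is the classic minimum-relative-entropy, or maximum-entropy, construction. The sketch below is illustrative and not drawn from the references: the function names are invented, and the single-parameter bisection relies on the tilted mean being monotone in the tilting parameter.

```python
import numpy as np

def i_projection_mean_constraint(q, values, target_mean, tol=1e-12):
    """I-projection of q onto P = {p : E_p[X] = target_mean}.

    The minimizer has the exponential-family form p*(x) proportional to
    q(x) * exp(lam * x); the tilted mean increases with lam, so bisection
    on lam finds the projection.
    """
    def tilt(lam):
        w = q * np.exp(lam * (values - values.mean()))  # centred for stability
        return w / w.sum()

    lo, hi = -60.0, 60.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tilt(mid) @ values < target_mean:
            lo = mid
        else:
            hi = mid
    return tilt(0.5 * (lo + hi))

values = np.arange(1, 7, dtype=float)   # faces of a die
q = np.full(6, 1 / 6)                   # uniform reference distribution
p_star = i_projection_mean_constraint(q, values, target_mean=4.5)
print("p* =", np.round(p_star, 4))
print("mean =", round(float(p_star @ values), 4))
print("KL(p*||q) =", round(float(np.sum(p_star * np.log(p_star / q))), 4))
```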
Information projection
[ "Mathematics", "Technology", "Engineering" ]
374
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
6,844,674
https://en.wikipedia.org/wiki/Dangerously%20irrelevant%20operator
In statistical mechanics and quantum field theory, a dangerously irrelevant operator (or dangerous irrelevant operator) is an operator which is irrelevant at a renormalization group fixed point, yet affects the infrared (IR) physics significantly (e.g. because the vacuum expectation value (VEV) of some field depends sensitively upon the coefficient of this operator). Critical phenomena In the theory of critical phenomena, the free energy of a system near the critical point depends analytically on the coefficients of generic (not dangerous) irrelevant operators, while the dependence on the coefficients of dangerously irrelevant operators is non-analytic ( p. 49). The presence of dangerously irrelevant operators leads to the violation of the hyperscaling relation among the critical exponents above the upper critical dimension. The simplest example ( p. 93) is the critical point of the Ising ferromagnet in more than four dimensions, which is a gaussian theory (a free massless scalar), but the leading irrelevant perturbation, the quartic interaction, is dangerously irrelevant. Another example occurs for the Ising model with random-field disorder, where the fixed point occurs at zero temperature, and the temperature perturbation is dangerously irrelevant ( p. 164). Quantum field theory Let us suppose there is a field φ with a potential depending upon two parameters, a and b, where a multiplies a lower power of the field with a negative sign and b a higher power with a positive sign. If b is zero, there is no stable equilibrium. By power counting, the scaling dimension of the coefficient b of the higher-power term can be negative, in which case b is an irrelevant parameter. However, the crucial point is that the VEV of φ depends very sensitively upon b, at least for small values of b. Because the nature of infrared physics also depends upon the VEV, it looks very different even for a tiny change in b, not because the physics in the vicinity of the fixed point changes much — it hardly changes at all — but because the VEV we are expanding about has changed enormously. Supersymmetric models with a modulus can often have dangerously irrelevant parameters. Other uses of the term Consider a renormalization group (RG) flow triggered at short distances by a relevant perturbation of an ultra-violet (UV) fixed point, and flowing at long distances to an infra-red (IR) fixed point. It may be possible (e.g. in perturbation theory) to monitor how dimensions of UV operators change along the RG flow. In such a situation, one sometimes calls dangerously irrelevant a UV operator whose scaling dimension makes it irrelevant at short distances but receives a negative correction along the renormalization group flow, so that the operator becomes relevant at long distances. This usage of the term is different from the one originally introduced in statistical physics. References Renormalization group
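The mechanism can be made explicit with a generic two-term potential; the field name φ, the powers n and m, and the couplings a and b below are chosen here for illustration and are not the article's original notation.

```latex
% Illustrative two-term potential (notation chosen here, not from the source)
V(\varphi) \;=\; -\,a\,\varphi^{\,n} + b\,\varphi^{\,m},
\qquad a,\, b > 0,\quad m > n,
\qquad\Longrightarrow\qquad
\langle\varphi\rangle \;=\; \left(\frac{n\,a}{m\,b}\right)^{1/(m-n)} .
```

Even if b is irrelevant by power counting, setting b = 0 leaves the potential unbounded below with no stable minimum, and the VEV diverges as b tends to zero, so the infrared physics depends non-analytically on b; this is the hallmark of a dangerously irrelevant coupling.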
Dangerously irrelevant operator
[ "Physics" ]
566
[ "Physical phenomena", "Critical phenomena", "Quantum mechanics", "Renormalization group", "Statistical mechanics", "Quantum physics stubs" ]
6,847,430
https://en.wikipedia.org/wiki/Sec-Butyllithium
sec-Butyllithium is an organometallic compound with the formula CH3CHLiCH2CH3, abbreviated sec-BuLi or s-BuLi. This chiral organolithium reagent is used as a source of sec-butyl carbanion in organic synthesis. Synthesis sec-BuLi can be prepared by the reaction of sec-butyl halides with lithium metal: Properties Physical properties sec-Butyllithium is a colorless viscous liquid. Using mass spectrometry, it was determined that the pure compound has a tetrameric structure. It also exists as tetramers when dissolved in organic solvents such as benzene, cyclohexane or cyclopentane. 6Li-NMR spectroscopy has shown that the cyclopentane solution has a hexameric structure at temperatures below −41 °C. In electron-donating solvents such as tetrahydrofuran, there exists an equilibrium between monomeric and dimeric forms. Chemical properties The carbon-lithium bond is highly polar, rendering the carbon basic, as in other organolithium reagents. sec-Butyllithium is more basic than the primary organolithium reagent, n-butyllithium. It is also more sterically hindered. sec-BuLi is employed for deprotonations of particularly weak carbon acids where the more conventional reagent n-BuLi is unsatisfactory. It is, however, so basic that its use requires greater care than for n-BuLi. For example, diethyl ether is attacked by sec-BuLi at room temperature in minutes, whereas ether solutions of n-BuLi are stable. The compound decomposes slowly at room temperature and more rapidly at higher temperatures, giving lithium hydride and a mixture of butenes. Applications Many transformations involving sec-butyllithium are similar to those involving other organolithium reagents. In combination with sparteine as a chiral auxiliary, sec-butyllithium is useful in enantioselective deprotonations. It is also effective for lithiation of arenes. References Organolithium compounds Sec-Butyl compounds
Sec-Butyllithium
[ "Chemistry" ]
472
[ "Organolithium compounds", "Reagents for organic chemistry" ]
6,847,848
https://en.wikipedia.org/wiki/Ternary%20complex
A ternary complex is a protein complex containing three different molecules that are bound together. In structural biology, ternary complex can also be used to describe a crystal containing a protein with two small molecules bound, such as a cofactor and a substrate; or a complex formed between two proteins and a single substrate. In immunology, ternary complex can refer to the MHC–peptide–T-cell-receptor complex formed when T cells recognize epitopes of an antigen. Another important example is the ternary complex formed during eukaryotic translation, composed of eIF2 + GTP + Met-tRNAiMet. A ternary complex can be a complex formed between two substrate molecules and an enzyme. This is seen in multi-substrate enzyme-catalyzed reactions where two substrates and two products can be formed. The ternary complex is an intermediate species in this type of enzyme-catalyzed reaction. An example of a ternary complex is seen in the random-order mechanism or the compulsory-order mechanism of enzyme catalysis for multiple substrates. The term ternary complex can also refer to a polymer formed by electrostatic interactions. References Protein complexes Trevor Palmer (Enzymes, 2nd edition)
Ternary complex
[ "Chemistry", "Biology" ]
255
[ "Protein stubs", "Biotechnology stubs", "Molecular biology stubs", "Biochemistry stubs", "Molecular biology", "Biochemistry" ]
6,849,966
https://en.wikipedia.org/wiki/Big%20Dig%20ceiling%20collapse
The Big Dig ceiling collapse occurred on July 10, 2006, when a concrete ceiling panel and debris weighing and measuring fell in Boston's Fort Point Channel Tunnel (which connects to the Ted Williams Tunnel). The panel fell on a car traveling on the two-lane ramp connecting northbound I-93 to eastbound I-90 in South Boston, killing a passenger and injuring the driver. Investigation and repair of the collapse caused a section of the Big Dig project to be closed for almost a full year, causing chronic traffic backups. Cause The east ends of the westbound and eastbound connector tunnels were designed and constructed in the same manner. Both ends of the tunnel were built sooner than the connecting section, in order to allow the D Street bridge above to be constructed sooner. The end sections had not been designed to incorporate a hanging ceiling system like that used in the connecting section. The collapse of the ceiling structure began with the simultaneous creep-type failure of several anchors embedded in epoxy in the tunnel's roof slab. Each of the panel's intersecting connection points consists of several individual bolts anchored into the roof slab concrete. The failure of a group of anchors set off a chain reaction which caused other adjacent connection groups to creep then fail, dropping of concrete to the roadway below. Numerous problems with this same system of bolts and epoxy in the Ted Williams Tunnel had been previously revealed in a 1998 Office of the Inspector General report. Not only were the bolts too short, but the epoxy used to glue the bolts into the concrete was not up to standard. The state Turnpike Authority and the Federal Highway Administration, citing the ongoing criminal investigation, refused requests received after the accident to release documents relating to the work conducted along the Seaport connector, including: Deficiency reports that would have shown problems flagged during initial work on the tunnel. Construction change orders that would have shown costly repairs and contract revisions that occurred because of deficiencies. Inspection reports and other documents that would show who would have knowledge of the workmanship and building material quality. One year earlier, US House Representative Stephen Lynch also had trouble obtaining records regarding the Big Dig tunnel leaks for the Congress' Committee on Government Oversight. Aftermath and response After the ceiling collapse, Attorney General Tom Reilly described the tunnel as a crime scene and issued subpoenas to the companies and individuals responsible for the tunnel construction and testing. Governor Mitt Romney returned from a vacation in New Hampshire to view the condition of the tunnels. The Governor ordered the closure of the connecting roads that lead into the Fort Point Channel Tunnel and several ramps to the westbound section from within the city. These closures caused dramatic overflow congestion throughout the city as motorists sought alternate routes to and from Logan International Airport and several other key arterial routes. Beyond the difficulties posed within the city, the Fort Point Channel Tunnel and Ted Williams Tunnel link the Massachusetts Turnpike and Interstate 93 to Logan, so this also blocked a key inbound link for airport travelers coming from outside the city, forcing them to seek alternate routes like the Callahan Tunnel or follow poorly marked detours that wound through the city, often resulting in additional travel times of one hour or more. 
The legislature approved the governor's plan to assume oversight of the investigation into the collapse (as Romney had only gained office in 2003, long after any decisions about the construction had been made, he was seen as a good choice for an independent investigator), taking responsibility away from the Massachusetts Turnpike Authority, and additionally allocating $20 million for a "stem to stern" safety review of the Central Artery system. At the request of all the members of the Massachusetts congressional delegation, the National Transportation Safety Board dispatched a six-member civil engineering team to Boston to inspect the accident scene and determine whether a full-scale investigation was warranted. Problems identified Safety inspections following the accident identified 242 potentially dangerous bolt fixtures supporting the ceiling tiles in the Interstate 90 connector tunnel. As problems throughout the tunnels were identified, various sections of roadway were closed to make repairs, then later re-opened. New concerns about ceiling fans, weighing approximately three tons each, used to circulate air throughout the tunnel system, were also identified. The National Transportation Safety Board released a report on the one-year anniversary of the disaster that attributed the major cause of the collapse to "epoxy creep". On August 8, 2007, a Suffolk County Grand Jury indicted epoxy company Powers Fasteners, Inc., on one charge of involuntary manslaughter, with the maximum penalty in Massachusetts being a fine of $1,000. In 2008, the company agreed to pay the city and state a total of $16 million to dismiss the charges. It also paid an additional $6 million to the family of the killed passenger. It also agreed to stop production of the type of epoxy that had been used in the tunnel construction and to issue a recall to customers who had purchased it in the past. The epoxy used in the D Street portal that failed cost $1,287.60. The cost to redesign, inspect, and repair all of the tunnels after the collapse was $54 million. Political fallout On July 13, 2006, the leaders of the state legislature, Senate President Robert Travaglini and House Speaker Sal DiMasi, called upon Turnpike Authority chairman Matthew J. Amorello, who provided oversight of the project, to consider stepping down from his position and accepting a diminished role. Governor Romney and Attorney General Reilly both called for the resignation of Amorello. This stance was supported in editorials in Boston's two major newspapers, the Boston Herald and The Boston Globe. On July 18, Amorello was presented with a formal list of charges that Romney intended to use to justify Amorello's removal. Amorello made an unsuccessful effort to ask the Massachusetts Supreme Judicial Court to postpone the removal hearing before Romney. On July 27, 2006, after the Supreme Judicial Court rejected his request and shortly before the hearing was to have begun, Amorello announced his intention to resign as Chairman of the Massachusetts Turnpike Authority effective August 15, 2006. Massachusetts Secretary of Transportation John Cogliano also came under fire after he chose to hire Bechtel/Parsons Brinckerhoff, the company that was responsible for overseeing the original construction of the tunnel, to inspect the repairs. The hiring of Bechtel/Parsons Brinckerhoff resulted in an inquiry from the Office of Inspector General for the Department of Transportation.
Cogliano admitted that he regretted reusing the firm and the state promised not to hire any Bechtel/Parsons Brinckerhoff employees to work on repairs in the I-90 tunnel. Lawsuits On November 27, 2006, departing Attorney General Tom Reilly announced that the state would launch a civil suit over the collapse of the ceiling in the Ted Williams Tunnel. The Commonwealth was seeking over $150 million from project manager Bechtel/Parsons Brinckerhoff, builder Modern Continental Construction Co. and the manufacturer of the epoxy used to hold the ceiling bolts. On March 1, 2007, Attorney General Martha Coakley named Paul Ware, from the Boston law firm Goodwin Procter, as the lead in the criminal investigation into whether there was criminal culpability in the Big Dig tunnel collapse; Ware was appointed as a special assistant attorney general. On December 24, 2007, the family of Milena Del Valle (who was killed in the collapse) and Angel Del Valle (who was injured) announced that they had reached a settlement with Powers Fasteners, in which they would be paid $6 million. The Del Valle family stated, "We are grateful that the Powers family company has done the right thing." Powers denied responsibility, but said that the settlement would "allow the healing process to begin." Powers also stated "We also hope that this will lead others who, unlike Powers, truly were responsible for the accident, to do the same." In January 2008, the state and the office of United States Attorney for the District of Massachusetts, Michael Sullivan, reached a settlement with the contractors responsible for the failure, which included no criminal charges and no bar against receiving future contracts. The Bechtel/Parsons Brinckerhoff joint venture paid $405 million, and smaller contractors paid a total of $51 million. In September 2008, the Del Valle family announced that they had reached a $28 million settlement, resolving the lawsuits against all 15 companies involved in construction of the tunnel, including the Massachusetts Turnpike Authority. Other problems There were other difficulties with the design and construction of the Big Dig project, including numerous leaks, dangerous guardrails, and the threat of heavy lighting fixtures also falling from the ceilings. The Georgia DOT found that failure of the same epoxy at fault for the ceiling collapse was also to blame for the 2011 fall of a fenced and lighted covered-walkway structure attached to the south side of the relatively new 17th Street Bridge, which links Atlantic Station to Midtown Atlanta over I-75/I-85. No injuries occurred in that incident, as the collapse was in the overnight hours, with very little traffic on the freeway. See also Sasago Tunnel — Japanese tunnel where a similar ceiling collapse occurred in 2012 References Engineering failures 2006 road incidents Political scandals in Massachusetts 2006 in Boston Tunnel disasters 2006 disasters in the United States July 2006 events in the United States Disasters in Boston
Big Dig ceiling collapse
[ "Technology", "Engineering" ]
1,881
[ "Systems engineering", "Reliability engineering", "Technological failures", "Engineering failures", "Civil engineering" ]
6,851,367
https://en.wikipedia.org/wiki/Langer%20correction
The Langer correction, named after the mathematician Rudolf Ernest Langer, is a correction to the WKB approximation for problems with radial symmetry. Description In 3D systems When applying the WKB approximation method to the radial Schrödinger equation, where the effective potential is given by V_eff(r) = V(r) + ħ²ℓ(ℓ + 1)/(2mr²) (with ℓ the azimuthal quantum number related to the angular momentum operator), the eigenenergies and the wave function behaviour obtained are different from the real solution. In 1937, Rudolf E. Langer suggested a correction, known as the Langer correction or Langer replacement: ℓ(ℓ + 1) → (ℓ + 1/2)². This manipulation is equivalent to inserting a constant factor of 1/4 wherever ℓ(ℓ + 1) appears. Heuristically, it is said that this factor arises because the range of the radial Schrödinger equation is restricted from 0 to infinity, as opposed to the entire real line. By such a change of the constant term in the effective potential, the results obtained by the WKB approximation reproduce the exact spectrum for many potentials. That the Langer replacement is correct follows from the WKB calculation of the Coulomb eigenvalues with the replacement ℓ(ℓ + 1) → (ℓ + 1/2)², which reproduces the well-known result (a worked check is sketched below). In 2D systems Note that for 2D systems, the effective potential takes the form V_eff(r) = V(r) + ħ²(m² − 1/4)/(2Mr²), so the Langer correction reads m² − 1/4 → m². This manipulation is likewise equivalent to inserting a constant factor of 1/4 wherever m² − 1/4 appears. Justification An even more convincing calculation is the derivation of Regge trajectories (and hence eigenvalues) of the radial Schrödinger equation with the Yukawa potential, by both a perturbation method (with the old factor) and independently by the WKB method (with the Langer replacement), in both cases even to higher orders. For the perturbation calculation see the Müller-Kirsten book, and for the WKB calculation Boukema. See also Einstein–Brillouin–Keller method References Theoretical physics
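A worked check of the Coulomb claim above (standard WKB material; the radial quantum number n_r and the principal quantum number n = n_r + ℓ + 1 are our notation, not the article's):

```latex
% Langer replacement in the effective potential:
V_{\mathrm{eff}}(r) = V(r) + \frac{\hbar^2\,\ell(\ell+1)}{2mr^2}
\quad\longrightarrow\quad
V(r) + \frac{\hbar^2\,(\ell+\tfrac12)^2}{2mr^2}
% For the Coulomb potential V(r) = -e^2/r, the radial WKB quantization
% condition between the classical turning points r_1, r_2 reads
\int_{r_1}^{r_2}\sqrt{2m\bigl(E - V_{\mathrm{eff}}(r)\bigr)}\,dr
  = \Bigl(n_r + \tfrac12\Bigr)\pi\hbar
% The integral evaluates in closed form to pi*(e^2*sqrt(m/(2|E|)) - hbar*lambda),
% where lambda^2 is the coefficient of the 1/r^2 term. With lambda = l + 1/2
% (the Langer replacement) the condition gives
E = -\frac{m e^4}{2\hbar^2\,(n_r+\ell+1)^2} = -\frac{m e^4}{2\hbar^2 n^2}
% i.e. exactly the Bohr spectrum ("the well-known result" above), whereas
% lambda = sqrt(l(l+1)) does not reproduce it.
```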
Langer correction
[ "Physics" ]
382
[ "Theoretical physics" ]
6,852,602
https://en.wikipedia.org/wiki/Hormonal%20imprinting
Hormonal imprinting (HI) is a phenomenon which takes place at the first encounter between a hormone and its developing receptor in the critical periods of life (in unicellular organisms, during the whole of life) and determines the later signal transduction capacity of the cell. The most important period in mammals is the perinatal one; however, this system can be imprinted at weaning, at puberty and, in the case of continuously dividing cells, during the whole of life. Faulty imprinting is caused by drugs, environmental pollutants and other hormone-like molecules present in excess at the critical periods, with lifelong receptorial, morphological, biochemical and behavioral consequences. HI is transmitted to hundreds of progeny generations in unicellular organisms and, as has been proved, to a few generations also in mammals. References External links Phylogeny of hormone receptors Cell biology Physiology Perception Signal transduction
Hormonal imprinting
[ "Chemistry", "Biology" ]
181
[ "Cell biology", "Physiology", "Signal transduction", "Biochemistry", "Neurochemistry" ]
34,586,557
https://en.wikipedia.org/wiki/Rabbit%E2%80%93duck%20illusion
The rabbit–duck illusion is an ambiguous image in which a rabbit or a duck can be seen. The earliest known version is an unattributed drawing from the 23 October 1892 issue of Fliegende Blätter, a German humour magazine. It was captioned, in older German spelling, "Welche Thiere gleichen einander am meisten?" ("Which animals are most like each other?"), with "Kaninchen und Ente" ("Rabbit and Duck") written underneath. After being used by psychologist Joseph Jastrow, the image was made famous by Ludwig Wittgenstein, who included it in his Philosophical Investigations as a means of describing two different ways of seeing: "seeing that" versus "seeing as". Correlations Whether one sees a rabbit or a duck, and how often, may correlate with sociological, biological, and psychological factors. For example, Swiss people, both young and old, tend to see a bunny during Easter and a bird/duck in October. It may also indicate creativity. A standard test of creativity is to list as many novel uses as one can for an everyday object (e.g., a paper clip) in a limited time. Wiseman et al. found that participants who could easily see the image as either a rabbit or a duck came up with an average of about 5 novel uses for their everyday item, while those who could not flip between rabbit and duck at all came up with fewer than 2 novel uses. Philosophical implications Several scholars have suggested that the illusion resonates philosophically and politically. Wittgenstein, as Shirley Le Penne commented, employed the rabbit–duck illusion to distinguish perception from interpretation. If you see only a rabbit, you would say "this is a rabbit", but once you become aware of the duality you would say "now I see it as a rabbit". You may also say "it's a rabbit–duck", which, for Wittgenstein, is a perceptual report. Thomas Kuhn used the rabbit–duck illusion as a metaphor for revolutionary change in science, illustrating the way in which a paradigm shift could cause one to see the same information in an entirely different way. Uriel Abulof said that the illusion crystallizes the interplay between freedom (choice) and facticity (forced reality). If you see just a duck, you may need to actively choose to work on seeing the rabbit too, and once you do, to then choose which you see at any given point. While submitting that "once you see the duck you cannot unsee it", Abulof said that "trying to unsee what we already did might be less about choosing one perspective over another but about negating one, so that we don't have to choose." References External links The illusion in Fliegende Blätter at the University Library Heidelberg Rabbitduck, a sculpture by Paul St George Optical illusions 1892 in art Rabbits and hares in art Ducks in art
Rabbit–duck illusion
[ "Physics" ]
591
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
21,484,024
https://en.wikipedia.org/wiki/International%20Symbol%20of%20Access
The International Symbol of Access (ISA), also known as the International Wheelchair Symbol, denotes areas where access has been improved, mostly for those with disabilities. It consists of a usually blue square overlaid in white (or in contrasting colours) with a stylized image of a person in a wheelchair. It is maintained as the international standard ISO 7001 image by the International Commission on Technology and Accessibility (ICTA), a committee of Rehabilitation International (RI). History In the late 1960s, with the rise of universal design, there grew a need for a symbol to identify accessible facilities. In 1968, Norman Acton, President of Rehabilitation International (RI), tasked Karl Montan, chairman of the International Commission of Technology and Accessibility (ICTA), to develop a symbol as a technical aid and present it at the group's 1969 World Congress in Dublin. The project was arranged with the Scandinavian Students Organization (SDO) in Konstfack's College of Arts. The symbol which would become the ISA was designed by Danish design student Susanne Koefoed. She presented an early version of the symbol at the July 1968 exhibition held during the end of the SDO seminar. Koefoed's symbol depicts a stick figure in a wheelchair. It is influenced by the contemporary design movement of Scandinavia in the 20th century, especially that of Austrian-American designer and lecturer Victor Papanek. The committee founded by Montan selected Koefoed's sketch alongside five other symbols. The revised design was modified with the addition of a circle for a head to give the impression of a seated figure, as Montan noted: "a slight inconvenience with the symbol is the equally thick lines, which may give an impression of a monogram of letters. With a 'head' on the symbol this inconvenience would disappear". This was done without Koefoed's knowledge, according to her own recounting. The design was made public in 1969 and was widely promoted around Sweden. It was approved at the congress, gained prominence and usage through convenient signage created by 3M Corporation, and was later incorporated into the ISO 7001 standard published by the International Organization for Standardization. In 1974, it was formally accepted by the United Nations in an experts' meeting on disability. Functions The symbol is often seen where access has been improved, particularly for wheelchair users, but also for other disability issues. Frequently, the symbol denotes the removal of environmental barriers, such as steps, which also helps older people, parents with baby carriages, and travellers. Universal design aims to obviate such symbols by creating products and facilities that are accessible to nearly all users from the start. The wheelchair symbol is "international" and therefore not accompanied by Braille in any particular language. Specific uses of the ISA include: Marking a parking space reserved for vehicles used by people with disabilities/blue badge holders Marking a vehicle used by a person with a disability, often for permission to use a space Marking a public lavatory with facilities designed for wheelchair users Indicating a button to activate an automatic door Indicating an accessible transit station or vehicle Indicating a transit route that uses accessible vehicles Building codes such as the California Building Code require "a white figure on a blue background. The blue shall be equal to Color No. 15090 in Federal Standard 595B." 
Accessible Icon In 2010, artists Sara Hendren and Brian Glenney co-founded the Accessible Icon project, an art project to design a new icon with a focus on the person with a disability, as they felt that the old icon was "robotic" and "stiff". It underwent many versions until arriving at the current, dynamic design depicting a person leaning forward with arms raised to indicate movement. Some disability organizations, such as Enabling Unit in India, have promoted it. This version of the symbol is officially used in the U.S. states of New York and Connecticut. The Modified ISA is in the permanent collection of the Museum of Modern Art in New York. In Canada, it is permitted as an alternative option in the British Columbia Building Code's 2024 edition, but not yet permitted in the national parent code or Alberta edition. The Accessible Icon has also had detractors within the disabled community. According to Emma Teitel of the Toronto Star, critics say that the modified image does not universally represent all disabled people, since it socially stigmatizes those who have a disability but do not use a wheelchair. Critics have defended the old International Symbol of Access for its more abstract design, which leaves more to the imagination and can represent any disability. In May 2015, the Federal Highway Administration rejected the new design for use on road signs in the United States, citing the fact that it has not been adopted or endorsed by the U.S. Access Board, the agency responsible for developing the federal criteria for accessible design. The International Organization for Standardization, which established the regular use of the original symbol under ISO 7001, has also rejected the design. In 2024, the new design was integrated into the improved European Parking Card for persons with disabilities. Unicode The International Symbol of Access is assigned the Unicode code point U+267F (♿, WHEELCHAIR SYMBOL); the character was added in Unicode 4.1 in 2005 and later included in Emoji 1.0 (a quick programmatic check is sketched below). In 2016, with the release of iOS 10.0, Apple updated the emoji to use the Accessible Icon. References External links ISO's catalog entry for ISO 7001 Symbol Of Accessibility on Rehabilitation International's website Accessible Icon Project by Brian Glenney and Sara Hendren Symbols Accessibility Symbols introduced in 1968 Danish inventions ISO standards
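A minimal programmatic check of the Unicode paragraph above (a sketch of ours; it assumes Python 3 and a terminal font with emoji coverage):

```python
import unicodedata

symbol = "\u267F"                  # the ISA code point
print(symbol)                      # prints the wheelchair symbol glyph
print(hex(ord(symbol)))            # 0x267f
print(unicodedata.name(symbol))    # WHEELCHAIR SYMBOL
```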
International Symbol of Access
[ "Mathematics", "Engineering" ]
1,127
[ "Accessibility", "Symbols", "Design" ]
21,485,619
https://en.wikipedia.org/wiki/Riemann%E2%80%93Siegel%20formula
In mathematics, the Riemann–Siegel formula is an asymptotic formula for the error of the approximate functional equation of the Riemann zeta function, an approximation of the zeta function by a sum of two finite Dirichlet series. It was found by Siegel (1932) in unpublished manuscripts of Bernhard Riemann dating from the 1850s. Siegel derived it from the Riemann–Siegel integral formula, an expression for the zeta function involving contour integrals. It is often used to compute values of the Riemann zeta function, sometimes in combination with the Odlyzko–Schönhage algorithm which speeds it up considerably. When used along the critical line, it is often useful to use it in a form where it becomes a formula for the Z function (a numerical sketch is given below). If M and N are non-negative integers, then the zeta function is equal to ζ(s) = Σ_{n=1}^{N} n^(−s) + γ(s) Σ_{n=1}^{M} n^(s−1) + R(s), where γ(s) = 2^s π^(s−1) sin(πs/2) Γ(1 − s) is the factor appearing in the functional equation ζ(s) = γ(s) ζ(1 − s), and R(s) is a contour integral whose contour starts and ends at +∞ and circles the singularities of absolute value at most 2πM. The approximate functional equation gives an estimate for the size of the error term. Siegel, and later Edwards, derived the Riemann–Siegel formula from this by applying the method of steepest descent to this integral to give an asymptotic expansion for the error term R(s) as a series of negative powers of Im(s). In applications s is usually on the critical line, and the positive integers M and N are chosen to be about √(t/(2π)), where t = Im(s). Gabcke found good bounds for the error of the Riemann–Siegel formula. Riemann's integral formula Riemann evaluated a certain contour integral, taken along a line of slope −1 passing between 0 and 1, in closed form, and used this to give an integral formula for the zeta function. References Reprinted in Gesammelte Abhandlungen, Vol. 1. Berlin: Springer-Verlag, 1966. External links Zeta and L-functions Theorems in analytic number theory Bernhard Riemann
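As a numerical illustration of the Z-function form mentioned above (a sketch of ours, not code from any reference: only the leading terms of the asymptotic series for the theta function and the zeroth-order remainder term are kept, and all names are illustrative):

```python
import math

def theta(t):
    # Riemann-Siegel theta function: leading terms of its asymptotic series.
    return (t / 2) * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8 \
        + 1 / (48 * t) + 7 / (5760 * t ** 3)

def Z(t):
    # Main sum of the Riemann-Siegel formula plus the zeroth-order
    # remainder term C0; higher-order corrections are dropped.
    a = math.sqrt(t / (2 * math.pi))
    N = int(a)                                  # N = floor(sqrt(t / (2*pi)))
    main = 2 * sum(math.cos(theta(t) - t * math.log(n)) / math.sqrt(n)
                   for n in range(1, N + 1))
    p = a - N
    c0 = math.cos(2 * math.pi * (p * p - p - 1 / 16)) / math.cos(2 * math.pi * p)
    return main + (-1) ** (N - 1) * a ** -0.5 * c0

# Z(t) is real and vanishes exactly where zeta(1/2 + it) does, so a sign
# change brackets a zero on the critical line (the first is t = 14.1347...).
print(Z(14.0), Z(14.2))    # expect opposite signs
```

Even this truncation locates the low zeros; production code adds more correction terms, or uses the Odlyzko–Schönhage algorithm when many evaluations are needed.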
Riemann–Siegel formula
[ "Mathematics" ]
389
[ "Theorems in mathematical analysis", "Theorems in number theory", "Theorems in analytic number theory" ]
21,487,040
https://en.wikipedia.org/wiki/Uranyl%20carbonate
Uranyl carbonate refers to the inorganic compound with the formula UO2CO3. Also known by its mineral name rutherfordine, this material consists of uranyl (UO2²⁺) and carbonate (CO3²⁻) ions. Like most uranyl salts, the compound is polymeric, each uranium(VI) center being bonded to eight O atoms. Hydrolysis products of rutherfordine are also found in both the mineral and organic fractions of coal and its fly ash, and the compound is the main form of uranium in mine tailing seepage water. Uranyl carbonates as a class of materials Many uranyl carbonates exist, rutherfordine being the simplest stoichiometry. Most uranyl carbonates contain additional components, including water and diverse anions and cations. A common method for concentrating uranium from a solution uses solutions of uranyl carbonates, which are passed through a resin bed where the complex ions are transferred to the resin by ion exchange with a negative ion like chloride (the complexation step is sketched below). After build-up of the uranium complex on the resin, the uranium is eluted with a salt solution and precipitated in a separate process. Uranyl carbonate minerals Uranyl carbonates include: Andersonite (hydrated sodium calcium uranyl carbonate) Astrocyanite-(Ce) (hydrated copper cerium neodymium lanthanum praseodymium samarium calcium yttrium uranyl carbonate hydroxide) Bayleyite (hydrated magnesium uranyl carbonate) Bijvoetite-(Y) (hydrated yttrium dysprosium uranyl carbonate hydroxide) Fontanite (hydrated calcium uranyl carbonate) Grimselite (hydrated potassium sodium uranyl carbonate) Joliotite (hydrated uranyl carbonate) Liebigite (hydrated calcium uranyl carbonate) Mckelveyite-(Y) (hydrated barium sodium calcium uranium yttrium carbonate) Metazellerite (hydrated calcium uranyl carbonate) Rabbittite (hydrated calcium magnesium uranyl carbonate hydroxide) Roubaultite (copper uranyl carbonate oxide hydroxide) Rutherfordine (uranyl carbonate) Schröckingerite (hydrated sodium calcium uranyl sulfate carbonate fluoride) Shabaite (hydrated copper cerium neodymium lanthanum praseodymium samarium calcium yttrium uranyl carbonate hydroxide) Sharpite (hydrated calcium uranyl carbonate hydroxide) Swartzite (hydrated calcium magnesium uranyl carbonate) Voglite (hydrated calcium copper uranyl carbonate) Wyartite (hydrated calcium uranyl carbonate hydroxide) Widenmannite (lead uranyl carbonate) Zellerite (hydrated calcium uranyl carbonate) Znucalite (hydrated calcium zinc uranyl carbonate hydroxide) References Uranyl compounds Carbonates Nuclear materials
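A sketch of the solution chemistry behind that ion-exchange step (standard uranyl–carbonate speciation, stated here as background rather than drawn from the article):

```latex
% In carbonate solution, uranyl forms the anionic tricarbonate complex:
\mathrm{UO_2^{2+} + 3\,CO_3^{2-} \longrightarrow [UO_2(CO_3)_3]^{4-}}
```

Because the complex carries a 4− charge, it loads strongly onto an anion-exchange resin, displacing singly charged ions such as chloride; passing a concentrated salt solution through the bed later reverses the exchange and elutes the uranium.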
Uranyl carbonate
[ "Physics" ]
612
[ "Materials", "Nuclear materials", "Matter" ]
38,570,862
https://en.wikipedia.org/wiki/Cas9
Cas9 (CRISPR associated protein 9, formerly called Cas5, Csn1, or Csx12) is a 160 kilodalton protein which plays a vital role in the immunological defense of certain bacteria against DNA viruses and plasmids, and is heavily utilized in genetic engineering applications. Its main function is to cut DNA and thereby alter a cell's genome. The CRISPR-Cas9 genome editing technique was a significant contributor to the Nobel Prize in Chemistry in 2020 being awarded to Emmanuelle Charpentier and Jennifer Doudna. More technically, Cas9 is an RNA-guided DNA endonuclease enzyme associated with the Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) adaptive immune system in Streptococcus pyogenes. S. pyogenes utilizes CRISPR to memorize, and Cas9 to later interrogate and cleave, foreign DNA, such as invading bacteriophage DNA or plasmid DNA. Cas9 performs this interrogation by unwinding foreign DNA and checking for sites complementary to the 20 nucleotide spacer region of the guide RNA (gRNA). If the DNA substrate is complementary to the guide RNA, Cas9 cleaves the invading DNA. In this sense, the CRISPR-Cas9 mechanism has a number of parallels with the RNA interference (RNAi) mechanism in eukaryotes. Apart from its original function in bacterial immunity, the Cas9 protein has been heavily utilized as a genome engineering tool to induce site-directed double-strand breaks in DNA. These breaks can lead to gene inactivation or the introduction of heterologous genes through non-homologous end joining and homologous recombination, respectively, in many laboratory model organisms. Research on the development of various Cas9 variants has been a promising way of overcoming the limitations of CRISPR-Cas9 genome editing. Some examples include Cas9 nickase (Cas9n), a variant that induces single-stranded breaks (SSBs), and variants recognizing different PAM sequences. Alongside zinc finger nucleases and transcription activator-like effector nuclease (TALEN) proteins, Cas9 is becoming a prominent tool in the field of genome editing. Cas9 has gained traction in recent years because it can cleave nearly any sequence complementary to the guide RNA. Because the target specificity of Cas9 stems from the guide RNA:DNA complementarity and not modifications to the protein itself (like TALENs and zinc fingers), engineering Cas9 to target new DNA is straightforward. Versions of Cas9 that bind but do not cleave cognate DNA can be used to localize transcriptional activators or repressors at specific DNA sequences in order to control transcriptional activation and repression. Native Cas9 requires a guide RNA composed of two disparate RNAs that associate – the CRISPR RNA (crRNA), and the trans-activating crRNA (tracrRNA). Cas9 targeting has been simplified through the engineering of a chimeric single guide RNA (chiRNA). Scientists have suggested that Cas9-based gene drives may be capable of editing the genomes of entire populations of organisms. In 2015, Cas9 was used to modify the genome of human embryos for the first time. CRISPR-mediated immunity To survive in a variety of challenging, inhospitable habitats that are filled with bacteriophages, bacteria and archaea have evolved methods to evade and fend off predatory viruses. This includes the CRISPR system of adaptive immunity. In practice, CRISPR/Cas systems act as self-programmable restriction enzymes. 
CRISPR loci are composed of short palindromic repeats that occur at regular intervals, alternating with variable CRISPR spacers between 24 and 48 nucleotides long. These CRISPR loci are usually accompanied by adjacent CRISPR-associated (cas) genes. In 2005, it was discovered by three separate groups that the spacer regions were homologous to foreign DNA elements, including plasmids and viruses. These reports provided the first biological evidence that CRISPRs might function as an immune system. Cas9 has been used often as a genome-editing tool. Cas9 has been used in recent developments in preventing viruses from manipulating hosts' DNA. Since CRISPR-Cas9 was developed from bacterial genome systems, it can be used to target the genetic material in viruses. The use of the enzyme Cas9 can be a solution to many viral infections. Cas9 possesses the ability to target specific viruses by targeting specific strands of the viral genetic information. More specifically, the Cas9 enzyme targets certain sections of the viral genome that prevent the virus from carrying out its normal function. Cas9 has also been used to disrupt detrimental strands of DNA and RNA that cause diseases, as well as mutated strands of DNA. Cas9 has already shown promise in disrupting the effects of HIV-1. Cas9 has been shown to suppress the expression of the long terminal repeats in HIV-1. When introduced into the HIV-1 genome, Cas9 has shown the ability to mutate strands of HIV-1. Cas9 has also been used in the treatment of Hepatitis B, through targeting the ends of certain long terminal repeats in the Hepatitis B viral genome. Cas9 has been used to repair the mutations causing cataracts in mice. CRISPR-Cas systems are divided into three major types (type I, type II, and type III) and twelve subtypes, which are based on their genetic content and structural differences. However, the core defining features of all CRISPR-Cas systems are the cas genes and their proteins: cas1 and cas2 are universal across types and subtypes, while cas3, cas9, and cas10 are signature genes for type I, type II, and type III, respectively. CRISPR-Cas defense stages Adaptation Adaptation involves recognition and integration of spacers between two adjacent repeats in the CRISPR locus. The "protospacer" refers to the sequence on the viral genome that corresponds to the spacer. A short stretch of conserved nucleotides exists proximal to the protospacer, which is called the protospacer adjacent motif (PAM). The PAM is a recognition motif that is used to acquire the DNA fragment. In type II, Cas9 recognizes the PAM during adaptation in order to ensure the acquisition of functional spacers. Loss of individual spacers, and even of groups of several, has also been observed (Aranaz et al. 2004; Pourcel et al. 2007). This probably occurs through homologous recombination of the between-repeat material. CRISPR processing/biogenesis CRISPR expression includes the transcription of a primary transcript called a precursor CRISPR RNA (pre-crRNA), which is transcribed from the CRISPR locus by RNA polymerase. Specific endoribonucleases then cleave the pre-crRNAs into small CRISPR RNAs (crRNAs). Interference/immunity Interference involves the crRNAs within a multi-protein complex called CASCADE, which can recognize and specifically base-pair with complementary regions of invading foreign DNA. 
The crRNA-foreign nucleic acid complex is then cleaved; however, if there are mismatches between the spacer and the target DNA, or if there are mutations in the PAM, then cleavage will not be initiated. In the latter scenario, the foreign DNA is not targeted for attack by the cell, thus the replication of the virus proceeds and the host is not immune to viral infection. The interference stage can be mechanistically and temporally distinct from CRISPR acquisition and expression, yet for complete function as a defense system, all three phases must be functional. [Figure: the three stages of CRISPR-Cas defense. Stage 1, CRISPR spacer integration: protospacers and protospacer-adjacent motifs are acquired at the "leader" end of a CRISPR array in the host DNA; the array is composed of spacer sequences flanked by repeats, and the process requires Cas1 and Cas2 (and Cas9 in type II), which are encoded in the cas locus, usually located near the CRISPR array. Stage 2, CRISPR expression: pre-crRNA is transcribed starting at the leader region by the host RNA polymerase and then cleaved by Cas proteins into smaller crRNAs, each containing a single spacer and a partial repeat. Stage 3, CRISPR interference: a crRNA with a spacer that has strong complementarity to the incoming foreign DNA begins a cleavage event, which requires Cas proteins; DNA cleavage interferes with viral replication and provides immunity to the host. The interference stage can be functionally and temporally distinct from CRISPR acquisition and expression.] Transcription deactivation using dCas9 dCas9, also referred to as endonuclease-deficient Cas9, can be utilized to edit gene expression when applied to the transcription binding site of the desired section of a gene. The optimal function of dCas9 is attributed to its mode of action. Gene expression is inhibited when nucleotides are no longer added to the growing RNA chain, terminating elongation of that chain and, as a result, affecting the transcription process. This process occurs when dCas9 is mass-produced, so it is able to affect the greatest number of genes at any given time, via a sequence-specific guide RNA molecule. Since dCas9 appears to down-regulate gene expression, this action is amplified even more when it is used in conjunction with repressive chromatin modifier domains. The dCas9 protein has other functions outside of the regulation of gene expression. A promoter can be added to the dCas9 protein, which allows the two to work with each other to become efficient at beginning or stopping transcription at different sequences along a strand of DNA. These two proteins are specific in where they act on a gene. This is prevalent in certain types of prokaryotes, where a promoter and dCas9 align together to impede the elongation of the nucleotide polymer that forms a transcribed piece of DNA. Without the promoter, the dCas9 protein does not have the same effect by itself or with a gene body. When examining the effects of repression of transcription further, H3K27, a lysine residue of histone H3, becomes methylated through the interaction of dCas9 and a peptide called FOG1. Essentially, this interaction causes gene repression on the C- and N-terminal section of the amino acid complex at the specific junction of the gene and, as a result, terminates transcription. 
dCas9 also proves to be efficient when it comes to altering certain proteins that can create diseases. When dCas9 attaches to a form of RNA called guide RNA, it prevents the proliferation of repeating codons and DNA sequences that might be harmful to an organism's genome. Essentially, when multiple repeat codons are produced, a response is elicited that recruits an abundance of dCas9 to combat the overproduction of those codons, resulting in the shut-down of transcription. dCas9 works synergistically with gRNA and directly prevents RNA polymerase II from continuing transcription. Further explanation of how the dCas9 protein works can be found in its utilization in plant genomes, where it regulates gene production to either increase or decrease certain characteristics. The CRISPR-Cas9 system has the ability to either upregulate or downregulate genes. The dCas9 proteins are a component of the CRISPR-Cas9 system, and these proteins can repress certain areas of a plant gene. This happens when dCas9 binds to repressor domains; in the case of plants, deactivation of a regulatory gene such as AtCSTF64 does occur. Bacteria are another focus of the usage of dCas9 proteins as well. Since eukaryotes have a larger DNA makeup and genome, the much smaller bacteria are easier to manipulate. As a result, dCas9 is used to inhibit RNA polymerase from continuing the process of transcription of genetic material. Structural and biochemical studies Crystal structure Cas9 features a bi-lobed architecture with the guide RNA nestled between the alpha-helical lobe and the nuclease lobe. These two lobes are connected through a single bridge helix. There are two nuclease domains located in the multi-domain nuclease lobe: the RuvC, which cleaves the non-target DNA strand, and the HNH nuclease domain, which cleaves the target strand of DNA. The RuvC domain is encoded by sequentially disparate sites that interact in the tertiary structure to form the RuvC cleavage domain. A key feature of the target DNA is that it must contain a protospacer adjacent motif (PAM) consisting of the three-nucleotide sequence NGG. This PAM is recognized by the PAM-interacting domain (PI domain) located near the C-terminal end of Cas9. Cas9 undergoes distinct conformational changes between the apo, guide RNA bound, and guide RNA:DNA bound states. Cas9 recognizes the stem-loop architecture inherent in the CRISPR locus, which mediates the maturation of the crRNA-tracrRNA ribonucleoprotein complex. Cas9 in complex with CRISPR RNA (crRNA) and trans-activating crRNA (tracrRNA) further recognizes and degrades the target dsDNA. In the co-crystal structure, the crRNA-tracrRNA complex is replaced by a chimeric single-guide RNA (sgRNA), which has been proved to have the same function as the natural RNA complex. The sgRNA base-paired with target ssDNA is anchored by Cas9 as a T-shaped architecture. This crystal structure of the DNA-bound Cas9 enzyme reveals distinct conformational changes in the alpha-helical lobe with respect to the nuclease lobe, as well as the location of the HNH domain. The protein consists of a recognition lobe (REC) and a nuclease lobe (NUC). All regions except the HNH form tight interactions with each other and with the sgRNA-ssDNA complex, while the HNH domain forms few contacts with the rest of the protein. In another conformation of the Cas9 complex observed in the crystal, the HNH domain is not visible. 
These structures suggest the conformational flexibility of the HNH domain. To date, at least three crystal structures have been studied and published: one representing a conformation of Cas9 in the apo state, and two representing Cas9 in the DNA-bound state. Interactions with sgRNA In the sgRNA-Cas9 complex, based on the crystal structure, the REC1, BH, and PI domains have important contacts with the backbone or bases in both the repeat and spacer regions. Several Cas9 mutants, including deletions of the REC1 or REC2 domains and residue mutations in BH, have been tested. REC1- and BH-related mutants show lower or no activity compared with the wild type, indicating that these two domains are crucial for sgRNA recognition at the repeat sequence and for stabilization of the whole complex. Although the interactions between the spacer sequence and Cas9, as well as between the PI domain and the repeat region, need further study, the co-crystal demonstrates a clear interface between Cas9 and sgRNA. DNA cleavage Previous sequence analysis and biochemical studies have posited that Cas9 contains two nuclease domains: an McrA-like HNH nuclease domain and a RuvC-like nuclease domain. These HNH and RuvC-like nuclease domains are responsible for cleavage of the complementary/target and non-complementary/non-target DNA strands, respectively. Despite low sequence similarity, the sequence similar to RNase H has a RuvC fold (one member of the RNase H family) and the HNH region folds as T4 Endo VII (one member of the HNH endonuclease family). Wild-type S. pyogenes Cas9 requires magnesium (Mg²⁺) cofactors for the RNA-mediated DNA cleavage; however, Cas9 has been shown to exhibit varying levels of activity in the presence of other divalent metal ions. For instance, Cas9 in the presence of manganese (Mn²⁺) has been shown to be capable of RNA-independent DNA cleavage. The kinetics of DNA cleavage by Cas9 have been of great interest to the scientific community, as this data provides insight into the intricacies of the reaction. While the cleavage of DNA by RNA-bound Cas9 has been shown to be relatively rapid (k ≥ 700 s⁻¹), the release of the cleavage products is very slow (t½ = ln(2)/k ≈ 43–91 h), essentially rendering Cas9 a single-turnover enzyme. Additional studies regarding the kinetics of Cas9 have shown engineered Cas9 to be effective in reducing off-target effects by modifying the rate of the reaction. The cleavage efficiency of Cas9 depends on numerous factors. A key requirement is the presence of a valid PAM on the non-target strand 3 nucleotides downstream from the cleavage site. The canonical PAM sequence for S. pyogenes Cas9 is NGG, but alternative motifs are tolerated with lower cleavage activity; the most efficient alternative PAM motifs for the wild-type S. pyogenes Cas9 are NAG and NGA (a toy PAM-scanning sketch is given below). The sequence composition at the target DNA site complementary to the 20 nucleotide spacer region of the gRNA also affects cleavage efficiency. The most relevant nucleotide composition properties that impact efficiency are those in the PAM-proximal region. Free energy changes of nucleic acids are also highly relevant in defining cleavage activity. In addition to efficiency, the nucleotide composition of the five nucleotides closest to the PAM in the target sequence also affects the scission profile, influencing whether DNA cleavage is blunt or staggered. Guide RNAs whose DNA duplexes fall into a restricted range of binding free-energy changes, excluding extremely weak or extremely stable binding, generally perform efficiently. Stable guide RNA folding conformations can also impair cleavage. 
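To make the PAM requirement concrete, here is a minimal sketch of ours (not any published tool; the sequence and the function name are illustrative) that scans the plus strand of a DNA string for NGG PAMs and reports the 20-nucleotide candidate protospacer immediately 5' of each:

```python
import re

def find_spacer_sites(dna, spacer_len=20):
    # Scan the + strand for S. pyogenes NGG PAMs; a real design tool would
    # also scan the reverse complement and score each site for off-targets.
    sites = []
    for m in re.finditer(r"(?=([ACGT]GG))", dna):   # lookahead: overlapping hits
        start = m.start()                           # position of the PAM's "N"
        if start >= spacer_len:
            sites.append((dna[start - spacer_len:start], m.group(1)))
    return sites

# Illustrative sequence, not a real locus:
seq = "ATGCTGACCTTGGAGCTGATCGATCGGAGCTTACGGATCGGATTACGGAGGT"
for spacer, pam in find_spacer_sites(seq):
    print(spacer, pam)
```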
DNA cleavage patterns The Cas9 nuclease contains two nuclease domains, HNH and RuvC, responsible for cleaving the target strand (TS) and nontarget strand (NTS), respectively. The seminal structural characterization of the nuclease demonstrated that the HNH domain precisely cleaves between positions 18 and 17 (18|17) of the protospacer, while the RuvC cuts between the same bases and at additional downstream positions. Using molecular dynamics simulation, a study reported that cleavage of the NTS between positions 17|16 of the target sequence was more energetically favored than 18|17, generating 1-nucleotide 5' ssDNA overhangs. Notably, the authors demonstrated that the 5' overhangs are filled in, and the products of DNA repair are templated insertions, where the 5' overhang is used as a template by Pol4 for the repair reaction. The association between staggered cleavage and precise templated insertions was supported by additional studies in human cells. Recently, a high-throughput investigation of the Cas9 scission profile revealed that ~85% of on-target cleavage is blunt, whereas ~15% had a 1-nucleotide 5' overhang. Off-targets had a higher staggered cleavage rate compared to on-target sites, with approximately 1/3 of off-targets displaying 5' overhangs from 1 to 3 nucleotides. The scission profile analysis revealed that sequence patterns in the target sequence favor the formation of blunt or staggered DNA cuts, and staggered cleavage favored the formation of predictable indels. Problems bacteria pose to Cas9 editing Most archaea and bacteria stubbornly refuse to allow a Cas9 to edit their genome. This is because they can incorporate foreign DNA that does not affect them into their genome. Another way that these cells defy Cas9 is through their restriction-modification (RM) systems. When a bacteriophage enters a bacterial or archaeal cell, it is targeted by the RM system. The RM system then cuts the bacteriophage's DNA into separate pieces with restriction enzymes and uses endonucleases to further destroy the strands of DNA. This poses a problem to Cas9 editing because the RM system also targets the foreign genes added by the Cas9 process. Applications of Cas9 to transcription tuning Interference of transcription by dCas9 Due to the unique ability of Cas9 to bind to essentially any complement sequence in any genome, researchers wanted to use this enzyme to repress transcription of various genomic loci. To accomplish this, the two crucial catalytic residues of the RuvC and HNH domains can be mutated to alanine, abolishing all endonuclease activity of Cas9. The resulting protein, coined 'dead' Cas9 or 'dCas9' for short, can still tightly bind to dsDNA. This catalytically inactive Cas9 variant has been used both for mechanistic studies into Cas9 DNA interrogative binding and as a general programmable DNA-binding RNA-protein complex. The interaction of dCas9 with target dsDNA is so tight that a high-molarity urea protein denaturant cannot fully dissociate the dCas9 RNA-protein complex from the dsDNA target. dCas9 has been targeted with engineered single guide RNAs to the transcription initiation sites of loci, where dCas9 can compete with RNA polymerase at promoters to halt transcription. Also, dCas9 can be targeted to the coding region of loci such that inhibition of RNA polymerase occurs during the elongation phase of transcription. 
In eukaryotes, silencing of gene expression can be extended by targeting dCas9 to enhancer sequences, where dCas9 can block the assembly of transcription factors, leading to silencing of specific gene expression. Moreover, the guide RNAs provided to dCas9 can be designed to include specific mismatches to the complementary cognate sequence, which quantitatively weaken the interaction of dCas9 with its programmed cognate sequence and allow a researcher to tune the extent of gene silencing applied to a gene of interest. This technology is similar in principle to RNAi in that gene expression is being modulated at the RNA level. However, the dCas9 approach has gained much traction, as there are fewer off-target effects and, in general, larger and more reproducible silencing effects through the use of dCas9 compared to RNAi screens. Furthermore, because the dCas9 approach to gene silencing can be quantitatively controlled, a researcher can now precisely control the extent to which a gene of interest is repressed, allowing more questions about gene regulation and gene stoichiometry to be answered. Beyond direct binding of dCas9 to transcriptionally sensitive positions of loci, dCas9 can be fused to a variety of modulatory protein domains to carry out a myriad of functions. Recently, dCas9 has been fused to chromatin remodeling proteins (HDACs/HATs) to reorganize the chromatin structure around various loci. This is important in targeting various eukaryotic genes of interest, as heterochromatin structures hinder Cas9 binding. Furthermore, because Cas9 can react to heterochromatin, it is theorized that this enzyme can be further applied to studying the chromatin structure of various loci. Additionally, dCas9 has been employed in genome-wide screens of gene repression. By employing large libraries of guide RNAs capable of targeting thousands of genes, genome-wide genetic screens using dCas9 have been conducted. Another method for silencing transcription with Cas9 is to directly cleave mRNA products with the catalytically active Cas9 enzyme. This approach is made possible by hybridizing ssDNA carrying a PAM complement sequence to ssRNA, allowing for a dsDNA-RNA PAM site for Cas9 binding. This technology makes it possible to isolate endogenous RNA transcripts in cells without the need to induce chemical modifications to RNA or to use RNA tagging methods. Transcription activation by dCas9 fusion proteins In contrast to silencing genes, dCas9 can also be used to activate genes when fused to transcription-activating factors. These factors include subunits of bacterial RNA polymerase and traditional transcription factors in eukaryotes. Recently, genome-wide screens of transcription activation have also been accomplished using dCas9 fusions named 'CRISPRa' for activation. See also DCas9 activation system CRISPR CRISPR gene editing Genome editing Zinc finger nuclease Transcription activator-like effector nuclease References Further reading External links Deoxyribonucleases Repetitive DNA sequences Immune system Bacterial proteins Genome editing
Cas9
[ "Engineering", "Biology" ]
5,175
[ "Genetics techniques", "Genome editing", "Immune system", "Genetic engineering", "Organ systems", "Molecular genetics", "Repetitive DNA sequences" ]
38,571,537
https://en.wikipedia.org/wiki/SQuORE
SQUORE is a software analytics and static code analysis tool for software projects. It gathers information from different artefact types (e.g. source code, test results, bug tracking systems) and tools (it reads outputs of Checkstyle, PMD, FindBugs, Polyspace, Coverity, or SonarQube) and publishes a summarised view of the project's quality or progress. The quality model used for analysis is fully customisable, and many different quality models have been implemented: SQALE, ISO 9126 maintainability, European Cooperation for Space Standardization, or HIS Automotive group. It is used in industry and academic research for software engineering and data mining related concerns. History Squore was initially developed by Squoring Technologies, a French software vendor founded in 2010 in Toulouse and specialized in the evaluation and monitoring of software and systems development projects. In June 2018, Vector Informatik acquired Squoring Technologies and is now the owner of the Squore tool. Common uses The main goal of Squore's software analysis is the assessment of quality characteristics like maintainability, reliability, or maturity. Software quality is subject to many definitions and debates; hence the evaluation, sub-characteristics, and metrics used will differ depending on the context of the analysis: e.g. critical flight systems, medical devices, desktop products. Contract management may rely on code analysis to define levels of quality between contractors: e.g. cloning ratio, complexity of functions, specific ratings. By using such constraints, stakeholders may accept or refuse a delivery based on the analysis result of the product. See also SQALE Static code analysis List of tools for static code analysis References Journal article: "Un outil pour évaluer la qualité des logiciels" ("A tool to evaluate software quality", in French), in Mesures (2010/09). Journal article: "Une plateforme collaborative d'évaluation de la qualité logicielle" ("A collaborative platform for evaluating software quality", in French), in Programmez! (2011/02). Schneider Electric press release: Schneider Electric uses SQuORING technologies software quality control (2012/03). Journal article: "SQUORE as a Software Qualimetry solution at Continental PES", in (2018/02). Journal article: "Software Quality Assurance Dashboard for Renault Software Robustness plan with SQUORE tool", in (2018/02). Vector press release: Vector Acquires French Squoring Technologies (2018/09). Journal article: "Squore – Software Analytics for Project Monitoring", in (2018). External links Software metrics Software quality
SQuORE
[ "Mathematics", "Engineering" ]
527
[ "Software engineering", "Quantity", "Metrics", "Software metrics" ]
38,573,239
https://en.wikipedia.org/wiki/Acceptable%20ring
In mathematics, an acceptable ring is a generalization of an excellent ring, with the conditions about regular rings in the definition of an excellent ring replaced by conditions about Gorenstein rings. Acceptable rings were introduced by . All finite-dimensional Gorenstein rings are acceptable, as are all finitely generated algebras over acceptable rings and all localizations of acceptable rings. References Commutative algebra Ring theory
Acceptable ring
[ "Mathematics" ]
80
[ "Fields of abstract algebra", "Commutative algebra", "Ring theory" ]
37,125,686
https://en.wikipedia.org/wiki/MTOR%20inhibitors
mTOR inhibitors are a class of drugs used to treat several human diseases, including cancer, autoimmune diseases, and neurodegeneration. They function by inhibiting the mammalian target of rapamycin (mTOR) (also known as the mechanistic target of rapamycin), which is a serine/threonine-specific protein kinase that belongs to the family of phosphatidylinositol-3 kinase (PI3K) related kinases (PIKKs). mTOR regulates cellular metabolism, growth, and proliferation by forming and signaling through two protein complexes, mTORC1 and mTORC2. The most established mTOR inhibitors are the so-called rapalogs (rapamycin and its analogs), which have shown tumor responses in clinical trials against various tumor types. History The discovery of mTOR was made in 1994 while investigating the mechanism of action of its inhibitor, rapamycin. Rapamycin was first discovered in 1975 in a soil sample from Easter Island in the South Pacific, also known as Rapa Nui, from which its name is derived. Rapamycin is a macrolide, produced by the microorganism Streptomyces hygroscopicus, and showed antifungal properties. Shortly after its discovery, immunosuppressive properties were detected, which later led to the establishment of rapamycin as an immunosuppressant. In the 1980s, rapamycin was also found to have anticancer activity, although the exact mechanism of action remained unknown until many years later. In the 1990s there was a dramatic change in this field due to studies on the mechanism of action of rapamycin and the identification of the drug target. It was found that rapamycin inhibited cellular proliferation and cell cycle progression. Research on mTOR inhibition has been a growing branch of science and has yielded promising results. Protein kinases and their inhibitors In general, protein kinases are classified in two major categories based on their substrate specificity: protein tyrosine kinases and protein serine/threonine kinases. Dual-specificity kinases are a subclass of the tyrosine kinases. mTOR is a kinase within the family of phosphatidylinositol-3 kinase-related kinases (PIKKs), which is a family of serine/threonine protein kinases with a sequence similarity to the family of lipid kinases, the PI3Ks. These kinases have different biological functions, but are all large proteins with a common domain structure. PIKKs have four domains at the protein level, which distinguish them from other protein kinases. From the N-terminus to the C-terminus, these domains are named FRAP-ATM-TRAAP (FAT), the kinase domain (KD), the PIKK-regulatory domain (PRD), and the FAT-C-terminal (FATC). The FAT domain, consisting of four α-helices, is N-terminal to the KD, but that part is referred to as the FKBP12-rapamycin-binding (FRB) domain, which binds the FKBP12-rapamycin complex. The FAT domain consists of repeats, referred to as HEAT (Huntingtin, Elongation factor 3, A subunit of protein phosphatase 2A, and TOR1). Specific protein activators regulate the PIKK kinases, but their binding to the kinase complex causes a conformational change that increases substrate access to the kinase domain. Protein kinases have become popular drug targets. They have been targeted for the discovery and design of small molecule inhibitors and biologics as potential therapeutic agents. Small-molecule inhibitors of protein kinases generally prevent either phosphorylation of protein substrates or autophosphorylation of the kinase itself. mTOR signaling pathway It appears that growth factors, amino acids, ATP, and oxygen levels regulate mTOR signaling. 
Several downstream pathways that regulate cell-cycle progression, translation initiation, transcriptional stress responses, protein stability, and survival of cells signal through mTOR. The serine/threonine kinase mTOR is a downstream effector of the PI3K/AKT pathway, and forms two distinct multiprotein complexes, mTORC1 and mTORC2. These two complexes have separate networks of protein partners, feedback loops, substrates, and regulators. mTORC1 consists of mTOR and two positive regulatory subunits, raptor and mammalian LST8 (mLST8), and two negative regulators, proline-rich AKT substrate 40 (PRAS40) and DEPTOR. mTORC2 consists of mTOR, mLST8, mSin1, protor, rictor, and DEPTOR. mTORC1 is sensitive to rapamycin, but mTORC2 is considered to be resistant and is generally insensitive to nutrient and energy signals. mTORC2 is activated by growth factors, phosphorylates PKCα, AKT, and paxillin, and regulates the activity of the small GTPases Rac and Rho, related to cell survival, migration, and regulation of the actin cytoskeleton. The mTORC1 signaling cascade is activated by phosphorylated AKT and results in phosphorylation of S6K1 and 4EBP1, which leads to mRNA translation. mTOR signaling pathway in human cancer Many human tumors occur because of dysregulation of mTOR signaling, and this can confer higher susceptibility to inhibitors of mTOR. Deregulation of multiple elements of the mTOR pathway, like PI3K amplification/mutation, PTEN loss of function, AKT overexpression, and S6K1, 4EBP1, and eIF4E overexpression, has been related to many types of cancer. Therefore, mTOR is an interesting therapeutic target for treating multiple cancers, whether with mTOR inhibitors themselves or in combination with inhibitors of other pathways. Upstream, PI3K/AKT signalling is deregulated through a variety of mechanisms, including overexpression or activation of growth factor receptors, such as HER-2 (human epidermal growth factor receptor 2) and IGFR (insulin-like growth factor receptor), mutations in PI3K, and mutations/amplifications of AKT. The tumor suppressor phosphatase and tensin homologue deleted on chromosome 10 (PTEN) is a negative regulator of PI3K signaling. In many cancers PTEN expression is decreased, and it may be downregulated through several mechanisms, including mutations, loss of heterozygosity, methylation, and protein instability. Downstream, the mTOR effectors S6 kinase 1 (S6K1), eukaryotic initiation factor 4E-binding protein 1 (4EBP1), and eukaryotic initiation factor 4E (eIF4E) are implicated in cellular transformation. S6K1 is a key regulator of cell growth and also phosphorylates other important targets. Both eIF4E and S6K1 are involved in cellular transformation, and their overexpression has been linked to poor cancer prognosis. Development of mTOR inhibitors Since the discovery of mTOR, much research has been done on the subject, using rapamycin and rapalogs to understand its biological functions. The clinical results from targeting this pathway were not as straightforward as thought at first. Those results have changed the course of clinical research in this field. Initially, rapamycin was developed as an antifungal drug against Candida albicans, Aspergillus fumigatus, and Cryptococcus neoformans. A few years later its immunosuppressive properties were detected. Later studies led to the establishment of rapamycin as a major immunosuppressant against transplant rejection, along with cyclosporine A. 
Combining rapamycin with cyclosporine A enhanced rejection prevention in renal transplantation. Therefore, it was possible to use lower doses of cyclosporine, which minimized toxicity of the drug. In the 1980s the Developmental Therapeutic Branch of the National Cancer Institute (NCI) evaluated rapamycin and discovered that it had anticancer activity and was non-cytotoxic, having instead cytostatic activity against several human cancer types. However, due to unfavorable pharmacokinetic properties, the development of mTOR inhibitors for the treatment of cancer was not successful at that time. Since then, rapamycin has also been shown to be effective for preventing coronary artery re-stenosis and for the treatment of neurodegenerative diseases. First generation mTOR inhibitors The development of rapamycin as an anticancer agent began again in the 1990s with the discovery of temsirolimus (CCI-779). This novel soluble rapamycin derivative had a favorable toxicological profile in animals. More rapamycin derivatives with improved pharmacokinetics and reduced immunosuppressive effects have since been developed for the treatment of cancer. These rapalogs include temsirolimus (CCI-779), everolimus (RAD001), and ridaforolimus (AP-23573), which are being evaluated in cancer clinical trials. Rapamycin analogs have similar therapeutic effects to rapamycin. However, they have improved hydrophilicity and can be used for oral and intravenous administration. In 2012, the National Cancer Institute listed more than 200 clinical trials testing the anticancer activity of rapalogs, both as monotherapy and as part of combination therapy, for many cancer types. Rapalogs, which are the first generation mTOR inhibitors, have proven effective in a range of preclinical models. However, their success in clinical trials is limited to only a few rare cancers. Animal and clinical studies show that rapalogs are primarily cytostatic, and therefore effective as disease stabilizers rather than for regression. The response rate in solid tumors where rapalogs have been used as a single-agent therapy has been modest. Due to partial mTOR inhibition, as mentioned before, rapalogs are not sufficient for achieving a broad and robust anticancer effect, at least when used as monotherapy. Another reason for the limited success is that there is a feedback loop between mTORC1 and AKT in certain tumor cells. It seems that mTORC1 inhibition by rapalogs fails to repress a negative feedback loop that results in phosphorylation and activation of AKT. These limitations have led to the development of the second generation of mTOR inhibitors. Rapamycin and rapalogs Rapamycin and rapalogs (rapamycin derivatives) are small molecule inhibitors which have been evaluated as anticancer agents. The rapalogs have a more favorable pharmacokinetic profile compared to rapamycin, the parent drug, despite the same binding sites for mTOR and FKBP12. Sirolimus The bacterial natural product rapamycin, or sirolimus, is a cytostatic agent that has been used in combination therapy with corticosteroids and cyclosporine (owing to its unsatisfactory pharmacokinetic properties on its own) in patients who received kidney transplantation, to prevent organ rejection both in the US and Europe. In 2003, the U.S. Food and Drug Administration approved sirolimus-eluting coronary stents, which are used in patients with narrowing of the coronary arteries, so-called atherosclerosis. Recently rapamycin has been shown to be effective in inhibiting the growth of several human cancer and murine cell lines. 
Rapamycin is the main mTOR inhibitor, but ridaforolimus/deforolimus (AP23573), everolimus (RAD001), and temsirolimus (CCI-779), are the newly developed rapamycin analogs. Temsirolimus The rapamycin analog temsirolimus (CCI-779) is also a noncytotoxic agent which delays tumor proliferation. Temsirolimus is a prodrug of rapamycin. It is approved by the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA), for the treatment of renal cell carcinoma (RCC). Temsirolimus has higher water solubility than rapamycin and is therefore administered by intravenous injection. It was approved on May 30, 2007, by FDA for the treatment of advanced RCC. Temsirolimus has also been used in a Phase I clinical trial in conjunction with neratinib, a small-molecule irreversible pan-HER tyrosine kinase inhibitor. This study enrolled patients being treated for HER2-amplified breast cancer, HER2-mutant non-small-cell lung cancer, and other advanced solid tumors. While common toxicities included nausea, stomatitis, and anemia; responses were noted. Everolimus Everolimus is the second novel Rapamycin analog. Compared with the parent compound rapamycin, everolimus is more selective for the mTORC1 protein complex, with little impact on the mTORC2 complex. mTORC1 inhibition by everolimus has been shown to normalize tumor blood vessels, to increase tumor-infiltrating lymphocytes, and to improve adoptive cell transfer therapy. From March 30, 2009, to May 5, 2011, the U.S. FDA approved everolimus for the treatment of advanced renal cell carcinoma after failure of treatment with sunitinib or sorafenib, subependymal giant cell astrocytoma (SEGA) associated with tuberous sclerosis (TS), and progressive neuroendocrine tumors of pancreatic origin (PNET). In July and August 2012, two new indications were approved, for advanced hormone receptor-positive, HER2-negative breast cancer in combination with exemestane, and pediatric and adult patients with SEGA. In 2009 and 2011, it was also approved throughout the European Union for advanced breast cancer, pancreatic neuroendocrine tumours, advanced renal cell carcinoma, and SEGA in patients with tuberous sclerosis. Ridaforolimus Ridaforolimus (AP23573, MK-8669), or deforolimus, is another rapamycin analogue that is not a prodrug for sirolimus. Like temsirolimus it can be administered intravenously, and oral formulation is being estimated for treatment of sarcoma. Umirolimus Umirolimus is an immunosuppressant used in drug-eluting stents. Zotarolimus Zotarolimus is an immunosuppressant used in coronary drug-eluting stents. Second generation mTOR inhibitors The second generation of mTOR inhibitors is known as ATP-competitive mTOR kinase inhibitors. mTORC1/mTORC2 dual inhibitors such as torin-1, torin-2 and vistusertib, are designed to compete with ATP in the catalytic site of mTOR. They inhibit all of the kinase-dependent functions of mTORC1 and mTORC2 and block the feedback activation of PI3K/AKT signaling, unlike rapalogs, which only target mTORC1. Development of these drugs has reached clinical trials, although some, such as vistusertib, have been discontinued. Like rapalogs, they decrease protein translation, attenuate cell cycle progression, and inhibit angiogenesis in many cancer cell lines and also in human cancer. In fact, they have been proven to be more potent than rapalogs. 
Theoretically, the most important advantages of these mTOR inhibitors are the considerable decrease of AKT phosphorylation upon mTORC2 blockade and, in addition, better inhibition of mTORC1. However, some drawbacks exist. Even though these compounds have been effective in rapamycin-insensitive cell lines, they have shown only limited success in KRAS-driven tumors. This suggests that combinational therapy may be necessary for the treatment of these cancers. Another drawback is their potential toxicity. These facts have raised concerns about the long-term efficacy of these types of inhibitors. The close interaction of mTOR with the PI3K pathway has also led to the development of mTOR/PI3K dual inhibitors. Compared with drugs that inhibit either mTORC1 or PI3K, these drugs have the benefit of inhibiting mTORC1, mTORC2, and all the catalytic isoforms of PI3K. Targeting both kinases at the same time reduces the upregulation of PI3K, which is typically produced by inhibition of mTORC1. The inhibition of the PI3K/mTOR pathway has been shown to potently block proliferation by inducing G1 arrest in different tumor cell lines. Strong induction of apoptosis and autophagy has also been seen. Despite these promising results, there is preclinical evidence that some types of cancers may be insensitive to this dual inhibition. The dual PI3K/mTOR inhibitors are also likely to have increased toxicity. Mechanism of action The studies of rapamycin as an immunosuppressive agent enabled us to understand its mechanism of action. It inhibits T-cell proliferation and proliferative responses induced by several cytokines, including interleukin 1 (IL-1), IL-2, IL-3, IL-4, IL-6, IGF, PDGF, and colony-stimulating factors (CSFs). Rapamycin and the rapalogs can target tumor growth both directly and indirectly. Their direct impact on cancer cells depends on the concentration of the drug and certain cellular characteristics. The indirect route is based on interaction with processes required for tumor angiogenesis. Effects in cancer cells Rapamycin and rapalogs crosslink the immunophilin FKBP-12 (the FK506 binding protein, which also binds tacrolimus) through its methoxy group. The rapamycin-FKBP12 complex interferes with the FRB domain of mTOR. The molecular interaction between FKBP12, mTOR, and rapamycin can last for about three days (72 hours). The inhibition of mTOR blocks the binding of the accessory protein raptor (regulatory-associated protein of mTOR) to mTOR, which is necessary for downstream phosphorylation of S6K1 and 4EBP1. As a consequence, S6K1 is dephosphorylated, which reduces protein synthesis and decreases cell motility and size. Rapamycin induces dephosphorylation of 4EBP1 as well, resulting in an increase in p27 and a decrease in cyclin D1 expression. That leads to a late blockage of the G1/S cell cycle. Rapamycin has been shown to induce cancer cell death by stimulating autophagy or apoptosis, but the molecular mechanism of apoptosis in cancer cells has not yet been fully resolved. One suggested link between mTOR inhibition and apoptosis is the downstream target S6K1, which can phosphorylate BAD, a pro-apoptotic molecule, on Ser136. That reaction breaks the binding of BAD to BCL-XL and BCL2, the mitochondrial death inhibitors, resulting in inactivation of BAD and decreased cell survival. Rapamycin has also been shown to induce p53-independent apoptosis in certain types of cancer. 
Effects on tumor angiogenesis Tumor angiogenesis relies on interactions between endothelial vascular growth factors, which can all activate PI3K/AKT/mTOR in endothelial cells, pericytes, or cancer cells. Examples of these growth factors are angiopoietin 1 (ANG1), ANG2, basic fibroblast growth factor (bFGF), ephrin-B2, vascular endothelial growth factor (VEGF), and members of the transforming growth factor-β (TGFβ) superfamily. One of the major stimuli of angiogenesis is hypoxia, resulting in activation of hypoxia-inducible transcription factors (HIFs) and expression of ANG2, bFGF, PDGF, VEGF, and VEGFR. Inhibition of HIF1α translation, and the consequent prevention of PDGF/PDGFR and VEGF/VEGFR signaling, can result from mTOR inhibition. A G0-G1 cell-cycle blockage can be the consequence of inactivation of mTOR in hypoxia-activated pericytes and endothelial cells. There is some evidence that extended therapy with rapamycin may have an effect on AKT and mTORC2 as well. Effects on chemotherapy Pharmacologic down-regulation of the mTOR pathway during chemotherapy in a mouse model prevents activation of primordial follicles, preserves ovarian function, and maintains normal fertility using the clinically available inhibitors INK and RAD. In that way, it helps to maintain fertility while undergoing chemotherapy treatments. These mTOR inhibitors, when administered as pretreatment or co-treatment with standard gonadotoxic chemotherapy, help to maintain ovarian follicles in their primordial state. Effects on cognition mTOR promotes the protein synthesis required for synaptic plasticity. Studies in cell cultures and hippocampal slices indicate that mTOR inhibition reduces long-term potentiation. mTOR activation can protect against neurodegeneration associated with certain disease conditions. On the other hand, promotion of autophagy by mTOR inhibition may reduce cognitive decline associated with neurodegeneration. Moderate reduction of mTOR activity by 25-30% has been shown to improve brain function, suggesting that the relation between mTOR and cognition is optimized at intermediate doses (2.24 mg/kg/day in mice, a human equivalent of about 0.19 mg/kg/day), with very high or very low doses impairing cognition. Reduction of the inflammatory cytokine interleukin 1 beta (IL-1β) in mice by mTOR inhibition (with rapamycin in doses of 20 mg/kg/day, a human equivalent of about 1.6 mg/kg/day) has been shown to enhance learning and memory. Although IL-1β is required for memory, IL-1β normally increases with age, impairing cognitive function. A sketch of the mouse-to-human dose conversion behind these figures is given at the end of this article. Structure activity relationship The pipecolate region of the rapamycin structure seems necessary for rapamycin binding to FKBP12. This step is required for the further binding of rapamycin to the mTOR kinase, which is the key enzyme in many biological actions of rapamycin. The high affinity of rapamycin binding to FKBP12 is explained by a number of hydrogen bonds through two different hydrophobic binding pockets, and this has been revealed by the X-ray crystal structure of the compound bound to the protein. The structural characteristics common to temsirolimus and sirolimus (the pipecolic acid, the tricarbonyl region from C-13 to C-15, and the lactone functionalities) play the key role in binding to FKBP12. The most important hydrogen bonds are from the lactone carbonyl oxygen at C-21 to the backbone NH of Ile56, from the amide carbonyl at C-15 to the phenolic group on the side chain of Tyr82, and from the hydroxyl proton at the hemiketal carbon, C-13, to the side chain of Asp37. Structural changes to the rapamycin molecule can affect its binding to mTOR. 
This could include both direct and indirect binding as part of binding to FKBP12. Interaction of the FKBP12-rapamycin complex with mTOR corresponds with the conformational flexibility of the effector domain of rapamycin. This domain consists of molecular regions that make hydrophobic interactions with the FKB domain: the triene region from C-1 to C-6, the methoxy group at C-7, and the methyl groups at C-33, C-27, and C-25. Any changes to the macrolide ring can have unpredictable effects on binding and therefore make determination of the SAR for rapalogs problematic. Rapamycin contains no functional groups that ionize in the pH range 1-10 and is therefore rather insoluble in water. Despite its effectiveness in preclinical cancer models, its poor solubility in water, poor stability, and long elimination half-life made its parenteral use difficult, but the development of soluble rapamycin analogs overcame various barriers. Nonetheless, the rapamycin analogs that have been approved for human use are modified at the C-43 hydroxyl group and show improvement in pharmacokinetic parameters as well as in drug properties such as solubility. Rapamycin and temsirolimus have similar chemical structures and bind to FKBP12, though their mechanism of action differs. Temsirolimus is a dihydroxymethyl propionic acid ester of rapamycin and its first derivative. It is therefore more water-soluble, and owing to this water solubility it can be given by intravenous formulation. Everolimus has an O-2 hydroxyethyl chain substitution and deforolimus has a phosphine oxide substitution at position C-43 in the lactone ring of rapamycin. In deforolimus (ridaforolimus), the C-43 secondary alcohol moiety of the cyclohexyl group of rapamycin was substituted with phosphonate and phosphinate groups, preventing the high-affinity binding to mTOR and FKBP. Computational modelling studies helped the synthesis of the compound. Adverse events Treatment with mTOR inhibitors can be complicated by adverse events. The most frequently occurring adverse events are stomatitis, rash, anemia, fatigue, hyperglycemia/hypertriglyceridemia, decreased appetite, nausea, and diarrhea. Additionally, interstitial lung disease (ILD) is an adverse event of particular importance. mTORi-induced ILD is often asymptomatic (with ground-glass abnormalities on chest CT) or mildly symptomatic (with a non-productive cough), but it can be very severe as well. Even fatalities have been described. Careful diagnosis and treatment are therefore essential. Recently, a new diagnostic and therapeutic management approach has been proposed. Biomarkers Identification of predictive biomarkers of efficacy for tumor types that are sensitive to mTOR inhibitors remains a major issue. Possible predictive biomarkers for tumor response to mTOR inhibitors, as described in glioblastoma, breast, and prostate cancer cells, may be the differential expression of mTOR pathway proteins, PTEN, AKT, and S6. Thus far, these data are based on preclinical assays using in vitro cultured tumor cell lines, which suggest that the effects of mTOR inhibitors may be more pronounced in cancers displaying loss of PTEN function or PIK3CA mutations. However, the use of PTEN, PIK3CA mutations, and AKT phospho-status for predicting rapalog sensitivity has not been fully validated in the clinic. To date, attempts to identify biomarkers of rapalog response have been unsuccessful. 
Sensitivity Clinical and translational data suggest that sensitive tumor types, with adequate parameters and functional apoptosis pathways, might not need high doses of mTOR inhibitors to trigger apoptosis. In most cases, cancer cells might only be partially sensitive to mTOR inhibitors due to redundant signal transduction or a lack of functional apoptosis signaling pathways. In situations like this, high doses of mTOR inhibitors might be required. In a recent study of patients with renal cell carcinoma, resistance to temsirolimus was associated with low levels of p-AKT and p-S6K1, which play a key role in mTOR activation. These data strongly suggest that a number of tumors with an activated PI3K/AKT/mTOR signaling pathway do not respond to mTOR inhibitors. For future studies, it is recommended to exclude patients with low or negative p-AKT levels from trials with mTOR inhibitors. Current data are insufficient to predict the sensitivity of tumors to rapamycin. However, the existing data allow us to characterize tumors that might not respond to rapalogs. ATP-competitive mTOR kinase inhibitors These second-generation mTOR inhibitors bind to the ATP-binding site in the mTOR kinase domain, which is required for the functions of both mTORC1 and mTORC2, and result in downregulation of the mTOR signaling pathway. Because PI3K and mTORC2 regulate AKT phosphorylation, these inhibitors play a key role in minimizing the feedback activation of AKT. mTOR/PI3K dual inhibitors Several so-called mTOR/PI3K dual inhibitors (TPdIs) have been developed; they are in early-stage preclinical studies and show promising results. Their development has benefited from previous studies with PI3K-selective inhibitors. The activity of these small molecules differs from rapalog activity in that they block both mTORC1-dependent phosphorylation of S6K1 and mTORC2-dependent phosphorylation of the AKT Ser473 residue. Dual mTOR/PI3K inhibitors include dactolisib, voxtalisib, BGT226, SF1126, PKI-587, and many more. For example, Novartis has developed the compound NVP-BEZ235, which was reported to inhibit tumor growth in various preclinical models. It enhances the antitumor activity of some other drugs such as vincristine. Dactolisib seems to inhibit effectively both the wild-type and mutant forms of PIK3CA, which suggests its use against a wide range of tumor types. Studies have shown antiproliferative activity superior to that of rapalogs, and in vivo models have confirmed these potent antineoplastic effects of dual mTOR/PI3K inhibitors. These inhibitors target isoforms of PI3K (p110α, β and γ) along with the ATP-binding sites of mTORC1 and mTORC2 by blocking PI3K/AKT signaling, even in cancer types with mutations in this pathway. mTORC1/mTORC2 dual inhibitors (TORCdIs) New mTOR-specific inhibitors emerged from screening and drug discovery efforts. These compounds block the activity of both mTOR complexes and are called mTORC1/mTORC2 dual inhibitors. Compounds with these characteristics, such as sapanisertib (codenamed INK128), AZD8055, and AZD2014, have entered clinical trials. A series of these mTOR kinase inhibitors have been studied. Their structure is derived from a morpholino pyrazolopyrimidine scaffold. Improvements to this type of inhibitor have been made by exchanging the morpholines for bridged morpholines in pyrazolopyrimidine inhibitors, and the results showed a 26,000-fold increase in selectivity for mTOR. 
Limitations of new generation mTOR inhibitors Although the new generation of mTOR inhibitors holds great promise for anticancer therapy and is rapidly moving into clinical trials, there are many important issues that will determine their success in the clinic. First of all, predictive biomarkers for the benefit of these inhibitors are not available. It appears that genetic determinants predispose cancer cells to be sensitive or resistant to these compounds. Tumors that depend on the PI3K/mTOR pathway should respond to these agents, but it is unclear whether the compounds are effective in cancers with distinct genetic lesions. Inhibition of mTOR is a promising strategy for the treatment of a number of cancers. The limited clinical activity of selective mTORC1 agents has made them unlikely to have an impact in cancer treatment. The development of ATP-competitive catalytic inhibitors has provided agents with the ability to block both mTORC1 and mTORC2. Future The limitations of currently available rapalogs have led to new approaches to mTOR targeting. Studies suggest that mTOR inhibitors may have anticancer activity in many cancer types, such as RCC, neuroendocrine tumors, breast cancer, hepatocellular carcinoma, sarcoma, and large B-cell lymphoma. One major limitation for the development of mTOR inhibition therapy is that biomarkers are not presently available to predict which patients will respond to them. A better understanding of the molecular mechanisms involved in the response of cancer cells to mTOR inhibitors is still required before this can be possible. A way to overcome the resistance and improve the efficacy of mTOR-targeting agents may be stratification of patients and selection of drug combination therapies. This may lead to a more effective and personalized cancer therapy. Although further research is needed, mTOR targeting still remains an attractive and promising therapeutic option for the treatment of cancer. See also Mammalian target of rapamycin (mTOR) PI3K/AKT/mTOR pathway Akt/PKB signaling pathway PI3K inhibitor References Further reading Signal transduction Tor signaling pathway Human proteins Antineoplastic drugs Oncology Cancer treatments
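The human-equivalent doses quoted in the cognition section above are consistent with the widely used body-surface-area scaling rule, HED (mg/kg) = animal dose (mg/kg) * (animal Km / human Km), with Km factors of about 3 for mouse and 37 for human. The short sketch below only illustrates that arithmetic under those assumed Km values; the function name and printed doses are illustrative and not taken from the cited studies.

    def human_equivalent_dose(animal_dose_mg_per_kg, animal_km=3.0, human_km=37.0):
        """Body-surface-area scaling: HED = animal dose * (animal Km / human Km)."""
        return animal_dose_mg_per_kg * (animal_km / human_km)

    # Reproduces the mouse-to-human figures quoted above:
    print(human_equivalent_dose(2.24))  # ~0.18 mg/kg/day (quoted as about 0.19)
    print(human_equivalent_dose(20.0))  # ~1.62 mg/kg/day (quoted as about 1.6)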
MTOR inhibitors
[ "Chemistry", "Biology" ]
6,997
[ "Tor signaling pathway", "Neurochemistry", "Biochemistry", "Signal transduction" ]
37,127,641
https://en.wikipedia.org/wiki/Metal-centered%20cycloaddition%20reactions
A metal-centered cycloaddition is a subtype of the more general class of cycloaddition reactions. In such reactions "two or more unsaturated molecules unite directly to form a ring", incorporating a metal bonded to one or more of the molecules. Cycloadditions involving metal centers are a staple of organic and organometallic chemistry, and are involved in many industrially valuable synthetic processes. There are two general types of metal-centered cycloaddition reactions: those in which the metal is incorporated into the cycle (a metallocycle), and those in which the metal is external to the cycle. These can be further divided into "true" cycloadditions (those that take place in a concerted fashion), and formal cycloadditions (those that take place in a stepwise fashion). Beyond that, they are classified by the number of atoms contributed to the cycle by each of the participants. For example, olefin metathesis using a Grubbs catalyst typically involves a reversible [2+2] cycloaddition. A ruthenium alkylidene and an alkene (or alkyne) react to form a metallocycle. Roles of metals in cycloaddition reactions Conformational control A common role for a metal centre in cycloaddition reactions is to exert control over the conformation of the reactants. Metal ions are frequently a component of 1,3-dipolar cycloadditions and Diels-Alder reactions. A Lewis acid can coerce a diene into the reactive cisoid conformation, thereby catalyzing the Diels-Alder reaction. A crucial role of the metal in many cycloaddition reactions is to bind simultaneously to the reactants. This brings them into close proximity and encourages them to cyclize. The ligands associated with the metal can direct the approach of the reactants, providing control over regiochemistry and stereochemistry. Stabilization of reactive species Cycloadditions that require unstable synthons such as carbanions or carbenes are often possible using organometallic compounds. Several synthetic routes to cyclopropyl and cyclopropenyl compounds involve the cycloaddition of a metal carbene to an alkene or alkyne. Metal-stabilized allyl and pentadienyl complexes are used in [4+3] and [5+2] cycloadditions for preparing seven-membered rings. Metallocycles Alkylidenes and other carbene analogs participate readily in cycloaddition reactions. Cycloaddition reactions of ruthenium phosphinidenes with alkenes and alkynes are an active area of research and show promise as a catalytic cycle for hydrophosphination. Molecular orbital explanation Underlying any attempt to explain cycloaddition reactions is frontier molecular orbital theory, which describes the interaction between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) of the reactants. A cycloaddition will only proceed if the HOMO and LUMO have an allowed symmetry and are similar in energy. Metals play a crucial role in cycloaddition reactions because they can bind to unsaturated molecules, changing the symmetries and energy levels of the HOMO and/or LUMO. The Woodward-Hoffmann rules and Green-Davies-Mingos rules can provide some indication of the effects of metal bonding on cycloaddition reactions. As an example, free benzene is extremely unreactive in cycloadditions due to its aromaticity. Coordination of benzene to a highly reduced tricarbonylmanganese centre allows the benzene to undergo cycloaddition with diphenylketene. 
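The frontier-orbital argument above can be made concrete with a textbook Hückel calculation. The sketch below is a minimal illustration (not taken from the cited sources): it diagonalizes the Hückel matrices of ethylene and butadiene, the classic Diels-Alder partners, and prints their HOMO and LUMO levels in units of the resonance integral, the quantities whose symmetry and energy matching the frontier-orbital analysis relies on.

    import numpy as np

    def huckel_levels(n_carbons):
        """Hückel pi-levels of a linear polyene; alpha set to 0, beta to -1,
        so energies are returned in units of |beta| relative to alpha."""
        h = np.zeros((n_carbons, n_carbons))
        for i in range(n_carbons - 1):
            h[i, i + 1] = h[i + 1, i] = -1.0
        return np.sort(np.linalg.eigvalsh(h))

    for name, n in [("ethylene (dienophile)", 2), ("butadiene (diene)", 4)]:
        levels = huckel_levels(n)
        n_occupied = n // 2                 # one pi electron per carbon
        homo, lumo = levels[n_occupied - 1], levels[n_occupied]
        print(f"{name}: HOMO = {homo:+.2f}, LUMO = {lumo:+.2f}")

    # The smaller the HOMO-LUMO separation between the two partners, the stronger
    # the frontier-orbital interaction; a Lewis acid or metal centre bound to one
    # partner shifts its levels and can shrink that separation, as described above.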
Examples [2+2] cycloaddition of two alkynes Although cyclobutadienes can only exist briefly in the free state, they can exist indefinitely as metal ligands. They can be formed as ligands in-situ by the [2+2] cycloaddition of sterically bulky alkynes bound to a metal. Benzannulation The Dötz reaction is a formal [3+2+1] cycloaddition of two alkynes, a carbene, and a carbonyl ligand to form a benzene ring. Formal [5+4] cycloaddition An unusual formal [5+4] cycloaddition was reported by Kreiter et al. Nine-membered rings are unusual and only a handful of synthetic routes to rings of this size are known. See also Cycloaddition reaction Frontier Molecular Orbital Theory Organometallic chemistry Pericyclic reaction 1,3-Dipolar cycloaddition Diels-Alder reaction References Cycloadditions Reaction mechanisms
Metal-centered cycloaddition reactions
[ "Chemistry" ]
1,042
[ "Reaction mechanisms", "Chemical kinetics", "Physical organic chemistry" ]
37,131,577
https://en.wikipedia.org/wiki/Elliptic%20Gauss%20sum
In mathematics, an elliptic Gauss sum is an analog of a Gauss sum depending on an elliptic curve with complex multiplication. The quadratic residue symbol in a Gauss sum is replaced by a higher residue symbol such as a cubic or quartic residue symbol, and the exponential function in a Gauss sum is replaced by an elliptic function. They were introduced by , at least in the lemniscate case when the elliptic curve has complex multiplication by , but seem to have been forgotten or ignored until the paper . Example gives the following example of an elliptic Gauss sum, for the case of an elliptic curve with complex multiplication by . where The sum is over residues mod whose representatives are Gaussian integers is a positive integer is a positive integer dividing is a rational prime congruent to 1 mod 4 where is the sine lemniscate function, an elliptic function. is the th power residue symbol in with respect to the prime of is the field is the field is a primitive th root of 1 is a primary prime in the Gaussian integers with norm is a prime in the ring of integers of lying above with inertia degree 1 References Algebraic number theory Elliptic curves
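As background for the definition above, the classical object being generalized can be computed directly: an ordinary quadratic Gauss sum combines the quadratic residue symbol with the exponential function, exactly the two ingredients that the elliptic Gauss sum replaces with a higher residue symbol and an elliptic function. The sketch below is illustrative only and is not drawn from the sources cited here; it evaluates the quadratic Gauss sum for a few primes congruent to 1 mod 4 and checks the classical fact that its value is the positive square root of the prime.

    from cmath import exp, pi

    def quadratic_gauss_sum(p):
        """Sum over n of (n/p) * exp(2*pi*i*n/p), with (n/p) the Legendre symbol."""
        def legendre(n):
            r = pow(n, (p - 1) // 2, p)   # Euler's criterion
            return -1 if r == p - 1 else r
        return sum(legendre(n) * exp(2j * pi * n / p) for n in range(1, p))

    for p in (5, 13, 17):                 # rational primes congruent to 1 mod 4
        g = quadratic_gauss_sum(p)
        print(p, round(g.real, 6), round(g.imag, 6))   # real part ~ sqrt(p), imaginary part ~ 0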
Elliptic Gauss sum
[ "Mathematics" ]
240
[ "Algebraic number theory", "Number theory" ]
37,135,857
https://en.wikipedia.org/wiki/C20H28N4O2
The molecular formula C20H28N4O2 may refer to: AB-CHMINACA, an indazole-based synthetic cannabinoid Rolofylline, an experimental diuretic which acts as a selective adenosine A1 receptor antagonist Molecular formulas
C20H28N4O2
[ "Physics", "Chemistry" ]
74
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
37,136,779
https://en.wikipedia.org/wiki/Experience%20architecture
Experience architecture (XA) is the art of articulating a clear user story/journey through an information architecture, interaction design, and experience design that an end user navigates across products and services offered by the client or as intended by the designer. This visual representation is intended not only to highlight the systems that the end user will touch and interact with, but also the key interactions that the user will have with the internal systems or back-end structure of an application. It provides a holistic view of the experience, vertical knowledge of the industry, the systems, documentation, and analysis of the points that should be focused on when delivering that experience. Experience architecture provides an overall direction for user experience actions across projects. Experience architect An experience architect (also known as an XA) is a designer authoring, planning, and designing the experience architecture deliverables. An XA brings together a variety of interaction and digital design skills spanning human behaviour, user-centered design (UCD), and interaction design. This person is also responsible for connecting human emotions with the end product, creating that connection by ensuring that the experience meets or exceeds the needs and objectives of the intended or wider users. The XA integrates the results into actionable requirements. They are responsible for conceptualising and delivering design deliverables that meet business and usability objectives by identifying the modules, templates, and structure necessary for end-product integrations. Experience architect deliverables Experience architects are responsible for documenting and delivering a series of project manuals, guidelines, and specifications. These include all or some of the practices below. Persona Scenario User story User journey Process flow diagram Information architecture Wireframes ranging from high-fidelity to low-fidelity Content strategy Prototype Functional specification Inclusive experience A great design experience must be self-explanatory and emphasize a user journey from step to step in a minimalistic manner. In broader terms, it is a branch of inclusive design and universal design. The purpose of inclusion in the context of experience architecture is to create technology and user interfaces accessible to wider audiences, inclusive of the full range of human diversity with respect to ability, gender, age, and other forms of human difference. This methodology aims to achieve an independent experience and accessibility for users who are aging, able-bodied, or living with disabilities or impairments, whether present from birth or acquired through later events. It requires creating interfaces, or prototyping lower-level designs for the physical world, that make actions and steps more self-explanatory, thereby removing layers of prerequisite requirements to access any digital system. Inclusive experience is an emerging and developing set of skills and standard elements in applications that are mass-produced for consumers, government, and other public domains. Education The first Experience Architecture program began at Michigan State University. Developed by Liza Potts, Rebecca Tegtmeyer, and Bill Hart-Davidson, this program launched in 2014. 
Bachelor programs BA Experience Architecture – Michigan State University Related areas Application architecture Business analyst Card sorting Content strategy Contextual inquiry Data architecture Data management Design thinking Experiential interior design Human factors Information architecture Information design Information system Interaction design Participatory design Semantic Web Service design Taxonomy Usability testing User-centered design User experience design Design
Experience architecture
[ "Engineering" ]
645
[ "Design" ]
37,138,290
https://en.wikipedia.org/wiki/Development%20testing
Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software verification practices. Overview Development testing is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replacing traditional QA focuses, it augments them. Development testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process. Purposes and benefits Development testing is applied for the following main purposes: Quality assurance—To improve the overall development and test process by building quality and security into the software (rather than trying to test defects/vulnerabilities out). Industry or Regulatory Compliance—To achieve compliance with industry or regulatory compliance initiatives (e.g., FDA, IEC 62304, DO-178B, DO-178C, ISO 26262, IEC 61508, etc.) that commonly require strict risk reduction as well as bidirectional requirements traceability (e.g., between requirements, tests, code reviews, source code, defects, tasks, etc.) VDC research reports that the standardized implementation of development testing processes within an overarching standardized process not only improves software quality (by aligning development activities with proven best practices) but also increases project predictability. voke research reports that development testing makes software more predictable, traceable, visible, and transparent throughout the software development lifecycle. Key principles In each of the above applications, development testing starts by defining policies that express the organization's expectations for reliability, security, performance, and regulatory compliance. Then, after the team is trained on these policies, development testing practices are implemented to align software development activities with these policies. These development testing practices include: Practices that prevent as many defects as possible through a Deming-inspired approach that promotes reducing the opportunity for error via root cause analysis. Practices that expose defects immediately after they are introduced—when finding and fixing defects is fastest, easiest, and cheapest. The emphasis on applying a broad spectrum of defect prevention and defect detection practices is based on the premise that different development testing techniques are tuned to expose different types of defects at different points in the software development lifecycle, so applying multiple techniques in concert decreases the risk of defects slipping through the cracks. The importance of applying a broad set of practices is confirmed by Boehm and Basili in the often-referenced "Software Defect Reduction Top 10 List." Static analysis The term "development testing" has occasionally been used to describe the application of static analysis tools. 
Numerous industry leaders have taken issue with this conflation because static analysis is not technically testing; even static analysis that "covers" every line of code is incapable of validating that the code does what it is supposed to do—or of exposing certain types of defects or security vulnerabilities that manifest themselves only as software is dynamically executed. Although many warn that static analysis alone should not be considered a silver bullet or panacea, most industry experts agree that static analysis is a proven method for eliminating many security, reliability, and performance defects. In other words, while static analysis is not the same as development testing, it is commonly considered a component of development testing. Additional activities In addition to various implementations of static analysis (such as flow analysis) and unit testing, development testing also includes peer code review as a primary quality activity. Code review is widely considered one of the most effective defect detection and prevention methods in software development. See also Unit testing Software testing Integration testing Functional Testing Regression Testing Software performance testing User Acceptance Testing (UAT) Continuous Integration/Continuous deployment (CI/CD) References Software testing
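As a concrete illustration of the unit-testing practice named above, the minimal sketch below shows a small function together with the tests written alongside it during construction; the function and test names are hypothetical and not taken from any cited source. Run as part of the developer's build, such tests expose a defect at the moment it is introduced, before the code is promoted to QA.

    import unittest

    def percentile(values, p):
        """Return the p-th percentile (0-100) of a list, using the nearest-rank rule."""
        if not values or not 0 <= p <= 100:
            raise ValueError("values must be non-empty and 0 <= p <= 100")
        ordered = sorted(values)
        rank = max(1, round(p / 100 * len(ordered)))
        return ordered[rank - 1]

    class PercentileTest(unittest.TestCase):
        def test_median_of_odd_length_list(self):
            self.assertEqual(percentile([3, 1, 2], 50), 2)

        def test_rejects_empty_input(self):
            with self.assertRaises(ValueError):
                percentile([], 50)

    if __name__ == "__main__":
        unittest.main()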
Development testing
[ "Engineering" ]
813
[ "Software engineering", "Software testing" ]
3,864,143
https://en.wikipedia.org/wiki/Igneous%20differentiation
In geology, igneous differentiation, or magmatic differentiation, is an umbrella term for the various processes by which magmas undergo bulk chemical change during the partial melting process, cooling, emplacement, or eruption. The sequence of (usually increasingly silicic) magmas produced by igneous differentiation is known as a magma series. Definitions Primary melts When a rock melts to form a liquid, the liquid is known as a primary melt. Primary melts have not undergone any differentiation and represent the starting composition of a magma. In nature, primary melts are rarely seen. Some leucosomes of migmatites are examples of primary melts. Primary melts derived from the mantle are especially important and are known as primitive melts or primitive magmas. By finding the primitive magma composition of a magma series, it is possible to model the composition of the rock from which a melt was formed, which is important because we have little direct evidence of the Earth's mantle. Parental melts Where it is impossible to find the primitive or primary magma composition, it is often useful to attempt to identify a parental melt. A parental melt is a magma composition from which the observed range of magma chemistries has been derived by the processes of igneous differentiation. It need not be a primitive melt. For instance, a series of basalt lava flows is assumed to be related to one another. A composition from which they could reasonably be produced by fractional crystallization is termed a parental melt. To prove this, fractional crystallization models would be produced to test the hypothesis that they share a common parental melt. Cumulate rocks Fractional crystallization and accumulation of crystals formed during the differentiation process of a magmatic event are known as cumulate rocks, and those parts are the first which crystallize out of the magma. Identifying whether a rock is a cumulate or not is crucial for understanding if it can be modelled back to a primary melt or a primitive melt, and identifying whether the magma has dropped out cumulate minerals is equally important even for rocks which carry no phenocrysts. Underlying causes of differentiation The primary cause of change in the composition of a magma is cooling, which is an inevitable consequence of the magma being formed and migrating from the site of partial melting into an area of lower stress - generally a cooler volume of the crust. Cooling causes the magma to begin to crystallize minerals from the melt or liquid portion of the magma. Most magmas are a mixture of liquid rock (melt) and crystalline minerals (phenocrysts). Contamination is another cause of magma differentiation. Contamination can be caused by assimilation of wall rocks, mixing of two or more magmas or even by replenishment of the magma chamber with fresh, hot magma. The whole gamut of mechanisms for differentiation has been referred to as the FARM process, which stands for fractional crystallization, assimilation, replenishment and magma mixing. Fractional crystallization of igneous rocks Fractional crystallization is the removal and segregation from a melt of mineral precipitates, which changes the composition of the melt. This is one of the most important geochemical and physical processes operating within the Earth's crust and mantle. Fractional crystallization in silicate melts (magmas) is a very complex process compared to chemical systems in the laboratory because it is affected by a wide variety of phenomena. 
Prime amongst these are the composition, temperature, and pressure of a magma during its cooling. The composition of a magma is the primary control on which mineral is crystallized as the melt cools down past the liquidus. For instance in mafic and ultramafic melts, the MgO and SiO2 contents determine whether forsterite olivine is precipitated or whether enstatite pyroxene is precipitated. Two magmas of similar composition and temperature at different pressure may crystallize different minerals. An example is high-pressure and high-temperature fractional crystallization of granites to produce single-feldspar granite, and low-pressure low-temperature conditions which produce two-feldspar granites. The partial pressure of volatile phases in silicate melts is also of prime importance, especially in near-solidus crystallization of granites. Assimilation Assimilation can be broadly defined as a process where a mass of magma wholly or partially homogenizes with materials derived from the wall rock of the magma body. Assimilation is a popular mechanism to partly explain the felsification of ultramafic and mafic magmas as they rise through the crust: a hot primitive melt intruding into a cooler, felsic crust will melt the crust and mix with the resulting melt. This then alters the composition of the primitive magma. Also, pre-existing mafic host rocks can be assimilated by very hot primitive magmas. Effects of assimilation on the chemistry and evolution of magma bodies are to be expected, and have been clearly proven in many places. In the early 20th century there was a lively discussion on the relative importance of the process in igneous differentiation. More recent research has shown, however, that assimilation has a fundamental role in altering the trace element and isotopic composition of magmas, in formation of some economically important ore deposits, and in causing volcanic eruptions. Replenishment When a melt undergoes cooling along the liquid line of descent, the results are limited to the production of a homogeneous solid body of intrusive rock, with uniform mineralogy and composition, or a partially differentiated cumulate mass with layers, compositional zones and so on. This behaviour is fairly predictable and easy enough to prove with geochemical investigations. In such cases, a magma chamber will form a close approximation of the ideal Bowen's reaction series. However, most magmatic systems are polyphase events, with several pulses of magmatism. In such a case, the liquid line of descent is interrupted by the injection of a fresh batch of hot, undifferentiated magma. This can cause extreme fractional crystallisation because of three main effects: Additional heat provides additional energy to allow more vigorous convection, allows resorption of existing mineral phases back into the melt, and can cause a higher-temperature form of a mineral or other higher-temperature minerals to begin precipitating Fresh magma changes the composition of the melt, changing the chemistry of the phases which are being precipitated. For instance, plagioclase conforms to the liquid line of descent by forming initial anorthite which, if removed, changes the equilibrium mineral composition to oligoclase or albite. Replenishment of the magma can see this trend reversed, so that more anorthite is precipitated atop cumulate layers of albite. 
Fresh magma destabilises minerals which are precipitating as solid solution series or on a eutectic; a change in composition and temperature can cause extremely rapid crystallisation of certain mineral phases which are undergoing a eutectic crystallisation phase. Magma mixing Magma mixing is the process by which two magmas meet, comingle, and form a magma of a composition somewhere between the two end-member magmas. Magma mixing is a common process in volcanic magma chambers, which are open-system chambers where magmas enter the chamber, undergo some form of assimilation, fractional crystallisation and partial melt extraction (via eruption of lava), and are replenished. Magma mixing also tends to occur at deeper levels in the crust and is considered one of the primary mechanisms for forming intermediate rocks such as monzonite and andesite. Here, due to heat transfer and increased volatile flux from subduction, the silicic crust melts to form a felsic magma (essentially granitic in composition). These granitic melts are known as an underplate. Basaltic primary melts formed in the mantle beneath the crust rise and mingle with the underplate magmas, the result being part-way between basalt and rhyolite; literally an 'intermediate' composition. Other mechanisms of differentiation Interface entrapment Convection in a large magma chamber is subject to the interplay of forces generated by thermal convection and the resistance offered by friction, viscosity and drag on the magma offered by the walls of the magma chamber. Often near the margins of a magma chamber which is convecting, cooler and more viscous layers form concentrically from the outside in, defined by breaks in viscosity and temperature. This forms laminar flow, which separates several domains of the magma chamber which can begin to differentiate separately. Flow banding is the result of a process of fractional crystallization which occurs by convection, if the crystals which are caught in the flow-banded margins are removed from the melt. The friction and viscosity of the magma causes phenocrysts and xenoliths within the magma or lava to slow down near the interface and become trapped in a viscous layer. This can change the composition of the melt in large intrusions, leading to differentiation. Partial melt extraction With reference to the definitions, above, a magma chamber will tend to cool down and crystallize minerals according to the liquid line of descent. When this occurs, especially in conjunction with zonation and crystal accumulation, and the melt portion is removed, this can change the composition of a magma chamber. In fact, this is basically fractional crystallization, except in this case we are observing a magma chamber which is the remnant left behind from which a daughter melt has been extracted. If such a magma chamber continues to cool, the minerals it forms and its overall composition will not match a sample liquid line of descent or a parental magma composition. Typical behaviours of magma chambers It is worth reiterating that magma chambers are not usually static single entities. The typical magma chamber is formed from a series of injections of melt and magma, and most are also subject to some form of partial melt extraction. Granite magmas are generally much more viscous than mafic magmas and are usually more homogeneous in composition. This is generally considered to be caused by the viscosity of the magma, which is orders of magnitude higher than mafic magmas. 
The higher viscosity means that, when melted, a granitic magma will tend to move in a larger concerted mass and be emplaced as a larger mass because it is less fluid and able to move. This is why granites tend to occur as large plutons, and mafic rocks as dikes and sills. Granites are cooler and are therefore less able to melt and assimilate country rocks. Wholesale contamination is therefore minor and unusual, although mixing of granitic and basaltic melts is not unknown where basalt is injected into granitic magma chambers. Mafic magmas are more liable to flow, and are therefore more likely to undergo periodic replenishment of a magma chamber. Because they are more fluid, crystal precipitation occurs much more rapidly, resulting in greater changes by fractional crystallisation. Higher temperatures also allow mafic magmas to assimilate wall rocks more readily and therefore contamination is more common and better developed. Dissolved gases All igneous magmas contain dissolved gases (water, carbonic acid, hydrogen sulfide, chlorine, fluorine, boric acid, etc.). Of these water is the principal, and was formerly believed to have percolated downwards from the Earth's surface to the heated rocks below, but is now generally admitted to be an integral part of the magma. Many peculiarities of the structure of the plutonic rocks as contrasted with the lavas may reasonably be accounted for by the operation of these gases, which were unable to escape as the deep-seated masses slowly cooled, while they were promptly given up by the superficial effusions. The acid plutonic or intrusive rocks have never been reproduced by laboratory experiments, and the only successful attempts to obtain their minerals artificially have been those in which special provision was made for the retention of the "mineralizing" gases in the crucibles or sealed tubes employed. These gases often do not enter into the composition of the rock-forming minerals, for most of these are free from water, carbonic acid, etc. Hence as crystallization goes on the residual melt must contain an ever-increasing proportion of volatile constituents. It is conceivable that in the final stages the still uncrystallized part of the magma has more resemblance to a solution of mineral matter in superheated steam than to a dry igneous fusion. Quartz, for example, is the last mineral to form in a granite. It bears much of the stamp of the quartz which we know has been deposited from aqueous solution in veins, etc. It is at the same time the most infusible of all the common minerals of rocks. Its late formation shows that in this case it arose at comparatively low temperatures and points clearly to the special importance of the gases of the magma as determining the sequence of crystallization. When solidification is nearly complete the gases can no longer be retained in the rock and make their escape through fissures towards the surface. They are powerful agents in attacking the minerals of the rocks which they traverse, and instances of their operation are found in the kaolinization of granites, tourmalinization and formation of greisen, deposition of quartz veins, and the group of changes known as propylitization. These "pneumatolytic" processes are of the first importance in the genesis of many ore deposits. They are a real part of the history of the magma itself and constitute the terminal phases of the volcanic sequence. 
Quantifying igneous differentiation There are several methods of directly measuring and quantifying igneous differentiation processes: whole rock geochemistry of representative samples, to track changes and evolution of the magma systems; using the above, calculating normative mineralogy and investigating trends; trace element geochemistry; isotope geochemistry; and investigating the contamination of magma systems by wall rock assimilation using radiogenic isotopes. In all cases, the primary and most valuable method for identifying magma differentiation processes is mapping the exposed rocks, tracking mineralogical changes within the igneous rocks and describing field relationships and textural evidence for magma differentiation. Clinopyroxene thermobarometry can be used to determine pressures and temperatures of magma differentiation. See also References External links COMAGMAT Software package designed to facilitate thermodynamic modeling of igneous differentiation MELTS Software package designed to facilitate thermodynamic modeling of phase equilibria in magmatic systems. Geological processes Geochemistry
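One standard way to quantify the effect of fractional crystallization on trace elements, complementing the methods listed above, is the Rayleigh fractionation law, C_L = C_0 * F^(D-1), where C_0 is the concentration in the parental melt, F the fraction of melt remaining, and D the bulk partition coefficient. The sketch below is an illustrative calculation with assumed values rather than data from any particular rock suite: it shows an incompatible element (D < 1) becoming enriched and a compatible element (D > 1) becoming depleted in the residual liquid as crystallization proceeds.

    def rayleigh_liquid(c0, f_melt, bulk_d):
        """Trace-element concentration in the residual liquid: C_L = C_0 * F**(D - 1)."""
        return c0 * f_melt ** (bulk_d - 1.0)

    c0 = 100.0                                  # ppm in the parental melt (assumed)
    for f in (1.0, 0.8, 0.6, 0.4, 0.2):         # fraction of melt remaining
        incompatible = rayleigh_liquid(c0, f, bulk_d=0.1)   # e.g. Rb-like behaviour
        compatible = rayleigh_liquid(c0, f, bulk_d=5.0)     # e.g. Ni-like behaviour
        print(f"F={f:.1f}  incompatible={incompatible:6.1f} ppm  compatible={compatible:7.2f} ppm")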
Igneous differentiation
[ "Chemistry" ]
2,991
[ "nan" ]
3,864,203
https://en.wikipedia.org/wiki/Point%20of%20zero%20charge
The point of zero charge (pzc) is generally described as the pH at which the net electrical charge of the particle surface (i.e. the adsorbent's surface) is equal to zero. This concept has been introduced in studies dealing with colloidal flocculation to explain why pH affects the phenomenon. A related concept in electrochemistry is the electrode potential at the point of zero charge. Generally, the pzc in electrochemistry is the value of the negative decimal logarithm of the activity of the potential-determining ion in the bulk fluid. The pzc is of fundamental importance in surface science. For example, in the field of environmental science, it determines how easily a substrate is able to adsorb potentially harmful ions. It also has countless applications in the technology of colloids, e.g., flotation of minerals. Therefore, the pzc value has been examined in many applications of adsorption to environmental science. The pzc value is typically obtained by titrations, and several titration methods have been developed. Related values associated with soil characteristics exist along with the pzc value, including zero point of charge (zpc), point of zero net charge (pznc), etc. Term definition of point of zero charge The point of zero charge is the pH value at which the net surface charge of the adsorbent is equal to zero. This concept was introduced as interest grew in the pH of the solution during adsorption experiments. The reason is that the adsorption of some substances is very dependent on pH. The pzc value is determined by the characteristics of an adsorbent. For example, the surface charge of an adsorbent is described by the ions that lie on the surface of the particle (adsorbent) structure. At a lower pH, hydrogen ions (protons, H+) would be adsorbed more strongly than other cations (the adsorbate), so that those other cations would be adsorbed less than in the case of a negatively charged particle. On the other hand, if the surface is positively charged and the pH is increased, anions will be adsorbed less as the pH increases. From the point of view of the adsorbent, if the pH of the solution is below the pzc value, the surface charge of the adsorbent becomes positive, so that anions can be adsorbed. Conversely, if the pH is above the pzc value, the surface charge becomes negative, so that cations can be adsorbed. For example, the electrical charge on the surface of silver iodide (AgI) crystals can be determined by the concentration of iodide ions present in the solution above the crystals. Then, the pzc value of the AgI surface will be described as a function of the concentration of I− in the solution (or by the negative decimal logarithm of this concentration, -log10 [I–] = pI−). Relation of pzc to isoelectric point The pzc is the same as the isoelectric point (iep) if there is no adsorption of ions other than the potential-determining ions. This is often the case for pure ("pristine surface") oxides in suspension in water. In the presence of specific adsorption, pzc and isoelectric point generally have different values. Method of experimental determination The pzc is typically obtained by acid-base titrations of colloidal dispersions while monitoring the electrophoretic mobility of the particles and the pH of the suspension. Several titrations are required to distinguish pzc from iep, using different supporting electrolytes (including varying the electrolyte ionic strength). 
Once satisfactory curves are obtained (acid/base amount versus pH, and pH versus zeta potential), the pzc is established as the common intersection point (cip) of the lines. Therefore, pzc is also sometimes referred to as cip. Related abbreviations Besides pzc, iep, and cip, there are also numerous other terms used in the literature, usually expressed as initialisms, with identical or (confusingly) near-identical meaning: zero point of charge (zpc), point of zero net charge (pznc), point of zero net proton charge (pznpc), pristine point of zero charge (ppzc), point of zero salt effect (pzse), zero point of titration (zpt) of colloidal dispersion, and isoelectric point of the solid (ieps) and point of zero surface tension (pzst or pzs). Application in electrochemistry In electrochemistry, the electrode-electrolyte interface is generally charged. If the electrode is polarizable, then its surface charge depends on the electrode potential. IUPAC defines the potential at the point of zero charge as the potential of an electrode (against a defined reference electrode) at which one of the charges defined is zero. The potential of zero charge is used for the determination of the absolute electrode potential in a given electrolyte. IUPAC also defines the potential difference with respect to the potential of zero charge as Epzc = E − Eσ=0, where: Epzc is the electrode potential difference with respect to the point of zero charge, Eσ=0; E is the potential of the same electrode against a defined reference electrode, in volts; and Eσ=0 is the potential of the same electrode when the surface charge is zero, in the absence of specific adsorption other than that of the solvent, against the reference electrode as used above, in volts. The structure of the electrolyte at the electrode surface can also depend on the surface charge, with a change around the pzc potential. For example, on a platinum electrode, water molecules have been reported to be weakly hydrogen-bonded with "oxygen-up" orientation on negatively charged surfaces, and strongly hydrogen-bonded with nearly flat orientation at positively charged surfaces. At the pzc, the colloidal system exhibits zero zeta potential (that is, the particles remain stationary in an electric field), minimum stability (exhibiting a maximum coagulation or flocculation rate), maximum solubility of the solid phase, maximum viscosity of the dispersion, and other peculiarities. Application in environmental geochemistry In the field of environmental science, adsorption is involved in many techniques that can eliminate pollutants, and it governs the concentration of chemicals in soils and/or the atmosphere. When studying pollutant degradation or a sorption process, it is important to examine the pzc value related to adsorption. For example, natural and organic substrates including wood ash, sawdust, etc. are used as adsorbents to eliminate harmful heavy metals such as arsenic, cobalt, and mercury ions in contaminated neutral drainage (CND), a passive reactor that makes metal adsorption possible with low-cost materials. Therefore, the pzc values of the organic substrates were evaluated to optimize the selection of materials in CND. Another example is the emission of nitrous acid, which controls the atmosphere's oxidative capacity. Different soil pH values lead to different surface charges of minerals, so the emission of nitrous acid varies, further impacting the biological cycle involving nitrous acid species. Further reading Kosmulski M. (2009). Surface Charging and Points of Zero Charge. 
CRC Press; 1st edition (Hardcover). References Physical chemistry Colloidal chemistry
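To make the titration-based determination described above concrete, the sketch below uses synthetic, assumed zeta-potential curves (not data from the cited literature) and interpolates, for each supporting-electrolyte concentration, the pH at which the zeta potential crosses zero; if those crossings coincide, that pH is the common intersection point taken as the pzc in the absence of specific adsorption.

    def zero_crossing_ph(ph_values, zeta_values):
        """Linearly interpolate the pH at which the zeta potential changes sign."""
        points = list(zip(ph_values, zeta_values))
        for (ph1, z1), (ph2, z2) in zip(points, points[1:]):
            if z1 == 0:
                return ph1
            if z1 * z2 < 0:
                return ph1 + (ph2 - ph1) * (-z1) / (z2 - z1)
        return None

    ph = [3, 4, 5, 6, 7, 8]
    curves = {                       # synthetic zeta potentials (mV) per ionic strength
        "0.001 M": [35, 22, 10, -4, -18, -30],
        "0.01 M": [28, 18, 8, -3, -15, -25],
        "0.1 M": [20, 13, 6, -2, -11, -19],
    }
    for label, zeta in curves.items():
        print(label, "zero-crossing pH:", round(zero_crossing_ph(ph, zeta), 2))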
Point of zero charge
[ "Physics", "Chemistry" ]
1,590
[ "Colloidal chemistry", "Applied and interdisciplinary physics", "Colloids", "Surface science", "nan", "Physical chemistry" ]