| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
30,733,508 | https://en.wikipedia.org/wiki/Aleksandrov%E2%80%93Rassias%20problem | The theory of isometries in the framework of Banach spaces has its beginning in a paper by Stanisław Mazur and Stanisław M. Ulam in 1932. They proved the Mazur–Ulam theorem stating that every isometry of a normed real linear space onto a normed real linear space is a linear mapping up to translation. In 1970, Aleksandr Danilovich Aleksandrov asked whether the existence of a single distance that is preserved by a mapping implies that it is an isometry, as it does for Euclidean spaces by the Beckman–Quarles theorem. Themistocles M. Rassias posed the following problem:
Aleksandrov–Rassias Problem. If X and Y are normed linear spaces and if T : X → Y is a continuous and/or surjective mapping such that ‖T(x) − T(y)‖ = 1 whenever the vectors x and y in X satisfy ‖x − y‖ = 1 (the distance one preserving property, or DOPP), is T then necessarily an isometry?
A number of researchers have attempted to solve this problem in the mathematical literature.
References
P. M. Pardalos, P. G. Georgiev and H. M. Srivastava (eds.), Nonlinear Analysis. Stability, Approximation, and Inequalities. In honor of Themistocles M. Rassias on the occasion of his 60th birthday, Springer, New York, 2012.
A. D. Aleksandrov, Mapping of families of sets, Soviet Math. Dokl. 11(1970), 116–120.
On the Aleksandrov-Rassias problem for isometric mappings
On the Aleksandrov-Rassias problem and the geometric invariance in Hilbert spaces
S.-M. Jung and K.-S. Lee, An inequality for distances between 2n points and the Aleksandrov–Rassias problem, J. Math. Anal. Appl. 324(2)(2006), 1363–1369.
S. Xiang, Mappings of conservative distances and the Mazur–Ulam theorem, J. Math. Anal. Appl. 254(1)(2001), 262–274.
S. Xiang, On the Aleksandrov problem and Rassias problem for isometric mappings, Nonlinear Functional Analysis and Appls. 6(2001), 69–77.
S. Xiang, On approximate isometries, in : Mathematics in the 21st Century (eds. K. K. Dewan and M. Mustafa), Deep Publs. Ltd., New Delhi, 2004, pp. 198–210.
Mathematical analysis
Metric geometry
Functional equations | Aleksandrov–Rassias problem | [
"Mathematics"
] | 523 | [
"Mathematical analysis",
"Mathematical objects",
"Functional equations",
"Equations"
] |
30,733,523 | https://en.wikipedia.org/wiki/Cauchy%E2%80%93Rassias%20stability | A classical problem of Stanislaw Ulam in the theory of functional equations is the following: When is it true that a function which approximately satisfies a functional equation E must be close to an exact solution of E? In 1941, Donald H. Hyers gave a partial affirmative answer to this question in the context of Banach spaces. This was the first significant breakthrough and a step towards more studies in this domain of research. Since then, a large number of papers have been published in connection with various generalizations of Ulam's problem and Hyers' theorem. In 1978, Themistocles M. Rassias succeeded in extending the Hyers' theorem by considering an unbounded Cauchy difference. He was the first to prove the stability of the linear mapping in Banach spaces. In 1950, T. Aoki had provided a proof of a special case of the Rassias' result when the given function is additive. For an extensive presentation of the stability of functional equations in the context of Ulam's problem, the interested reader is referred to the recent book of S.-M. Jung, published by Springer, New York, 2011 (see references below).
Th. M. Rassias' theorem attracted a number of mathematicians and stimulated research in the stability theory of functional equations. In recognition of the great influence of S. M. Ulam, D. H. Hyers and Th. M. Rassias on the study of stability problems of functional equations, this concept is called Hyers–Ulam–Rassias stability.
In the special case when Ulam's problem accepts a solution for Cauchy's functional equation f(x + y) = f(x) + f(y), the equation E is said to satisfy Cauchy–Rassias stability. The name refers to Augustin-Louis Cauchy and Themistocles M. Rassias.
References
P. M. Pardalos, P. G. Georgiev and H. M. Srivastava (eds.), Nonlinear Analysis. Stability, Approximation, and Inequalities. In honor of Themistocles M. Rassias on the occasion of his 60th birthday, Springer, New York, 2012.
D. H. Hyers, On the stability of the linear functional equation, Proc. Natl. Acad. Sci. USA, 27(1941), 222–224.
Th. M. Rassias, On the stability of the linear mapping in Banach spaces, Proceedings of the American Mathematical Society 72(1978), 297–300. [Translated into Chinese and published in: Mathematical Advance in Translation, Chinese Academy of Sciences 4 (2009), 382–384.]
Th. M. Rassias, On the stability of functional equations and a problem of Ulam, Acta Applicandae Mathematicae, 62(1)(2000), 23–130.
S.-M. Jung, Hyers–Ulam–Rassias Stability of Functional Equations in Nonlinear Analysis, Springer, New York, 2011.
T. Aoki, On the stability of the linear transformation in Banach spaces, J. Math. Soc. Jpn., 2(1950), 64–66.
C.-G. Park, Generalized quadratic mappings in several variables, Nonlinear Anal., 57(2004), 713–722.
J.-R. Lee and D.-Y. Shin, On the Cauchy-Rassias stability of a generalized additive functional equation, J. Math. Anal. Appl. 339(1)(2008), 372–383.
C. Baak, Cauchy–Rassias stability of Cauchy–Jensen additive mappings in Banach spaces, Acta Math. Sinica (English Series), 15(1)(1999), 1–11.
C.-G. Park, Homomorphisms between Lie JC*-algebras and Cauchy–Rassias stability of Lie JC*-algebra derivations, J. Lie Theory, 15(2005), 393–414.
J.-R. Lee, D.-Y. Shin, On the Cauchy-Rassias stability of the Trif functional equation in C*-algebras. J. Math. Anal. Appl. 296(1)(2004), 351–363.
C. Baak, H.-Y. Chu and M. S. Moslehian, On the Cauchy–Rassias inequality and linear n-inner product preserving mappings, Math. Inequal. Appl. 9(3)(2006), 453–464.
C.-G. Park, M. Eshaghi Gordji and H. Khodaei, A fixed point approach to the Cauchy–Rassias stability of general Jensen type quadratic-quadratic mappings, Bull. Korean Math. Soc. 47(2010), no. 5, 987–996.
A. Najati, Cauchy–Rassias stability of homomorphisms associated to a Pexiderized Cauchy–Jensen type functional equation, J. Math. Inequal. 3(2)(2009), 257–265.
C.-G. Park and S. Y. Jang, Cauchy-Rassias stability of sesquilinear n-quadratic mappings in Banach modules, Rocky Mountain J. Math. 39(6)(2009), 2015–2027.
Pl. Kannappan, Functional Equations and Inequalities with Applications, Springer, New York, 2009.
P. K. Sahoo and Pl. Kannappan, Introduction to Functional Equations, CRC Press, Chapman & Hall Book, Florida, 2011.
Th. M. Rassias and J. Brzdek (eds.), Functional Equations in Mathematical Analysis, Springer, New York, 2012.
Mathematical analysis
Functional analysis
Functional equations | Cauchy–Rassias stability | [
"Mathematics"
] | 1,278 | [
"Functions and mappings",
"Mathematical analysis",
"Functional analysis",
"Functional equations",
"Mathematical objects",
"Equations",
"Mathematical relations"
] |
34,774,698 | https://en.wikipedia.org/wiki/Seiberg%E2%80%93Witten%20map | The Seiberg–Witten map is a map used in gauge theory and string theory introduced by Nathan Seiberg and Edward Witten which relates non-commutative degrees of freedom of a gauge theory to their commutative counterparts. It was argued by Seiberg and Witten that certain non-commutative gauge theories are equivalent to commutative ones and that there exists a map from a commutative gauge field to a non-commutative one, which is compatible with the gauge structure of each.
Gauge theories
String theory | Seiberg–Witten map | [
"Physics",
"Astronomy"
] | 114 | [
"String theory",
"Astronomical hypotheses",
"Quantum mechanics",
"Quantum physics stubs"
] |
34,775,719 | https://en.wikipedia.org/wiki/Velocity%20triangle | In turbomachinery, a velocity triangle or a velocity diagram is a triangle representing the various components of velocities of the working fluid in a turbomachine. Velocity triangles may be drawn for both the inlet and outlet sections of any turbomachine. The vector nature of velocity is utilized in the triangles, and the most basic form of a velocity triangle consists of the tangential velocity, the absolute velocity and the relative velocity of the fluid making up three sides of the triangle.
Components
A general velocity triangle consists of the following vectors:
V = absolute velocity of the fluid.
U = blade linear velocity.
Vr = relative velocity of the fluid after contact with the rotor.
Vw = tangential component of V (the absolute velocity), called the whirl velocity.
Vf = flow velocity (the axial component in the case of axial machines, the radial component in the case of radial machines).
The following angles are encountered during the analysis:
α = absolute angle, the angle made by V with the plane of the machine (usually the nozzle angle or the guide blade angle), i.e. the angle between the absolute velocity and the direction of blade rotation U.
β = relative angle, the angle made by the relative velocity Vr with the direction of blade rotation.
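The vector relation V = U + Vr lets all triangle quantities be computed from the absolute velocity, its angle, and the blade speed. The following Python sketch illustrates this; the helper function and the numerical values are illustrative, not from the article:

```python
import math

def velocity_triangle(V, alpha_deg, U):
    """Resolve an inlet velocity triangle (illustrative sketch).

    V         -- absolute velocity magnitude of the fluid
    alpha_deg -- absolute angle (degrees) between V and the blade motion
    U         -- blade linear velocity
    Returns (Vw, Vf, Vr, beta_deg).
    """
    alpha = math.radians(alpha_deg)
    Vw = V * math.cos(alpha)            # tangential (whirl) component
    Vf = V * math.sin(alpha)            # flow component (axial or radial)
    # Relative velocity is the vector difference between V and the blade motion
    Vr = math.hypot(Vw - U, Vf)
    beta = math.degrees(math.atan2(Vf, Vw - U))   # relative angle
    return Vw, Vf, Vr, beta

# Example: V = 300 m/s at alpha = 20 degrees, blade speed U = 150 m/s
print(velocity_triangle(300.0, 20.0, 150.0))
```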
References
Mechanical engineering
Fluid mechanics | Velocity triangle | [
"Physics",
"Engineering"
] | 243 | [
"Civil engineering",
"Applied and interdisciplinary physics",
"Fluid mechanics",
"Mechanical engineering"
] |
34,778,599 | https://en.wikipedia.org/wiki/RNA%20CoSSMos | The RNA Characterization of Secondary Structure Motifs database (RNA CoSSMos) is a repository of three-dimensional nucleic acid PDB structures containing secondary structure motifs ( loops, hairpin loops ...).
See also
Nucleic acid secondary structure
External links
https://www.rnacossmos.com/
Biological databases
RNA
Biophysics
Molecular geometry | RNA CoSSMos | [
"Physics",
"Chemistry",
"Biology"
] | 74 | [
"Applied and interdisciplinary physics",
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Bioinformatics",
"Biophysics",
"Stereochemistry stubs",
"Biological databases",
"Matter"
] |
34,779,206 | https://en.wikipedia.org/wiki/Photovoltaic%20power%20station | A photovoltaic power station, also known as a solar park, solar farm, or solar power plant, is a large-scale grid-connected photovoltaic power system (PV system) designed for the supply of merchant power. They are different from most building-mounted and other decentralized solar power because they supply power at the utility level, rather than to a local user or users. Utility-scale solar is sometimes used to describe this type of project.
This approach differs from concentrated solar power, the other major large-scale solar generation technology, which uses heat to drive a variety of conventional generator systems. Both approaches have their own advantages and disadvantages, but to date, for a variety of reasons, photovoltaic technology has seen much wider use, accounting for about 97% of utility-scale solar power capacity.
In some countries, the nameplate capacity of photovoltaic power stations is rated in megawatt-peak (MWp), which refers to the solar array's theoretical maximum DC power output. In other countries, the manufacturer states the surface area and the efficiency. However, Canada, Japan, Spain, and the United States often specify the converted, lower, nominal power output in MWAC, a measure more directly comparable to other forms of power generation. Most solar parks are developed at a scale of at least 1 MWp. As of 2018, the world's largest operating photovoltaic power stations surpassed 1 gigawatt. At the end of 2019, about 9,000 solar farms were larger than 4 MWAC (utility scale), with a combined capacity of over 220 GWAC.
Most of the existing large-scale photovoltaic power stations are owned and operated by independent power producers, but the involvement of community and utility-owned projects is increasing. Previously, almost all were supported at least in part by regulatory incentives such as feed-in tariffs or tax credits, but as levelized costs fell significantly in the 2010s and grid parity has been reached in most markets, external incentives are usually not needed.
History
The first 1 MWp solar park was built by Arco Solar at Lugo near Hesperia, California, at the end of 1982, followed in 1984 by a 5.2 MWp installation in Carrizo Plain. Both have since been decommissioned (although a new plant, Topaz Solar Farm, was commissioned in Carrizo Plain in 2015). The next stage followed the 2004 revisions to the feed-in tariffs in Germany, when a substantial volume of solar parks were constructed.
Several hundred installations over 1 MWp have since been installed in Germany, of which more than 50 are over 10 MWp. With its introduction of feed-in tariffs in 2008, Spain briefly became the largest market with some 60 solar parks over 10 MW, but these incentives have since been withdrawn. The USA, China, India, France, Canada, Australia, and Italy, among others, have also become major markets as shown on the list of photovoltaic power stations.
The largest sites under construction have capacities of hundreds of MWp and some more than 1 GWp.
Siting and land use
The land area required for a desired power output varies depending on the location, the efficiency of the solar panels, the slope of the site, and the type of mounting used. Fixed tilt solar arrays using typical panels of about 15% efficiency on horizontal sites need about one hectare per megawatt in the tropics, and this figure rises substantially in northern Europe.
Because of the longer shadow the array casts when tilted at a steeper angle, this area is typically about 10% higher for an adjustable tilt array or a single axis tracker, and 20% higher for a 2-axis tracker, though these figures will vary depending on the latitude and topography.
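A rough calculation combining the hectare-per-megawatt baseline above with the tracker overheads just quoted; the helper function and its default values are a sketch, not figures from any specific project:

```python
def land_area_ha(capacity_mw, ha_per_mw=1.0, mounting="fixed"):
    """Rough land requirement for a PV park (illustrative only).

    ha_per_mw -- baseline for a fixed-tilt array (about 1 ha/MW in the
                 tropics; higher at high latitudes)
    mounting  -- 'fixed', 'single_axis' (~+10%) or 'dual_axis' (~+20%)
    """
    overhead = {"fixed": 1.00, "single_axis": 1.10, "dual_axis": 1.20}
    return capacity_mw * ha_per_mw * overhead[mounting]

# A 50 MW park on dual-axis trackers in the tropics: about 60 hectares
print(land_area_ha(50, mounting="dual_axis"))
```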
The best locations for solar parks in terms of land use are held to be brown field sites, or where there is no other valuable land use. Even in cultivated areas, a significant proportion of the site of a solar farm can also be devoted to other productive uses, such as crop growing or biodiversity. The change in albedo affects local temperature. One study claims a temperature rise due to the heat island effect, and another study claims that surroundings in arid ecosystems become cooler.
Agrivoltaics
Agrivoltaics is the use of the same area of land for both solar photovoltaic power and agriculture. A recent study found that coupling solar electricity generation with shade-tolerant crop production increased economic value by over 30% on farms deploying agrivoltaic systems instead of conventional agriculture.
Solar landfill
A solar landfill is a closed landfill that has been repurposed as the site of a solar farm.
Co-location
In some cases, several different solar power stations with separate owners and contractors are developed on adjacent sites. This can offer the advantage of the projects sharing the cost and risks of project infrastructure such as grid connections and planning approval. Solar farms can also be co-located with wind farms.
Sometimes 'solar park' is used to describe a set of individual solar power stations which share sites or infrastructure, and 'cluster' is used where several plants are located nearby without any shared resources. Some examples of solar parks are the Charanka Solar Park, where there are 17 different generation projects; Neuhardenberg, with eleven plants; and the Golmud solar park, with a total reported capacity of over 500 MW. An extreme example would be calling all of the solar farms in the Gujarat state of India a single solar park, the Gujarat Solar Park.
To avoid land use altogether, in 2022, a 5 MW floating solar park was installed in the Alqueva Dam reservoir, Portugal, enabling solar power and hydroelectric energy to be combined. Separately, a German engineering firm committed to integrating an offshore floating solar farm with an offshore wind farm to use ocean space more efficiently. The projects involve "hybridization", in which different renewable energy technologies are combined in one site.
Solar farms in space
In January 2024, the first successful test of a solar farm in space, collecting solar power with a photovoltaic cell and beaming the energy down to Earth, constituted an early feasibility demonstration. Such setups are not limited by cloud cover or the Sun's cycle.
Technology
Most solar parks are ground mounted PV systems, also known as free-field solar power plants. They can either be fixed tilt or use a single axis or dual axis solar tracker. While tracking improves the overall performance, it also increases the system's installation and maintenance cost. A solar inverter converts the array's power output from DC to AC, and connection to the utility grid is made through a high voltage, three phase step up transformer of typically 10 kV and above.
Solar array arrangements
The solar arrays are the subsystems which convert incoming light into electrical energy. They comprise a multitude of solar panels, mounted on support structures and interconnected to deliver a power output to electronic power conditioning subsystems. The majority are free-field systems using ground-mounted structures, usually of one of the following types:
Fixed arrays
Many projects use mounting structures where the solar panels are mounted at a fixed inclination calculated to provide the optimum annual output profile. The panels are normally oriented towards the Equator, at a tilt angle slightly less than the latitude of the site. In some cases, depending on local climatic, topographical or electricity pricing regimes, different tilt angles can be used, or the arrays might be offset from the normal east–west axis to favour morning or evening output.
A variant on this design is the use of adjustable-tilt arrays, whose tilt angle can be adjusted two or four times annually to optimise seasonal output. These arrays require more land area to reduce internal shading at the steeper winter tilt angle. Because the increased output is typically only a few percent, it seldom justifies the increased cost and complexity of this design.
Dual axis trackers
To maximise the intensity of incoming direct radiation, solar panels should be orientated normal to the sun's rays. To achieve this, arrays can be designed using two-axis trackers, capable of tracking the sun in its daily movement across the sky, and as its elevation changes throughout the year.
These arrays need to be spaced out to reduce inter-shading as the sun moves and the array orientations change, so need more land area. They also require more complex mechanisms to maintain the array surface at the required angle. The increased output can be of the order of 30% in locations with high levels of direct radiation, but the increase is lower in temperate climates or those with more significant diffuse radiation, due to overcast conditions. So dual axis trackers are most commonly used in subtropical regions, and were first deployed at utility scale at the Lugo plant.
Single axis trackers
A third approach achieves some of the output benefits of tracking, with a lesser penalty in terms of land area, capital and operating cost. This involves tracking the sun in one dimension – in its daily journey across the sky – but not adjusting for the seasons. The angle of the axis is normally horizontal, though some, such as the solar park at Nellis Air Force Base, which has a 20° tilt, incline the axis towards the equator in a north–south orientation – effectively a hybrid between tracking and fixed tilt.
Single axis tracking systems are aligned along axes roughly north–south. Some use linkages between rows so that the same actuator can adjust the angle of several rows at once.
Power conversion
Solar panels produce direct current (DC) electricity, so solar parks need conversion equipment to convert this to alternating current (AC), which is the form transmitted by the electricity grid. This conversion is done by inverters. To maximise efficiency, solar power plants also vary the electrical load, either within the inverters or as separate units, to keep each solar array string close to its peak power point.

There are two primary alternatives for configuring this conversion equipment, centralized and string inverters, although in some cases individual micro-inverters are used. Micro-inverters allow the output of each panel to be optimized, while using multiple inverters increases reliability by limiting the loss of output when a single inverter fails.
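The peak-power-point tracking described above is commonly implemented with a perturb-and-observe loop. The following Python sketch is illustrative only; the toy power curve is hypothetical, and real inverter firmware is considerably more involved:

```python
def perturb_and_observe(power_at, v_start, dv=0.5, steps=200):
    """Perturb-and-observe maximum power point tracking (illustrative).

    power_at -- callable returning array power (W) at an operating voltage
    v_start  -- initial operating voltage (V)
    """
    v = v_start
    p_prev = power_at(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv          # perturb the operating voltage
        p = power_at(v)
        if p < p_prev:               # power fell, so reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy power curve with its maximum near 30 V (purely illustrative)
toy_curve = lambda v: 900.0 - (v - 30.0) ** 2
print(perturb_and_observe(toy_curve, v_start=20.0))  # settles near 30 V
```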
Centralized inverters
These units have relatively high capacity, typically of the order of 1 MW up to 7 MW for newer units (2020), so they condition the output of a substantial block of solar arrays. Solar parks using centralized inverters are often configured in discrete rectangular blocks, with the related inverter in one corner or at the centre of the block.
String inverters
String inverters are substantially lower in capacity than central inverters, of the order of 10 kW up to 250 kW for newer models (2020), and condition the output of a single array string. This is normally a whole, or part of, a row of solar arrays within the overall plant. String inverters can enhance the efficiency of solar parks where different parts of the array experience different levels of insolation, for example where arranged at different orientations, or closely packed to minimise site area.
Transformers
The system inverters typically provide power output at voltages of the order of 480 VAC up to 800 VAC. Electricity grids operate at much higher voltages of the order of tens or hundreds of thousands of volts, so transformers are incorporated to deliver the required output to the grid. Due to the long lead time, the Long Island Solar Farm chose to keep a spare transformer onsite, as transformer failure would have kept the solar farm offline for a long period. Transformers typically have a life of 25 to 75 years, and normally do not require replacement during the life of a photovoltaic power station.
System performance
The performance of a solar park depends on the climatic conditions, the equipment used and the system configuration. The primary energy input is the global light irradiance in the plane of the solar arrays, and this in turn is a combination of the direct and the diffuse radiation. In some regions soiling, the accumulation of dust or organic material on the solar panels that blocks incident light, is a significant loss factor.
A key determinant of the output of the system is the conversion efficiency of the solar panels, which depends in particular on the type of solar cell used.
There will be losses between the DC output of the solar panels and the AC power delivered to the grid, due to a wide range of factors such as light absorption losses, mismatch, cable voltage drop, conversion efficiencies, and other parasitic losses. A parameter called the 'performance ratio' has been developed to evaluate the total value of these losses. The performance ratio gives a measure of the output AC power delivered as a proportion of the total DC power which the solar panels should be able to deliver under the ambient climatic conditions. In modern solar parks the performance ratio should typically be in excess of 80%.
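As a worked illustration of the performance ratio (all numbers hypothetical), it can be computed as the AC energy actually delivered divided by the energy the array would nominally produce given the measured in-plane irradiation:

```python
def performance_ratio(e_ac_kwh, h_poa_kwh_m2, p_stc_kw, g_stc_kw_m2=1.0):
    """Performance ratio = actual AC yield / irradiation-scaled nominal yield.

    e_ac_kwh     -- AC energy delivered to the grid over the period (kWh)
    h_poa_kwh_m2 -- plane-of-array irradiation over the period (kWh/m^2)
    p_stc_kw     -- nameplate DC power at standard test conditions (kW)
    g_stc_kw_m2  -- STC irradiance, 1 kW/m^2 by definition
    """
    nominal_yield_kwh = h_poa_kwh_m2 * p_stc_kw / g_stc_kw_m2
    return e_ac_kwh / nominal_yield_kwh

# A 1 MWp park receiving 1,800 kWh/m^2 in a year and delivering 1,500 MWh
print(performance_ratio(1_500_000, 1_800, 1_000))  # ~0.83, i.e. above 80%
```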
System degradation
The output of early photovoltaic systems decreased by as much as 10% per year, but as of 2010 the median degradation rate was 0.5% per year, with panels made after 2000 having a significantly lower degradation rate, so that a system would lose only 12% of its output performance in 25 years. A system using panels which degrade 4% per year will lose 64% of its output during the same period. Many panel makers offer a performance guarantee, typically 90% in ten years and 80% over 25 years. The output of all panels is typically warranted at plus or minus 3% during the first year of operation.
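The retained-output figures quoted above follow directly from compounding the annual degradation rate; a one-line check using the values from the paragraph:

```python
def remaining(rate, years):
    """Fraction of original output left after compounding annual degradation."""
    return (1 - rate) ** years

print(remaining(0.005, 25))  # ~0.88 -> loses about 12% over 25 years
print(remaining(0.04, 25))   # ~0.36 -> loses about 64% over 25 years
```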
The business of developing solar parks
Solar power plants are developed to deliver merchant electricity into the grid as an alternative to other renewable, fossil or nuclear generating stations.
The plant owner is an electricity generator. Most solar power plants today are owned by independent power producers (IPPs), though some are held by investor- or community-owned utilities.
Some of these power producers develop their own portfolio of power plants, but most solar parks are initially designed and constructed by specialist project developers. Typically the developer will plan the project, obtain planning and connection consents, and arrange financing for the capital required. The actual construction work is normally contracted to one or more engineering, procurement, and construction (EPC) contractors.
Major milestones in the development of a new photovoltaic power plant are planning consent, grid connection approval, financial close, construction, connection and commissioning. At each stage in the process, the developer will be able to update estimates of the anticipated performance and costs of the plant and the financial returns it should be able to deliver.
Planning approval
Photovoltaic power stations occupy at least one hectare for each megawatt of rated output, so they require a substantial land area, which is subject to planning approval. The chances of obtaining consent, and the related time, cost and conditions, vary by jurisdiction and location. Many planning approvals will also apply conditions on the treatment of the site after the station has been decommissioned in the future. A professional health, safety and environment (HSE) assessment is usually undertaken during the design of a PV power station in order to ensure the facility is designed and planned in accordance with all HSE regulations.
Grid connection
The availability, locality and capacity of the connection to the grid is a major consideration in planning a new solar park, and can be a significant contributor to the cost.
Most stations are sited within a few kilometres of a suitable grid connection point. This network needs to be capable of absorbing the output of the solar park when operating at its maximum capacity. The project developer will normally have to absorb the cost of providing power lines to this point and making the connection; in addition often to any costs associated with upgrading the grid, so it can accommodate the output from the plant. Therefore, solar power stations are sometimes built at sites of former coal-fired power stations to reuse existing infrastructure.
Operation and maintenance
Once the solar park has been commissioned, the owner usually enters into a contract with a suitable counterparty to undertake operation and maintenance (O&M). In many cases this may be fulfilled by the original EPC contractor.
Solar plants' reliable solid-state systems require minimal maintenance, compared to rotating machinery. A major aspect of the O&M contract will be continuous monitoring of the performance of the plant and all of its primary subsystems, which is normally undertaken remotely. This enables performance to be compared with the anticipated output under the climatic conditions actually experienced. It also provides data to enable the scheduling of both rectification and preventive maintenance. A small number of large solar farms use a separate inverter or maximizer for each solar panel, which provide individual performance data that can be monitored. For other solar farms, thermal imaging is used to identify non-performing panels for replacement.
Power delivery
A solar park's income derives from the sales of electricity to the grid, and so its output is metered in real-time with readings of its energy output provided, typically on a half-hourly basis, for balancing and settlement within the electricity market.
Income is affected by the reliability of equipment within the plant and also by the availability of the grid network to which it is exporting. Some connection contracts allow the transmission system operator to curtail the output of a solar park, for example at times of low demand or high availability of other generators. Some countries make statutory provision for priority access to the grid for renewable generators, such as that under the European Renewable Energy Directive.
Economics and finance
In recent years, PV technology has improved its electricity generating efficiency, reduced the installation cost per watt as well as its energy payback time (EPBT). It has reached grid parity in most parts of the world and become a mainstream power source.
As solar power costs reached grid parity, PV systems were able to offer power competitively in the energy market. The subsidies and incentives, which were needed to stimulate the early market as detailed below, were progressively replaced by auctions and competitive tendering leading to further price reductions.
Competitive energy costs of utility-scale solar
The improving competitiveness of utility-scale solar became more visible as countries and energy utilities introduced auctions for new generating capacity. Some auctions are reserved for solar projects, while others are open to a wider range of sources.
The prices revealed by these auctions and tenders have become highly competitive in many regions.
Grid parity
Solar generating stations have become progressively cheaper in recent years, and this trend is expected to continue. Meanwhile, traditional electricity generation is becoming progressively more expensive. These trends led to a crossover point when the levelised cost of energy from solar parks, historically more expensive, matched or beat the cost of traditional electricity generation. This point depends on locations and other factors, and is commonly referred to as grid parity.
For merchant solar power stations, where the electricity is being sold into the electricity transmission network, the levelised cost of solar energy will need to match the wholesale electricity price. This point is sometimes called 'wholesale grid parity' or 'busbar parity'.
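Grid parity comparisons of this kind rest on the levelised cost of energy. A minimal sketch of the standard discounted-cash-flow definition follows; all input values here are hypothetical:

```python
def lcoe(capex, annual_opex, annual_energy_kwh, discount_rate, years):
    """Levelised cost of energy: discounted lifetime costs / discounted energy."""
    discount = [(1 + discount_rate) ** -t for t in range(1, years + 1)]
    costs = capex + sum(annual_opex * d for d in discount)
    energy = sum(annual_energy_kwh * d for d in discount)
    return costs / energy

# 1 MWp park: $800k capex, $10k/yr O&M, 1.5 GWh/yr, 5% discount, 25 years
print(lcoe(800_000, 10_000, 1_500_000, 0.05, 25))  # ~$0.045 per kWh
```

For wholesale grid parity, this figure is compared against the prevailing wholesale electricity price.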
Prices for installed PV systems show more regional variation than solar cells and panels, which tend to be global commodities. The IEA attributes these discrepancies to differences in "soft costs", which include customer acquisition, permitting, inspection and interconnection, installation labor and financing costs.
Incentive mechanisms
In the years before grid parity had been reached in many parts of the world, solar generating stations needed some form of financial incentive to compete for the supply of electricity. Many countries used such incentives to support the deployment of solar power stations.
Feed-in tariffs
Feed-in tariffs are designated prices which must be paid by utility companies for each kilowatt hour of renewable electricity produced by qualifying generators and fed into the grid. These tariffs normally represent a premium on wholesale electricity prices and offer a guaranteed revenue stream to help the power producer finance the project.
Renewable portfolio standards and supplier obligations
These standards are obligations on utility companies to source a proportion of their electricity from renewable generators. In most cases, they do not prescribe which technology should be used and the utility is free to select the most appropriate renewable sources.
There are some exceptions where solar technologies are allocated a proportion of the RPS in what is sometimes referred to as a 'solar set aside'.
Loan guarantees and other capital incentives
Some countries and states adopt less targeted financial incentives, available for a wide range of infrastructure investment, such as the US Department of Energy loan guarantee scheme, which stimulated a number of investments in solar power plants in 2010 and 2011.
Tax credits and other fiscal incentives
Another form of indirect incentive which has been used to stimulate investment in solar power plants is tax credits available to investors. In some cases the credits were linked to the energy produced by the installations, such as production tax credits. In other cases the credits were related to the capital investment, such as investment tax credits.
International, national and regional programmes
In addition to free market commercial incentives, some countries and regions have specific programs to support the deployment of solar energy installations.
The European Union's Renewables Directive sets targets for increasing levels of deployment of renewable energy in all member states. Each has been required to develop a National Renewable Energy Action Plan showing how these targets would be met, and many of these have specific support measures for solar energy deployment. The directive also allows states to develop projects outside their national boundaries, and this may lead to bilateral programs such as the Helios project.
The Clean Development Mechanism of the UNFCCC is an international programme under which solar generating stations in certain qualifying countries can be supported.
Additionally many other countries have specific solar energy development programmes. Some examples are India's JNNSM, the Flagship Program in Australia, and similar projects in South Africa and Israel.
Financial performance
The financial performance of the solar power plant is a function of its income and its costs.
The electrical output of a solar park will be related to the solar radiation, the capacity of the plant and its performance ratio. The income derived from this electrical output will come primarily from the sale of the electricity, and any incentive payments such as those under Feed-in Tariffs or other support mechanisms.
Electricity prices may vary at different times of day, giving a higher price at times of high demand. This may influence the design of the plant to increase its output at such times.
The dominant costs of solar power plants are the capital cost, and therefore any associated financing and depreciation. Though operating costs are typically relatively low, especially as no fuel is required, most operators will want to ensure that adequate operation and maintenance cover is available to maximise the availability of the plant and thereby optimise the income to cost ratio.
Geography
The first places to reach grid parity were those with high traditional electricity prices and high levels of solar radiation. The worldwide distribution of solar parks is expected to change as different regions achieve grid parity. This transition also includes a shift from rooftop towards utility-scale plants, since the focus of new PV deployment has changed from Europe towards the Sunbelt markets where ground-mounted PV systems are favored.
Because of the economic background, large-scale systems are presently distributed where the support regimes have been the most consistent, or the most advantageous. Total capacity of worldwide PV plants above 4 MWAC was assessed by Wiki-Solar as c. 220 GW in c. 9,000 installations at the end of 2019 and represents about 35 percent of estimated global PV capacity of 633 GW, up from 25 percent in 2014. Activities in the key markets are reviewed individually below.
China
In 2013 China overtook Germany as the nation with the most utility-scale solar capacity. Much of this has been supported by the Clean Development Mechanism.
The distribution of power plants around the country is quite broad, with the highest concentration in the Gobi desert and connected to the Northwest China Power Grid.
Germany
The first multi-megawatt plant in Europe was the 4.2 MW community-owned project at Hemau, commissioned in 2003. But it was the revisions to the German feed-in tariffs in 2004, which gave the strongest impetus to the establishment of utility-scale solar power plants. The first to be completed under this programme was the Leipziger Land solar park developed by Geosol. Several dozen plants were built between 2004 and 2011, several of which were at the time the largest in the world. The EEG, the law which establishes Germany's feed-in tariffs, provides the legislative basis not just for the compensation levels, but other regulatory factors, such as priority access to the grid. The law was amended in 2010 to restrict the use of agricultural land, since which time most solar parks have been built on so-called 'development land', such as former military sites. Partly for this reason, the geographic distribution of photovoltaic power plants in Germany is biased towards the former East Germany.
India
India has been rising through the ranks of leading nations for installed utility-scale solar capacity. The Charanka Solar Park in Gujarat was opened officially in April 2012 and was at the time the largest group of solar power plants in the world.
Geographically, the states with the largest installed capacity are Telangana, Rajasthan and Andhra Pradesh, each with over 2 GW of installed solar power capacity. Rajasthan and Gujarat share the Thar Desert, along with Pakistan. In May 2018, the Pavagada Solar Park became functional with a production capacity of 2 GW; as of February 2020, it is the largest solar park in the world. In September 2018, Acme Solar announced that it had commissioned India's cheapest solar power plant, the 200 MW Rajasthan Bhadla solar power park.
Italy
Italy has a large number of photovoltaic power plants, the largest of which is the 84 MW Montalto di Castro project.
Jordan
By the end of 2017, it was reported that more than 732 MW of solar energy projects had been completed, which contributed to 7% of Jordan's electricity. After having initially set the percentage of renewable energy Jordan aimed to generate by 2020 at 10%, the government announced in 2018 that it sought to beat that figure and aim for 20%.
Spain
The majority of the deployment of solar power stations in Spain to date occurred during the boom market of 2007–8.
The stations are well distributed around the country, with some concentration in Extremadura, Castile-La Mancha and Murcia.
United States
The US deployment of photovoltaic power stations is largely concentrated in southwestern states. The Renewable Portfolio Standards in California and surrounding states provide a particular incentive.
Notable solar parks
A number of solar parks were, at the time they became operational, the largest in the world or on their continent, or are otherwise notable.
See also
Growth of photovoltaics
List of solar thermal power stations
List of photovoltaic power stations
List of photovoltaics companies
List of solar cell manufacturers
Photovoltaics
Photovoltaic system
Solar energy
Solar power by country
Theory of solar cell
External links
Interactive mapping of worldwide projects over 10 MW
Solar energy
Photovoltaics
Infrastructure
Power stations | Photovoltaic power station | [
"Engineering"
] | 5,514 | [
"Construction",
"Infrastructure"
] |
34,781,467 | https://en.wikipedia.org/wiki/Porteous%20formula | In mathematics, the Porteous formula, or Thom–Porteous formula, or Giambelli–Thom–Porteous formula, is an expression for the fundamental class of a degeneracy locus (or determinantal variety) of a morphism of vector bundles in terms of Chern classes. Giambelli's formula is roughly the special case when the vector bundles are sums of line bundles over projective space. pointed out that the fundamental class must be a polynomial in the Chern classes and found this polynomial in a few special cases, and found the polynomial in general. proved a more general version, and generalized it further.
Statement
Given a morphism of vector bundles E, F of ranks m and n over a smooth variety, its k-th degeneracy locus (k ≤ min(m, n)) is the variety of points where it has rank at most k. If all components of the degeneracy locus have the expected codimension (m − k)(n − k), then Porteous's formula states that its fundamental class is the determinant of the matrix of size m − k whose (i, j) entry is the Chern class c_{n−k+j−i}(F − E).
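In symbols, and under the expected-codimension hypothesis, the statement above can be written as follows (a standard formulation; D_k denotes the k-th degeneracy locus):

```latex
[D_k] \;=\; \det\Big( c_{\,n-k+j-i}(F - E) \Big)_{1 \le i,\, j \le m-k},
\qquad \operatorname{codim} D_k = (m-k)(n-k).
```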
Theorems in algebraic geometry | Porteous formula | [
"Mathematics"
] | 267 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
34,781,952 | https://en.wikipedia.org/wiki/Sarason%20interpolation%20theorem | In mathematics complex analysis, the Sarason interpolation theorem, introduced by , is a generalization of the Caratheodory interpolation theorem and Nevanlinna–Pick interpolation.
Theorems in analysis
Interpolation | Sarason interpolation theorem | [
"Mathematics"
] | 51 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical analysis stubs",
"Mathematical problems",
"Mathematical theorems"
] |
34,782,682 | https://en.wikipedia.org/wiki/Decoupling%20%28cosmology%29 | In cosmology, decoupling is a period in the development of the universe when different types of particles fall out of thermal equilibrium with each other. This occurs as a result of the expansion of the universe, as their interaction rates decrease (and mean free paths increase) up to this critical point. The two verified instances of decoupling since the Big Bang which are most often discussed are photon decoupling and neutrino decoupling, as these led to the cosmic microwave background and cosmic neutrino background, respectively.
Photon decoupling is closely related to recombination, which occurred about 378,000 years after the Big Bang (at a redshift of z ≈ 1100), when the universe was a hot opaque ("foggy") plasma. During recombination, free electrons became bound to protons (hydrogen nuclei) to form neutral hydrogen atoms. Because direct recombinations to the ground state (lowest energy) of hydrogen are very inefficient, these hydrogen atoms generally form with the electrons in a high energy state, and the electrons quickly transition to their low energy state by emitting photons. Because the neutral hydrogen that formed was transparent to light, those photons which were not captured by other hydrogen atoms were able, for the first time in the history of the universe, to travel long distances. They can still be detected today, although they now appear as radio waves, and form the cosmic microwave background ("CMB"). They reveal crucial clues about how the universe formed.
Photon decoupling
Photon decoupling occurred during the epoch known as recombination. During this time, electrons combined with protons to form hydrogen atoms, resulting in a sudden drop in free electron density. Decoupling occurred abruptly when the rate of Compton scattering of photons, Γ, was approximately equal to the rate of expansion of the universe, H, or alternatively when the mean free path of the photons was approximately equal to the horizon size of the universe. After this, photons were able to stream freely, producing the cosmic microwave background as we know it, and the universe became transparent.
The interaction rate of the photons is given by

Γ = n_e σ_T c,

where n_e is the number density of free electrons, σ_T is the Thomson scattering cross-section of the electron, and c is the speed of light.

In the matter-dominated era (when recombination takes place), the expansion rate scales as

H = H₀ a^(−3/2),

where a is the cosmic scale factor and H₀ is the Hubble constant. Γ also decreases as a more complicated function of a, at a faster rate than H. By working out the precise dependence of Γ and H on the scale factor and equating Γ = H, it is possible to show that photon decoupling occurred approximately 380,000 years after the Big Bang, at a redshift of z ≈ 1100, when the universe was at a temperature around 3000 K.
Neutrino decoupling
Another example is neutrino decoupling, which occurred within one second of the Big Bang. Analogous to the decoupling of photons, neutrinos decoupled when the rate of weak interactions between neutrinos and other forms of matter dropped below the rate of expansion of the universe, which produced a cosmic neutrino background of freely streaming neutrinos. An important consequence of neutrino decoupling is that the temperature of this neutrino background is lower than the temperature of the cosmic microwave background, since the neutrinos, unlike the photons, were not heated by the electron–positron annihilation that followed shortly afterwards.
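Quantitatively, entropy conservation during electron–positron annihilation fixes the ratio of the two background temperatures; this is the standard textbook result:

```latex
\frac{T_\nu}{T_\gamma} \;=\; \left(\frac{4}{11}\right)^{1/3} \approx 0.714,
\qquad
T_\nu \;\approx\; 0.714 \times 2.725\,\mathrm{K} \;\approx\; 1.95\,\mathrm{K}.
```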
WIMPs: non-relativistic decoupling
Decoupling may also have occurred for the dark matter candidates known as WIMPs. These are known as "cold relics", meaning they decoupled after they became non-relativistic (by comparison, photons and neutrinos decoupled while still relativistic and are known as "hot relics"). By calculating the hypothetical time and temperature of decoupling for non-relativistic WIMPs of a particular mass, it is possible to find their density. Comparing this to the measured density parameter of cold dark matter today, 0.222 ± 0.0026, it is possible to rule out WIMPs of certain masses as reasonable dark matter candidates.
See also
Recombination
Chronology of the universe
Wouthuysen–Field coupling
Physical cosmology | Decoupling (cosmology) | [
"Physics",
"Astronomy"
] | 876 | [
"Astrophysics",
"Theoretical physics",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
34,783,762 | https://en.wikipedia.org/wiki/Decoupling%20%28meteorology%29 | In weather forecasting, decoupling is a process in which two adjacent layers of Earth's atmosphere stop interacting.
Process
During the day when the sun shines and warms the land, air at the surface of the earth is heated and rises. This rising air mixes the atmosphere near the earth. At night, this process stops and air near the surface cools as the land loses heat by radiating in the infrared. If winds are light, air near the surface of the earth can become much colder, compared to the air above it, than if more mixing of air layers occurred.
Climate impacts
In mountain valleys at high altitudes, such as those in the Cascade Mountains, decoupling may alter the localized impacts of climate change, as it causes a drastic change in surface and atmospheric temperatures. In coastal regions of the Greenland ice sheet, decoupling may simultaneously help conserve ice sheet mass while limiting new accumulation of ice.
Weather forecasting
Meteorological phenomena | Decoupling (meteorology) | [
"Physics"
] | 193 | [
"Meteorological phenomena",
"Physical phenomena",
"Earth phenomena"
] |
34,784,308 | https://en.wikipedia.org/wiki/BigDFT | BigDFT is a free software package for physicists and chemists, distributed under the GNU General Public License, whose main program allows the total energy, charge density, and electronic structure of systems made of electrons and nuclei (molecules and periodic/crystalline solids) to be calculated within density functional theory (DFT), using pseudopotentials, and a wavelet basis.
Overview
BigDFT implements density functional theory (DFT) by solving the Kohn–Sham equations describing the electrons in a material, expanded in a Daubechies wavelet basis set, and using self-consistent direct minimization or Davidson diagonalisation methods to determine the energy minimum. Computational efficiency is achieved through the use of fast short convolutions and pseudopotentials to describe core electrons. In addition to the total energy, forces and stresses are also calculated, so that geometry optimizations and ab initio molecular dynamics may be carried out.
The Daubechies wavelets form an orthogonal, systematic basis set, like a plane-wave basis set, but with the great advantage of allowing an adaptive mesh with different levels of resolution (see multi-resolution analysis). Interpolating scaling functions are also used to solve Poisson's equation with different boundary conditions, such as isolated or surface systems.
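To illustrate the multi-resolution idea (this is a generic demonstration with the PyWavelets library, not BigDFT's actual internals):

```python
import numpy as np
import pywt  # PyWavelets

# A smooth signal with one sharp local feature, loosely analogous to a
# wavefunction that needs extra resolution near a nucleus
x = np.linspace(-5.0, 5.0, 1024)
signal = np.exp(-x**2) + 0.5 * np.exp(-50.0 * (x - 1.0) ** 2)

# Multi-level decomposition in an orthogonal Daubechies basis ('db8'):
# a coarse approximation plus detail coefficients at finer and finer scales.
coeffs = pywt.wavedec(signal, "db8", level=4)

# Large detail coefficients flag where finer resolution is actually needed.
for i, c in enumerate(coeffs):
    print(f"band {i}: {len(c)} coefficients, max |c| = {np.abs(c).max():.3e}")
```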
BigDFT was among the first massively parallel density functional theory codes which benefited from graphics processing units (GPU) using CUDA and then OpenCL languages.
Because the Daubechies wavelets have compact support, the Hamiltonian can be applied locally, which permits scaling that is linear in the number of atoms, instead of the cubic scaling of traditional DFT software.
See also
List of quantum chemistry and solid state physics software
Computational chemistry software
Computational physics
Density functional theory software
Free physics software | BigDFT | [
"Physics",
"Chemistry"
] | 363 | [
"Computational chemistry software",
"Chemistry software",
"Computational physics",
"Computational chemistry",
"Density functional theory software"
] |
27,908,073 | https://en.wikipedia.org/wiki/Triton%20Systems | Triton Systems LLC is a manufacturer of automated teller machines (ATMs). Triton ATMs are built in Long Beach, Mississippi. Triton has been in business since 1979, and has nearly 200,000 installations in over 24 countries.
History
Founded in 1979 by Ernest L. Burdette, Frank J. Wilem, Jr., and Robert E. Sandoz, Triton Systems developed ATMjr, the world's first battery-powered and completely portable device for training bank customers to use what was, at the time, a fairly new banking service, the ATM. Triton followed this product with the development of a Card Activation System that allowed financial institutions to instantly issue ATM cards with customized (often customer chosen) personal identification numbers (PINs).
In the early 1990s, Triton pioneered in-store cash withdrawals with the introduction of the Scrip terminal, a machine that allows a store's customers to use an ATM card to generate a voucher, redeemable for cash at the register.
In 2000 Triton was acquired by Dover Corporation (NYSE-traded DOV), a diversified manufacturer of a wide range of proprietary products and components for industrial and commercial use. In 2004, Fujitsu and Triton entered into a strategic licensing agreement to provide a broader range of solutions for financial institutions and retailers through the deployment of Fujitsu's Windows-based Prism software on Triton ATMs. Later that same year, Triton launched the RT2000, a smaller, low-cost through-the-wall ATM that was easy to install and easy to maintain. 120,000 ATMs were shipped to 17 countries around the world.
In 2005 Hurricane Katrina provided a major challenge for Triton. Its headquarters and manufacturing plant on the Mississippi Gulf coast was shut down, and the entire coastal area evacuated. Triton's Long Beach, Mississippi administrative, manufacturing and production facilities were back on-line within two weeks.
Triton opened a Memphis manufacturing and service facility in July 2006. In 2008 Triton launched the RL2000, a stand-alone ATM. Also that year, Triton's subsidiary, Calypso, began operations in Australia. On April 14, 2008, Calypso successfully conducted the largest migration of ATMs to be completed in a single day — 2,808 ATMs.
In March 2009, Triton introduced the RL1600, a new off-premises ATM. The RL1600 was named the Convenience Store and Petroleum (CSP) magazine Product of the Year for 2009. Also in March 2009, Triton made the decision to sell its Calypso processing operation in order to focus on ATM manufacturing, software development and support.
In September 2009, the company launched ATMGurus. ATMGurus provides customers with multi-brand parts, repair and training support for their ATM estates.
In July 2008, Nautilus Hyosung offered to acquire Triton from its parent company Dover Corporation for US$63 million. However, in May 2009, citing anti-trust scrutiny from regulators, the acquisition was cancelled. Subsequently, in March 2010, Dover completed the sale of Triton to a group of private investors. The company is currently privately held.
ATMGurus
ATMGurus is a division of Triton Systems of Delaware Inc. and provides parts, repair, and training for a variety of retail ATM brands.
Memberships
Triton has active membership in the following industry associations:
ATM Industry Association (ATMIA)
NAAIO (National Association of ATM ISOs and Operators)
FSPA (Financial and Security Products Association)
ICBA (Independent Community Banking Association)
References
Triton RL1600, Convenience Store and Petroleum product of year for 2009
External links
Triton Systems Website
ATMs Location Information
Independent Community Banking Association
National Association of ATM ISOs and Operators
Manufacturing companies established in 1979
1979 establishments in Mississippi
Automated teller machines
Manufacturing companies based in Mississippi
Harrison County, Mississippi | Triton Systems | [
"Engineering"
] | 804 | [
"Automation",
"Automated teller machines"
] |
27,908,089 | https://en.wikipedia.org/wiki/Infinite-dimensional%20vector%20function | An infinite-dimensional vector function is a function whose values lie in an infinite-dimensional topological vector space, such as a Hilbert space or a Banach space.
Such functions are applied in most sciences including physics.
Example
Set f_k(t) = t/k² for every positive integer k and every real number t. Then the function f defined by the formula

f(t) = (f_1(t), f_2(t), f_3(t), …) = (t, t/4, t/9, …)

takes values that lie in the infinite-dimensional vector space X (or ℝ^ℕ) of real-valued sequences. For example,

f(2) = (2, 1/2, 2/9, 1/8, 2/25, …).
As a number of different topologies can be defined on the space X, to talk about the derivative of f it is first necessary to specify a topology on X or the concept of a limit in X.

Moreover, for any set A, there exist infinite-dimensional vector spaces having the (Hamel) dimension of the cardinality of A (for example, the space of functions A → F with finitely many nonzero elements, where F is the desired field of scalars). Furthermore, the argument t could lie in any set instead of the set of real numbers.
Integral and derivative
Most theorems on integration and differentiation of scalar functions can be generalized to vector-valued functions, often using essentially the same proofs. Perhaps the most important exception is that absolutely continuous functions need not equal the integrals of their (a.e.) derivatives unless, for example, X is a Hilbert space; see the Radon–Nikodym theorem.
A curve is a continuous map of the unit interval (or more generally, of a non−degenerate closed interval of real numbers) into a topological space. An arc is a curve that is also a topological embedding. A curve valued in a Hausdorff space is an arc if and only if it is injective.
Derivatives
If f : [0, 1] → X, where X is a Banach space or another topological vector space, then the derivative of f can be defined in the usual way:

f′(t) = lim_{h→0} (f(t + h) − f(t))/h.
Functions with values in a Hilbert space
If f is a function of real numbers with values in a Hilbert space X, then the derivative of f at a point t can be defined as in the finite-dimensional case:

f′(t) = lim_{h→0} (f(t + h) − f(t))/h.

Most results of the finite-dimensional case also hold in the infinite-dimensional case, with some modifications. Differentiation can also be defined for functions of several variables (for example, t ∈ ℝⁿ, or even t ∈ Y, where Y is an infinite-dimensional vector space).

If X is a Hilbert space, then any derivative (and any other limit) can be computed componentwise: if

f = (f_1, f_2, f_3, …)

(that is, f = f_1 e_1 + f_2 e_2 + f_3 e_3 + ⋯, where e_1, e_2, e_3, … is an orthonormal basis of the space X), and f′(t) exists, then

f′(t) = (f_1′(t), f_2′(t), f_3′(t), …).

However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space.
Most of the above also holds for other topological vector spaces. However, not as many classical results hold in the Banach space setting; for example, an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, in most Banach space settings there are no orthonormal bases.
Crinkled arcs
If [a, b] is an interval contained in the domain of a curve f that is valued in a topological vector space, then the vector f(b) − f(a) is called the chord of f determined by [a, b].
If [c, d] is another interval in its domain, then the two chords are said to be non-overlapping chords if [a, b] and [c, d] have at most one end-point in common.
Intuitively, two non−overlapping chords of a curve valued in an inner product space are orthogonal vectors if the curve makes a right angle turn somewhere along its path between its starting point and its ending point.
If every pair of non−overlapping chords are orthogonal then such a right turn happens at every point of the curve; such a curve can not be differentiable at any point.
A crinkled arc is an injective continuous curve with the property that any two non−overlapping chords are orthogonal vectors.
An example of a crinkled arc in the Hilbert space L²(0, 1) is:

f(t) = χ_[0, t],

where χ_[0, t] is the indicator function defined by

χ_A(x) = 1 if x ∈ A, and χ_A(x) = 0 otherwise.
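For this curve, the chord determined by [a, b] is χ_(a, b], so the inner product of two chords equals the length of the overlap of their intervals, which vanishes for non-overlapping chords. A short numerical check of this computation, using a discretized inner product (a sketch, not part of the original article):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10_000)
f = lambda t: (x <= t).astype(float)   # indicator of [0, t], sampled on a grid

def inner(u, v):
    """Discretized inner product on L^2(0, 1)."""
    return np.trapz(u * v, x)

chord1 = f(0.4) - f(0.1)   # chord of f over [0.1, 0.4]
chord2 = f(0.9) - f(0.4)   # chord of f over [0.4, 0.9]; non-overlapping
print(inner(chord1, chord2))   # ~0: the two chords are orthogonal
```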
A crinkled arc can be found in every infinite-dimensional Hilbert space, because any such space contains a closed vector subspace that is isomorphic to L²(0, 1).
A crinkled arc is said to be normalized if f(0) = 0, ‖f(1)‖ = 1, and the span of its image is a dense subset of the space.
If h : [0, 1] → [0, 1] is an increasing homeomorphism, then f ∘ h is called a reparameterization of the curve f.
Two curves f and g in an inner product space X are unitarily equivalent if there exists a unitary operator U : X → X (that is, an isometric linear bijection) such that g = U ∘ f (or equivalently, f = U⁻¹ ∘ g).
Measurability
The measurability of $f$ can be defined in a number of ways, the most important of which are Bochner measurability and weak measurability.
Integrals
The most important integrals of $f$ are called the Bochner integral (when $X$ is a Banach space) and the Pettis integral (when $X$ is a topological vector space). Both of these integrals commute with linear functionals. Also, $L^p$ spaces have been defined for such functions.
See also
References
Einar Hille & Ralph Phillips: "Functional Analysis and Semi Groups", Amer. Math. Soc. Colloq. Publ. Vol. 31, Providence, R.I., 1957.
Banach spaces
Differential calculus
Hilbert spaces
Topological vector spaces
Vectors (mathematics and physics) | Infinite-dimensional vector function | [
"Physics",
"Mathematics"
] | 1,057 | [
"Vector spaces",
"Calculus",
"Quantum mechanics",
"Space (mathematics)",
"Topological vector spaces",
"Differential calculus",
"Hilbert spaces"
] |
27,914,146 | https://en.wikipedia.org/wiki/Drugs.com | Drugs.com is an online pharmaceutical encyclopedia that provides drug information for consumers and healthcare professionals, primarily in the United States. It self-describes its information as "accurate and independent" yet limited to being "for educational purposes only and is not intended for medical advice, diagnosis or treatment."
Website
The Drugs.com website is owned and operated by the Drugsite Trust, a privately held Trust administered by two New Zealand pharmacists, Karen Ann and Phillip James Thornton. Operated on the IBM Cloud, Drugs.com provides information on some 24,000 drugs, was visited by 50 million users per month in 2021, and has a download time of one second.
The site contains a library of reference information which includes content from Cerner Multum, Micromedex, Truven Health Analytics, U.S. Food and Drug Administration (FDA), AHFS, Harvard Health Publications, Mayo Clinic, and Animalytics (a veterinary products database).
Drugs.com is certified by the TRUSTe online privacy certification program and the HONcode of Health on the Net Foundation.
The Drugs.com encyclopedia contains drug information for consumers, a portal for drugs based on diseases, a health professionals database of drug monographs, a natural products database, and a poison control center. Drugs.com is not affiliated with any pharmaceutical companies.
History
The domain Drugs.com was originally registered by Bonnie Neubeck in 1994. In 1999 at the height of the dotcom boom, Eric MacIver purchased an option to buy the domain from Neubeck. In August 1999, MacIver sold the domain at auction for US$823,666 to Venture Frogs, a startup incubator run by Tony Hsieh and Alfred Lin, best known for their involvement in LinkExchange and later Zappos.com. Venture Frogs sold the Drugs.com domain name to a private investor in June 2001, allowing Hsieh and Lin to focus on Zappos.com.
The Drugs.com website was officially launched in September 2001. In March 2008, Drugs.com announced the release of Mednotes — an online personal medication record application which connected to Google Health (On June 24, 2011, Google announced it was retiring Google Health on January 1, 2012).
In May 2010, U.S. FDA announced a collaboration with Drugs.com to distribute consumer health updates on the Drugs.com website and mobile platform.
In February 2016, comScore stated that Drugs.com was the sixth most popular health network receiving approximately 23 million visitors for the month, while Searchmetrics listed Drugs.com in the top 100 US websites for search visibility.
In April 2017, The Harris Poll listed Drugs.com as the Health Information Website Brand of the Year.
References
American medical websites
American companies established in 2001
Health care companies established in 2001
Internet properties established in 2001
Drugs | Drugs.com | [
"Chemistry"
] | 577 | [
"Pharmacology",
"Chemicals in medicine",
"Drugs",
"Products of chemical industry"
] |
27,914,444 | https://en.wikipedia.org/wiki/Weighted%20sum%20model | In decision theory, the weighted sum model (WSM), also called weighted linear combination (WLC) or simple additive weighting (SAW), is the best known and simplest multi-criteria decision analysis (MCDA) / multi-criteria decision making method for evaluating a number of alternatives in terms of a number of decision criteria.
Description
In general, suppose that a given MCDA problem is defined on m alternatives and n decision criteria. Furthermore, let us assume that all the criteria are benefit criteria, that is, the higher the values are, the better it is. Next suppose that $w_j$ denotes the relative weight of importance of the criterion $C_j$ and $a_{ij}$ is the performance value of alternative $A_i$ when it is evaluated in terms of criterion $C_j$. Then, the total (i.e., when all the criteria are considered simultaneously) importance of alternative $A_i$, denoted as $A_i^{\text{WSM-score}}$, is defined as follows:

$$A_i^{\text{WSM-score}} = \sum_{j=1}^{n} w_j\, a_{ij}, \qquad \text{for } i = 1, 2, \ldots, m.$$
For the maximization case, the best alternative is the one that yields the maximum total performance value.
It is important to note that the method is applicable only when all the data are expressed in exactly the same unit. If this is not the case, then the final result is equivalent to "adding apples and oranges."
Example
For a simple numerical example suppose that a decision problem of this type is defined on three alternative choices A1, A2, A3, each described in terms of four criteria C1, C2, C3 and C4. Furthermore, let the numerical data for this problem be as in the following decision matrix (first row: criterion weights):

         C1     C2     C3     C4
 wj     0.20   0.15   0.40   0.25
 A1      25     20     15     30
 A2      10     30     20     30
 A3      30     10     30     10
For instance, the relative weight of the first criterion is equal to 0.20, the relative weight for the second criterion is 0.15 and so on. Similarly, the value of the first alternative (i.e., A1) in terms of the first criterion is equal to 25, the value of the same alternative in terms of the second criterion is equal to 20 and so on.
When the previous formula is applied on these numerical data the WSM scores for the three alternatives are:

A1(WSM-score) = 25 × 0.20 + 20 × 0.15 + 15 × 0.40 + 30 × 0.25 = 21.50.

Similarly, one gets:

A2(WSM-score) = 10 × 0.20 + 30 × 0.15 + 20 × 0.40 + 30 × 0.25 = 22.00,
A3(WSM-score) = 30 × 0.20 + 10 × 0.15 + 30 × 0.40 + 10 × 0.25 = 22.00.
Thus, the best choice (in the maximization case) is either alternative A2 or A3 (because they both have the maximum WSM score which is equal to 22.00). These numerical results imply the following ranking of these three alternatives: A2 = A3 > A1 (where the symbol ">" stands for "greater than").
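For illustration, here is a minimal Python sketch of the computation above; the variable names are ours, and the weights and performance values are those of the example:

```python
# Weighted sum model for the example above (benefit criteria, same units).
weights = [0.20, 0.15, 0.40, 0.25]        # w_j for criteria C1..C4
performance = {
    "A1": [25, 20, 15, 30],               # a_ij for each alternative
    "A2": [10, 30, 20, 30],
    "A3": [30, 10, 30, 10],
}

def wsm_score(values, weights):
    # Total importance: sum over all criteria of weight * performance value.
    return sum(w * a for w, a in zip(weights, values))

scores = {name: wsm_score(vals, weights) for name, vals in performance.items()}
print(scores)  # -> {'A1': 21.5, 'A2': 22.0, 'A3': 22.0}; A2 and A3 tie for best
```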
Choosing the weights
The choice of values for the weights is rarely easy. The simple default of equal weighting is sometimes used when all criteria are measured in the same units. Scoring methods may be used for rankings (universities, countries, consumer products etc.), and the weights will determine the order in which these entities are placed. There is often much argument about the appropriateness of the chosen weights, and whether they are biased or display favouritism.
One approach for overcoming this issue is to automatically generate the weights from the data. This has the advantage of avoiding personal input and so is more objective. The so-called Automatic Democratic Method for weight generation has two key steps:
(1) For each alternative, identify the weights which will maximize its score, subject to the condition that these weights do not lead to any of the alternatives exceeding a score of 100%.
(2) Fit an equation to these optimal scores using regression so that the regression equation predicts these scores as closely as possible using the criteria data as explanatory variables. The regression coefficients then provide the final weights.
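A rough Python sketch of these two steps is shown below, assuming "100%" is normalized to 1.0; the choice of scipy's linprog for step (1) and an ordinary least-squares fit for step (2) is this sketch's assumption, not a prescribed implementation:

```python
import numpy as np
from scipy.optimize import linprog

data = np.array([[25, 20, 15, 30],        # criteria data for A1..A3
                 [10, 30, 20, 30],
                 [30, 10, 30, 10]], dtype=float)

# Step 1: for each alternative, choose non-negative weights that maximize
# its own score, subject to no alternative scoring above 1.0 (i.e., 100%).
optimal_scores = []
for row in data:
    result = linprog(c=-row, A_ub=data, b_ub=np.ones(len(data)), bounds=(0, None))
    optimal_scores.append(-result.fun)    # linprog minimizes, so negate

# Step 2: regress the optimal scores on the criteria data; the fitted
# regression coefficients provide the final common weights.
final_weights, *_ = np.linalg.lstsq(data, np.array(optimal_scores), rcond=None)
print(optimal_scores, final_weights)
```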
See also
Decision-making software
Weighted product model
References
Control theory
Decision theory
Mathematical and quantitative methods (economics)
Multiple-criteria decision analysis
Risk
Statistical inference | Weighted sum model | [
"Mathematics"
] | 740 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
27,915,795 | https://en.wikipedia.org/wiki/Process%20analytical%20chemistry | Process analytical chemistry (PAC) is the application of analytical chemistry with specialized techniques, algorithms, and sampling equipment for solving problems related to chemical processes. It is a specialized form of analytical chemistry used for process manufacturing similar to process analytical technology (PAT) used in the pharmaceutical industry.
The chemical processes are for production and quality control of manufactured products, and process analytical technology is used to determine the physical and chemical composition of the desired products during a manufacturing process. It was first mentioned in the chemical literature in 1946 (1, 2).
Process sampling
Process analysis initially involved sampling the variety of process streams or webs and transporting samples to quality control or central analytical service laboratories. Time delays for analytical results due to sample transport and analytical preparation steps negated the value of many chemical analyses for purposes other than product release. Over time it was understood that real-time measurements provided timely information about a process, which was far more useful for high efficiency and quality. The development of real-time process analysis has provided information for process optimization during any manufacturing process. The journal Analytical Chemistry publishes a biennial review of the most recent developments in the field.
The first real-time measurements in a production environment were made with modified laboratory instrumentation; in recent times specialized process and handheld instrumentation has been developed for immediate analysis.
Applications
Process analytical chemistry involves the following sub-disciplines of analytical chemistry: microanalytical systems, nanotechnology, chemical detection, electrochemistry or electrophoresis, chromatography, spectroscopy, mass spectrometry, process chemometrics, process control, flow injection analysis, ultrasound, and handheld sensors.
References
Further reading
McMahon, T.; Wright, E. L. in Analytical Instrumentation: A Practical Guide for Measurement and Control; Sherman, R.E., Rhodes, L. J., Eds.; Instrument Society of America: Research Triangle Park, NC, 1996.
Gregory, C. H. (Team Leader); Appleton, H. B.; Lowes, A. P.; Whalen, F. C. Instrumentation & Control in the German Chemical Industry. British Intelligence Operations Subcommittee Report 1007, 12 June 1946 (per discussion with Terry McMahon).
Analytical chemistry
Microfluidics
Electrochemistry
Electrophoresis
Chromatography
Cheminformatics
Ultrasound | Process analytical chemistry | [
"Chemistry",
"Materials_science",
"Biology"
] | 469 | [
"Chromatography",
"Microfluidics",
"Microtechnology",
"Separation processes",
"Instrumental analysis",
"Biochemical separation processes",
"Computational chemistry",
"Molecular biology techniques",
"Electrochemistry",
"Cheminformatics",
"nan",
"Electrophoresis"
] |
35,917,093 | https://en.wikipedia.org/wiki/List%20of%20sequenced%20animal%20genomes | This list of sequenced animal genomes contains animal species for which complete genome sequences have been assembled, annotated and published. Substantially complete draft genomes are included, but not partial genome sequences or organelle-only sequences.
Porifera
Amphimedon queenslandica, a sponge (2009)
Stylissa carteri (2016)
Ephydatia muelleri (2020)
Xestospongia testudinaria (2016)
Ctenophora
Mnemiopsis leidyi (Ctenophora), (order Lobata) (2012/2013)
Hormiphora californensis (Ctenophora) (2021)
Pleurobrachia bachei (Ctenophora) (2014)
Bolinopsis microptera (Ctenophora) (2022)
Placozoa
Trichoplax adhaerens, a Placozoan (2008)
Hoilungia hongkongensis, nov. gen H13 Placozoan (2018)
Cnidaria
Hydra vulgaris, (previously Hydra magnipapillata), a model hydrozoan (2010)
Nematostella vectensis, a model sea anemone (starlet sea anemone) (2007)
Aiptasia pallida, a sea anemone (2015)
Renilla muelleri, an octocoral (2017, 2019)
Stylophora pistillata, a coral (2017)
Aurelia aurita, moon jellyfish (2019)
Clytia hemisphaerica, Hydrozoan jellyfish (2019)
Myxobolus honghuensis (2022)
Nemopilema nomurai, Nomura jellyfish (2019)
Rhopilema esculentum, Flame jellyfish (2020)
Cassiopea xamachana (Scyphozoa) (2019)
Alatina alata (Cubozoa) (2019)
Calvadosia cruxmelitensis (Staurozoa) (2019)
Dendronephthya gigantea, an octocoral (2019)
Acropora acuminata (2020)
Acropora awi (2020)
Acropora cytherea, Table coral (2020)
Acropora digitifera, a coral (2011)
Acropora echinata (2020)
Acropora florida, branching staghorn coral(2020)
Acropora gemmifera (2021)
Acropora hyacinthus, Brush coral (2020)
Acropora intermedia, Noble Staghorn Coral (2020)
Acropora microphthalma (2020)
Acropora muricata, Staghorn coral (2020)
Acropora nasta, branching staghorn coral (2020)
Acropora selago, Green Selago Acropora (2020)
Acropora tenuis, Purple Tipped Acropora (2020)
Acropora yongei, Yonge's staghorn coral (2020)
Astreopora myriophthalma, Porous star coral (2020)
Lophelia pertusa, Deepwater White Coral (2023)
Montipora cactus (2020)
Montipora capitata, Rice coral (2022)
Montipora efflorescens, Velvet coral (2020)
Orbicella faveolata, mountainous star coral (2016)
Pocillopora acuta, Hosoeda Hanayasai coral (2022)
Pocillopora damicornis, cauliflower coral (2018)
Pocillopora meandrina, Cauliflower coral (2022)
Porites astreoides, Mustard hill coral (2022)
Porites compressa, Finger coral (2022)
Deuterostomia
Hemichordates
Saccoglossus kowalevskii, Enteropneusta (2015)
Ptychodera flava, Enteropneusta (2015)
Echinoderms
Acanthaster planci, starfish (2014)
Apostichopus japonicus, sea cucumber (2017)
Plazaster borealis, Octopus starfish (2022)
Strongylocentrotus purpuratus, a sea urchin and model deuterostome (2006)
Tunicates
Ciona intestinalis, a tunicate (2002)
Ciona savignyi, a tunicate (2007)
Oikopleura dioica, a larvacean (2001).
Cephalochordates
Branchiostoma floridae, a lancelet (2008)
Cyclostomes
Petromyzon marinus, a lamprey (2009, 2013)
Cartilaginous fish
Callorhinchus milii, an elephant shark (2007)
Carcharodon carcharias, Great white shark (2018)
Chiloscyllium plagiosum, Whitespotted bamboo shark (2020)
Chiloscyllium punctatum, Brownbanded bamboo shark (2018)
Rhincodon typus, Whale shark (2017)
Scyliorhinus torazame, Cloudy catshark (2018)
Bony fish
Order Anabantiformes
Betta splendens, Siamese fighting fish (2018)
Helostoma temminkii, Kissing gourami (2020)
Order Anguilliformes
Anguilla anguilla, European Eel (2012)
Anguilla japonica, Japanese Eel (2022)
Order Beloniformes
Oryzias latipes, medaka (2007)
Order Callionymiformes
Callionymus lyra, common dragonet (2020)
Order Carangiformes
Caranx ignobilis, Giant trevally (2022)
Caranx melampygus, Bluefin trevally (2021)
Pseudocaranx georgianus, New Zealand trevally (2021)
Order Centrarchiformes
Oplegnathus fasciatus, barred knifejaw (2019)
Order Characiformes
Astyanax jordani, Mexican cavefish (2014)
Astyanax mexicanus, Mexican tetra (2021)
Colossoma macropomum, Tambaqui (2021)
Hasemania nana, Silvertip tetra (2013)
Petitella bleheri, Firehead tetra (2015)
Psalidodon paranae, (2016)
Order Cichliformes
Oreochromis niloticus, Nile tilapia (2019)
Metriaclima zebra, Lake Malawi cichlid (2019)
Order Clupeiformes
Clupea harengus, Atlantic herring (2020)
Coilia nasus, Japanese grenadier anchovy (2020)
Sardina pilchardus, European pilchard (2019)
Order Coelacanthiformes
Latimeria chalumnae, West Indian Ocean coelacanth and oldest known living lineage of Sarcopterygii (2013)
Order Cypriniformes
Anabarilius grahami, Kanglang fish (2018)
Danio rerio, zebrafish (2007)
Leuciscus baicalensis, Siberian dace (2014)
Megalobrama amblycephala, Wuchang bream (2017)
Metzia formosae, (2015)
Opsarius caudiocellatus, (2022)
Oxygymnocypris stewartii, (2019)
Pseudobrama simoni (2020)
Rhodeus ocellatus, Rosy bitterling (2020)
Triplophysa bleekeri, Tibetan stone loach (2020)
Order Cyprinodontiformes
Fundulus catenatus, Northern studfish (2020)
Fundulus olivaceus, Blackspotted topminnow (2020)
Fundulus nottii, Bayou topminnow (2020)
Fundulus xenicus, Diamond killifish (2020)
Gambusia affinis, western mosquitofish (2020)
Heterandria formosa, least killifish (2019)
Micropoecilia picta, swamp guppy (2021)
Xiphophorus maculatus, platyfish (2013)
Nothobranchius furzeri, turquoise killifish (2015)
Order Dipnoi
Protopterus annectens, West-African lungfish (2021)
Order Esociformes
Esox lucius, northern pike (2014)
Order Gadiformes
Gadus macrocephalus, Pacific cod (2022)
Gadus morhua, Atlantic cod (2011)
Order Gasterosteiformes
Gasterosteus aculeatus, three-spined stickleback (2006, 2012)
Order Gobiiformes
Oxyeleotris marmorata, marble goby (2020)
Periophthalmus modestus, shuttles hoppfish or shuttles mudskipper (2022)
Order Gymnotiformes
Electrophorus electricus, electric eel (2014)
Order Lampriformes
Lampris incognitus, Smalleye Pacific Opah (2021)
Order Lepisosteiformes
Lepisosteus oculatus, spotted gar
Order Osmeriformes
Neosalanx tangkahkeii, Chinese icefish (2015)
Protosalanx hyalocranius, clearhead icefish (2017)
Order Osteoglossiformes
Heterotis niloticus, African arowana (2020)
Paramormyrops kingsleyae, mormyrid electric fish (2017)
Scleropages formosus, Asian arowana (2016)
Order Perciformes
Centropyge bicolor, bicolor angelfish (2021)
Chaetodon trifasciatus, melon butterflyfish (2020)
Channa argus, northern snakehead (2017)
Channa maculata, blotched snakehead (2021)
Chelmon rostratus, copperband butterflyfish (2020)
Dissostichus mawsoni, Antarctic toothfish (2019)
Eleginops maclovinus, Patagonian robalo (2019)
Epinephelus moara, kelp grouper (2021)
Larimichthys crocea, large yellow croaker (2014)
Lutjanus campechanus, Northern red snapper (2020)
Naso vlamingii, bignose unicornfish (2020)
Parachaenichthys charcoti, Antarctic dragonfish (2017)
Seriola dumerili, Greater amberjack (2017)
Sillago sinica, Chinese sillago (2018)
Siniperca knerii, Big-Eye Mandarin Fish (2020)
Sparus aurata, gilt-head bream (2018)
Order Salmoniformes
Salmo salar, Atlantic salmon (2016)
Oncorhynchus mykiss, rainbow trout (2014)
Oncorhynchus tshawytscha, Chinook salmon (2018)
Salvelinus namaycush, Lake Trout (2021)
Order Scorpaeniformes
Sebastes schlegelii, Black rockfish (2018)
Order Siluriformes
Clarias batrachus, walking catfish (2018)
Ictalurus punctatus, channel catfish (2016)
Pangasianodon hypophthalmus, Iridescent shark catfish (2021)
Silurus glanis, Wels catfish (2020)
Order Spariformes
Datnioides pulcher, Siamese tigerfish (2020)
Datnioides undecimradiatus, Mekong tiger perch (2020)
Order Syngnathiformes
Syngnathus scovelli, Gulf pipefish (2016, 2023)
Order Tetraodontiformes
Diodon holocanthus, Long-spine porcupinefish (2020)
Mola mola, ocean sunfish (2016)
Takifugu rubripes, a puffer fish (International Fugu Genome Consortium 2002)
Tetraodon nigroviridis, a puffer fish (2004)
Amphibians
Frogs (Anura)
Taudactylus pleione, Kroombit tinker frog (2023)
Leptobrachium leishanense, Leishan Moustache toad (2019)
Limnodynastes dumerilii dumerilii, Eastern banjo frog (2020)
Nanorana parkeri, High Himalaya frog (2015)
Oophaga pumilio, Strawberry poison-dart frog (2018)
Platyplectrum ornatum, Ornate burrowing frog (2021)
Pyxicephalus adspersus, African bullfrog (2018)
Rana [Lithobates] catesbeiana, North American bullfrog (2017)
Rana kukunoris, Plateau brown frog (2023)
Rhinella marina, Cane toad (2018)
Vibrissaphora ailaonica, Moustache toad (2019)
Xenopus tropicalis, western clawed frog (2010)
Salamanders (Urodela)
Family Ambystomatidae (Tiger Salamanders)
Ambystoma mexicanum, axolotl (2018)
Reptiles
Birds (Aves)
Ratites (Palaeognathae)
Order Apterygiformes (Kiwis)
Apteryx mantelli, North Island brown kiwi (2015)
Order Struthioniformes (Ostriches)
Struthio camelus, Common ostrich (2014)
Order Tinamiformes (Tinamous)
Tinamus guttatus, White-throated tinamou (2014)
† Order Dinornithiformes (Moas)
† Anomalopteryx didiformis, Little bush moa (2024 draft)
Fowl (Galloanserae)
Order Anseriformes (Waterfowl)
Anas platyrhynchos, mallard duck (or wild duck) (2013)
Aythya fuligula, tufted duck (2021)
Order Galliformes (Land Fowl)
Arborophila rufipectus, Sichuan Partridge (2019)
Gallus gallus, chicken (2004)
Meleagris gallopavo, domesticated turkey (2011)
Numida meleagris, helmeted guinea fowl (2019)
Pavo cristatus, Indian peafowl (2018)
Pavo muticus, Green peafowl (2022)
Phasianus colchicus, Common Pheasant (2019)
Syrmaticus mikado, mikado pheasant (2018)
Tetrao tetrix, black grouse (2014)
Neoaves
Order Accipitriformes
Haliaeetus albicilla, white-tailed eagle (2014)
Haliaeetus leucocephalus, bald eagle (2014)
Aegypius monachus, cinereous vulture (2015)
Aquila chrysaetos, golden eagle (2018)
Order Apodiformes
Chaetura pelagica, chimney swift (2014)
Order Bucerotiformes
Buceros rhinoceros silvestris, rhinoceros hornbill (2014)
Order Caprimulgiformes
Antrostomus carolinensis, chuck-will's-widow (2014)
Order Cariamiformes
Cariama cristata, red-legged seriema (2014)
Order Cathartiformes
Cathartes aura, turkey vulture (2014)
Order Charadriiformes
Charadrius vociferus, killdeer (2014)
Himantopus novaezelandiae, kakī/black stilt (2019)
Himantopus himantopus, pied stilt (2019)
Recurvirostra avosetta, pied avocet (2019)
Order Ciconiiformes
Nipponia nippon, crested ibis (2014)
Order Coliiformes
Colius striatus, speckled mousebird (2014)
Order Columbiformes
Columba livia, pigeon (2014)
Order Coraciiformes
Merops nubicus, carmine bee-eater (2014)
Order Cuculiformes
Cuculus canorus, common cuckoo (2014)
Tauraco erythrolophus, red-crested turaco (2014)
Order Eurypygiformes
Eurypyga helias, sunbittern (2014)
Order Falconiformes
Falco cherrug, saker falcon (2013)
Falco peregrinus, peregrine falcon (2013)
Order Gaviiformes
Gavia stellata, red-throated loon (2014)
Order Gruiformes
Balearica regulorum gibbericeps, grey crowned-crane (2014)
Chlamydotis macqueenii, MacQueen's bustard (2014)
Order Leptosomiformes
Leptosomus discolor, cuckoo-roller (2014)
Order Mesitornithiformes
Mesitornis unicolor, brown mesite (2014)
Order Opisthocomiformes
Opisthocomus hoazin, hoatzin (2014)
Order Passeriformes
Acanthisitta chloris, rifleman (2014)
Corvus brachyrhynchos, American crow (2014)
Corvus hawaiiensis, Hawaiian crow (2018)
Eopsaltria australis, Eastern yellow robin (2019)
Ficedula albicollis, collared flycatcher (2012)
Ficedula hypoleuca, pied flycatcher (2012)
Geospiza fortis, medium ground-finch (2014)
Hirundo rustica, barn swallow (2018)
Lonchura striata domestica, Society finch (2018)
Manacus vitellinus, golden-collared manakin (2014)
Lycocorax pyrrhopterus, Paradise-crow (2019)
Malurus cyaneus, superb fairywren (2021)
Manacus vitellinus, golden-collared manakin (2014)
Notiomystis cincta, stichbird or hihi (2019)
Paradisaea rubra, red bird-of-paradise (2019)
Pteridophora alberti, king of Saxony bird-of-paradise (2019)
Ptiloris paradiseus, paradise riflebird (2019)
Taeniopygia guttata, zebra finch (2010)
Order Pelecaniformes
Egretta garzetta, little egret (2014)
Pelecanus crispus, Dalmatian pelican (2014)
Order Phaethontiformes
Phaethon lepturus, white-tailed tropicbird (2014)
Order Phoenicopteriformes
Phoenicopterus ruber ruber, American flamingo (2014)
Order Piciformes
Picoides pubescens, downy woodpecker (2014)
Order Podicipediformes
Podiceps cristatus, great crested grebe (2014)
Order Procellariiformes
Fulmarus glacialis, northern fulmar (2014)
Order Pterocliformes
Pterocles gutturalis, yellow-throated sandgrouse (2014)
Order Psittaciformes
Amazona leucocephala, Cuban amazon (2019)
Amazona ventralis, Hispaniolan amazon (2019)
Amazona vittata, Puerto Rican parrot (2012)
Ara macao, Scarlet macaw (2013)
Cyanoramphus malherbi, kākāriki karaka (2020)
Melopsittacus undulatus, budgerigar (2014)
Nestor notabilis, kea (2014)
Strigops habroptila, kākāpō (2023)
Order Sphenisciformes
Aptenodytes forsteri, emperor penguin (2014)
Aptenodytes patagonicus, king penguin (2019)
Eudyptes chrysocome, Western rockhopper penguin (2019)
Eudyptes chrysolophus chrysolophus, Macaroni penguin (2019)
Eudyptes chrysolophus schlegeli, Royal penguin (2019)
Eudyptes filholi, Eastern rockhopper penguin (2019)
Eudyptes moseleyi, Northern rockhopper penguin (2019)
Eudyptes pachyrhynchus, Fiordland-crested penguin (2019)
Eudyptes robustus, Snares-crested penguin (2019)
Eudyptes sclateri, Erect-crested penguin (2019)
Eudyptula minor albosignata, white-flippered penguin (2019)
Eudyptula minor minor, little blue penguin (2019)
Eudyptula novaehollandiae, fairy penguin (2019)
Megadyptes antipodes antipodes, yellow eyed penguin or hoiho (2019)
Pygoscelis adeliae, Adélie penguin (2014)
Pygoscelis antarctica, chinstrap penguin (2019)
Pygoscelis papua, gentoo penguin (2019)
Spheniscus demersus, African penguin (2019)
Spheniscus humboldti, Humboldt penguin (2019)
Spheniscus magellanicus, Magellanic penguin (2019)
Spheniscus mendiculus, Galápagos penguin (2019)
Order Strigiformes
Tyto alba, barn owl (2014)
Strix occidentalis caurina, northern spotted owl (2017)
Strix varia, Barred owl (2017)
Order Suliformes
Nannopterum auritum, double-crested cormorant (2017)
Nannopterum brasilianum, Neotropic cormorant (2017)
Phalacrocorax carbo, great cormorant (2014)
Nannopterum harrisi, Galapagos cormorant (2017)
Urile pelagicus, pelagic cormorant (2017)
Order Trochiliformes
Calypte anna, Anna's hummingbird (2014)
Order Trogoniformes
Apaloderma vittatum, bar-tailed trogon (2014)
Crocodilia
Family Alligatoridae (Alligators and Caimans)
Alligator mississippiensis, American alligator (2012)
Alligator sinensis, Chinese alligator (2013, 2014)
Family Crocodilidae (Crocodiles)
Crocodylus porosus, salt water crocodile (2012)
Family Gavialidae (Gharials)
Gavialis gangeticus, Indian gharial (2012)
Turtles (Testudines)
Crytodira (Hidden-Neck Turtles)
Trionychia (Softshell Turtles)
Family Trionychidae
Pelodiscus sinensis, Chinese softshell turtle (2013)
Testudinoidea
Family Emydidae (Terrapins)
Actinemys marmorata, Northwestern pond turtle (2022)
Chrysemys picta bellii, Western painted turtle (2013)
Family Platysternidae
Platysternon megacephalum Big-headed turtle (2019)
Family Geoemydidae
Mauremys reevesii, Chinese three-keeled pond turtle (2021)
Family Testudinidae (Tortoises)
Aldabrachelys gigantea, Aldabra giant tortoise (2019 draft, 2022 chromosome scale)
† Chelonoidis abingdonii, Pinta Island giant tortoise (2019)
Chelonoidis phantasticus, Fernandina Island Galapagos giant tortoise (2022)
Chelonioidea (Sea Turtles)
Family Cheloniidae
Chelonia mydas, Green sea turtle (2013)
Rhynchocephalia
Sphenodon punctatus, tuatara (2020)
Squamata
Ahaetulla prasina, Asian vine snake (2023)
Anilios bituberculatus, Prong-snouted blind snake (2021)
Anolis carolinensis, Carolina anole (2011)
Arizona elegans occidentalis, California glossy snake (2022)
Azemiops feae, Fea's viper (2022)
Boa constrictor (2019)
Bothrops jararaca, Jararaca lancehead, (2021)
Bungarus multicinctus, Many-banded krait (2022)
Charina bottae, rubber boa, (2022)
Chrysopelea ornata, Ornate Flying Snake (2023)
Crotalus adamanteus, Eastern diamondback rattlesnake (2021)
Crotalus mitchellii pyrrhus, southwestern speckled rattlesnake (2014)
Crotalus tigris, Tiger rattlesnake (2021)
Crotalus viridis, Great Plains rattlesnake (2018)
Daboia siamensis, Eastern Russell's viper (2022)
Deinagkistrodon acutus, five-pacer viper (2016)
Dopasia gracilis, Burmese glass lizard, (2015)
Dolichophis caspius, Caspian whipsnake (2020)
Emydocephalus ijimae, Ijima's turtle-headed sea snake, (2019)
Eublepharis macularius, Leopard gecko (2016)
Protobothrops flavoviridis, Habu (2018)
Protobothrops mucrosquamatus, Taiwan Habu (2017)
Heloderma charlesbogerti, Guatemalan Beaded Lizard (2022)
Hemicordylus capensis, Cape Cliff Lizard (2023)
Hydrophis curtus, Shaw's Sea Snake (2020)
Hydrophis cyanocinctus, blue-banded sea snakes (2021)
Hydrophis melanocephalus, slender-necked sea snake, (2019)
Indotyphlops braminus, Brahminy blindsnake, (2022)
Laticauda colubrina, yellow-lipped sea krait, (2019)
Laticauda laticaudata, blue-lipped sea krait, (2019)
Morelia viridis, Green Tree Python (2022)
Myanophis thanlyinesis (2021)
Naja naja, Indian cobra (2020)
Notechis scutatus, mainland tiger snake (2022)
Ophiophagus hannah, king cobra (2013)
Pantherophis guttatus, corn snake (2014)
Pantherophis obsoletus, Leucistic Texas Rat Snake (2021)
Pogona vitticeps, Central bearded dragon (2015)
Phrynosoma platyrhinos, Desert horned lizard (2021)
Phrynosoma cornutum, Texas horned lizard (2021)
Pseudonaja textilis, Eastern brown snake (2022)
Python bivittatus, Burmese python (2013)
Python regius, Ball python (2020)
Salvator merianae, Argentine black and white tegu (2018)
Sceloporus undulatus, Eastern fence lizard (2021)
Shinisaurus crocodilurus, Chinese crocodile lizard, (2017)
Simalia boeleni, Boelen's Python (2022)
Thamnophis sirtalis, Common garter snake (2018)
Thermophis baileyi, Tibetan hot-spring snake (2018)
Zootoca vivipara, Viviparous lizard (2020)
Mammals
Monotremes
Family Ornithorhynchidae
Ornithorhynchus anatinus, platypus (2021)
Family Tachyglossidae (echidnas)
Tachyglossus aculeatus, short-beaked echidna (2021)
Marsupials
Order Didelphimorphia
Family Didelphidae (opossums)
Monodelphis domestica, gray short-tailed opossum (2007)
Order Dasyuromorphia
Family Dasyuridae
Antechinus stuartii, brown antechinus (2020)
Sarcophilus harrisii, Tasmanian devil ()
Sminthopsis crassicaudata, fat-tailed dunnart (ongoing)
Dasyurus hallucatus, northern quoll (ongoing)
Family Myrmecobiidae
Myrmecobius fasciatus, numbat (ongoing)
† Family Thylacinidae
† Thylacinus cynocephalus, thylacine ()
Order Peramelemorphia
Family Peramelidae
Perameles gunnii, eastern barred bandicoot (ongoing)
Family Thylacomyidae
Macrotis lagotis, greater bilby (ongoing)
Order Notoryctemorphia, Family Notoryctidae
Notoryctes typhlops, southern marsupial mole (ongoing)
Order Diprotodontia
Family Macropodidae
Macropus eugenii, tammar wallaby (2011)
Petrogale penicillata, brush-tailed rock-wallaby (ongoing)
Family Potoroidae
Bettongia gaimardi, eastern bettong (ongoing)
Bettongia penicillata ogilbyi, woylie (2021)
Family Petauridae
Gymnobelideus leadbeateri, Leadbeater's possum (ongoing)
Family Burramyidae
Burramys parvus, mountain pygmy possum (ongoing)
Family Vombatidae
Vombatus ursinus, common wombat (ongoing)
Family Phascolarctidae
Phascolarctos cinereus, koala (2013 draft)
Placentals
Afrotheria
Order Proboscidea
Family Elephantidae (Elephants)
Elephas maximus, Asian elephant (2015)
Loxodonta africana, African bush elephant (2009)
Loxodonta cyclotis, African forest elephant (2018)
Order Sirenia (sea cows)
Family Trichechidae
Trichechus manatus, West Indian manatee (2015)
Euarchontoglires
Order Lagomorpha
Family Leporidae
Oryctolagus cuniculus, European rabbit (2010)
Order Primates
Family Callitrichidae
Callithrix jacchus, Common marmoset (2010, whole genome 2014)
Family Cercopithecidae
Macaca mulatta, rhesus macaque (2007 & Chinese rhesus macaque Macaca mulatta lasiota in 2011)
Macaca fascicularis, Cynomolgus or crab-eating macaque (2011)
Papio anubis, olive baboon (2020)
Papio cynocephalus, yellow baboon (2016)
Rhinopithecus roxellana, golden snub-nosed monkey (2019)
Family Galagidae
Otolemur garnettii, small-eared galago, or bushbaby ()
Family Hominidae
Subfamily Ponginae
Pongo pygmaeus/Pongo abelii, orangutan (Borneo/Sumatra) (2011)
Subfamily Homininae
Gorilla gorilla, western gorilla (2012)
Homo sapiens, modern human (draft 2001, whole genome 2022)
† Homo neanderthalensis, Neanderthal (draft 2010)
Pan troglodytes, chimpanzee (2005)
Pan paniscus, bonobo (2012)
Order Rodentia
Family Caviidae
Hydrochoerus hydrochaeris, capybara (2018)
Family Cricetidae
Microtus montanus, Montane vole (2021)
Microtus richardsoni, North American Water Vole (2021)
Peromyscus leucopus, white-footed mouse (2019)
Family Heteromyidae
Perognathus longimembris pacificus, Pacific Pocket Mouse
Family Muridae
Mastomys coucha, Southern multimammate mouse (2019)
Mus musculus Strain: C57BL/6J, House mouse (2002)
Rattus norvegicus, Brown rat (2004)
Laurasiatheria
Order Artiodactyla (even-toed ungulates)
Family Antilocapridae
Antilocapra americana, pronghorn (2019)
Family Balaenidae
Balaena mysticetus, bowhead whale (2015)
Eubalaena glacialis, North Atlantic right whale (2018)
Family Balaenopteridae
Balaenoptera acutorostrata, common minke whale (2014)
Balaenoptera borealis, sei whale (2018)
Balaenoptera musculus, blue whale (2018)
Balaenoptera physalus, fin whale (2014)
Megaptera novaeangliae, humpback whale (2018)
Family Bovidae
Ammotragus lervia, Barbary sheep (2019)
Antidorcas marsupialis, Springbok (2019)
Bison bonasus, European bison (2017)
Bos grunniens, yak (2012)
Bos primigenius indicus, zebu or Brahman cattle (2012)
Bos primigenius taurus, cow (2009)
Bubalus bubalis, river buffalo (2017)
Capra ibex, Goats (2019)
Cephalophus harveyi, Harvey's duiker (2019)
Connochaetes taurinus, blue wildebeest (2019)
Damaliscus lunatus, common tsessebe (2019)
Gazella thomsoni, Thomson's gazelle (2019)
Hippotragus niger, Sable Antelope (2019)
Kobus ellipsiprymnus, Waterbuck (2019)
Litocranius walleri, Gerenuk (2019)
Oreotragus oreotragus, Klipspringer (2019)
Oryx gazella, Gemsbok (2019)
Ourebia ourebi, Oribi (2019)
Ovis ammon, Argali (2019)
Ovis ammon polii, Marco Polo sheep (2017)
Nanger granti, Grant's gazelle (2019)
Neotragus moschatus, Suni (2019)
Neotragus pygmaeus, Royal antelope (2019)
Philantomba maxwellii, Maxwell's duiker (2019)
Procapra przewalskii, Przewalski's gazelle (2019)
Pseudois nayaur, Bharal (2019)
Raphicerus campestris, Steenbok (2019)
Redunca redunca, Bohor reedbuck (2019)
Syncerus caffer, African buffalo (2019)
Sylvicapra grimmia, common duiker (2019)
Tragelaphus, Spiral-horned bovine (2019)
Tragelaphus buxtoni, Mountain nyala (2019)
Tragelaphus strepsiceros, Greater kudu (2019)
Tragelaphus imberbis, Lesser kudu (2019)
Tragelaphus spekii, Sitatunga (2019)
Tragelaphus scriptus, Bushbuck (2019)
Taurotragus oryx, Common eland (2019)
Family Camelidae
Camelus ferus, Wild Bactrian camel (2007)
Family Cervidae
Cervus albirostris, Thorold's deer (2019)
Elaphurus davidianus, Père David's deer (2018)
Muntiacus crinifrons, hairy-fronted muntjac (2019)
Muntiacus muntjak, Indian muntjac (2019)
Muntiacus reevesi, Reeves's muntjac (2019)
Odocoileus hemionus, mule deer (2021)
Rangifer tarandus, Reindeer (2017)
Family Delphinidae
Tursiops truncatus, bottlenosed dolphin (2012)
Neophocaena phocaenoides, finless porpoise (2014)
Orcinus orca, killer whale (2015)
Sousa chinensis, Indo-Pacific humpback dolphin (2019)
Family Eschrichtiidae
Eschrichtius robustus, gray whale (2018)
Family Giraffidae
Giraffa camelopardalis, Giraffe (2019)
Giraffa camelopardalis tippelskirchi, Masai giraffe (2019)
Okapia johnstoni, Okapi (2019)
Family Monodontidae
Delphinapterus, beluga whale (2017)
Family Moschidae
Moschus berezovskii, forest musk deer (2018)
Moschus chrysogaster, Alpine musk deer (2019)
Family Phocoenidae
Neophocaena asiaeorientalis sunameri, East Asian finless porpoise
Family Physeteridae
Physeter macrocephalus, sperm whale (2019)
Family Suidae
Sus scrofa, pig (2012)
Family Tragulidae
Tragulus javanicus, Java mouse-deer (2019)
Order Carnivora
Family Felidae
Acinonyx jubatus, cheetah (2015)
Felis catus, cat (2007)
Panthera leo, lion (2013)
Panthera pardus, Amur leopard (2016)
Panthera tigris tigris, Siberian tiger (2013)
Panthera tigris tigris, Bengal tiger (2013)
Panthera uncia, snow leopard (2013)
Prionailurus bengalensis, leopard cat (2016)
Family Canidae
Canis familiaris, dog (2005)
Canis lupus lupus, wolf (2017).
Lycaon pictus, African wild dog (2018)
Family Ursidae
Ailuropoda melanoleuca, giant panda (2010)
Ursus arctos ssp. horribilis, Grizzly bear (2018)
Ursus americanus, American black bear (2019)
Ursus maritimus, Polar bear (2014)
Family Odobenidae
Odobenus rosmarus, walrus (2015)
Family Mustelidae
Enhydra lutris kenyoni, sea otter (2017)
Mustela erminea, stoat (2018)
Mustela furo, ferret (2014)
Pteronura brasiliensis, giant otter (2019)
Order Chiroptera
Family Megadermatidae
Megaderma lyra, greater false vampire bat (2013)
Family Mormoopidae
Pteronotus parnellii, Parnell's mustached bat (2013)
Family Pteropodidae
Pteropus vampyrus, fruit bat (2012)
Eidolon helvum, Old World fruit bat (2013)
Family Rhinolophidae
Rhinolophus ferrumequinum, greater horseshoe bat (2013)
Family Vespertilionidae
Myotis lucifugus, little brown bat (2010)
Family Phyllostomidae
Leptonycteris yerbabuenae, long nosed bat (2020)
Leptonycteris nivalis, greater long nosed bat (2020)
Musonycteris harrisoni, banana bat (2020)
Artibeus jamaicensis, Jamaican fruit bat (2020)
Macrotus waterhousii, Waterhouse's leaf-nosed bat (2020)
Order Erinaceomorpha, Family Erinaceidae
Erinaceus europaeus, western European hedgehog ()
Order Eulipotyphla, Family Solenodontidae
Solenodon paradoxus, Hispaniolan solenodon (2018)
Order Perissodactyla (odd-toed ungulates)
Family Equidae
Equus caballus, horse (2009, 2018)
Protostomia
Insects
Order Blattodea
Blattella germanica, German cockroach (2018)
Periplaneta americana, American cockroach (2018)
Zootermopsis nevadensis, a dampwood termite (2014)
Cryptotermes secundus, a drywood termite (2018)
Macrotermes natalensis, a higher termite (2014)
Order Coleoptera
Dendroctonus ponderosae Hopkins, beetle (mountain pine beetle) (2013)
Aquatica lateralis, Japanese aquatic firefly "Heike-botaru" (firefly) (2018)
Photinus pyralis, Big Dipper firefly (2018)
Protaetia brevitarsis, White-spotted flower chafer (2019)
Tribolium castaneum Strain:GA-2, beetle (red flour beetle) (2008)
Allomyrina dichotoma, Japanese rhinoceros beetle (2022)
Order Collembola
Family Isotomidae
Desoria tigrina, (2021)
Family Sminthurididae
Sminthurides aquaticus, (2021)
Order Diptera
Family Calliphoridae
Aldrichina grahami, Forensic blowfly (2020)
Family Chironomidae
Dasypogon diadema, Hunting Robber fly (2019)
Parochlus steinenii, Antarctic winged midge (2017)
Proctacanthus coquilletti, Assassin fly (2017)
Family Culicidae (mosquitoes)
Aedes aegypti Strain:LVPib12, mosquito (vector of dengue fever, etc.) (2007)
Aedes albopictus (2015)
Anopheles darlingi
Anopheles gambiae Strain: PEST, mosquito (vector of malaria) (2002)
Anopheles gambiae Strain: M, mosquito (vector of malaria) (2010)
Anopheles gambiae Strain: S, mosquito (vector of malaria) (2010)
Anopheles sinensis, mosquito (vector of vivax malaria, lymphatic filariasis and Setaria infections), (2014)
Anopheles stephensi
Anopheles arabiensis (2015)
Anopheles quadriannulatus (2015)
Anopheles merus (2015)
Anopheles melas (2015)
Anopheles christyi (2015)
Anopheles epiroticus (2015)
Anopheles maculatus (2015)
Anopheles culicifacies (2015)
Anopheles minimus (2015)
Anopheles funestus (2015, 2019)
Anopheles dirus (2015)
Anopheles farauti (2015)
Anopheles atroparvus (2015)
Anopheles sinensis (2015)
Anopheles albimanus (2015)
Culex quinquefasciatus, mosquito (vector of West Nile virus, filariasis etc.) (2010)
Family Drosophilidae (fruit flies)
Drosophila albomicans, fruit fly (2012)
Drosophila ananassae, fruit fly (2007)
Drosophila biarmipes, fruit fly (2011)
Drosophila bipectinata, fruit fly (2011)
Drosophila erecta, fruit fly (2007)
Drosophila elegans, fruit fly (2011)
Drosophila eugracilis, fruit fly (2011)
Drosophila ficusphila, fruit fly (2011)
Drosophila grimshawi, fruit fly (2007)
Drosophila kikkawai, fruit fly (2011)
Drosophila melanogaster, fruit fly (model organism) (2000)
Drosophila mojavensis, fruit fly (2007)
Drosophila neotestacea, fruit fly (transcriptome 2014)
Drosophila persimilis, fruit fly (2007)
Drosophila pseudoobscura, fruit fly (2005)
Drosophila rhopaloa, fruit fly (2011)
Drosophila santomea, fruit fly ()
Drosophila sechellia, fruit fly (2007)
Drosophila simulans, fruit fly (2007)
Drosophila takahashii, fruit fly (2011)
Drosophila virilis, fruit fly (2007)
Drosophila willistoni, fruit fly (2007)
Drosophila yakuba, fruit fly (2007)
Family Phoridae
Megaselia abdita, scuttle fly (transcriptome 2013)
Family Psychodidae (drain flies)
Clogmia albipunctata, moth midge (transcriptome 2013)
Family Sarcophagidae (flesh flies)
Sarcophaga bullata, Flesh fly (2019)
Family Syrphidae (hoverflies)
Episyrphus balteatus, hoverfly (transcriptome 2011)
Order Hemiptera
Acyrthosiphon pisum, aphid (pea aphid) (2010)
Ericerus pela, Chinese wax scale insect (2019)
Laodelphax striatellus, small brown planthopper (2017)
Lycorma delicatula, spotted lanternfly (2019)
Rhodnius prolixus, kissing-bug (2015)
Rhopalosiphum maidis, Corn leaf aphid (2019)
Sitobion miscanthi, Indian grain aphid (2019)
Triatoma rubrofasciata, assassin bug (2019)
Order Hymenoptera
Acromyrmex echinatior colony Ae372, ant (Panamanian leafcutter) (2011)
Apis mellifera, bee (honey bee), (model for eusocial behavior) (2006)
Atta cephalotes, ant (leaf-cutter ant) (2011)
Camponotus floridanus, ant (2010)
Cerapachys biroi, ant (clonal raider ant)(2014)
Euglossa dilemma, Green orchid bee (2017)
Harpegnathos saltator, ant (2010)
Lasius niger, ant (black garden ant)(2017)
Linepithema humile, ant (Argentine ant) (2011)
Nasonia giraulti, wasp (parasitoid wasp) (2010)
Nasonia longicornis, wasp (parasitoid wasp) (2010)
Nasonia vitripennis, wasp (parasitoid wasp; model organism) (2010)
Nomia melanderi, Alkali bee (2019)
Pogonomyrmex barbatus, ant (red harvester ant) (2011)
Solenopsis invicta, ant (fire ant) (2011)
Order Lepidoptera
Abrostola tripartita Hufnagel, Spectacle (2021)
Achalarus lyciades, Hoary Edge Skipper (2017)
Ahamus jianchuanensis, Jianchuan ghost moth (2024)
Antharaea yamamai, Japanese oak silk moth (2019)
Arctia plantaginis, Wood tiger moth (2020)
Bicyclus anynana, squinting bush brown (2017)
Bombyx mori Strain:p50T, moth (domestic silk worm) (2004)
Calycopis cecrops, Red-Banded Groundstreak (2016)
Calycopis isobeon, Dusky-Blue Groundstreak (2016)
Coenonympha arcania, Pearly Heath (2024)
Cydia pomonella, codling moth (2019)
Danaus plexippus, monarch butterfly (2011)
Heliconius melpomene, butterfly (2012)
Keiferia lycopersicella, Tomato pinworm (2024)
Melitaea cinxia, Glanville fritillary butterfly (2014)
Megathymus ursus violae, bear giant skipper butterfly (2018)
Morpho helenor, Common blue morpho (2023)
Morpho achilles, Blue-banded morpho (2023)
Morpho deidamia (2023)
Papilio bianor, Chinese peacock butterfly (2019)
Phthorimaea absoluta, Tomato leafminer (2024)
Pieris rapae, small cabbage white butterfly (2016)
Plodia interpunctella, Indianmeal moth (2022)
Plutella xylostella, moth (diamondback moth) (2013)
Scrobipalpa atriplicella, Goosefoot groundling moth (2024)
Spodoptera frugiperda, Fall armyworm (2017)
Thitarodes armoricanus, Himalaya ghost moth (2024)
Thitarodes xiaojinensis, Xiaojin ghost moth (2024)
Troides aeacus, Golden birdwing (2024)
Eudocima phalonia, fruit-piercing moth (2017)
Order Orthoptera
Locusta migratoria, migratory locust (2014)
Schistocerca gregaria, desert locust (2020)
Gryllus bimaculatus, two-spotted cricket (2021)
Order Phthiraptera
Pediculus humanus, louse (sucking louse; parasite) (2010)
Psocoptera
Liposcelis brunnea, booklouse (2022)
Order Raphidioptera
Venustoraphidia nigricollis, black-necked snakefly (2023)
Order Trichoptera
Eubasilissa regina, purple caddisfly (2022)
Stenopsyche tienmushanensis, Caddisfly (2018)
Crustaceans
Acartia tonsa dana, cosmopolitan calanoid copepod (2019)
Cherax quadricarinatus, Red claw crayfish (2020)
Daphnia pulex, water flea (2007)
Eulimnadia texana, Clam Shrimp (2018)
Macrobrachium nipponense, oriental river prawn (2021)
Neocaridina denticulata, shrimp (2014)
Parhyale hawaiensis, amphipod (2016)
Pollicipes pollicipes, Gooseneck barnacle (2022)
Portunus trituberculatus, swimming crab (2020)
Procambarus virginalis, marbled crayfish (2018)
Sphaeroma terebrans, a wood-boring isopod (2019)
Tigriopus kingsejongensis, antarctic-endemic copepod (2017)
Chelicerates
Order Xiphosura:
Limulus polyphemus, Atlantic horseshoe crab (2014)
Carcinoscorpius rotundicauda, mangrove horseshoe crab (2021)
Tachypleus tridentatus, tri-spine horseshoe crab (2021)
Order Ixodida:
Ixodes scapularis, deer tick (2016)
Order Mesostigmata:
Tropilaelaps mercedesae, honeybee mite (2017)
Order Trombidiformes:
Tetranychus urticae, spider mite (2011)
Order Scorpiones:
Mesobuthus martensii, Chinese scorpion (2013)
Order Araneae:
Acanthoscurria geniculata, Brazilian whiteknee tarantula (2014)
Argiope bruennichi, European wasp spider (2021)
Dysdera silvatica, Canary Island nocturnal endemic woodlouse spider (2019)
Latrodectus elegans, Black widow spider (2022)
Nephila clavipes, (golden silk orb-weaver) (2017)
Parasteatoda tepidariorum, (common house spider) (2017)
Stegodyphus mimosarum, African social velvet spider (2014)
Uloborus diversus, Cribellate orb-weaving spider, (2023)
Myriapods
Strigamia maritima, centipede
Onychophora
Epiperipatus biolleyi, peripatid velvet worm (1996)
Euperipatoides rowelli, peripatopsid velvet worm (2023)
Tardigrades
Hypsibius dujardini, water bear (2015)
Molluscs
Acanthopleura granulata, chiton (2020)
Achatina fulica, giant African snail (2019)
Architeuthis dux, giant squid (2020)
Argopecten purpuratus, peruvian scallop (2018)
Bathymodiolus platifrons, seep mussel (2017)
Biomphalaria glabrata, a medically important air-breathing freshwater snail in the family Planorbidae (2017)
Biomphalaria straminea, Ramshorn snail (2022)
Candidula unifasciata, Land snail (2021)
Chlamys farreri, Zhikong scallop (2017)
Conus ventricosus, Mediterranean cone snail (2021)
Crassostrea angulata, Portuguese oyster (2023)
Crassostrea gigas, Pacific oyster (2012)
Dreissena rostriformis, Quagga mussel (2019)
Euprymna scolopes, Hawaiian bobtail squid (2019)
Elysia chlorotica, a solar-powered sea slug (2019)
Haliotis discus hannai, pacific abalone (2017)
Hapalochlaena maculosa, Southern blue-ringed octopus (2020)
Kalloconus canariensis, Canary Island cone shell (2023)
Kelletia kelletii, Kellet’s whelk (2023)
Lottia gigantea, owl limpet (2013)
Limnoperna fortunei, invasive golden mussel (2017)
Liolophura japonica, Common chiton (2024)
Margaritifera margaritifera, European freshwater pearl mussel (2023)
Modiolus philippinarum, shallow water mussel (2017)
Mytilus galloprovincialis, Mediterranean mussel (2016)
Octopus bimaculoides, California two-spot octopus (2015)
Octopus minor, common long-arm octopus (2018)
Octopus vulgaris, common octopus (2019)
Panopea generosa, Pacific geoduck (2023)
Patinopecten yessoensis, Yesso scallop (2017)
Pecten maximus, Great scallop (2020)
Pinctada fucata, Pearl oyster (2012)
Plakobranchus ocellatus, Kleptoplastic sea slug (2021)
Pomacea canaliculata, golden apple snail (2018)
Ruditapes philippinarum, Manila clam (2017)
Saccostrea glomerata, Sydney rock oyster (2018)
Scapharca broughtonii, Blood clam (2019)
Tridacna crocea, Giant clam (2023)
Venustaconcha ellipsiformis, freshwater mussel (2018)
Platyhelminthes
Clonorchis sinensis, liver fluke (human pathogen) (draft 2011)
Echinococcus granulosus, tapeworm (dog pathogen) (2013, 2013)
Echinococcus multilocularis, tapeworm (2013)
Hymenolepis microstoma, tapeworm (2013)
Schistosoma haematobium, schistosome (human pathogen) (2012 2019)
Schistosoma japonicum, schistosome (human pathogen) (2009)
Schistosoma mansoni, schistosome (human pathogen) (2009, 2012)
Schmidtea mediterranea, planarian (model organism) (2006)
Taenia solium, tapeworm (2013)
Nematodes
Ancylostoma ceylanicum, zoonotic hookworm infecting both humans and other mammals (2015)
Aplectana chamaeleonis, amphibian parasite (2023)
Ascaris suum, pig-infecting giant roundworm, closely related to human-infecting giant roundworm Ascaris lumbricoides (2011)
Brugia malayi (Strain:TRS), human-infecting filarial parasite (2007)
Bursaphelenchus xylophilus, infects pine trees (2011)
Caenorhabditis angaria (Strain:PS1010) (2010)
Caenorhabditis brenneri, a gonochoristic (male-female obligate) species more closely related to C. briggsae than C. elegans
Caenorhabditis briggsae (2003)
Caenorhabditis elegans (Strain:Bristol N2), model organism (1998)
Caenorhabditis remanei, a gonochoristic (male-female obligate) species more closely related to C. briggsae than C. elegans
Dirofilaria immitis, dog-infecting filarial parasite (2012)
Globodera pallida, plant pathogen (2014)
Haemonchus contortus, blood-feeding parasite infecting sheep and goats (2013)
Heterodera glycines, soybean cyst nematode (2019)
Heterorhabditis bacteriophora, (2013)
Loa loa, human-infecting filarial parasite (2013)
Meloidogyne hapla, northern root-knot nematode (plant pathogen) (2008)
Meloidogyne incognita, southern root-knot nematode (plant pathogen) (2008)
Necator americanus, human-infecting hookworm (2014)
Onchocerca volvulus, human-infecting filarial parasite
Pristionchus pacificus, model invertebrate (2008)
Romanomermis culicivorax, entomopathogenic nematode that invades larvae of various mosquito species (2013)
Trichuris suis, pig-infecting whipworm (2014)
Trichuris muris, mouse-infecting whipworm (2014)
Trichuris trichiura, human-infecting whipworm (2014)
Wuchereria bancrofti, human-infecting filarial parasite
Annelids
Capitella teleta, polychaete (2007, 2013)
Helobdella robusta, leech (2007, 2013)
Eisenia fetida, earthworm (2015, 2016)
Paraescarpia echinospica, deep-sea tubeworm (2021)
Hirudinaria manillensis, Asian Buffalo leech (2023)
Hirudo nipponia, Japanese blood-sucking leech (2023)
Whitmania pigra, Asian freshwater leech (2023)
Bryozoa
Bugula neritina, bryozoan (2020)
Brachiopoda
Lingula anatina, brachiopod (2015)
Rotifera
Adineta vaga, rotifer (2013)
See also
List of sequenced bacterial genomes
List of sequenced archaeal genomes
List of sequenced eukaryotic genomes
List of sequenced fungi genomes
List of sequenced plant genomes
List of sequenced protist genomes
List of sequenced plastomes
References
Animal
Biology-related lists | List of sequenced animal genomes | [
"Engineering",
"Biology"
] | 12,228 | [
"Lists of sequenced genomes",
"DNA sequencing",
"Genetic engineering",
"Genome projects"
] |
31,710,022 | https://en.wikipedia.org/wiki/Spiruchostatin | Spiruchostatins are a group of chemical compounds isolated from Pseudomonas sp. as gene expression-enhancing substances. They possess novel bicyclic depsipeptides involving 4-amino-3-hydroxy-5-methylhexanoic acid and 4-amino-3-hydroxy-5-methylheptanoic acid residues. The two main forms are spiruchostatin A and spiruchostatin B.
Uses
Spiruchostatin A is a natural product that is showing positive signs of development as a prodrug. Very often, prodrugs are made by simple addition of groups, such as acetyl or alkyl chains, to the drug through amide or ester formation. Similarly, in nature there are examples in which protecting groups are added to a natural product (NP), as a strategy for self-resistance. This evolutionary trait allows the organism to carry the toxic compound harmlessly until the target is presented that activates the NP.
Spiruchostatin A is a member of a class of cyclic, cysteine-containing, depsipeptidic natural products. Due to their histone deacetylase (HDAC) inhibitory activity, tremendous effort has been put into the synthesis of these compounds and their analogs as potential chemotherapeutic agents. In human cancers, HDACs are often over-expressed and are recruited to reduce the transcription of tumor suppressor genes and other cell growth regulators. HDAC inhibitors are therefore thought to increase the expression of tumor suppressor genes, and they are an emerging compound class in the treatment of cancers, with several agents either approved or in development.
References
Prodrugs
Depsipeptides | Spiruchostatin | [
"Chemistry"
] | 356 | [
"Chemicals in medicine",
"Prodrugs"
] |
31,710,978 | https://en.wikipedia.org/wiki/S-Nitrosylation | In biochemistry, S-nitrosylation is the covalent attachment of a nitric oxide group () to a cysteine thiol within a protein to form an S-nitrosothiol (SNO). S-Nitrosylation has diverse regulatory roles in bacteria, yeast and plants and in all mammalian cells. It thus operates as a fundamental mechanism for cellular signaling across phylogeny and accounts for the large part of NO bioactivity.
S-Nitrosylation is precisely targeted, reversible, spatiotemporally restricted and necessary for a wide range of cellular responses, including the prototypic example of red blood cell mediated autoregulation of blood flow that is essential for vertebrate life. Although originally thought to involve multiple chemical routes in vivo, accumulating evidence suggests that S-nitrosylation depends on enzymatic activity, entailing three classes of enzymes (S-nitrosylases) that operate in concert to conjugate NO to proteins, drawing analogy to ubiquitinylation. Besides enzymatic activity, hydrophobicity and low pKa values also play a key role in regulating the process. S-Nitrosylation was first described by Stamler et al. and proposed as a general mechanism for control of protein function, including examples of both active and allosteric regulation of proteins by endogenous and exogenous sources of NO. The redox-based chemical mechanisms for S-nitrosylation in biological systems were also described concomitantly. Important examples of proteins whose activities were subsequently shown to be regulated by S-nitrosylation include the NMDA-type glutamate receptor in the brain. Aberrant S-nitrosylation following stimulation of the NMDA receptor would come to serve as a prototypic example of the involvement of S-nitrosylation in disease. S-Nitrosylation similarly contributes to physiology and dysfunction of cardiac, airway and skeletal muscle and the immune system, reflecting wide-ranging functions in cells and tissues. It is estimated that ~70% of the proteome is subject to S-nitrosylation and the majority of those sites are conserved. S-Nitrosylation is also known to mediate pathogenicity in Parkinson's disease model systems. S-Nitrosylation is thus established as ubiquitous in biology, having been demonstrated to occur in all phylogenetic kingdoms, and has been described as the prototypic redox-based signalling mechanism.
Denitrosylation
The reverse of S-nitrosylation is denitrosylation, principally an enzymically controlled process. Multiple enzymes have been described to date, which fall into two main classes mediating denitrosylation of protein and low molecular weight SNOs, respectively. S-Nitrosoglutathione reductase (GSNOR) is exemplary of the low molecular weight class; it accelerates the decomposition of S-nitrosoglutathione (GSNO) and of SNO-proteins in equilibrium with GSNO. The enzyme is highly conserved from bacteria to humans. Thioredoxin (Trx)-related proteins, including Trx1 and 2 in mammals, catalyze the direct denitrosylation of S-nitrosoproteins (in addition to their role in transnitrosylation). Aberrant S-nitrosylation (and denitrosylation) has been implicated in multiple diseases, including heart disease, cancer and asthma as well as neurological disorders, including stroke, chronic degenerative diseases (e.g., Parkinson's and Alzheimer's disease) and amyotrophic lateral sclerosis (ALS).
Transnitrosylation
Another interesting aspect of S-nitrosylation is protein–protein transnitrosylation, the transfer of an NO moiety from a SNO to a free thiol in another protein. Thioredoxin (Trx), a cytosolic protein disulfide oxidoreductase, and caspase-3 provide a good example in which transnitrosylation is significant in regulating cell death. In another example, the structural change of mammalian hemoglobin (Hb) to SNO-Hb under oxygen-depleted conditions helps it bind to AE1 (anion exchanger 1, a membrane protein), which in turn becomes transnitrosylated. Cdk5 (a neuron-specific kinase) is known to be nitrosylated at cysteines 83 and 157 in neurodegenerative diseases such as Alzheimer's disease. SNO-Cdk5 in turn nitrosylates Drp1, and the nitrosylated form of Drp1 can be considered a therapeutic target.
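Schematically, transnitrosylation is the reversible transfer of the NO group from one cysteine thiol to another (a generic scheme, not specific to any one protein pair):
$$\text{Protein}_1\text{–S–NO} + \text{Protein}_2\text{–SH} \;\rightleftharpoons\; \text{Protein}_1\text{–SH} + \text{Protein}_2\text{–S–NO}.$$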
References
Protein structure
Protein biosynthesis
Chemical reactions | S-Nitrosylation | [
"Chemistry"
] | 1,002 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis",
"Structural biology",
"nan",
"Protein structure"
] |
31,714,734 | https://en.wikipedia.org/wiki/Kummer%27s%20congruence | In mathematics, Kummer's congruences are some congruences involving Bernoulli numbers, found by .
Kubota and Leopoldt used Kummer's congruences to define the p-adic zeta function.
Statement
The simplest form of Kummer's congruence states that
$$\frac{B_h}{h} \equiv \frac{B_k}{k} \pmod{p} \qquad \text{whenever } h \equiv k \pmod{p - 1},$$
where $p$ is a prime, $h$ and $k$ are positive even integers not divisible by $p - 1$ and the numbers $B_h$ are Bernoulli numbers.
More generally, if $h$ and $k$ are positive even integers not divisible by $p - 1,$ then
$$(1 - p^{h-1})\,\frac{B_h}{h} \equiv (1 - p^{k-1})\,\frac{B_k}{k} \pmod{p^{a+1}}$$
whenever
$$h \equiv k \pmod{\varphi(p^{a+1})},$$
where $\varphi(p^{a+1})$ is the Euler totient function, evaluated at $p^{a+1},$ and $a$ is a non-negative integer. At $a = 0,$ the expression takes the simpler form seen above.
The two sides of the Kummer congruence are essentially values of the p-adic zeta function, and the Kummer congruences imply that the p-adic zeta function for negative integers is continuous, so can be extended by continuity to all p-adic integers.
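These congruences are easy to check numerically. The following Python sketch (our own illustration, not part of the original article) computes Bernoulli numbers from the standard recurrence and verifies the simplest form for p = 7; note that B_h/h makes sense modulo p because, by the von Staudt–Clausen theorem, p does not divide the denominator of B_h when p − 1 does not divide h:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    # B_0..B_n from the recurrence sum_{j<m+1} C(m+1, j) * B_j = 0 (m >= 1),
    # with the convention B_1 = -1/2.
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def frac_mod(q, p):
    # Value of the fraction q modulo the prime p (Python 3.8+ modular inverse).
    return q.numerator * pow(q.denominator, -1, p) % p

p = 7  # so p - 1 = 6
B = bernoulli_numbers(14)
for h, k in [(2, 8), (4, 10), (8, 14)]:   # h ≡ k (mod 6), 6 divides neither
    print(h, k, frac_mod(B[h] / h, p), frac_mod(B[k] / k, p))
# each line prints equal residues, e.g. "2 8 3 3"
```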
See also
Von Staudt–Clausen theorem, another congruence involving Bernoulli numbers
References
Theorems in number theory
Modular arithmetic | Kummer's congruence | [
"Mathematics"
] | 245 | [
"Theorems in number theory",
"Arithmetic",
"Mathematical problems",
"Mathematical theorems",
"Modular arithmetic",
"Number theory"
] |
43,010,045 | https://en.wikipedia.org/wiki/Point-to-point%20encryption | Point-to-point encryption (P2PE) is a standard established by the PCI Security Standards Council. The objective of P2PE is to provide a payment security solution that instantaneously converts confidential payment card (credit and debit card) data and information into indecipherable code at the time the card is swiped, in order to prevent hacking and fraud. It is designed to maximize the security of payment card transactions in an increasingly complex regulatory environment. There also exist payment solutions based on end-to-end encryption, implying the highest level of confidentiality for the transferred data.
The standard
The P2PE Standard defines the requirements that a "solution" must meet in order to be accepted as a PCI-validated P2PE solution. A "solution" is a complete set of hardware, software, gateway, decryption, device handling, etc. Only "solutions" can be validated; individual pieces of hardware such as card readers cannot be validated. It is also a common mistake to refer to P2PE validated solutions as "certified"; there is no such certification.
The determination of whether or not a solution meets the P2PE standard is the responsibility of a P2PE Qualified Security Assessor (P2PE-QSA). P2PE-QSA companies are independent third-party companies who employ assessors that have met the PCI Security Standards Council's requirements for education and experience, and have passed the requisite exam. The PCI Security Standards Council does not validate solutions.
How it works
As a payment card is swiped through a card reading device, referred to as a point of interaction (POI) device, at the merchant location or point of sale, the device immediately encrypts the card information. A device that is part of a PCI-validated P2PE solution uses an algorithmic calculation to encrypt the confidential payment card data. From the POI, the encrypted, indecipherable codes are sent to the payment gateway or processor for decryption. The keys for encryption and decryption are never available to the merchant, making card data entirely invisible to the retailer. Once the encrypted codes are within the secure data zone of the payment processor, the codes are decrypted to the original card numbers and then passed to the issuing bank for authorization. The bank either approves or rejects the transaction, depending upon the card holder's payment account status. The merchant is then notified if the payment is accepted or rejected to complete the process along with a token that the merchant can store. This token is a unique number reference to the original transaction that the merchant can use should they ever be needed to perform research or refund the customer without ever knowing the customer's card information (tokenization). There are also Qualified Integrator and Reseller (QIR) Companies, which are businesses authorized to "implement, configure, and/or support validated" PA-DSS Payment Applications, and perform qualified installations.
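The flow above can be sketched in a few lines of code. This is only an illustration of the data flow, not of real P2PE cryptography: Fernet stands in for the derived-unique-key schemes actual POI devices use, and all function and variable names here are hypothetical.

from cryptography.fernet import Fernet
import hashlib

provider_key = Fernet.generate_key()   # injected into the POI by the solution provider;
                                       # the merchant never holds this key

def poi_swipe(pan: str) -> bytes:
    # Runs inside the card reader: the PAN is encrypted at the moment of capture.
    return Fernet(provider_key).encrypt(pan.encode())

def gateway_authorize(ciphertext: bytes) -> str:
    # Runs only inside the provider's secure decryption environment.
    pan = Fernet(provider_key).decrypt(ciphertext).decode()
    # Return an opaque token the merchant may store instead of the card number.
    return "tok_" + hashlib.sha256(pan.encode()).hexdigest()[:12]

encrypted = poi_swipe("4111111111111111")   # merchant systems see only this blob
print(gateway_authorize(encrypted))          # ...and the returned token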
Solution providers
According to the PCI Security Standards Council: "The P2PE solution provider is a third-party entity (for example, a processor, acquirer, or payment gateway) that has overall responsibility for the design and implementation of a specific P2PE solution, and manages P2PE solutions for its merchant customers. The solution provider has overall responsibility for ensuring that all P2PE requirements are met, including any P2PE requirements performed by third-party organizations on behalf of the solution provider (for example, certification authorities and key-injection facilities)."
Benefits
Customer benefits
P2PE significantly reduces the risk of payment card fraud by instantaneously encrypting confidential cardholder data at the moment a payment card is swiped (or "dipped", in the case of a chip card) at the card reading device (payment terminal) or POI.
Merchant benefits
P2PE significantly facilitates merchant responsibilities:
With a P2PE validated solution, merchants save significant time and money, as their obligations under the Payment Card Industry Data Security Standard (PCI DSS) may be greatly reduced. For organizations who use a P2PE validated solution provider, the PCI Self Assessment Questionnaire is reduced from 12 sections to 4 sections and the controls are reduced from 329 questions to just 35.
In the event of fraud, the P2PE Solution Provider, not the merchant, is held accountable for data loss and resulting fines that may be assessed by the card brands (American Express, Visa, MasterCard, Discover, and JCB). The PCI Security Standards Council does not assess penalties on Solution Providers or Merchants.
The payment process with P2PE is quicker than other transaction processes, thus creating simpler and faster customer–merchant transactions.
Point-to-point encryption versus end-to-end encryption
Point-to-point
A point-to-point connection directly links system 1 (the point of payment card acceptance) to system 2 (the point of payment processing).
A true P2PE solution is determined with three main factors:
The solution uses a hardware-to-hardware encryption and decryption process along with a POI device that has SRED (Secure Reading and Exchange of Data) listed as a function.
The solution has been validated to the PCI P2PE Standard which includes specific POI device requirements such as strict controls regarding shipping, receiving, tamper-evident packaging, and installation.
A solution includes merchant education in the form of a P2PE Instruction Manual, which guides the merchant on POI device use, storage, return for repairs, and regular PCI reporting.
End-to-end
End-to-end encryption, as the name suggests, has the advantage over P2PE that card details are never unencrypted between the two endpoints. If the endpoints are a PCI PED validated PIN pad and a POS acquirer, there is no opportunity for the card details to be intercepted. It is obviously important that the endpoints (the PED and gateway) are provided by PCI accredited organisations.
PCI point-to-point encryption requirements
The requirements include:
Secure encryption of payment card data at the point of interaction (POI),
P2PE validated application(s) at the point of interaction,
Secure management of encryption and decryption devices,
Management of the decryption environment and all decrypted account data,
Use of secure encryption methodologies and cryptographic key operations, including key generation, distribution, loading/injection, administration, and usage.
References
Cryptography | Point-to-point encryption | [
"Mathematics",
"Engineering"
] | 1,347 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
43,016,216 | https://en.wikipedia.org/wiki/Prandtl%20condition | In fluid mechanics the Prandtl condition was suggested by the German physicist Ludwig Prandtl to identify possible boundary layer separation points of incompressible fluid flows.
Prandtl condition in normal shock
In the case of normal shock, flow is assumed to be in a steady state and the thickness of the shock is very small. It is further assumed that there is no friction or heat loss at the shock (heat transfer is negligible because it occurs across a relatively small surface). It is customary in this field to denote x as the upstream and y as the downstream condition.
Since the mass flow rate through the two sides of the shock is constant, the mass balance becomes

    ρ_x u_x = ρ_y u_y.
As there is no external force applied, momentum is conserved, which gives rise to the equation

    P_x + ρ_x u_x² = P_y + ρ_y u_y².
Because heat flow is negligible, the process can be treated as adiabatic, so the energy equation is

    c_p T_x + u_x²/2 = c_p T_y + u_y²/2.
From the equation of state for perfect gas,
P=ρRT
As the temperature is discontinuous across the shock wave, the speed of sound differs in the adjoining media. It is therefore convenient to define a star Mach number that is independent of the local Mach number; from the star condition, the speed of sound at the critical condition also serves as a good reference velocity. The speed of sound at that temperature is

    c* = √(γ R T*).
The additional Mach number, which is independent of the local speed of sound, is then

    M* = u/c*.
Since energy remains constant across the shock, on either side

    c²/(γ − 1) + u²/2 = c*² (γ + 1) / (2(γ − 1)).

Dividing the momentum equation by the mass equation gives

    c_x²/(γ u_x) + u_x = c_y²/(γ u_y) + u_y.

Combining the above equations yields

    u_x u_y = c*²,    equivalently    M*_x M*_y = 1,

which is called the Prandtl condition for a normal shock.
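A quick numerical check of this result (a sketch; γ = 1.4 for air and the upstream Mach number are assumed example values), using the standard normal-shock relation for the downstream Mach number:

import math

gamma = 1.4
Mx = 2.0                                          # upstream Mach number (assumed)
My = math.sqrt((1 + 0.5 * (gamma - 1) * Mx**2) /
               (gamma * Mx**2 - 0.5 * (gamma - 1)))

def M_star(M):
    # Star Mach number: M*^2 = (gamma + 1) M^2 / (2 + (gamma - 1) M^2)
    return math.sqrt((gamma + 1) * M**2 / (2 + (gamma - 1) * M**2))

print(M_star(Mx) * M_star(My))                    # 1.0: the Prandtl condition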
References
Fluid dynamics | Prandtl condition | [
"Chemistry",
"Engineering"
] | 322 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
5,314,955 | https://en.wikipedia.org/wiki/Body%20capacitance | Body capacitance is the physical property of a human body to act as a capacitor. Like any other electrically conductive object, a human body can store electric charge if insulated. The actual amount of capacitance varies with the surroundings; it would be low when standing on top of a pole with nothing nearby, but high when leaning against an insulated, but grounded large metal surface, such as a household refrigerator, or a metal wall in a factory.
When a human's body capacitance is charged to a high voltage by friction or other means, it can produce undesirable effects when abruptly discharged as a spark. The influence of body capacitance on a tuned circuit may also change its resonant frequency, which would affect the performance of radio receivers. A capacitive sensing circuit that detects a change in body capacitance from a human finger can be used for a touchscreen or a touch switch, allowing control of devices without depressing mechanical switches.
Properties
Friction with some fabrics can act as an electrostatic generator that can charge a human body to about . Some electronic devices can be damaged by voltages of the order of . The breakdown voltages of metal oxide semiconductors without protection diodes may be even lower. Electronics factories are careful to prevent people from becoming charged. A branch of the electronics industry deals with preventing static charge build-up and protecting products against electrostatic discharge.
Notably, a combination of footwear with some sole materials, low humidity, and a dry carpet can cause footsteps to charge a person's body capacitance to as much as a few tens of kilovolts with respect to the earth. The human and surroundings then constitute a highly charged capacitor. A close approach to any conductive object connected to earth (ground) can create a shock, even a visible spark.
Capacitance of a human body in normal surroundings is typically in the tens to low hundreds of picofarads, which is small by typical electronic standards. The human-body model defined by the Electrostatic Discharge Association (ESDA) is a 100 pF capacitor in series with a 1.5 kΩ resistor. While humans are much larger than typical electronic components, they are also mostly separated by significant distance from other conductive objects. But close contact with another conducting body may cause an abrupt discharge of the stored energy as a spark. Although the occasional static shock can be startling and even unpleasant, the amount of stored energy is relatively low, and won't harm a healthy person. But it can result in momentary pain and a startle response that may cause further accidents. The spark may damage sensitive materials or electronic devices and in exceptional cases may ignite flammable gas or vapor resulting in a fire.
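For a feel for the numbers (a sketch; the capacitance and voltage below are assumed order-of-magnitude values consistent with the ranges above), the stored energy is E = ½CV²:

C = 150e-12                  # F: body capacitance, typical order of magnitude
V = 15e3                     # V: assumed charge from walking on a dry carpet
E = 0.5 * C * V**2           # stored energy in joules
print(f"{E * 1e3:.1f} mJ")   # ≈ 16.9 mJ: a startling spark, not a harmful one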
Touch sensors
Body capacitance can be used to operate touch switches (e.g. for elevators or faucets). They respond to close approach of a part of a human body, usually a fingertip. They don't require applying any force to their surfaces. Rather, the capacitance between electrodes at the device's surface and the fingertip is sensed.
Tuned circuits
Radio receivers rely on tuned circuits to isolate the frequency of a particular desired signal. Body capacitance was a significant nuisance when tuning the earliest radios; touching the tuning knob controlling the tuner's variable capacitor would couple the body capacitance into the tuning circuit, slightly changing its resonant frequency. Design of such circuits intended to be adjusted by a user must prevent interaction of the user's body capacitance with the resonant circuit, so that the resonant frequency is not affected. For example, a metal shield may be placed behind a tuning knob so that the presence of an operator's hand does not affect the tuning.
Body capacitance is exploited in the theremin to shift the frequency of the musical instrument's internal oscillators (one oscillator controls pitch and the other controls loudness).
See also
Self capacitance
Triboelectric series
Triboelectric effect
Touch-sensitive lamp
Test light: certain voltage tester probes rely on body capacitance
References
External links
Downloadable electrostatic BEM modules in MATLAB for self-capacitance of a human body and relevant human body meshes
Energy storage
Capacitors
Biotechnology | Body capacitance | [
"Physics",
"Biology"
] | 897 | [
"Physical quantities",
"Biotechnology",
"Capacitors",
"nan",
"Capacitance"
] |
5,315,016 | https://en.wikipedia.org/wiki/Aeration | Aeration (also called aerification or aeriation) is the process by which air is circulated through, mixed with or dissolved in a liquid or other substances that act as a fluid (such as soil). Aeration processes create additional surface area in the mixture, allowing greater chemical or suspension reactions.
Aeration of liquids
Methods
Aeration of liquids (usually water) is achieved by:
passing air through the liquid by means of a Venturi tube, aeration turbines, or compressed air, which can be combined with diffusers or air stones, as well as fine bubble diffusers, coarse bubble diffusers or linear aeration tubing. Ceramics are suitable for this purpose, often involving dispersion of fine air or gas bubbles through the porous ceramic into a liquid. The smaller the bubbles, the more gas is exposed to the liquid, increasing the gas transfer efficiency. Diffusers or spargers can also be designed into the system to cause turbulence or mixing if desired.
Porous ceramic diffusers are made by fusing aluminum oxide grains using porcelain bonds to form a strong, uniformly porous and homogeneous structure. The naturally hydrophilic material is easily wetted resulting in the production of fine, uniform bubbles.
For a given volume of air or liquid, the total surface area changes inversely with drop or bubble size, and it is across this surface area that exchange occurs. Utilizing extremely small bubbles or drops increases the rate of gas transfer (aeration) due to the higher contact surface area. The pores which these bubbles pass through are generally micrometre-size.
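For spherical bubbles this scaling is easy to quantify (a sketch with assumed bubble diameters): a fixed gas volume V split into bubbles of diameter d exposes a total interface area of 6V/d, so halving the bubble diameter doubles the transfer area.

V = 1e-3                          # m^3: one litre of injected air
for d in (5e-3, 1e-3, 100e-6):    # coarse-, fine- and micro-bubble diameters
    area = 6 * V / d              # N * pi * d^2 with N = V / (pi * d^3 / 6)
    print(f"d = {d:g} m -> {area:g} m^2 of gas-liquid interface")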
Uses of aeration of liquids
To smooth (laminate) the flow of tap water at the faucet.
Production of aerated water or cola for drinking purposes.
Secondary treatment of sewage or industrial wastewater through use of aerating mixers/diffusers.
To increase the oxygen content of water used to house animals, such as aquarium fish or fish farm
To increase oxygen content of wort (unfermented beer) or must (unfermented wine) to allow yeast to propagate and begin fermentation.
To dispel other dissolved gases such as carbon dioxide or chlorine.
In chemistry, to oxidise a compound dissolved or suspended in water.
To induce mixing of a body of otherwise still water.
Pond aeration.
Aeration of liquid solids
Aeration of soil
Aeration in food
Refers to the process in which air is absorbed into the food item. It refers to the lightness of cakes and bread, as measured by the type of pores they contain, and the color and texture of some sauces which have incorporated air bubbles.
In wine tasting, a variety of methods are used to aerate the wine and bring out the aromas, including swirling wine in the glass, using a decanter to increase exposure to air, or using a specialized wine aerator.
Cider from Asturias is poured into the glass from a height of about 1 metre (el escanciado) to increase aeration.
See also
Winkler test for dissolved oxygen
References
Chemical processes
Gas technologies | Aeration | [
"Chemistry"
] | 624 | [
"Chemical process engineering",
"Chemical processes",
"nan"
] |
5,315,359 | https://en.wikipedia.org/wiki/Reluctance%20motor | A reluctance motor is a type of electric motor that induces non-permanent magnetic poles on the ferromagnetic rotor. The rotor does not have any windings. It generates torque through magnetic reluctance.
Reluctance motor subtypes include synchronous, variable, switched and variable stepping.
Reluctance motors can deliver high power density at low cost, making them attractive for many applications. Disadvantages include high torque ripple (the difference between maximum and minimum torque during one revolution) when operated at low speed, and noise due to torque ripple.
Until the early twenty-first century, their use was limited by the complexity of designing and controlling them. Advances in theory, computer design tools, and low-cost embedded systems for control overcame these obstacles. Microcontrollers use real-time computing control algorithms to tailor drive waveforms according to rotor position and current/voltage feedback. Before the development of large-scale integrated circuits, the control electronics were prohibitively costly.
Design and operating fundamentals
The stator consists of multiple projecting (salient) electromagnet poles, similar to a wound field brushed DC motor. The rotor consists of soft magnetic material, such as laminated silicon steel, which has multiple projections acting as salient magnetic poles through magnetic reluctance. For switched reluctance motors, the number of rotor poles is typically less than the number of stator poles, which minimizes torque ripple and prevents the poles from all aligning simultaneously—a position that cannot generate torque.
When a rotor pole is equidistant from two adjacent stator poles, the rotor pole is said to be in the "fully unaligned position". This is the position of maximum magnetic reluctance for the rotor pole. In the "aligned position", two (or more) rotor poles are fully aligned with two (or more) stator poles, (which means the rotor poles completely face the stator poles) and is a position of minimum reluctance.
When a stator pole is energized, the rotor torque is in the direction that reduces reluctance. Thus, the nearest rotor pole is pulled from the unaligned position into alignment with the stator field (a position of less reluctance). (This is the same effect used by a solenoid, or when picking up ferromagnetic metal with a magnet.) To sustain rotation, the stator field must rotate in advance of the rotor poles, thus constantly "pulling" the rotor along. Some motor variants run on 3-phase AC power (see the synchronous reluctance variant below). Most modern designs are of the switched reluctance type, because electronic commutation gives significant control advantages for motor starting, speed control and smooth operation (low torque ripple).
The inductance of each phase winding in the motor varies with position, because the reluctance also varies with position. This presents a control systems challenge.
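The basic relationship can be sketched numerically (illustrative values only; the inductance profile below is assumed, not taken from any particular machine): for a magnetically linear machine the reluctance torque is T = ½ i² dL/dθ, so torque exists only where the inductance varies with rotor position.

import numpy as np

theta = np.linspace(0, 2 * np.pi, 10_000)        # rotor position in radians
L = 0.010 + 0.004 * np.cos(4 * theta)            # H: position-dependent phase inductance
i = 5.0                                          # A: constant phase current
torque = 0.5 * i**2 * np.gradient(L, theta)      # N*m: reluctance torque
print(f"peak torque ~ {abs(torque).max():.3f} N*m")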
Types
Synchronous reluctance
Synchronous reluctance motors (SynRM) have an equal number of stator and rotor poles. The projections on the rotor are arranged to introduce internal flux "barriers", holes that direct the magnetic flux along the so-called direct axis. The number of poles must be even, typically 4 or 6.
The rotor operates at synchronous speeds without current-conducting parts. Rotor losses are minimal compared to those of an induction motor, however it normally has less torque.
Once started at synchronous speed, the motor can operate with sinusoidal voltage. Speed control requires a variable-frequency drive.
High-powered SynRMs typically require rare-earth elements such as neodymium and dysprosium. However, a 2023 study reported the use of a dual-phase magnetic laminate to replace them. Magnetizing such a material creates highly magnetized regions, serving as the rotor poles, while leaving other regions non-magnetic (nonpermeable). In one experiment using high-temperature nitriding to increase strength, a dual-phase rotor output 23 kW at 14,000 RPM with a power density of 1.4 kW and 94% peak efficiency, while a comparable conventional rotor produced 3.7 kW. The use of nonpermeable posts and bridges allows them to be larger and stronger, reducing interference between the flux lines of the rotor and the stator. One limitation is that magnetization is limited to 1.5 T, compared to the 2 T of conventional motors.
Switched reluctance or variable reluctance
Applications
Analog electric meters
Analog electric clocks
Some washing machine designs
Control rod drive mechanisms of nuclear reactors
Hard disk drive motor
Electric vehicles
Power tools such as drill presses, lathes, and bandsaws
See also
Electric vehicle motor
Flux switching alternator, a similar machine arrangement, used as a generator
Switched reluctance motor
Stepper motor
References
External links
Real-Time Simulation of Switched Reluctance Motor Drives Technical Paper
Electric motors | Reluctance motor | [
"Technology",
"Engineering"
] | 985 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
5,317,429 | https://en.wikipedia.org/wiki/Hexanitrohexaazaisowurtzitane | Hexanitrohexaazaisowurtzitane, also called HNIW and CL-20, is a polycyclic nitroamine explosive with the formula . It has a better oxidizer-to-fuel ratio than conventional HMX or RDX. It releases 20% more energy than traditional HMX-based propellants.
History and use
In the 1980s, CL-20 was developed by the China Lake facility, primarily to be used in propellants.
While most development of CL-20 has been fielded by the Thiokol Corporation, the US Navy (through ONR) has also been interested in CL-20 for use in rocket propellants, such as for missiles, as it has lower observability characteristics such as less visible smoke.
Thus far, CL-20 has only been used in the AeroVironment Switchblade 300 “kamikaze” drone, but is undergoing testing for use in the Lockheed Martin [LMT] AGM-158C Long Range Anti-Ship Missile (LRASM) and AGM-158B Joint Air-to-Surface Standoff Missile-Extended Range (JASSM-ER).
The Indian Armed Forces have also looked into CL-20.
The Taiwanese National Chung-Shan Institute of Science and Technology inaugurated a CL-20 production facility in 2022, with reported integration into the HF-2 and HF-3 product lines.
Synthesis
First, benzylamine (1) is condensed with glyoxal (2) under acidic and dehydrating conditions to yield the first intermediate compound (3). Four benzyl groups then selectively undergo hydrogenolysis using palladium on carbon and hydrogen; the amino groups are acetylated during the same step using acetic anhydride as the solvent, giving compound (4). Finally, compound (4) is reacted with nitronium tetrafluoroborate and nitrosonium tetrafluoroborate, resulting in HNIW.
Cocrystals
In August 2011, Adam Matzger and Onas Bolton published results showing that a cocrystal of CL-20 and TNT had twice the stability of CL-20—safe enough to transport, but when heated to the cocrystal may separate into liquid TNT and a crystal form of CL-20 with structural defects that is somewhat less stable than CL-20.
In August 2012, Onas Bolton et al. published results showing that a cocrystal of 2 parts CL-20 and 1 part HMX had similar safety properties to HMX, but with a greater firing power closer to CL-20.
Polymeric derivatives
In 2017, K.P. Katin and M.M. Maslov designed one-dimensional covalent chains based on the CL-20 molecules. Such chains were constructed using molecular bridges for the covalent bonding between the isolated CL-20 fragments. It was theoretically predicted that their stability increased with efficient length growth. A year later, M.A. Gimaldinova and colleagues demonstrated the versatility of molecular bridges. It is shown that the use of bridges is the universal technique to connect both CL-20 fragments in the chain and the chains together to make a network (linear or zigzag). It is confirmed that the increase of the effective sizes and dimensionality of the CL-20 covalent systems leads to their thermodynamic stability growth. Therefore, the formation of CL-20 crystalline covalent solids seems to be energetically favorable, and CL-20 molecules are capable of forming not only molecular crystals but bulk covalent structures as well. Numerical calculations of CL-20 chains and networks' electronic characteristics revealed that they were wide-bandgap semiconductors.
See also
2,4,6-Tris(trinitromethyl)-1,3,5-triazine
4,4’-Dinitro-3,3’-diazenofuroxan (DDF)
Hexanitrobenzene (HNB)
Heptanitrocubane (HNC)
HHTDD
Octaazacubane (N8)
Iceane (Wurtzitane)
Octanitrocubane (ONC)
RE factor
TEX (explosive)
References
Further reading
Lowe, Derek (11 November 2011) "Things I won't work with: Hexanitrohexaazaisowurtzitane"
Explosive chemicals
Nitroamines
Rocket fuels | Hexanitrohexaazaisowurtzitane | [
"Chemistry"
] | 924 | [
"Explosive chemicals"
] |
5,318,198 | https://en.wikipedia.org/wiki/Transmission%20coefficient | The transmission coefficient is used in physics and electrical engineering when wave propagation in a medium containing discontinuities is considered. A transmission coefficient describes the amplitude, intensity, or total power of a transmitted wave relative to an incident wave.
Overview
Different fields of application have different definitions for the term. All the meanings are very similar in concept: In chemistry, the transmission coefficient refers to a chemical reaction overcoming a potential barrier; in optics and telecommunications it is the amplitude of a wave transmitted through a medium or conductor to that of the incident wave; in quantum mechanics it is used to describe the behavior of waves incident on a barrier, in a way similar to optics and telecommunications.
Although conceptually the same, the details in each field differ, and in some cases the terms are not an exact analogy.
Chemistry
In chemistry, in particular in transition state theory, there appears a certain "transmission coefficient" for overcoming a potential barrier. It is (often) taken to be unity for monomolecular reactions. It appears in the Eyring equation.
Optics
In optics, transmission is the property of a substance to permit the passage of light, with some or none of the incident light being absorbed in the process. If some light is absorbed by the substance, then the transmitted light will be a combination of the wavelengths of the light that was transmitted and not absorbed. For example, a blue light filter appears blue because it absorbs red and green wavelengths. If white light is shone through the filter, the light transmitted also appears blue because of the absorption of the red and green wavelengths.
The transmission coefficient is a measure of how much of an electromagnetic wave (light) passes through a surface or an optical element. Transmission coefficients can be calculated for either the amplitude or the intensity of the wave. Either is calculated by taking the ratio of the value after the surface or element to the value before. The transmission coefficient for total power is generally the same as the coefficient for intensity.
Telecommunications
In telecommunication, the transmission coefficient is the ratio of the amplitude of the complex transmitted wave to that of the incident wave at a discontinuity in the transmission line.
Consider a wave travelling through a transmission line with a step in impedance from Z₁ to Z₂. When the wave transitions through the impedance step, a portion of the wave will be reflected back to the source. Because the voltage on a transmission line is always the sum of the forward and reflected waves at that point, if the incident wave amplitude is 1 and the reflected wave amplitude is r, then the amplitude of the forward wave must be the sum of the two waves, or t = 1 + r.
The value for r is uniquely determined from first principles by noting that the incident power on the discontinuity must equal the sum of the power in the reflected and transmitted waves:

    1²/Z₁ = r²/Z₁ + (1 + r)²/Z₂.
Solving the quadratic for r leads both to the reflection coefficient

    r = (Z₂ − Z₁) / (Z₂ + Z₁)

and to the transmission coefficient

    t = 1 + r = 2Z₂ / (Z₂ + Z₁).
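These formulas are easy to sanity-check numerically (a sketch; the two line impedances are assumed example values): the power carried by a wave of amplitude a on impedance Z is proportional to a²/Z, and the incident power must be recovered exactly.

Z1, Z2 = 50.0, 75.0                    # ohm: assumed line impedances
r = (Z2 - Z1) / (Z2 + Z1)              # reflection coefficient = 0.2
t = 1 + r                              # transmission coefficient = 2*Z2/(Z1+Z2)
p_in, p_back, p_fwd = 1 / Z1, r**2 / Z1, t**2 / Z2
assert abs(p_in - (p_back + p_fwd)) < 1e-12   # power balance holds
print(r, t)                            # 0.2 1.2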
The probability that a portion of a communications system, such as a line, circuit, channel or trunk, will meet specified performance criteria is also sometimes called the "transmission coefficient" of that portion of the system. The value of the transmission coefficient is inversely related to the quality of the line, circuit, channel or trunk.
Quantum mechanics
In non-relativistic quantum mechanics, the transmission coefficient and related reflection coefficient are used to describe the behavior of waves incident on a barrier. The transmission coefficient represents the probability flux of the transmitted wave relative to that of the incident wave. This coefficient is often used to describe the probability of a particle tunneling through a barrier.
The transmission coefficient is defined in terms of the incident and transmitted probability current density J according to:

    T = |J_trans · n̂| / |J_inc · n̂|,

where J_inc is the probability current in the wave incident upon the barrier with normal unit vector n̂, and J_trans is the probability current in the wave moving away from the barrier on the other side.
The reflection coefficient R is defined analogously:

    R = |J_refl · n̂| / |J_inc · n̂|.
The law of total probability requires that T + R = 1, which in one dimension reduces to the fact that the sum of the transmitted and reflected currents is equal in magnitude to the incident current.
For sample calculations, see rectangular potential barrier.
WKB approximation
Using the WKB approximation, one can obtain a tunnelling coefficient that looks like

    T ≈ exp( −(2/ħ) ∫ from x₁ to x₂ of √(2m(V(x) − E)) dx ),

where x₁ and x₂ are the two classical turning points for the potential barrier. In the classical limit of all other physical parameters much larger than the reduced Planck constant, denoted ħ, the transmission coefficient goes to zero. This classical limit would have failed in the situation of a square potential.
If the transmission coefficient is much less than 1, it can be approximated with the following formula:

    T ≈ 16 (E/V₀)(1 − E/V₀) exp(−2L √(2m(V₀ − E))/ħ),

where V₀ is the barrier height and L = x₂ − x₁ is the length of the barrier potential.
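As a worked example (a sketch; the barrier height, width and particle energy are assumed values), for a constant barrier V₀ > E the WKB integral reduces to κL with κ = √(2m(V₀ − E))/ħ:

import math

hbar = 1.054571817e-34    # J*s: reduced Planck constant
m_e = 9.1093837015e-31    # kg: electron mass
eV = 1.602176634e-19      # J per electronvolt

def wkb_transmission(E_eV, V0_eV, L_m):
    # T ~ exp(-2 * integral of sqrt(2m(V - E))/hbar dx); the integrand is
    # constant for a flat barrier, so the integral is just kappa * L.
    kappa = math.sqrt(2 * m_e * (V0_eV - E_eV) * eV) / hbar
    return math.exp(-2 * kappa * L_m)

print(wkb_transmission(1.0, 5.0, 1e-9))   # ~1.3e-9 for a 4 eV, 1 nm barrier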
See also
Reflection coefficient
Reflections of signals on conducting lines
References
Quantum mechanics
Geometrical optics
Physical optics
Fiber-optic communications | Transmission coefficient | [
"Physics"
] | 937 | [
"Theoretical physics",
"Quantum mechanics"
] |
5,318,954 | https://en.wikipedia.org/wiki/Double%20diode%20triode | A double diode triode is a type of electronic vacuum tube once widely used in radio receivers. The tube has a triode for amplification, along with two diodes, one typically for use as a detector and the other as a rectifier for automatic gain control, in one envelope. In practice the two diodes usually share a common cathode. Multiple tube sections in one envelope minimized the number of tubes required in a radio or other apparatus.
In European nomenclature a first letter "E" identifies tubes with heaters to be connected in parallel to a transformer winding of 6.3 V; "A" identifies similar 4 V; "U" identifies tubes with heaters to be connected in series across the mains supply, drawing 100 mA; "H" identifies similar 150 mA, "C" identifies similar 200 mA, and "P" identifies similar 300 mA series-connected tubes. Following the voltage letter, "A" stands for a low-current (signal) diode section, "B" for a double diode with common cathode section, "C" for a triode section, "F" for a pentode section, "H" for a hexode or heptode section, and "L" for a power tetrode or pentode section. The first number identified the base type, for example 3 for Octal base; 9 for B7G sub-miniature 7 pin. The remaining numbers identified a particular tube type; tubes with all characters except the first identical had identical electrodes but a different heater; e.g. the EBC81 and UBC81. Generally, odd numbers identified tubes / valves with variable mu characteristics and even numbers straight, or sharp cut-off types.
American nomenclature, also used in Europe, used a number to identify the heater voltage, then one or two sequentially assigned letters, then a number specifying the total number of electrodes plus one. The 6.3V EABC80 has 7 electrodes; the US equivalent are 6AK8 and 6T8, where the "AK" and "T" have no particular meaning; the 6N8 (EBF80) is a dual diode+pentode with 7 electrodes.
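The European letter scheme is regular enough to sketch in a few lines of code. This is only a toy decoder covering the letters described above; real type lists include more section letters and base digits than this, and the trailing digits mix the base code with the running type number.

HEATER = {"E": "6.3 V parallel", "A": "4 V parallel", "U": "100 mA series",
          "H": "150 mA series", "C": "200 mA series", "P": "300 mA series"}
SECTION = {"A": "single signal diode", "B": "double diode, common cathode",
           "C": "triode", "F": "pentode", "H": "hexode or heptode",
           "L": "power tetrode/pentode"}

def decode(code: str) -> str:
    letters = [ch for ch in code[1:] if ch.isalpha()]
    digits = "".join(ch for ch in code if ch.isdigit())
    parts = ", ".join(SECTION[ch] for ch in letters)
    return f"{code}: heater {HEATER[code[0]]}; sections: {parts}; base/type {digits}"

print(decode("EABC80"))   # the triple diode plus triode discussed below
print(decode("UBC41"))    # 100 mA series-heater version of the EBC41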
There are many double diode triode tubes, including EBC81 (6BD7), EBC90 (6AT6), EBC91 (6AV6) and the older EBC1, EBC2, EBC11, EBC21, EBC33, EBC41 (identical to EBC81 but Rimlock (B8A) socket instead of noval), ABC1 (EBC1 with a 4 V heater), CBC1 (EBC1 with a 200 mA heater). The commoner tube line-ups of an AM-only radio set with mains transformer having a double diode-triode were one of the following: ECH11+EF11+EBC11+EL11 Y8A Base -or- ECH42 (or 41)+EF42 (or 41)+ EBC41+ EL41 (or 42) Rimlock Base -or- ECH81+EF80 (or 85 or 89)+ EBC81 (or 91)+ EL84 (noval Socket) + rectifier and magic eye indicator (depending on the radio class and manufacturer). AC/DC sets without mains transformer would use "U" tubes of the same types, e.g. UCH42+UF41+UBC41+UL41+UY41 rectifier.
There was also a tube with a double diode and a triode sharing a common cathode, and an additional, independent single diode section, named EABC80 or 6AK8 or 6T8 (with a shorter glass envelope) and its versions for AC/DC transformerless receivers with series heater chains, named PABC80 (9AK8, 300 mA for TV sets), HABC80 (19T8, 150 mA for radios) and UABC80 (27AK8, 100 mA for radios). This tube was designed for early AM/FM (MW/VHF) radio sets and was widely used until the end of the tube era; the double diode was used for FM demodulation, the third, independent diode for AM detection and/or automatic gain control (AGC).
The main configurations for an early tube AM/FM set using EABC80 in the 1950s and '60s were:
EC92+EF80 (or 85 or 89)+ECH81+EF80 (or 85 or 89)+EABC80+EL84 (or 95) -or- ECC85+EF80 (or 85 or 89)+ECH81+EABC80+EL84 (or 95)+ rectifier (tube or solid state) and indicator, depending on the radio class and manufacturer. For AC/DC radios, UCC85+UCH81+UF80 (or 85 or 89)+UABC80+UL84+ rectifier and indicator. These configurations were kept until semiconductor (germanium) diodes became available, making this type of tube obsolete.
References
RCA Receiving Tube Manual, Series RC-12, RC-19, RC-25 - Published by RCA.
Vacuum tubes | Double diode triode | [
"Physics"
] | 1,145 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
5,321,204 | https://en.wikipedia.org/wiki/D/U%20ratio | In the design of radio broadcast systems, especially television systems, the desired-to-undesired channel ratio (D/U ratio) is a measure of the strength of the broadcast signal for a particular channel compared with the strength of undesired broadcast signals in the same channel (e.g. from other nearby transmitting stations).
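The ratio is normally quoted in decibels; a minimal sketch (the received power levels below are assumed example values):

import math

def du_ratio_db(p_desired_w: float, p_undesired_w: float) -> float:
    # Desired-to-undesired ratio in dB from received co-channel powers (watts).
    return 10 * math.log10(p_desired_w / p_undesired_w)

print(du_ratio_db(2e-9, 1e-10))   # ~13 dB for the assumed power levels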
See also
Signal-to-noise ratio
References
ATSC A/74 compliance and tuner design implications; eetimes.com
Engineering ratios
Noise (electronics) | D/U ratio | [
"Mathematics",
"Engineering"
] | 102 | [
"Quantity",
"Metrics",
"Engineering ratios"
] |
5,321,285 | https://en.wikipedia.org/wiki/Radium%20and%20radon%20in%20the%20environment | Radium and radon are important contributors to environmental radioactivity. Radon occurs naturally as a result of decay of radioactive elements in soil and it can accumulate in houses built on areas where such decay occurs. Radon is a major cause of cancer; it is estimated to contribute to ~2% of all cancer related deaths in Europe.
Radium, like radon, is radioactive and is found in small quantities in nature and is hazardous to life if radiation exceeds 20-50 mSv/year. Radium is a decay product of uranium and thorium. Radium may also be released into the environment by human activity: for example, in improperly discarded products painted with radioluminescent paint.
Radium
In the oil and gas industries
Residues from the oil and gas industry often contain radium and its daughters. The sulfate scale from an oil well can be very radium rich. The water inside an oil field is often very rich in strontium, barium and radium, while seawater is very rich in sulfate: so if water from an oil well is discharged into the sea or mixed with seawater, the radium is likely to be brought out of solution by the barium/strontium sulfate which acts as a carrier precipitate.
Radioluminescent (glow in the dark) products
It is not unknown for local contamination to arise from improper disposal of radium-based radioluminescent paints.
In radioactive quackery
Eben Byers was a wealthy American socialite whose death in 1932 from using a radioactive quackery product called Radithor is a prominent example of a death caused by radium. Radithor contained ~1 μCi (40 kBq) of 226Ra and 1 μCi of 228Ra per bottle. Radithor was taken by mouth and radium, being a calcium mimic, has a very long biological halflife in bone.
Radon
Most of the dose is due to the decay of the polonium (218Po) and lead (214Pb) daughters of 222Rn. By controlling exposure to the daughters the radioactive dose to the skin and lungs can be reduced by at least 90%. This can be done by wearing a dust mask, and wearing a suit to cover the entire body. Note that exposure to smoke at the same time as radon and radon daughters will increase the harmful effect of the radon. In uranium miners radon has been found to be more carcinogenic in smokers than in non-smokers.
Occurrence
Radon concentration in open air varies between 1 and 100 Bq m−3. Radon can be found in some spring waters and hot springs. The towns of Misasa, Japan, and Bad Kreuznach, Germany boast radium-rich springs which emit radon, as does Radium Springs, New Mexico.
Radon exhausts naturally from the ground, particularly in certain regions, especially but not only regions with granitic soils. However, not all granitic regions are prone to high emissions of radon. For instance, while the rock which Aberdeen is on is very radium rich, the rock lacks the cracks required for the radon to migrate. In other nearby areas of Scotland (to the north of Aberdeen) and in Cornwall/Devon the radon is very much able to leave the rock.
Radon is a decay product of radium which in turn is a decay product of uranium. Maps of average radon levels in houses are available, to assist in planning mitigation measures.
While high uranium in the soil/rock under a house does not always lead to a high radon level in air, a positive correlation between the uranium content of the soil and the radon level in air can be seen.
In air
Radon harms indoor air quality in many homes. (See "In houses" below.)
Radon (222Rn) released into the air decays to 210Pb and other radioisotopes, and the levels of 210Pb can be measured. The rate of deposition of this radioisotope is very dependent on the season, as deposition-rate measurements in Japan show.
In groundwater
Well water can be very rich in radon; the use of this water inside a house is another route allowing radon to enter the house. The radon can enter the air and then be a source of exposure to the humans, or the water can be consumed by humans which is a different exposure route.
Radon in rainwater
Rainwater can be highly radioactive due to high levels of radon and its decay progenies 214Bi and 214Pb; the concentrations of these radioisotopes can be high enough to seriously disrupt radiation monitoring at nuclear power plants. The highest levels of radon in rainwater occur during thunderstorms, and it is hypothesized that radon is concentrated in thunderstorms because it forms some positive ions during thunderstorms. Estimates of the age of raindrops have been obtained from measuring the isotopic abundance of radon's short-lived decay progeny in rainwater.
In the oil and gas industries
Water, oil and gas from a well often contain radon. The radon decays to form solid radioisotopes which form coatings on the inside of pipework. In an oil processing plant the area of the plant where propane is processed is often one of the more contaminated areas of the plant as radon has a similar boiling point to propane.
In mines
Because uranium minerals emit radon gas, and their harmful and highly radioactive decay products, uranium mining is considerably more dangerous than other (already dangerous) hard rock mining, requiring adequate ventilation systems if the mines are not open pit. In the 1950s, a significant number of American uranium miners were Navajo, as many uranium deposits were discovered on Navajo reservations. A statistically significant proportion of these miners later developed small-cell lung cancer, a type of cancer usually not associated with smoking, after exposure to uranium ore and radon-222, a natural decay product of uranium. The cancer causing agent has been shown to be the radon which is produced by the uranium, and not the uranium itself. Some survivors and their descendants received compensation under the Radiation Exposure Compensation Act in 1990.
The level of radon in the air of mines is now normally controlled by law. In a working mine, the radon level can be controlled by ventilation, sealing off old workings and controlling the water in the mine. The level in a mine can go up when a mine is abandoned; it can reach a level which can cause the skin to become red (a mild radiation burn). The radon levels in some of the mines can reach 400 to 700 kBq m−3.
A common unit of exposure of lung tissue to alpha emitters is the working level month (WLM), this is where the human lungs have been exposed for 170 hours (a typical month worth of work for a miner) to air which has 3.7 kBq of 222Rn (in equilibrium with its decay products). This is air which has the alpha dose rate of 1 working level (WL). It is estimated that the average person (general public) is subject to 0.2 WLM per year, which works out at about 15 to 20 WLM in a lifetime. According to the NRC, 1 WLM is a 5 to 10 mSv lung dose (0.5 to 1.0 rem), while the Organisation for Economic Co-operation and Development (OECD) consider that 1 WLM is equal to a lung dose of 5.5 mSv, and the International Commission on Radiological Protection (ICRP) consider 1 WLM to be a 5 mSv lung dose for professional workers (and a 4 mSv lung dose for the general public). Lastly the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) consider that the exposure of the lungs to 1 Bq of 222Rn (in equilibrium with its decay products) for one year will cause a dose of 61 μSv.
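The unit conversions in this paragraph can be collected into a few lines (a sketch using only the coefficients quoted above; the 75-year lifetime is an assumed round figure):

WLM_PER_YEAR_PUBLIC = 0.2          # typical exposure of the general public
MSV_PER_WLM = {"NRC (low)": 5.0, "NRC (high)": 10.0,
               "OECD": 5.5, "ICRP worker": 5.0, "ICRP public": 4.0}

lifetime_wlm = 75 * WLM_PER_YEAR_PUBLIC          # 15 WLM, in the 15-20 range above
for body, msv_per_wlm in MSV_PER_WLM.items():
    print(f"{body}: {lifetime_wlm * msv_per_wlm:.0f} mSv lifetime lung dose")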
In houses
It has been known since at least the 1950s that radon is present in indoor air, and research into its effects on human health started in the early 1970s. The danger of radon exposure in dwellings received more widespread public awareness after 1984, as a result of the case of Stanley Watras, an employee at the Limerick nuclear power plant in Pennsylvania. Mr. Watras set off the radiation alarms (see Geiger counter) on his way into work for two weeks straight while authorities searched for the source of the contamination. They were shocked to find that the source was astonishingly high levels of radon in his basement and it was not related to the nuclear plant. The risks associated with living in his house were estimated to be equivalent to smoking 135 packs of cigarettes every day.
Depending how houses are built and ventilated, radon may accumulate in basements and dwellings. The European Union recommends that mitigation should be taken starting from concentrations of 400 Bq/m3 for old houses, and 200 Bq/m3 for new ones.
The National Council on Radiation Protection and Measurements (NCRP) recommends action for any house with a concentration higher than 8 pCi/L (300 Bq/m3).
The United States Environmental Protection Agency recommends action for any house with a concentration higher than 148 Bq/m3 (given as 4 pCi/L). Nearly one in 15 homes in the U.S. has a high level of indoor radon according to their statistics. The U.S. Surgeon General and EPA recommend all homes be tested for radon. Since 1985, millions of homes have been tested for radon in the U.S.
By adding a crawl space under the ground floor, which is subject to forced ventilation, the radon level in the house can be lowered.
References
G.K. Gillmore, P. Phillips, A. Denman, M Sperrin and G. Pearse, Ecotoxicology and Environmental Safety, 2001, 49, 281.
J.H. Lubin and J.D. Boice, Journal Natl. Cancer Inst., 1997, 89, 49. (Risks of indoor radon)
N.M. Hurley and J.H. Hurley, Environment International, 1986, 12, 39. (Lung cancer in uranium miners as a function of radon exposure).
Further reading
Hala, J. and Navratil J.D., Radioactivity, Ionizing Radiation and Nuclear Energy, Konvoj, 2003.
Nuclear technology
Nuclear chemistry
Nuclear physics
Environment
Environment
Radiobiology
Radioactivity
Soil contamination
Pollution | Radium and radon in the environment | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 2,188 | [
"Nuclear chemistry",
"Radiobiology",
"Environmental chemistry",
"Nuclear technology",
"Soil contamination",
"nan",
"Nuclear physics",
"Radioactivity"
] |
26,028,285 | https://en.wikipedia.org/wiki/Buckingham%20potential | In theoretical chemistry, the Buckingham potential is a formula proposed by Richard Buckingham which describes the Pauli exclusion principle and van der Waals energy for the interaction of two atoms that are not directly bonded, as a function of the interatomic distance r. It is a variety of interatomic potentials. The potential has the form

    Φ₁₂(r) = A exp(−Br) − C/r⁶.

Here A, B and C are constants. The two terms on the right-hand side constitute a repulsion and an attraction, because their first derivatives with respect to r are negative and positive, respectively.
Buckingham proposed this as a simplification of the Lennard-Jones potential, in a theoretical study of the equation of state for gaseous helium, neon and argon.
As explained in Buckingham's original paper and, e.g., in section 2.2.5 of Jensen's text, the repulsion is due to the interpenetration of the closed electron shells. "There is therefore some justification for choosing the repulsive part (of the potential) as an exponential function". The Buckingham potential has been used extensively in simulations of molecular dynamics.
Because the exponential term converges to a constant as r → 0, while the −C/r⁶ term diverges, the Buckingham potential becomes attractive as r becomes small. This may be problematic when dealing with a structure with very short interatomic distances, as any nuclei that cross a certain threshold will become strongly (and unphysically) bound to one another at a distance of zero.
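The turnover is easy to see numerically (a sketch; the constants below are illustrative, in arbitrary units, and not fitted to any element):

import math

def buckingham(r, A=1000.0, B=3.0, C=2.0):
    # V(r) = A*exp(-B*r) - C/r**6: finite repulsion vs. divergent attraction
    return A * math.exp(-B * r) - C / r**6

for r in (0.1, 0.5, 1.0, 2.0):
    print(f"r = {r}: V = {buckingham(r):.4g}")
# At r = 0.1 the -C/r**6 term dominates (V ~ -2e6): the unphysical turnover.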
Modified Buckingham (Exp-Six) potential
The modified Buckingham potential, also called the "exp-six" potential, is used to calculate the interatomic forces for gases based on Chapman and Cowling collision theory. The potential has the form

    Φ(r) = (ε / (1 − 6/α)) [ (6/α) exp(α(1 − r/r_m)) − (r_m/r)⁶ ],

where Φ(r) is the interatomic potential between atom i and atom j, ε is the minimum potential energy, α is a dimensionless measure of the steepness of the repulsive energy, σ is the value of r where Φ is zero, and r_m is the value of r at which Φ achieves the minimum −ε. This potential function is only valid when r > r_max, as the potential decays towards −∞ as r → 0. This is corrected by identifying r_max, the value of r at which the potential is maximized; when r < r_max, the potential is set to infinity.
Coulomb–Buckingham potential
The Coulomb–Buckingham potential is an extension of the Buckingham potential for application to ionic systems (e.g. ceramic materials). The formula for the interaction is

    Φ₁₂(r) = A exp(−Br) − C/r⁶ + q₁q₂/(4πε₀ r),

where A, B, and C are suitable constants and the additional term is the electrostatic potential energy.
The above equation may be written in its alternate form as

    Φ(r) = ε [ (6/(α − 6)) exp(α(1 − r/r₀)) − (α/(α − 6)) (r₀/r)⁶ ] + q₁q₂/(4πε₀ r),

where r₀ is the minimum energy distance, α is a free dimensionless parameter and ε is the depth of the minimum energy.
Beest Kramer van Santen (BKS) potential
The BKS potential is a force field that may be used to simulate the interatomic potential between silica glass atoms. Rather than relying only on experimental data, the BKS potential is derived by combining ab initio quantum chemistry methods on small silica clusters to describe the nearest-neighbor interactions accurately, which is what an accurate force field must do. The experimental data is applied to fit larger-scale force information beyond nearest neighbors. By combining the microscopic and macroscopic information, the applicability of the BKS potential has been extended to both the silica polymorphs and other tetrahedral network oxide systems that have the same cluster structure, such as aluminophosphates, carbon and silicon.
The form of this interatomic potential is the usual Buckingham form, with the addition of a Coulomb force term. The formula for the BKS potential is expressed as

    Φ_ij(r_ij) = q_i q_j e²/r_ij + A_ij exp(−b_ij r_ij) − c_ij/r_ij⁶,

where Φ_ij is the interatomic potential between atom i and atom j, q_i and q_j are the charge magnitudes, r_ij is the distance between the atoms, and A_ij, b_ij and c_ij are constant parameters based on the types of the atoms.
The BKS potential parameters for the common atom pairs of silica (with effective charges q_Si = 2.4 and q_O = −1.2) are:

    pair    A_ij (eV)     b_ij (Å⁻¹)    c_ij (eV·Å⁶)
    O–O     1388.7730     2.76000       175.0000
    Si–O    18003.7572    4.87318       133.5381
    Si–Si   0.0           0.0           0.0
An updated version of the BKS potential introduced a new repulsive term to prevent atoms from overlapping at very short distances, with the constant parameters of the added term fitted for silica glass.
References
External links
Buckingham potential on SklogWiki
Theoretical chemistry
Computational chemistry
Thermodynamics
Chemical bonding
Intermolecular forces
Quantum mechanical potentials | Buckingham potential | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 848 | [
"Molecular physics",
"Quantum mechanics",
"Intermolecular forces",
"Materials science",
"Quantum mechanical potentials",
"Computational chemistry",
"Theoretical chemistry",
"Condensed matter physics",
"Thermodynamics",
"nan",
"Chemical bonding",
"Dynamical systems"
] |
26,028,993 | https://en.wikipedia.org/wiki/Movable%20cellular%20automaton | The movable cellular automaton (MCA) method is a method in computational solid mechanics based on the discrete concept. It provides advantages both of classical cellular automaton and discrete element methods. One important advantage of the MCA method is that it permits direct simulation of material fracture, including damage generation, crack propagation, fragmentation, and mass mixing. It is difficult to simulate these processes by means of continuum mechanics methods (For example: finite element method, finite difference method, etc.), so some new concepts like peridynamics are required. Discrete element method is very effective to simulate granular materials, but mutual forces among movable cellular automata provides simulating solids behavior. As the cell size of the automaton approaches zero, MCA behavior approaches classical continuum mechanics methods. The MCA method was developed in the group of S.G. Psakhie
Keystone of the movable cellular automaton method
In framework of the MCA approach an object under modeling is considered as a set of interacting elements/automata. The dynamics of the set of automata are defined by their mutual forces and rules for their relationships. This system exists and operates in time and space. Its evolution in time and space is governed by the equations of motion. The mutual forces and rules for inter-elements relationships are defined by the function of the automaton response. This function has to be specified for each automaton. Due to mobility of automata the following new parameters of cellular automata have to be included into consideration: Ri – radius-vector of automaton; Vi – velocity of automaton; ωi – rotation velocity of automaton; θi – rotation vector of automaton; mi – mass of automaton; Ji – moment of inertia of automaton.
New concept: neighbours
The new concept of the MCA method is based on the introduction of the state of the pair of automata (the relation of interacting pairs of automata) in addition to the conventional one – the state of a separate automaton. Note that the introduction of this definition allows one to go from the static net concept to the concept of neighbours. As a result, the automata have the ability to change their neighbours by switching the states (relationships) of the pairs.
Definition of the parameter of pair state
The introduction of the new type of states leads to a new parameter to use as a criterion for switching relationships. It is defined as the automata overlap parameter h_ij. The relationship of the cellular automata is thus characterised by the value of their overlapping.
The initial structure is formed by setting up certain relationships among each pair of neighboring elements.
Criterion of switching of the state of pair relationships
In contrast to the classical cellular automaton method, in the MCA method not only a single automaton but also the relationship of a pair of automata can be switched. In accordance with the bistable automata concept, there are two types of pair states (relationships): linked, in which the pair behaves as a bonded solid, and unlinked, in which the automata interact only on contact. The changing of the state of pair relationships is thus controlled by the relative movement of the automata, and a medium formed by such pairs can be considered a bistable medium.
Equations of MCA motion
The evolution of MCA media is described by the following equations of motion for translation:
Here m_i is the mass of automaton i, p_ij is the central force acting between automata i and j, C(ij, ik) is a certain coefficient associated with transferring the h parameter from pair ij to pair ik, and α(ij, ik) is the angle between the directions ij and ik.
Due to finite size of movable automata the rotation effects have to be taken into account. The equations of motion for rotation can be written as follows:
Here Θ_ij is the angle of relative rotation (a switching parameter like h_ij for translation), q_ij is the distance from the center of automaton i to the contact point with automaton j (the moment arm), τ_ij is the pair tangential interaction, and S(ij, ik) is a certain coefficient associated with transferring the Θ parameter from one pair to the other (it is similar to C(ij, ik) in the equation for translation).
These equations are completely similar to the equations of motion for the many–particle approach.
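The many-particle structure of these equations can be sketched with a generic pairwise integrator. This is illustrative only: the response function below is a simple linear spring acting along the line of centers, not an MCA response function, and all constants are assumed.

import numpy as np

def pair_force(rij, k=1.0, r0=1.0):
    # Central pair response: repulsive when the automata overlap (r < r0),
    # attractive beyond r0, directed along the line of centers.
    r = np.linalg.norm(rij)
    return -k * (r - r0) * rij / r

def step(pos, vel, masses, dt=1e-3):
    # One semi-implicit Euler step for all automata (translation only).
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            f = pair_force(pos[i] - pos[j])
            forces[i] += f
            forces[j] -= f
    vel = vel + dt * forces / masses[:, None]
    return pos + dt * vel, vel

pos = np.array([[0.0, 0.0], [0.8, 0.0]])   # two overlapping automata
vel = np.zeros_like(pos)
pos, vel = step(pos, vel, np.ones(2))
print(pos)                                  # the pair is pushed apart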
Definition of deformation in pair of automata
Translation of the pair automata
The dimensionless deformation parameter for translation of the i j automata pair can be presented as:
In this case:
where Δt is the time step and V_n^ij is the relative normal velocity.
Rotation of the pair automata can be calculated by analogy with the last translation relationships.
Modeling of irreversible deformation in the MCA method
The ε_ij parameter is used as a measure of deformation of automaton i under its interaction with automaton j, where q_ij is the distance from the center of automaton i to its contact point with automaton j, and R_i = d_i/2 (d_i is the size of automaton i).
As an example, a titanium specimen under cyclic loading (tension – compression) is considered.
Advantages of MCA method
Due to the mobility of each automaton, the MCA method makes it possible to take direct account of such processes as:
mass mixing
penetration effects
chemical reactions
intensive deformation
phase transformations
accumulation of damages
fragmentation and fracture
cracks generation and development
Using boundary conditions of different types (fixed, elastic, viscous-elastic, etc.) it is possible to imitate different properties of surrounding medium, containing the simulated system. It is possible to model different modes of mechanical loading (tension, compression, shear strain, etc.) by setting up additional conditions at the boundaries.
See also
References
Software
MCA software package
Software for simulation of materials in discrete-continuous approach «FEM+MCA»: Number of state registration in Applied Research Foundation of Algorithms and Software (AFAS): 50208802297 / Smolin A.Y., Zelepugin S.A., Dobrynin S.A.; applicant and development center is Tomsk State University. – register date 28.11.2008; certificate AFAS N 11826 date 01.12.2008.
Solid mechanics
Numerical analysis
Cellular automata
Condensed matter physics
Mathematical modeling | Movable cellular automaton | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,238 | [
"Solid mechanics",
"Mathematical modeling",
"Recreational mathematics",
"Applied mathematics",
"Phases of matter",
"Computational mathematics",
"Materials science",
"Cellular automata",
"Mechanics",
"Condensed matter physics",
"Mathematical relations",
"Numerical analysis",
"Approximations",... |
29,230,765 | https://en.wikipedia.org/wiki/Uniform%20limit%20theorem | In mathematics, the uniform limit theorem states that the uniform limit of any sequence of continuous functions is continuous.
Statement
More precisely, let X be a topological space, let Y be a metric space, and let ƒn : X → Y be a sequence of functions converging uniformly to a function ƒ : X → Y. According to the uniform limit theorem, if each of the functions ƒn is continuous, then the limit ƒ must be continuous as well.
This theorem does not hold if uniform convergence is replaced by pointwise convergence. For example, let ƒn : [0, 1] → R be the sequence of functions ƒn(x) = x^n. Then each function ƒn is continuous, but the sequence converges pointwise to the discontinuous function ƒ that is zero on [0, 1) but has ƒ(1) = 1. Another example is shown in the adjacent image.
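The failure of uniformity in this example is easy to check numerically. The following sketch (an illustration, with the grid resolution chosen arbitrarily) estimates the sup-norm distance between ƒn and the pointwise limit; it stays near 1 for every n, so the convergence cannot be uniform.

```python
# x^n on [0, 1] converges pointwise to a discontinuous limit; the
# sup-norm gap sup_x |f_n(x) - f(x)| never shrinks, so the
# convergence is not uniform and the uniform limit theorem does
# not apply.
import numpy as np

x = np.linspace(0.0, 1.0, 100_001)
f_limit = np.where(x < 1.0, 0.0, 1.0)   # pointwise limit

for n in (1, 10, 100, 1000):
    gap = np.max(np.abs(x**n - f_limit))
    print(f"n={n:5d}  sup|f_n - f| ≈ {gap:.4f}")  # stays near 1.0
```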
In terms of function spaces, the uniform limit theorem says that the space C(X, Y) of all continuous functions from a topological space X to a metric space Y is a closed subset of Y^X under the uniform metric. In the case where Y is complete, it follows that C(X, Y) is itself a complete metric space. In particular, if Y is a Banach space, then C(X, Y) is itself a Banach space under the uniform norm.
The uniform limit theorem also holds if continuity is replaced by uniform continuity. That is, if X and Y are metric spaces and ƒn : X → Y is a sequence of uniformly continuous functions converging uniformly to a function ƒ, then ƒ must be uniformly continuous.
Proof
In order to prove the continuity of f, we have to show that for every ε > 0 and every point x of X, there exists a neighbourhood U of x such that:

d(f(y), f(x)) < ε for every y in U.

Consider an arbitrary ε > 0. Since the sequence of functions (fn) converges uniformly to f by hypothesis, there exists a natural number N such that:

d(fN(t), f(t)) < ε/3 for every t in X.

Fix such an N. Moreover, since fN is continuous on X by hypothesis, for every point x there exists a neighbourhood U of x such that:

d(fN(y), fN(x)) < ε/3 for every y in U.

In the final step, we apply the triangle inequality in the following way: for every y in U,

d(f(y), f(x)) ≤ d(f(y), fN(y)) + d(fN(y), fN(x)) + d(fN(x), f(x)) < ε/3 + ε/3 + ε/3 = ε.

Hence, the required inequality holds (with the neighbourhood U chosen separately for each point x), so by definition f is continuous everywhere on X.
Uniform limit theorem in complex analysis
There are also variants of the uniform limit theorem that are used in complex analysis, albeit with modified assumptions.
Theorem.
Let Ω be an open and connected subset of the complex numbers. Suppose that (fn) is a sequence of holomorphic functions that converges uniformly to a function f on every compact subset of Ω. Then f is holomorphic in Ω, and moreover, the sequence of derivatives (fn′) converges uniformly to f′ on every compact subset of Ω.
Theorem.
Let Ω be an open and connected subset of the complex numbers. Suppose that (fn) is a sequence of univalent functions that converges uniformly to a function f. Then f is holomorphic, and moreover, f is either univalent or constant in Ω.
Notes
References
E. M. Stein, R. Shakarchi (2003). Complex Analysis (Princeton Lectures in Analysis, No. 2), Princeton University Press.
E. C. Titchmarsh (1939). The Theory of Functions, 2002 Reprint, Oxford Science Publications.
Theorems in real analysis
Topology of function spaces | Uniform limit theorem | [
"Mathematics"
] | 716 | [
"Theorems in mathematical analysis",
"Theorems in real analysis"
] |
29,237,460 | https://en.wikipedia.org/wiki/Shapley%E2%80%93Folkman%20lemma | The Shapley–Folkman lemma is a result in convex geometry that describes the Minkowski addition of sets in a vector space. It is named after mathematicians Lloyd Shapley and Jon Folkman, but was first published by the economist Ross M. Starr.
The lemma may be intuitively understood as saying that, if the number of summed sets exceeds the dimension of the vector space, then their Minkowski sum is approximately convex.
Related results provide more refined statements about how close the approximation is. For example, the Shapley–Folkman theorem provides an upper bound on the distance between any point in the Minkowski sum and its convex hull. This upper bound is sharpened by the Shapley–Folkman–Starr theorem (alternatively, Starr's corollary).
The Shapley–Folkman lemma has applications in economics, optimization and probability theory. In economics, it can be used to extend results proved for convex preferences to non-convex preferences. In optimization theory, it can be used to explain the successful solution of minimization problems that are sums of many functions. In probability, it can be used to prove a law of large numbers for random sets.
Introductory example
A set is convex if every line segment joining two of its points is a subset of the set: For example, the solid disk is a convex set but the circle is not, because the line segment joining two distinct points of the circle is not a subset of the circle.
The convex hull of a set Q is the smallest convex set that contains Q. The distance between a Minkowski sum and its convex hull is zero if and only if the sum is convex.
Minkowski addition is the addition of the set members. For example, adding the set consisting of the integers zero and one to itself yields the set consisting of zero, one, and two: {0, 1} + {0, 1} = {0, 1, 2}. The subset of the integers {0, 1, 2} is contained in the interval of real numbers [0, 2], which is convex. The Shapley–Folkman lemma implies that every point in [0, 2] is the sum of an integer from {0, 1} and a real number from [0, 1].
The distance between the convex interval [0, 2] and the non-convex set {0, 1, 2} equals one-half: for example, the point 1/2 of [0, 2] lies at distance 1/2 from {0, 1, 2}.
However, the distance between the average Minkowski sum
½({0, 1} + {0, 1}) = {0, 1/2, 1}
and its convex hull [0, 1] is only 1/4, which is half the distance (1/2) between its summand {0, 1} and [0, 1]. As more sets are added together, the average of their sum "fills out" its convex hull: The maximum distance between the average and its convex hull approaches zero as the average includes more summands.
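This "filling out" can be observed directly in the one-dimensional example above. The following sketch (illustrative code, not part of the formal development) computes the maximal distance between the averaged Minkowski sum of N copies of {0, 1} and its convex hull [0, 1].

```python
# Averaging Minkowski sums of {0, 1}: the average of N copies is
# {0, 1/N, 2/N, ..., 1}, and its maximal distance to the convex
# hull [0, 1] is 1/(2N), which tends to zero as N grows.
from itertools import product

def minkowski_sum_average(summand, n):
    """Average (1/n) * (Q + ... + Q) of n copies of a finite set Q."""
    return sorted({sum(p) / n for p in product(summand, repeat=n)})

for n in (1, 2, 4, 8):
    avg = minkowski_sum_average({0, 1}, n)
    # farthest point of [0, 1] from the finite set avg sits at the
    # midpoint of the widest gap between consecutive points:
    gaps = [b - a for a, b in zip(avg, avg[1:])]
    print(f"N={n}:  max distance = {max(gaps) / 2}")
```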
Preliminaries
The Shapley–Folkman lemma depends upon the following definitions and results from convex geometry.
Real vector spaces
A real vector space of two dimensions can be given a Cartesian coordinate system in which every point is identified by an ordered pair of real numbers, called "coordinates", which are conventionally denoted by x and y. Two points in the Cartesian plane can be added coordinate-wise: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2);
further, a point can be multiplied by each real number λ coordinate-wise: λ(x, y) = (λx, λy).
More generally, any real vector space of (finite) dimension can be viewed as the set of all -tuples of real numbers on which two operations are defined: vector addition and multiplication by a real number. For finite-dimensional vector spaces, the operations of vector addition and real-number multiplication can each be defined coordinate-wise, following the example of the Cartesian plane.
Convex sets
In a real vector space, a non-empty set Q is defined to be convex if, for each pair of its points, every point on the line segment that joins them is still in Q. For example, a solid disk is convex but a circle is not, because it does not contain the line segment joining two of its points; the non-convex set of three integers {0, 1, 2} is contained in the interval [0, 2], which is convex. A solid cube is likewise convex; however, anything that is hollow or dented, for example, a crescent shape, is non-convex. The empty set is convex, either by definition or vacuously, depending on the author.
More formally, a set Q is convex if, for all points v0 and v1 in Q and for every real number λ in the unit interval [0, 1], the point

(1 − λ)v0 + λv1

is a member of Q.
By mathematical induction, a set Q is convex if and only if every convex combination of members of Q also belongs to Q. By definition, a convex combination of an indexed subset {v1, ..., vD} of a vector space is any weighted average λ1v1 + ... + λDvD, for some indexed set of non-negative real numbers {λd} satisfying the equation λ1 + ... + λD = 1.
The definition of a convex set implies that the intersection of two convex sets is a convex set. More generally, the intersection of a family of convex sets is a convex set. In particular, the intersection of two disjoint sets is the empty set, which is convex.
Convex hull
For every subset Q of a real vector space, its convex hull Conv(Q) is the minimal convex set that contains Q. Thus Conv(Q) is the intersection of all the convex sets that cover Q. The convex hull of a set can be equivalently defined to be the set of all convex combinations of points in Q. For example, the convex hull of the set of integers {0, 1} is the closed interval of real numbers [0, 1], which contains the integer end-points. The convex hull of the unit circle is the closed unit disk, which contains the unit circle.
Minkowski addition
In any vector space (or algebraic structure with addition), the Minkowski sum of two non-empty sets is defined to be the element-wise operation Q1 + Q2 = { q1 + q2 : q1 ∈ Q1, q2 ∈ Q2 }. For example,

{0, 1} + {0, 1} = {0 + 0, 0 + 1, 1 + 0, 1 + 1} = {0, 1, 2}.

This operation is clearly commutative and associative on the collection of non-empty sets. All such operations extend in a well-defined manner to recursive forms Q1 + Q2 + ... + QN = (Q1 + ... + QN−1) + QN. By the principle of induction it is easy to see that

Q1 + ... + QN = { q1 + ... + qN : qn ∈ Qn for each n }.
Convex hulls of Minkowski sums
Minkowski addition behaves well with respect to taking convex hulls. Specifically, for all subsets Q1 and Q2 of a real vector space, the convex hull of their Minkowski sum is the Minkowski sum of their convex hulls. That is,

Conv(Q1 + Q2) = Conv(Q1) + Conv(Q2).

And by induction it follows that

Conv(Q1 + ... + QN) = Conv(Q1) + ... + Conv(QN)

for any N and non-empty subsets Qn, 1 ≤ n ≤ N.
Statements of the three main results
Notation
N and D represent positive integers. D is the dimension of the ambient space R^D.
Q1, ..., QN are nonempty, bounded subsets of R^D. They are also called "summands". N is the number of summands.
Q1 + ... + QN is the Minkowski sum of the summands.
x represents an arbitrary vector in Conv(Q1 + ... + QN).
Shapley–Folkman lemma
Since Conv(Q1 + ... + QN) = Conv(Q1) + ... + Conv(QN), for any x ∈ Conv(Q1 + ... + QN) there exist elements qn ∈ Conv(Qn) such that x = q1 + ... + qN. The Shapley–Folkman lemma refines this statement: if N > D, the elements can be chosen so that at most D of them lie in Conv(Qn) but outside Qn, while all the others lie in the summands Qn themselves.
For example, every point in [0, 2] = [0, 1] + [0, 1] is the sum of an element in {0, 1} and an element in [0, 1].
Shuffling indices if necessary, this means that every point x ∈ Conv(Q1 + ... + QN) can be decomposed as

x = (q1 + ... + qD) + (qD+1 + ... + qN),

where qn ∈ Conv(Qn) for 1 ≤ n ≤ D and qn ∈ Qn for D + 1 ≤ n ≤ N. Note that the reindexing depends on the point x.
The lemma may be stated succinctly as

Conv(Q1 + ... + QN) = ∪ ( Σn∈I Conv(Qn) + Σn∉I Qn ),

where the union runs over index sets I ⊆ {1, ..., N} with at most D elements.
The converse of Shapley–Folkman lemma
The number D in the lemma is essentially optimal: in R^D it cannot be replaced by a smaller number that works for all collections of summands. In particular, the Shapley–Folkman lemma requires the vector space to be finite-dimensional.
Shapley–Folkman theorem
Shapley and Folkman used their lemma to prove the following theorem, which quantifies the difference between Q1 + ... + QN and Conv(Q1 + ... + QN) using squared Euclidean distance.
For any nonempty subset Q ⊆ R^D and any point x ∈ R^D, define their squared Euclidean distance to be the infimum d²(x, Q) = inf{ |x − q|² : q ∈ Q }. More generally, for any two nonempty subsets Q, R ⊆ R^D, define d²(Q, R) = sup{ d²(q, R) : q ∈ Q }, the largest squared distance from a point of Q to the set R. Note that d²(x, Q) = d²({x}, Q), so we can simply write d²(x, Q) for either. Similarly, d²(Conv(Q), Q) measures how far the convex hull of Q extends beyond Q.
For example, d²([0, 2], {0, 1, 2}) = 1/4, attained at the midpoints 1/2 and 3/2.
The squared Euclidean distance is a measure of how "close" two sets are. In particular, if Q is compact, then d²(Conv(Q), Q) is zero if and only if Q is convex. Thus, we may quantify how close to convexity Q1 + ... + QN is by upper-bounding d²(Conv(Q1 + ... + QN), Q1 + ... + QN).
For any bounded subset Q ⊆ R^D, define its circumradius rad(Q) to be the infimum of the radii of all balls containing it (as shown in the diagram). More formally,

rad(Q) = inf over x ∈ R^D of sup over q ∈ Q of |x − q|.
Now we can state the Shapley–Folkman theorem:

d²(Conv(Q1 + ... + QN), Q1 + ... + QN) ≤ Σ of the D largest terms among rad²(Q1), ..., rad²(QN),

where we use this notation to mean "the sum of the D largest terms".
Note that this upper bound depends on the dimension of the ambient space and on the shapes of the summands, but not on the number of summands.
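In the running one-dimensional example the bound is in fact attained. The sketch below (illustrative code, with D = 1 and identical summands {0, 1}) compares the actual squared distance with the sum of the D largest squared circumradii.

```python
# 1-D check of the Shapley-Folkman theorem bound: with N copies of
# Q = {0, 1} in R^1 (so D = 1), the bound is the single largest
# rad^2(Q_n) = (1/2)^2 = 0.25, independent of N.
from itertools import product

Q = (0.0, 1.0)
rad_sq = ((max(Q) - min(Q)) / 2) ** 2            # circumradius^2 of Q

for N in (2, 4, 6):
    S = sorted({sum(p) for p in product(Q, repeat=N)})   # Minkowski sum
    # d^2(Conv(S), S): largest squared distance from a point of the
    # interval [min S, max S] to the finite set S, attained at the
    # midpoint of the widest gap between consecutive points of S.
    widest_gap = max(b - a for a, b in zip(S, S[1:]))
    d_sq = (widest_gap / 2) ** 2
    print(f"N={N}: d^2 = {d_sq},  bound (D largest rad^2) = {rad_sq}")
```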
Shapley–Folkman–Starr theorem
Define the inner radius r(Q) of a bounded subset Q ⊆ R^D to be the infimum of the numbers ρ such that, for any x ∈ Conv(Q), there exists a ball B of radius ρ such that x ∈ Conv(Q ∩ B).
For example, if Q is itself convex, say a ball, then its inner radius is zero (each point of Conv(Q) = Q lies in Conv(Q ∩ B) for arbitrarily small balls B centred at it), while its circumradius is the radius of the ball; in general, the inner radius measures the non-convexity of Q.
Since r(Q) ≤ rad(Q) for any bounded subset Q (a ball realizing the circumradius works for every x at once), the following theorem is a refinement, the Shapley–Folkman–Starr theorem:

d²(Conv(Q1 + ... + QN), Q1 + ... + QN) ≤ Σ of the D largest terms among r²(Q1), ..., r²(QN).
In particular, if we have an infinite sequence Q1, Q2, ... of nonempty, bounded subsets of R^D, and if there exists some ρ such that the inner radius of each Qn is upper-bounded by ρ, then the distance between the averaged sum (1/N)(Q1 + ... + QN) and its convex hull is at most (ρ/N)√D, which tends to zero. This can be interpreted as stating that, as long as we have an upper bound on the inner radii, performing "Minkowski-averaging" gets us closer and closer to a convex set.
Other proofs of the results
There have been many proofs of these results, from the original, to the later Arrow and Hahn, Cassels, Schneider, etc. An abstract and elegant proof by Ekeland has been extended by Artstein. Different proofs have also appeared in unpublished papers. An elementary proof of the Shapley–Folkman lemma can be found in the book by Bertsekas, together with applications in estimating the duality gap in separable optimization problems and zero-sum games.
Usual proofs of these results are nonconstructive: they establish only the existence of the representation, but do not provide an algorithm for computing the representation. In 1981, Starr published an iterative algorithm for a less sharp version of the Shapley–Folkman–Starr theorem.
A proof of the results
The following proof idea for the Shapley–Folkman lemma is to lift the representation of x from R^D to R^(D+1), use Carathéodory's theorem for conic hulls, and then drop back to R^D.
The following "probabilistic" proof of Shapley–Folkman–Starr theorem is from.
We can interpret membership x ∈ Conv(Q) in probabilistic terms: since x = Σ λi qi for some convex weights λi on points qi ∈ Q, we can define a random vector X, finitely supported in Q, such that Pr(X = qi) = λi and E[X] = x.
Then, it is natural to consider the "variance" of a set: the largest, over points x ∈ Conv(Q), of the minimal E|X − x|² among Q-supported random vectors X with mean x. With that, d²(Conv(Q1 + ... + QN), Q1 + ... + QN) is controlled by the variance of the sum, and the proof proceeds by reducing this variance summand by summand.
History
The lemma of Lloyd Shapley and Jon Folkman was first published by the economist Ross M. Starr, who was investigating the existence of economic equilibria while studying with Kenneth Arrow. In his paper, Starr studied a convexified economy, in which non-convex sets were replaced by their convex hulls; Starr proved that the convexified economy has equilibria that are closely approximated by "quasi-equilibria" of the original economy; moreover, he proved that every quasi-equilibrium has many of the optimal properties of true equilibria, which are proved to exist for convex economies.
Following Starr's 1969 paper, the Shapley–Folkman–Starr results have been widely used to show that central results of (convex) economic theory are good approximations to large economies with non-convexities; for example, quasi-equilibria closely approximate equilibria of a convexified economy. "The derivation of these results in general form has been one of the major achievements of postwar economic theory", wrote Roger Guesnerie.
The topic of non-convex sets in economics has been studied by many Nobel laureates: Shapley himself (2012), Arrow (1972), Robert Aumann (2005), Gérard Debreu (1983), Tjalling Koopmans (1975), Paul Krugman (2008), and Paul Samuelson (1970); the complementary topic of convex sets in economics has been emphasized by these laureates, along with Leonid Hurwicz, Leonid Kantorovich (1975), and Robert Solow (1987).
Applications
The Shapley–Folkman lemma enables researchers to extend results for Minkowski sums of convex sets to sums of general sets, which need not be convex. Such sums of sets arise in economics, in mathematical optimization, and in probability theory; in each of these three mathematical sciences, non-convexity is an important feature of applications.
Economics
In economics, a consumer's preferences are defined over all "baskets" of goods. Each basket is represented as a non-negative vector, whose coordinates represent the quantities of the goods. On this set of baskets, an indifference curve is defined for each consumer; a consumer's indifference curve contains all the baskets of commodities that the consumer regards as equivalent: That is, for every pair of baskets on the same indifference curve, the consumer does not prefer one basket over another. Through each basket of commodities passes one indifference curve. A consumer's preference set (relative to an indifference curve) is the union of the indifference curve and all the commodity baskets that the consumer prefers over the indifference curve. A consumer's preferences are convex if all such preference sets are convex.
An optimal basket of goods occurs where the budget-line supports a consumer's preference set, as shown in the diagram. This means that an optimal basket is on the highest possible indifference curve given the budget-line, which is defined in terms of a price vector and the consumer's income (endowment vector). Thus, the set of optimal baskets is a function of the prices, and this function is called the consumer's demand. If the preference set is convex, then at every price the consumer's demand is a convex set, for example, a unique optimal basket or a line-segment of baskets.
Non-convex preferences
However, if a preference set is non-convex, then some prices determine a budget-line that supports two separate optimal-baskets. For example, we can imagine that, for zoos, a lion costs as much as an eagle, and further that a zoo's budget suffices for one eagle or one lion. We can suppose also that a zoo-keeper views either animal as equally valuable. In this case, the zoo would purchase either one lion or one eagle. Of course, a contemporary zoo-keeper does not want to purchase half of an eagle and half of a lion (or a griffin)! Thus, the zoo-keeper's preferences are non-convex: The zoo-keeper prefers having either animal to having any strictly convex combination of both.
When the consumer's preference set is non-convex, then (for some prices) the consumer's demand is not connected; a disconnected demand implies some discontinuous behavior by the consumer, as discussed by Harold Hotelling:
If indifference curves for purchases be thought of as possessing a wavy character, convex to the origin in some regions and concave in others, we are forced to the conclusion that it is only the portions convex to the origin that can be regarded as possessing any importance, since the others are essentially unobservable. They can be detected only by the discontinuities that may occur in demand with variation in price-ratios, leading to an abrupt jumping of a point of tangency across a chasm when the straight line is rotated. But, while such discontinuities may reveal the existence of chasms, they can never measure their depth. The concave portions of the indifference curves and their many-dimensional generalizations, if they exist, must forever remain in unmeasurable obscurity.
The difficulties of studying non-convex preferences were emphasized by Herman Wold and again by Paul Samuelson, who wrote that non-convexities are "shrouded in eternal darkness ...", according to Diewert.
Nonetheless, non-convex preferences were illuminated from 1959 to 1961 by a sequence of papers in The Journal of Political Economy (JPE). The main contributors were Farrell, Bator, Koopmans, and Rothenberg. In particular, Rothenberg's paper discussed the approximate convexity of sums of non-convex sets. These JPE-papers stimulated a paper by Lloyd Shapley and Martin Shubik, which considered convexified consumer-preferences and introduced the concept of an "approximate equilibrium". The JPE-papers and the Shapley–Shubik paper influenced another notion of "quasi-equilibria", due to Robert Aumann.
Starr's 1969 paper and contemporary economics
Previous publications on non-convexity and economics were collected in an annotated bibliography by Kenneth Arrow. He gave the bibliography to Starr, who was then an undergraduate enrolled in Arrow's (graduate) advanced mathematical-economics course. In his term-paper, Starr studied the general equilibria of an artificial economy in which non-convex preferences were replaced by their convex hulls. In the convexified economy, at each price, the aggregate demand was the sum of convex hulls of the consumers' demands. Starr's ideas interested the mathematicians Lloyd Shapley and Jon Folkman, who proved their eponymous lemma and theorem in "private correspondence", which was reported by Starr's published paper of 1969.
In his 1969 publication, Starr applied the Shapley–Folkman–Starr theorem. Starr proved that the "convexified" economy has general equilibria that can be closely approximated by "quasi-equilibria" of the original economy, when the number of agents exceeds the dimension of the goods: Concretely, Starr proved that there exists at least one quasi-equilibrium of prices popt with the following properties:
For each quasi-equilibrium's prices popt, all consumers can choose optimal baskets (maximally preferred and meeting their budget constraints).
At quasi-equilibrium prices popt in the convexified economy, every good's market is in equilibrium: Its supply equals its demand.
For each quasi-equilibrium, the prices "nearly clear" the markets for the original economy: an upper bound on the distance between the set of equilibria of the "convexified" economy and the set of quasi-equilibria of the original economy followed from Starr's corollary to the Shapley–Folkman theorem.
Starr established that
"in the aggregate, the discrepancy between an allocation in the fictitious economy generated by [taking the convex hulls of all of the consumption and production sets] and some allocation in the real economy is bounded in a way that is independent of the number of economic agents. Therefore, the average agent experiences a deviation from intended actions that vanishes in significance as the number of agents goes to infinity".
Following Starr's 1969 paper, the Shapley–Folkman–Starr results have been widely used in economic theory. Roger Guesnerie summarized their economic implications: "Some key results obtained under the convexity assumption remain (approximately) relevant in circumstances where convexity fails. For example, in economies with a large consumption side, preference nonconvexities do not destroy the standard results". The Shapley–Folkman–Starr results have been featured in the economics literature: in microeconomics, in general-equilibrium theory, in public economics (including market failures), as well as in game theory, in mathematical economics, and in applied mathematics (for economists). The Shapley–Folkman–Starr results have also influenced economics research using measure and integration theory.
Mathematical optimization
The Shapley–Folkman lemma has been used to explain why large minimization problems with non-convexities can be nearly solved (with iterative methods whose convergence proofs are stated for only convex problems). The Shapley–Folkman lemma has encouraged the use of methods of convex minimization on other applications with sums of many functions.
Preliminaries of optimization theory
Nonlinear optimization relies on the following definitions for functions:
The graph of a function f is the set of the pairs of arguments x and function evaluations f(x)
Graph(f) = { (x, f(x) ) }
The epigraph of a real-valued function f is the set of points above the graph
Epi(f) = { (x, u) : f(x) ≤ u }.
A real-valued function is defined to be a convex function if its epigraph is a convex set.
For example, the quadratic function f(x) = x2 is convex, as is the absolute value function g(x) = |x|. However, the sine function (pictured) is non-convex on the interval (0, π).
Additive optimization problems
In many optimization problems, the objective function f is separable: that is, f is the sum of many summand-functions, each of which has its own argument:
f(x) = f( (x1, ..., xN) ) = Σn fn(xn).
For example, problems of linear optimization are separable. Given a separable problem with an optimal solution, we fix an optimal solution
xmin = (x1, ..., xN)min
with the minimum value f(xmin). For this separable problem, we also consider an optimal solution (xmin, f(xmin) )
to the "convexified problem", where convex hulls are taken of the graphs of the summand functions. Such an optimal solution is the limit of a sequence of points in the convexified problem
(xj, f(xj) ) ∈ Σ Conv (Graph( fn ) ).
Of course, the given optimal-point is a sum of points in the graphs of the original summands and of a small number of convexified summands, by the Shapley–Folkman lemma.
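The practical upshot of separability can be seen in a toy computation. In the sketch below (the summand functions are arbitrary non-convex placeholders, chosen only for illustration), the minimum of the sum is found by minimizing each summand independently over a grid.

```python
# A separable objective f(x) = sum_n f_n(x_n) can be minimized one
# coordinate at a time: min f = sum_n min f_n. Each summand here is
# a (non-convex) placeholder function on a grid.
import numpy as np

grid = np.linspace(-2.0, 2.0, 4001)

def f_n(x, n):
    """Placeholder non-convex summand (an assumption for illustration)."""
    return (x**2 - 1.0)**2 + 0.1 * n * x

N = 5
per_summand_min = [f_n(grid, n).min() for n in range(N)]
print("min of separable sum =", sum(per_summand_min))
```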
This analysis was published by Ivar Ekeland in 1974 to explain the apparent convexity of separable problems with many summands, despite the non-convexity of the summand problems. In 1973, the young mathematician Claude Lemaréchal was surprised by his success with convex minimization methods on problems that were known to be non-convex; for minimizing nonlinear problems, a solution of the dual problem need not provide useful information for solving the primal problem, unless the primal problem be convex and satisfy a constraint qualification. Lemaréchal's problem was additively separable, and each summand function was non-convex; nonetheless, a solution to the dual problem provided a close approximation to the primal problem's optimal value. Ekeland's analysis explained the success of methods of convex minimization on large and separable problems, despite the non-convexities of the summand functions. Ekeland and later authors argued that additive separability produced an approximately convex aggregate problem, even though the summand functions were non-convex. The crucial step in these publications is the use of the Shapley–Folkman lemma.
Probability and measure theory
Convex sets are often studied with probability theory. Each point in the convex hull of a (non-empty) subset Q of a finite-dimensional space is the expected value of a simple random vector that takes its values in Q, as a consequence of Carathéodory's lemma. Thus, for a non-empty set Q, the collection of the expected values of the simple, Q-valued random vectors equals the convex hull of Q; this equality implies that the Shapley–Folkman–Starr results are useful in probability theory. In the other direction, probability theory provides tools to examine convex sets generally and the Shapley–Folkman–Starr results specifically. The Shapley–Folkman–Starr results have been widely used in the probabilistic theory of random sets, for example, to prove a law of large numbers, a central limit theorem, and a large-deviations principle. These proofs of probabilistic limit theorems used the Shapley–Folkman–Starr results to avoid the assumption that all the random sets be convex.
A probability measure is a finite measure, and the Shapley–Folkman lemma has applications in non-probabilistic measure theory, such as the theories of volume and of vector measures. The Shapley–Folkman lemma enables a refinement of the Brunn–Minkowski inequality, which bounds the volume of sums in terms of the volumes of their summand-sets. The volume of a set is defined in terms of the Lebesgue measure, which is defined on subsets of Euclidean space. In advanced measure-theory, the Shapley–Folkman lemma has been used to prove Lyapunov's theorem, which states that the range of a vector measure is convex. Here, the traditional term "range" (alternatively, "image") is the set of values produced by the function.
A vector measure is a vector-valued generalization of a measure; for example, if p1 and p2 are probability measures defined on the same measurable space, then the product function (p1, p2) is a vector measure, where (p1, p2) is defined for every event ω by (p1, p2)(ω) = (p1(ω), p2(ω)).
Lyapunov's theorem has been used in economics, in ("bang-bang") control theory, and in statistical theory. Lyapunov's theorem has been called a continuous counterpart of the Shapley–Folkman lemma, which has itself been called a discrete analogue of Lyapunov's theorem.
Notes
References
Republished in a festschrift for Robert J. Aumann, winner of the 2008 Nobel Prize in Economics.
Reprint of the 1982 Academic Press edition.
Proceedings of the 1981 IEEE Conference on Decision and Control, San Diego, CA, December 1981, pp. 432–443.
Reprint of the 1970 Princeton Mathematical Series 28 edition.
English translation of the 1998 French Microéconomie: Les défaillances du marché (Economica, Paris).
External links
Additive combinatorics
Convex hulls
Convex geometry
Convex optimization
Geometric transversal theory
General equilibrium theory
Lloyd Shapley
Sumsets
Theorems in geometry | Shapley–Folkman lemma | [
"Mathematics"
] | 5,499 | [
"Mathematical theorems",
"Additive combinatorics",
"Geometric transversal theory",
"Combinatorics",
"Basic concepts in set theory",
"Sumsets",
"Families of sets",
"Geometry",
"Theorems in geometry",
"Mathematical problems"
] |
3,953,835 | https://en.wikipedia.org/wiki/FLAG-tag | FLAG-tag, or FLAG octapeptide, or FLAG epitope, is a peptide protein tag that can be added to a protein using recombinant DNA technology, having the sequence DYKDDDDK (where D=aspartic acid, Y=tyrosine, and K=lysine). It is one of the most specific tags and it is an artificial antigen to which specific, high affinity monoclonal antibodies have been developed and hence can be used for protein purification by affinity chromatography and also can be used for locating proteins within living cells. FLAG-tag has been used to separate recombinant, overexpressed protein from wild-type protein expressed by the host organism. FLAG-tag can also be used in the isolation of protein complexes with multiple subunits, because FLAG-tag's mild purification procedure tends not to disrupt such complexes. FLAG-tag-based purification has been used to obtain proteins of sufficient purity and quality to carry out 3D structure determination by x-ray crystallography.
A FLAG-tag can be used in many different assays that require recognition by an antibody. If there is no antibody against a given protein, adding a FLAG-tag to a protein allows the protein to be studied with an antibody against the FLAG-tag sequence. Examples are cellular localization studies by immunofluorescence, immunoprecipitation or detection by SDS PAGE protein electrophoresis and Western blotting.
The peptide sequence of the FLAG-tag from the N-terminus to the C-terminus is: DYKDDDDK (1012 Da). Additionally, FLAG-tags may be used in tandem, commonly the 3xFLAG peptide: DYKDHD-G-DYKDHD-I-DYKDDDDK (with the final tag encoding an enterokinase cleavage site). FLAG-tag can be fused to the C-terminus or the N-terminus of a protein, or inserted within a protein. Some commercially available antibodies (e.g., M1/4E11) recognize the epitope only when FLAG-tag is present at the N-terminus. However, other available antibodies (e.g., M2) are position-insensitive. The tyrosine residue in the FLAG-tag can be sulfated when expressed on certain secreted proteins, which can affect antibody recognition of the FLAG epitope. The FLAG-tag can be used in conjunction with other affinity tags, for example a polyhistidine tag (His-tag), HA-tag or myc-tag.
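The quoted tag mass can be reproduced from standard residue masses; the short script below is an illustration using monoisotopic residue masses, so the result is the monoisotopic peptide mass.

```python
# Monoisotopic mass of the FLAG peptide DYKDDDDK: sum the residue
# masses and add one water for the free termini. Residue masses are
# standard monoisotopic values in daltons.
RESIDUE_MASS = {"D": 115.02694, "Y": 163.06333, "K": 128.09496}
WATER = 18.01056

def peptide_mass(seq):
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

print(f"{peptide_mass('DYKDDDDK'):.2f} Da")  # ≈ 1012.40 Da, i.e. ~1012 Da
```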
History
The first use of epitope tagging was described by Munro and Pelham in 1984. The FLAG-tag was the second example of a fully functional, improved epitope tag published in the scientific literature, and was the only epitope tag to be patented. It has since become one of the most commonly used protein tags in laboratories worldwide. Unlike some other tags (e.g. myc, HA), where a monoclonal antibody was first isolated against an existing protein, then the epitope was characterized and used as a tag, the FLAG epitope was an idealized, artificial design, to which monoclonal antibodies were raised. The FLAG-tag's sequence was optimized for compatibility with proteins it is attached to, in that FLAG-tag is more hydrophilic than other common epitope tags and therefore less likely to reduce the activity of proteins to which FLAG-tag is appended. In addition, N-terminal FLAG tags can be removed readily from proteins once they have been isolated, by treatment with the specific protease, enterokinase (enteropeptidase).
The third report of epitope tagging (the HA-tag) appeared about one year after the FLAG system had first shipped.
See also
Protein tag
SpyTag
References
Biochemical separation processes
Biochemistry detection methods
Laboratory techniques
Molecular biology
Peptide sequences
Octapeptides | FLAG-tag | [
"Chemistry",
"Biology"
] | 828 | [
"Biochemistry methods",
"Separation processes",
"Chemical tests",
"Biochemical separation processes",
"nan",
"Biochemistry detection methods",
"Biochemistry",
"Molecular biology"
] |
3,954,001 | https://en.wikipedia.org/wiki/Architectural%20glass | Architectural glass is glass that is used as a building material. It is most typically used as transparent glazing material in the building envelope, including windows in the external walls. Glass is also used for internal partitions and as an architectural feature. When used in buildings, glass is often of a safety type, which include reinforced, toughened and laminated glasses.
History
Timeline of modern architectural glass development
1226: "Broad Sheet" first produced in Sussex.
1330: "Crown glass" for art work and vessels first produced in Rouen, France. "Broad Sheet" also produced. Both were also supplied for export.
1500s: A method of making mirrors out of plate glass was developed by Venetian glassmakers on the island of Murano, who covered the back of the glass with a mercury-tin amalgam, obtaining near-perfect and undistorted reflection.
1620s: "Blown plate" first produced in London. Used for mirrors and coach plates.
1678: "Crown glass" first produced in London. This process dominated until the 19th century.
1843: An early form of "float glass" invented by Henry Bessemer, pouring glass onto liquid tin. Expensive and not a commercial success.
1874: Tempered glass is developed by Francois Barthelemy Alfred Royer de la Bastie (1830–1901) of Paris, France, by quenching almost molten glass in a heated bath of oil or grease.
1888: Machine-rolled glass introduced, allowing patterns.
1898: Wired-cast glass first commercially produced by Pilkington for use where safety or security was an issue.
1959: Float glass launched in UK. Invented by Sir Alastair Pilkington.
Types
Cast glass
Glass casting is the process in which glass objects are cast by directing molten glass into a mould where it solidifies. The technique has been used since the Egyptian period. Modern cast glass is formed by a variety of processes such as kiln casting, or casting into sand, graphite or metal moulds. Cast glass windows, albeit with poor optical qualities, began to appear in the most important buildings in Rome and the most luxurious villas of Herculaneum and Pompeii.
Crown glass
One of the earliest methods of glass window manufacture was the crown glass method. Hot blown glass was cut open opposite the pipe, then rapidly spun on a table before it could cool. Centrifugal force shaped the hot globe of glass into a round, flat sheet. The sheet would then be broken off the pipe and trimmed to form a rectangular window to fit into a frame.
At the center of a piece of crown glass, a thick remnant of the original blown bottle neck would remain, hence the name "bullseye". Optical distortions produced by the bullseye could be reduced by grinding the glass. The development of diaper latticed windows was in part because three regular diamond-shaped panes could be conveniently cut from a piece of Crown glass, with minimum waste and with minimum distortion.
This method for manufacturing flat glass panels was very expensive and could not be used to make large panes. It was replaced in the 19th century by the cylinder, sheet, and rolled plate processes, but it is still used in traditional construction and restoration.
Cylinder glass
In this manufacturing process, glass is blown into a cylindrical iron mould. The ends are cut off and a cut is made down the side of the cylinder. The cut cylinder is then placed in an oven, where it unrolls into a flat glass sheet.
Drawn Sheet glass (Fourcault process)
Drawn Sheet glass was made by dipping a leader into a vat of molten glass then pulling that leader straight up while a film of glass hardened just out of the vat – this is known as the Fourcault process. This film or ribbon was pulled up continuously held by tractors on both edges while it cooled. After 12 metres or so it was cut off the vertical ribbon and tipped down to be further cut. This glass is clear but has thickness variations due to small temperature changes just out of the vat as it was hardening. These variations cause lines of slight distortions. This glass may still be seen in older houses. Float glass replaced this process.
Irving Wightman Colburn developed a similar method independently. He began experimenting with the method in 1899 and started production in 1906. He went bankrupt, but his operation was bought by Michael Joseph Owens. Because the method was imperfect, they kept refining it until 1916, when they judged it ready, and opened a glass factory based on the technology the following year.
Cast plate glass
In 1838, James Hartley was granted a patent for Hartley's Patent Rolled Plate, manufactured by a new cast glass process. The glass is taken from the furnace in large iron ladles, which are carried upon slings running on overhead rails; from the ladle the glass is thrown upon the cast-iron bed of a rolling-table; and is rolled into sheet by an iron roller, the process being similar to that employed in making plate-glass, but on a smaller scale. The sheet thus rolled is roughly trimmed while hot and soft, so as to remove those portions of glass which have been spoiled by immediate contact with the ladle, and the sheet, still soft, is pushed into the open mouth of an annealing tunnel or temperature-controlled oven called a lehr, down which it is carried by a system of rollers.
Polished plate glass
The polished plate glass process starts with sheet or rolled plate glass. This glass is dimensionally inaccurate and often created visual distortions. These rough panes were ground flat and then polished clear. This was a fairly expensive process.
Before the float process, mirrors were plate glass as sheet glass had visual distortions that were akin to those seen in amusement park or funfair mirrors.
In 1918 the Belgian engineer Emil Bicheroux improved the plate glass manufacturing by pouring molten glass between two rollers, which resulted in more even thickness and fewer undulations, and reduced the need for grinding and polishing. This process was further improved in the US.
Rolled plate (figured) glass
The elaborate patterns found on figured (or 'Cathedral') rolled-plate glass are produced in a similar fashion to the rolled plate glass process except that the plate is cast between two rollers, one of which carries a pattern. On occasion, both rollers can carry a pattern. The pattern is impressed upon the sheet by a printing roller which is brought down upon the glass as it leaves the main rolls while still soft. This glass shows a pattern in high relief. The glass is then annealed in a lehr.
The glass used for this purpose is typically whiter in colour than the clear glasses used for other applications.
Only some of the figured glasses may be toughened, dependent on the depth of the embossed pattern. Single rolled figured glass, where the pattern is only imprinted into one surface, may be laminated to produce a safety glass. The much less common 'double rolled figured glass', where the pattern is embossed into both surfaces, can not be made into a safety glass but will already be thicker than average figured plate to accommodate both patterned faces. The finished thickness being dependent on the imprinted design.
Float glass
Ninety percent of the world's flat glass is produced by the float glass process invented in the 1950s by Sir Alastair Pilkington of Pilkington Glass, in which molten glass is poured onto one end of a molten tin bath. The glass floats on the tin, and levels out as it spreads along the bath, giving a smooth face to both sides. The glass cools and slowly solidifies as it travels over the molten tin and leaves the tin bath in a continuous ribbon. The glass is then annealed by cooling in an oven called a lehr. The finished product has near-perfect parallel surfaces.
The side of the glass that has been in contact with the tin has a very small amount of the tin embedded in its surface. This quality makes that side of the glass easier to coat in order to turn it into a mirror; however, that side is also softer and easier to scratch.
Glass is produced in standard metric thicknesses of 2, 3, 4, 5, 6, 8, 10, 12, 15, 19 and 25 mm, with 10mm being the most popular sizing in the architectural industry. Molten glass floating on tin in a nitrogen/hydrogen atmosphere will spread out to a thickness of about 6 mm and stop due to surface tension. Thinner glass is made by stretching the glass while it floats on the tin and cools. Similarly, thicker glass is pushed back and not permitted to expand as it cools on the tin.
Toughened glass
Toughened (or tempered) glass is made from standard float glass to create an impact-resistant safety glass. Broken float glass yields sharp, hazardous shards. The toughening process introduces a balance of stresses between the internal and external regions of the glass that increases its strength and ensures that, in the case of breakage, the glass shatters into small, relatively harmless pieces. The cut glass panels are put into a toughening furnace, where they are heated to upward of 600 degrees C; the surfaces are then cooled rapidly with cold air. The surfaces solidify and contract first; as the still-warm interior subsequently cools and contracts, it pulls the surfaces into compression, leaving the core in tension, and this residual stress pattern increases the strength of the panel.
Prism glass
Prism glass is architectural glass which bends light. It was frequently used around the turn of the 20th century to provide natural light to underground spaces and areas far from windows. Prism glass can be found on sidewalks, where it is known as vault lighting, in windows, partitions, and canopies, where it is known as prism tiles, and as deck prisms, which were used to light spaces below deck on sailing ships. It could be highly ornamented; Frank Lloyd Wright created over forty different designs for prism tiles. Modern architectural prism lighting is generally done with a plastic film applied to ordinary window glass.
Glass block
Glass block, also known as glass brick, is an architectural element made from glass used in areas where privacy or visual obscuration is desired while admitting light, such as underground parking garages, washrooms, and municipal swimming baths. Glass block was originally developed in the early 1900s to provide natural light in industrial factories.
Annealed glass
Annealed glass is glass without internal stresses caused by heat treatment, i.e., rapid cooling, or by toughening or heat strengthening. Glass becomes annealed if it is heated above a transition point then allowed to cool slowly, without being quenched. Float glass is annealed during the process of manufacture. However, most toughened glass is made from float glass that has been specially heat-treated.
Annealed glass breaks into large, jagged shards that can cause serious injury and is considered a hazard in architectural applications. Building codes in many parts of the world restrict the use of annealed glass in areas where there is a high risk of breakage and injury, for example in bathrooms, door panels, fire exits and at low heights in schools or domestic houses. Safety glass, such as laminated or tempered must be used in these settings to reduce risk of injury.
Laminated glass
Laminated glass is manufactured by bonding two or more layers of glass together with an interlayer, such as PVB, under heat and pressure, to create a single sheet of glass. When broken, the interlayer keeps the layers of glass bonded and prevents it from breaking apart. The interlayer can also give the glass a higher sound insulation rating.
There are several types of laminated glasses manufactured using different types of glass and interlayers which produce different results when broken.
Laminated glass that is made up of annealed glass is normally used when safety is a concern, but tempering is not an option. Windshields are typically laminated glasses. When broken, the PVB layer prevents the glass from breaking apart, creating a "spider web" cracking pattern.
Tempered laminated glass is designed to shatter into small pieces, preventing possible injury. When both pieces of glass are broken it produces a "wet blanket" effect and it will fall out of its opening.
Heat strengthened laminated glass is stronger than annealed, but not as strong as tempered. It is often used where security is a concern. It has a larger break pattern than tempered, but because it holds its shape (unlike the "wet blanket" effect of tempered laminated glass) it remains in the opening and can withstand more force for a longer period of time, making it much more difficult to get through.
Laminated glass has similar properties to ballistic glass, but the two should not be confused. Both are made using a PVB interlayer, but they have drastically different tensile strength. Ballistic glass and laminated glass are both rated to different standards and have a different shatter pattern.
Heat-strengthened glass
Heat-strengthened glass is glass that has been heat treated to induce surface compression, but not to the extent of causing it to "dice" on breaking in the manner of tempered glass. On breaking, heat-strengthened glass breaks into sharp pieces that are typically somewhat smaller than those found on breaking annealed glass, and is intermediate in strength between annealed and toughened glasses.
Heat-strengthened glass can take a strong direct hit without shattering, but has a weak edge. By simply tapping the edge of heat-strengthened glass with a solid object, it is possible to shatter the entire sheet.
Chemically strengthened glass
Chemically strengthened glass is a type of glass that has increased strength. When broken it still shatters in long pointed splinters similar to float (annealed) glass. For this reason, it is not considered a safety glass and must be laminated if a safety glass is required. Chemically strengthened glass is typically six to eight times the strength of annealed glass.
The glass is chemically strengthened by submerging the glass in a bath containing a potassium salt (typically potassium nitrate) at an elevated temperature of several hundred degrees Celsius. This causes sodium ions in the glass surface to be replaced by potassium ions from the bath solution.
Unlike toughened glass, chemically strengthened glass may be cut after strengthening, but loses its added strength within the region of approximately 20 mm of the cut. Similarly, when the surface of chemically strengthened glass is deeply scratched, this area loses its additional strength.
Chemically strengthened glass was used on some fighter aircraft canopies.
Low-emissivity glass
Glass coated with a low-emissivity substance can reflect radiant infrared energy, encouraging radiant heat to remain on the same side of the glass from which it originated, while letting visible light pass. This often results in more efficient windows because radiant heat originating from indoors in winter is reflected back inside, while infrared heat radiation from the sun during summer is reflected away, keeping it cooler inside.
Heatable glass
Electrically heatable glass is a relatively new product, which opens new design possibilities for buildings and vehicles.
The idea of heatable glass is based on the use of energy-efficient low-emissivity glass, which is generally simple silicate glass with a special metallic oxide coating. Heatable glass can be used in all kinds of standard glazing systems, made of wood, plastic, aluminium or steel.
Self-cleaning glass
A recent (2001 Pilkington Glass) innovation is so-called self-cleaning glass, aimed at building, automotive and other technical applications. A nanometre-scale coating of titanium dioxide on the outer surface of glass introduces two mechanisms which lead to the self-cleaning property. The first is a photo-catalytic effect, in which ultra-violet rays catalyse the breakdown of organic compounds on the window surface; the second is a hydrophilic effect in which water is attracted to the surface of the glass, forming a thin sheet which washes away the broken-down organic compounds.
Insulating glass
Insulating glass, or double glazing, consists of a window or glazing element of two or more layers of glazing separated by a spacer along the edge and sealed to create a dead air space between the layers. This type of glazing has functions of thermal insulation and noise reduction. When the space is filled with an inert gas it is part of energy conservation sustainable architecture design for low energy buildings.
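The thermal benefit can be estimated with a simple series-resistance model. The sketch below uses typical textbook surface and cavity resistances (assumed values, not manufacturer data) to approximate the U-value of a double-glazed unit.

```python
# Rough U-value of a double-glazed unit from thermal resistances in
# series (all resistances in m^2*K/W; typical assumed figures).
R_INSIDE_SURFACE = 0.13    # internal surface film
R_OUTSIDE_SURFACE = 0.04   # external surface film
R_AIR_GAP = 0.17           # sealed 12 mm air cavity (assumed)
GLASS_K = 1.0              # thermal conductivity of glass, W/(m*K)
PANE_T = 0.004             # 4 mm pane thickness

r_total = (R_INSIDE_SURFACE + R_OUTSIDE_SURFACE + R_AIR_GAP
           + 2 * PANE_T / GLASS_K)
print(f"U ≈ {1 / r_total:.2f} W/(m²·K)")  # ≈ 2.9, vs ~5.8 for single glazing
```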
Evacuated glazing
A 1994 innovation for insulated glazing is evacuated glass, which as yet is produced commercially only in Japan and China. The extreme thinness of evacuated glazing offers many new architectural possibilities, particularly in building conservation and historicist architecture, where evacuated glazing can replace traditional single glazing, which is much less energy-efficient.
An evacuated glazing unit is made by sealing the edges of two glass sheets, typically by using a solder glass, and evacuating the space inside with a vacuum pump. The evacuated space between the two sheets can be very shallow and yet be a good insulator, yielding insulative window glass with nominal thicknesses as low as 6 mm overall. The reasons for this low thickness are deceptively complex, but the potential insulation is good essentially because there can be no convection or gaseous conduction in a vacuum.
Unfortunately, evacuated glazing does have some disadvantages; its manufacture is complicated and difficult. For example, a necessary stage in the manufacture of evacuated glazing is outgassing; that is, heating it to liberate any gases adsorbed on the inner surfaces, which could otherwise later escape and destroy the vacuum. This heating process currently means that evacuated glazing cannot be toughened or heat-strengthened. If an evacuated safety glass is required, the glass must be laminated. The high temperatures necessary for outgassing also tend to destroy the highly effective "soft" low-emissivity coatings that are often applied to one or both of the internal surfaces (i.e. the ones facing the air gap) of other forms of modern insulative glazing, in order to prevent loss of heat through infrared radiation. Slightly less effective "hard" coatings are still suitable for evacuated glazing, however.
Furthermore, because of the atmospheric pressure present on the outside of an evacuated glazing unit, its two glass sheets must somehow be held apart in order to prevent them flexing together and touching each other, which would defeat the object of evacuating the unit. The task of holding the panes apart is performed by a grid of spacers, which typically consist of small stainless steel discs that are placed around 20 mm apart. The spacers are small enough that they are visible only at very close distances, typically up to 1 m. However, the fact that the spacers will conduct some heat often leads in cold weather to the formation of temporary, grid-shaped patterns on the surface of an evacuated window, consisting either of small circles of interior condensation centred around the spacers, where the glass is slightly colder than average, or, when there is dew outside, small circles on the exterior face of the glass, in which the dew is absent because the spacers make the glass near them slightly warmer.
The conduction of heat between the panes, caused by the spacers, tends to limit evacuated glazing's overall insulative effectiveness. Nevertheless, evacuated glazing is still as insulative as much thicker conventional double glazing and tends to be stronger, since the two constituent glass sheets are pressed together by the atmosphere, and hence react practically as one thick sheet to bending forces. Evacuated glazing also offers very good sound insulation in comparison with other popular types of window glazing.
Heat reduction glass
One type of heat-reduction glass uses radiative cooling. This glass includes a 1.2-micron-thick transparent radiative cooler (TRC) layer of silica, alumina, and titanium oxide on glass coated with a contact-lens polymer. The layer permits only visible light to cross, cutting buildings' cooling costs by as much as one-third. The developers used machine learning and quantum computing to rapidly test candidate models and identify the best alternative.
Seismic requirements
The most current building code enforced in most jurisdictions in the United States is the 2006 International Building Code (IBC, 2006). The 2006 IBC references the 2005 edition of the standard Minimum Design Loads for Buildings and Other Structures prepared by the American Society of Civil Engineers (ASCE, 2005) for its seismic provisions. ASCE 7-05 contains specific requirements for nonstructural components, including requirements for architectural glass.
Reflected sunlight
If incorrectly designed, concave surfaces with extensive amounts of glass can act as solar concentrators depending on the angle of the sun, potentially injuring people and damaging property.
See also
Building construction
Glass in green buildings
Glass museums and galleries
Glazing
Quadruple glazing
Heatable glass
Insulated glazing
Leadlight
Solar thermal collector
Stained glass
Stained glass – British glass, 1811–1918
References
Noel C. Stokes; The Glass and Glazing Handbook; Standards Australia; SAA HB125-1998
External links
Glass Association of North America (GANA) – Architectural Glass educational documents and videos
National Glass Association (NGA) – History and Types of Glass
Welsh School of Architectural Glass, Swansea – UK's leading centre for teaching and research in architectural glass founded in 1946
Paul Scheerbart: Glasarchitektur (Glass architecture, 1914; German)
Reflective building components
Soil-based building materials
Glass
Glass architecture
Glass production
Transparent materials
Architectural elements
Windows | Architectural glass | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 4,397 | [
"Glass engineering and science",
"Physical phenomena",
"Glass",
"Building engineering",
"Glass production",
"Unsolved problems in physics",
"Optical phenomena",
"Homogeneous chemical mixtures",
"Materials",
"Glass architecture",
"Architectural elements",
"Transparent materials",
"Amorphous s... |
3,955,450 | https://en.wikipedia.org/wiki/Neutral%20current | Weak neutral current interactions are one of the ways in which subatomic particles can interact by means of the weak force. These interactions are mediated by the Z boson. The discovery of weak neutral currents was a significant step toward the unification of electromagnetism and the weak force into the electroweak force, and led to the discovery of the W and Z bosons.
In simple terms
The weak force is best known for its role in nuclear decay. It has a very short range but (apart from gravity) is the only force to interact with neutrinos. Like other subatomic forces, the weak force is mediated via exchange particles. Perhaps the best known of these exchange particles is the W particle, which is involved in beta decay. W particles are electrically charged (there are both positive and negative W particles), whereas the Z boson, also an exchange particle for the weak force, does not carry any electric charge.
Exchange of a Z boson transfers momentum, spin, and energy, but leaves the interacting particles' quantum numbers unaffected – charge, flavor, baryon number, lepton number, etc. Because there is no transfer of electrical charge involved, exchange of Z particles is referred to as "neutral" in the phrase "neutral current". However the word "current" here has nothing to do with electricity – it simply refers to the exchange of the Z particle.
The Z boson's neutral current interaction is determined by a derived quantum number called weak charge, which acts similarly to weak isospin for interactions with the W bosons.
Definition
The neutral current that gives the interaction its name is that of the interacting particles.
For example, the neutral current contribution to the ν e− → ν e− elastic scattering amplitude is proportional to the product of a neutrino current and an electron current,
where the neutral currents describing the flow of the neutrino and of the electron are given by:

Jμ(ν) = ν̄ γμ ½(1 − γ5) ν  and  Jμ(e) = ē γμ ½(gV − gA γ5) e,

where gV = T3 − 2Q sin²θW and gA = T3 are the vector and axial couplings for the fermion. Here T3 denotes the weak isospin of the fermion, Q its electric charge, and θW the weak mixing angle; together these determine the fermion's weak charge. These couplings amount to essentially left-chiral for neutrinos and axial for charged leptons.
The Z boson can couple to any Standard Model particle, except gluons and photons (sterile neutrinos would also be an exception, if they were found to exist). However, any interaction between two charged particles that can occur via the exchange of a virtual Z boson can also occur via the exchange of a virtual photon. Unless the interacting particles have energies on the order of the Z boson mass (91 GeV) or higher, the virtual Z boson exchange has an effect of a tiny correction, to the amplitude of the electromagnetic process.
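The size of this correction can be estimated from the propagators. The sketch below is a back-of-the-envelope illustration (the momentum-transfer values are arbitrary choices): relative to photon exchange, Z exchange is suppressed by roughly q²/(q² + MZ²) at spacelike momentum transfer q.

```python
# Relative size of virtual-Z exchange vs virtual-photon exchange:
# the photon propagator goes as 1/q^2 while the Z propagator goes
# as 1/(q^2 + M_Z^2) (spacelike q), so at low momentum transfer the
# Z amplitude is suppressed by roughly q^2 / M_Z^2.
M_Z = 91.1876  # Z boson mass in GeV

for q in (0.01, 0.1, 1.0, 10.0, 91.0):  # momentum transfer, GeV (assumed)
    suppression = q**2 / (q**2 + M_Z**2)
    print(f"q = {q:6.2f} GeV  ->  Z/photon amplitude ratio ~ {suppression:.2e}")
```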
Particle accelerators with energies high enough to observe neutral current interactions and to measure the mass of the Z boson were not available until 1983.
On the other hand, Z boson interactions involving neutrinos have distinctive signatures: they provide the only known mechanism for elastic scattering of neutrinos in matter; neutrinos are almost as likely to scatter elastically (via Z boson exchange) as inelastically (via W boson exchange). This is of major experimental significance in, for example, the Sudbury Neutrino Observatory experiment.
Weak neutral currents were predicted by electroweak theory developed mainly by Abdus Salam, John Clive Ward, Sheldon Glashow and Steven Weinberg, and confirmed shortly thereafter in 1973, in a neutrino experiment in the Gargamelle bubble chamber at CERN.
See also
Charged current
Flavor changing neutral current
Neutral particle oscillation
Electric current
Quantum chromodynamics
Sudbury Neutrino Observatory#Neutral current interaction
Weak charge
References
External links
Electroweak theory | Neutral current | [
"Physics"
] | 754 | [
"Physical phenomena",
"Fundamental interactions",
"Electroweak theory"
] |
3,957,132 | https://en.wikipedia.org/wiki/List%20of%20uniform%20polyhedra%20by%20spherical%20triangle | There are many relations among the uniform polyhedra. This List of uniform polyhedra by spherical triangle groups them by the Wythoff symbol.
Key
The vertex figure can be discovered by considering the Wythoff symbol:
p|q r - 2p edges, alternating q-gons and r-gons. Vertex figure (q.r)^p.
p|q 2 - p edges, q-gons (here r = 2, so the r-gons are degenerate lines).
2|q r - 4 edges, alternating q-gons and r-gons. Vertex figure (q.r)^2.
q r|p - 4 edges, 2p-gons, q-gons, 2p-gons, r-gons. Vertex figure 2p.q.2p.r.
q 2|p - 3 edges, 2p-gons, q-gons, 2p-gons. Vertex figure 2p.q.2p.
p q r| - 3 edges, 2p-gons, 2q-gons, 2r-gons. Vertex figure 2p.2q.2r.
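The key above is a purely mechanical mapping from symbol to vertex figure, so it is easy to encode. The sketch below covers the listed symbol forms only; it is an illustrative helper, not a general implementation of the Wythoff construction.

```python
# Vertex figures read off a Wythoff symbol, following the key above.
# Covers only the symbol forms listed; an illustrative sketch, not a
# general Wythoff-construction implementation.

def vertex_figure(form, p, q, r=2):
    if form == "p|q r":                 # p of each alternating q-gon and r-gon
        return ".".join([f"{q}.{r}"] * p)
    if form == "q r|p":                 # 2p.q.2p.r
        return f"{2*p}.{q}.{2*p}.{r}"
    if form == "q 2|p":                 # 2p.q.2p
        return f"{2*p}.{q}.{2*p}"
    if form == "p q r|":                # 2p.2q.2r
        return f"{2*p}.{2*q}.{2*r}"
    raise ValueError(f"unsupported form: {form}")

print(vertex_figure("p|q r", p=2, q=3, r=5))   # 3.5.3.5 (icosidodecahedron)
print(vertex_figure("q r|p", p=2, q=3, r=5))   # 4.3.4.5 (rhombicosidodecahedron)
print(vertex_figure("p q r|", p=2, q=3, r=5))  # 4.6.10  (truncated icosidodecahedron)
```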
Convex
Non-convex
a b 2
(3 3 2) group
(4 3 2) group
(5 3 2) group
(5 5 2) group
a b 3
(3 3 3) group
(4 3 3) group
(5 3 3) group
(4 4 3) group
(5 5 3) group
a b 5
(5 5 5) group
Uniform polyhedra | List of uniform polyhedra by spherical triangle | [
"Physics"
] | 293 | [
"Uniform polytopes",
"Uniform polyhedra",
"Symmetry"
] |
3,957,200 | https://en.wikipedia.org/wiki/Kleisli%20category | In category theory, a Kleisli category is a category naturally associated to any monad T. It is equivalent to the category of free T-algebras. The Kleisli category is one of two extremal solutions to the question: "Does every monad arise from an adjunction?" The other extremal solution is the Eilenberg–Moore category. Kleisli categories are named for the mathematician Heinrich Kleisli.
Formal definition
Let 〈T, η, μ〉 be a monad over a category C. The Kleisli category of C is the category CT whose objects and morphisms are given by

$\mathrm{Obj}(\mathcal{C}_T) = \mathrm{Obj}(\mathcal{C}), \qquad \mathrm{Hom}_{\mathcal{C}_T}(X, Y) = \mathrm{Hom}_{\mathcal{C}}(X, TY).$

That is, every morphism f: X → T Y in C (with codomain TY) can also be regarded as a morphism in CT (but with codomain Y). Composition of morphisms in CT is given by

$g \circ_T f = \mu_Z \circ Tg \circ f : X \to TZ,$

where f: X → T Y and g: Y → T Z. The identity morphism is given by the monad unit η:

$\mathrm{id}_X = \eta_X .$
An alternative way of writing this, which clarifies the category in which each object lives, is used by Mac Lane. We use very slightly different notation for this presentation. Given the same monad and category $\mathcal{C}$ as above, we associate with each object $X$ in $\mathcal{C}$ a new object $X_T$, and for each morphism $f : X \to TY$ in $\mathcal{C}$ a morphism $f^* : X_T \to Y_T$. Together, these objects and morphisms form our category $\mathcal{C}_T$, where we define composition, also called Kleisli composition, by

$g^* \circ_T f^* = \left( \mu_Z \circ Tg \circ f \right)^* .$

Then the identity morphism in $\mathcal{C}_T$, the Kleisli identity, is

$\mathrm{id}_{X_T} = \left( \eta_X \right)^* .$
Extension operators and Kleisli triples
Composition of Kleisli arrows can be expressed succinctly by means of the extension operator (–)# : Hom(X, TY) → Hom(TX, TY). Given a monad 〈T, η, μ〉 over a category C and a morphism f : X → TY, let

$f^{\#} = \mu_Y \circ Tf .$

Composition in the Kleisli category CT can then be written

$g \circ_T f = g^{\#} \circ f .$

The extension operator satisfies the identities:

$\eta_X^{\#} = \mathrm{id}_{TX}, \qquad f^{\#} \circ \eta_X = f, \qquad \left( g^{\#} \circ f \right)^{\#} = g^{\#} \circ f^{\#},$

where f : X → TY and g : Y → TZ. It follows trivially from these properties that Kleisli composition is associative and that ηX is the identity.
In fact, to give a monad is to give a Kleisli triple 〈T, η, (–)#〉, i.e.
A function $T : \mathrm{Obj}(\mathcal{C}) \to \mathrm{Obj}(\mathcal{C})$;
For each object $X$ in $\mathcal{C}$, a morphism $\eta_X : X \to TX$;
For each morphism $f : X \to TY$ in $\mathcal{C}$, a morphism $f^{\#} : TX \to TY$
such that the above three equations for extension operators are satisfied.
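As a concrete illustration, here is a minimal Python sketch of a Kleisli triple for a Maybe-style monad. The names unit, ext, and kleisli are invented for this example, a plain Python value stands in for "Just x", and None marks failure; this is a sketch of the idea, not a faithful encoding of the categorical structure.

```python
# A Kleisli triple for a "Maybe" monad, sketched in Python.
# T X ~ values that may be None; unit plays eta, ext plays (-)#.

def unit(x):                 # eta_X : X -> T X
    return x                 # a plain value stands for "Just x"; None is failure

def ext(f):                  # (-)# : (X -> T Y) -> (T X -> T Y)
    def f_sharp(tx):
        return None if tx is None else f(tx)
    return f_sharp

def kleisli(g, f):           # Kleisli composition: g oT f = g# . f
    return lambda x: ext(g)(f(x))

# Example Kleisli arrows X -> T Y: safe reciprocal and safe square root.
safe_inv = lambda x: None if x == 0 else 1.0 / x
safe_sqrt = lambda x: None if x < 0 else x ** 0.5

inv_then_sqrt = kleisli(safe_sqrt, safe_inv)
print(inv_then_sqrt(4.0))    # 0.5
print(inv_then_sqrt(0.0))    # None (failure propagates)
```

One can check directly that ext(unit) is the identity and that ext(f)(unit(x)) equals f(x), mirroring the first two extension-operator identities above.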
Kleisli adjunction
Kleisli categories were originally defined in order to show that every monad arises from an adjunction. That construction is as follows.
Let 〈T, η, μ〉 be a monad over a category C and let CT be the associated Kleisli category. Using Mac Lane's notation mentioned in the “Formal definition” section above, define a functor F: C → CT by

$F\,X = X_T, \qquad F(f : X \to Y) = \left( \eta_Y \circ f \right)^* ,$

and a functor G : CT → C by

$G\,Y_T = TY, \qquad G\left( f^* : X_T \to Y_T \right) = \mu_Y \circ Tf .$

One can show that F and G are indeed functors and that F is left adjoint to G. The counit of the adjunction is given by

$\varepsilon_{Y_T} = \left( \mathrm{id}_{TY} \right)^* : \left( TY \right)_T \to Y_T .$
Finally, one can show that T = GF and μ = GεF so that 〈T, η, μ〉 is the monad associated to the adjunction 〈F, G, η, ε〉.
Showing that GF = T
For any object X in category C:

$(G \circ F)(X) = G(F\,X) = G(X_T) = TX .$

For any morphism $f : X \to Y$ in category C:

$(G \circ F)(f) = G\left( \left( \eta_Y \circ f \right)^* \right) = \mu_Y \circ T\left( \eta_Y \circ f \right) = \mu_Y \circ T\eta_Y \circ Tf = \mathrm{id}_{TY} \circ Tf = Tf .$

Since $(G \circ F)(X) = TX$ is true for any object X in C and $(G \circ F)(f) = Tf$ is true for any morphism f in C, then $G \circ F = T$. Q.E.D.
References
External links
Adjoint functors
Categories in category theory | Kleisli category | [
"Mathematics"
] | 770 | [
"Mathematical structures",
"Category theory",
"Categories in category theory"
] |
3,957,297 | https://en.wikipedia.org/wiki/Solving%20the%20geodesic%20equations | Solving the geodesic equations is a procedure used in mathematics, particularly Riemannian geometry, and in physics, particularly in general relativity, that results in obtaining geodesics. Physically, these represent the paths of (usually ideal) particles with no proper acceleration, their motion satisfying the geodesic equations. Because the particles are subject to no proper acceleration, the geodesics generally represent the straightest path between two points in a curved spacetime.
The differential geodesic equation
On an n-dimensional Riemannian manifold $M$, the geodesic equation written in a coordinate chart with coordinates $x^a$ is:

$\frac{d^2 x^a}{ds^2} + \Gamma^{a}{}_{bc}\, \frac{dx^b}{ds} \frac{dx^c}{ds} = 0,$

where the coordinates $x^a(s)$ are regarded as the coordinates of a curve $\gamma(s)$ in $M$ and $\Gamma^{a}{}_{bc}$ are the Christoffel symbols. The Christoffel symbols are functions of the metric and are given by:

$\Gamma^{a}{}_{bc} = \frac{1}{2} g^{ad} \left( g_{cd,b} + g_{bd,c} - g_{bc,d} \right),$

where the comma indicates a partial derivative with respect to the coordinates:

$g_{ab,c} = \frac{\partial g_{ab}}{\partial x^{c}}.$

As the manifold has dimension $n$, the geodesic equations are a system of $n$ ordinary differential equations for the $n$ coordinate variables. Thus, allied with initial conditions, the system can, according to the Picard–Lindelöf theorem, be solved. One can also use a Lagrangian approach to the problem: defining

$L = \frac{1}{2}\, g_{ab}\, \frac{dx^a}{ds} \frac{dx^b}{ds}$

and applying the Euler–Lagrange equation.
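As a minimal worked example, the sketch below integrates the geodesic equations numerically for the unit 2-sphere. The metric and its two nonzero Christoffel symbols are supplied by hand, and the initial conditions are chosen on the equator so that the expected solution, a great circle, is easy to verify; this is an illustration, not a general-purpose geodesic solver.

```python
# Numerically integrating the geodesic equations on the unit 2-sphere
# (metric ds^2 = dtheta^2 + sin^2(theta) dphi^2), whose nonzero Christoffel
# symbols are Gamma^theta_{phi phi} = -sin(theta)cos(theta) and
# Gamma^phi_{theta phi} = cot(theta).
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_rhs(s, y):
    theta, phi, dtheta, dphi = y
    ddtheta = np.sin(theta) * np.cos(theta) * dphi ** 2
    ddphi = -2.0 / np.tan(theta) * dtheta * dphi
    return [dtheta, dphi, ddtheta, ddphi]

# Start on the equator heading due "east": the geodesic should stay
# on the equator (a great circle).
y0 = [np.pi / 2, 0.0, 0.0, 1.0]
sol = solve_ivp(geodesic_rhs, (0.0, np.pi), y0, rtol=1e-9)
print("theta stays at pi/2:", np.allclose(sol.y[0], np.pi / 2))
```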
Heuristics
As the laws of physics can be written in any coordinate system, it is convenient to choose one that simplifies the geodesic equations. Mathematically, this means a coordinate chart is chosen in which the geodesic equations have a particularly tractable form.
Effective potentials
When the geodesic equations can be separated into terms containing only an undifferentiated variable and terms containing only its derivative, the former may be consolidated into an effective potential dependent only on position. In this case, many of the heuristic methods of analysing energy diagrams apply, in particular the location of turning points.
Solution techniques
Solving the geodesic equations means obtaining an exact solution, possibly even the general solution, of the geodesic equations. Most attacks secretly employ the point symmetry group of the system of geodesic equations. This often yields a result giving a family of solutions implicitly, but in many examples does yield the general solution in explicit form.
In general relativity, to obtain timelike geodesics it is often simplest to start from the spacetime metric, after dividing by $ds^2$ to obtain the form

$1 = g_{\mu\nu}\, \dot{x}^{\mu} \dot{x}^{\nu},$

where the dot represents differentiation with respect to $s$. Because timelike geodesics are maximal, one may apply the Euler–Lagrange equation directly, and thus obtain a set of equations equivalent to the geodesic equations. This method has the advantage of bypassing a tedious calculation of Christoffel symbols.
See also
Geodesics of the Schwarzschild vacuum
Mathematics of general relativity
Transition from special relativity to general relativity
References
General relativity
Mathematical methods in general relativity | Solving the geodesic equations | [
"Physics"
] | 578 | [
"General relativity",
"Relativity stubs",
"Theory of relativity"
] |
3,957,360 | https://en.wikipedia.org/wiki/Mass-to-charge%20ratio | The mass-to-charge ratio (m/Q) is a physical quantity relating the mass (quantity of matter) and the electric charge of a given particle, expressed in units of kilograms per coulomb (kg/C). It is most widely used in the electrodynamics of charged particles, e.g. in electron optics and ion optics.
It appears in the scientific fields of electron microscopy, cathode ray tubes, accelerator physics, nuclear physics, Auger electron spectroscopy, cosmology and mass spectrometry. The importance of the mass-to-charge ratio, according to classical electrodynamics, is that two particles with the same mass-to-charge ratio move in the same path in a vacuum, when subjected to the same electric and magnetic fields.
Some disciplines use the charge-to-mass ratio (Q/m) instead, which is the multiplicative inverse of the mass-to-charge ratio. The CODATA recommended value for an electron is −1.75882001076(53) × 10¹¹ C/kg.
Origin
When charged particles move in electric and magnetic fields the following two laws apply:
Lorentz force law: $\mathbf{F} = Q \left( \mathbf{E} + \mathbf{v} \times \mathbf{B} \right);$
Newton's second law of motion: $\mathbf{F} = m\mathbf{a} = m \dfrac{d\mathbf{v}}{dt},$
where F is the force applied to the ion, m is the mass of the particle, a is the acceleration, Q is the electric charge, E is the electric field, and v × B is the cross product of the ion's velocity and the magnetic flux density.
This differential equation is the classic equation of motion for charged particles. Together with the particle's initial conditions, it completely determines the particle's motion in space and time in terms of m/Q. Thus mass spectrometers could be thought of as "mass-to-charge spectrometers". When presenting data in a mass spectrum, it is common to use the dimensionless m/z, which denotes the dimensionless quantity formed by dividing the mass number of the ion by its charge number.
Combining the two previous equations yields:

$\left( \frac{m}{Q} \right) \mathbf{a} = \mathbf{E} + \mathbf{v} \times \mathbf{B} .$
This differential equation is the classic equation of motion of a charged particle in a vacuum. Together with the particle's initial conditions, it determines the particle's motion in space and time. It immediately reveals that two particles with the same m/Q ratio behave in the same way. This is why the mass-to-charge ratio is an important physical quantity in those scientific fields where charged particles interact with magnetic or electric fields.
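The claim that two particles with the same m/Q behave identically can be checked numerically. The sketch below integrates the equation of motion above for two particles with different masses and charges but the same m/Q; the field values and particle parameters are made up for illustration.

```python
# Two particles with identical m/Q follow identical paths in the same
# E and B fields: integrate (m/Q) a = E + v x B for both and compare.
# Field strengths and particle parameters are illustrative SI values.
import numpy as np
from scipy.integrate import solve_ivp

E = np.array([0.0, 1.0e3, 0.0])      # V/m (assumed)
B = np.array([0.0, 0.0, 0.1])        # T   (assumed)

def rhs(t, y, m_over_q):
    v = y[3:]
    a = (E + np.cross(v, B)) / m_over_q
    return np.concatenate([v, a])

y0 = np.array([0.0, 0.0, 0.0, 1.0e5, 0.0, 0.0])   # start at origin, vx = 1e5 m/s
t_eval = np.linspace(0.0, 1e-6, 200)
# Same m/Q built from different masses and charges:
paths = [solve_ivp(rhs, (0.0, 1e-6), y0, t_eval=t_eval, args=(m / q,), rtol=1e-9).y
         for m, q in [(1.0e-26, 1.0e-19), (2.0e-26, 2.0e-19)]]
print("identical trajectories:", np.allclose(paths[0], paths[1]))
```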
Exceptions
There are non-classical effects that derive from quantum mechanics, such as the Stern–Gerlach effect that can diverge the path of ions of identical m/Q.
Symbols and units
The IUPAC-recommended symbols for mass and charge are m and Q, respectively; however, using a lowercase q for charge is also very common. Charge is a scalar property, meaning that it can be either positive (+) or negative (−). The coulomb (C) is the SI unit of charge; however, other units can be used, such as expressing charge in terms of the elementary charge (e). The SI unit of the physical quantity m/Q is the kilogram per coulomb.
Mass spectrometry and m/z
The units and notation above are used when dealing with the physics of mass spectrometry; however, the m/z notation is used for the independent variable in a mass spectrum. This notation eases data interpretation since it is numerically more related to the dalton. For example, if an ion carries one charge the m/z is numerically equivalent to the molecular or atomic mass of the ion in daltons (Da), where the numerical value of m/Q is abstruse. The m refers to the molecular or atomic mass number (number of nucleons) and z to the charge number of the ion; however, the quantity of m/z is dimensionless by definition. An ion with a mass of 100 Da (daltons) () carrying two charges () will be observed at . However, the empirical observation is one equation with two unknowns and could have arisen from other ions, such as an ion of mass 50 Da carrying one charge. Thus, the m/z of an ion alone neither infers mass nor the number of charges. Additional information, such as the mass spacing between mass isotopomers or the relationship between multiple charge states, is required to assign the charge state and infer the mass of the ion from the m/z. This additional information is often but not always available. Thus, the m/z is primarily used to report an empirical observation in mass spectrometry. This observation may be used in conjunction with other lines of evidence to subsequently infer the physical attributes of the ion, such as mass and charge. On rare occasions, the thomson has been used as a unit of the x-axis of a mass spectrum.
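Restating the worked example from this paragraph in code makes the ambiguity explicit: the same m/z value arises from different (mass, charge) pairs, so m/z alone does not fix the mass.

```python
# m/z is dimensionless: mass (in Da, numerically the mass number) divided
# by the charge number. The same m/z can arise from different (mass, charge)
# pairs, reproducing the example in the text.
def m_over_z(mass_da, z):
    return mass_da / z

print(m_over_z(100, 2))   # 50.0 : a 100 Da ion carrying two charges
print(m_over_z(50, 1))    # 50.0 : a 50 Da ion carrying one charge
```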
History
In the 19th century, the mass-to-charge ratios of some ions were measured by electrochemical methods.
The first attempt to measure the mass-to-charge ratio of cathode ray particles, assuming them to be ions, was made between 1884 and 1890 by the German-born British physicist Arthur Schuster. He put an upper limit of 10¹⁰ C/kg, but even this was a much greater value than expected, so little credence was given to his calculations at the time.
In 1897, the mass-to-charge ratio of the electron was first measured by J. J. Thomson. By doing this, he showed that the electron was in fact a particle with a mass and a charge, and that its mass-to-charge ratio was much smaller than that of the hydrogen ion H+. In 1898, Wilhelm Wien separated ions (canal rays) according to their mass-to-charge ratio with an ion optical device with superimposed electric and magnetic fields (Wien filter). In 1901 Walter Kaufman measured the increase of electromagnetic mass of fast electrons (Kaufmann–Bucherer–Neumann experiments), or relativistic mass increase in modern terms. In 1913, Thomson measured the mass-to-charge ratio of ions with an instrument he called a parabola spectrograph. Today, an instrument that measures the mass-to-charge ratio of charged particles is called a mass spectrometer.
Charge-to-mass ratio
The charge-to-mass ratio (Q/m) of an object is, as its name implies, the charge of an object divided by the mass of the same object. This quantity is generally useful only for objects that may be treated as particles. For extended objects, total charge, charge density, total mass, and mass density are often more useful.
Derivation: for a charged particle moving in a circle of radius r in a magnetic field, the magnetic force supplies the centripetal force,

$QvB = \frac{m v^2}{r},$

or

$\frac{Q}{m} = \frac{v}{B r}. \qquad (1)$

Since the accelerating potential satisfies $QV = \tfrac{1}{2} m v^2$,

$v^2 = \frac{2VQ}{m},$

or

$v = \sqrt{\frac{2VQ}{m}}. \qquad (2)$

Equations (1) and (2) yield

$\frac{Q}{m} = \frac{2V}{B^{2} r^{2}} .$
Significance
In some experiments, the charge-to-mass ratio is the only quantity that can be measured directly. Often, the charge can be inferred from theoretical considerations, so the charge-to-mass ratio provides a way to calculate the mass of a particle.
Often, the charge-to-mass ratio can be determined by observing the deflection of a charged particle in an external magnetic field. The cyclotron equation, combined with other information such as the kinetic energy of the particle, will give the charge-to-mass ratio. One application of this principle is the mass spectrometer. The same principle can be used to extract information in experiments involving the cloud chamber.
The ratio of electrostatic to gravitational forces between two particles will be proportional to the product of their charge-to-mass ratios. It turns out that gravitational forces are negligible on the subatomic level, due to the extremely small masses of subatomic particles.
Electron
The electron charge-to-mass quotient, , is a quantity that may be measured in experimental physics. It bears significance because the electron mass me is difficult to measure directly, and is instead derived from measurements of the elementary charge e and . It also has historical significance; the Q/m ratio of the electron was successfully calculated by J. J. Thomson in 1897—and more successfully by Dunnington, which involves the angular momentum and deflection due to a perpendicular magnetic field. Thomson's measurement convinced him that cathode rays were particles, which were later identified as electrons, and he is generally credited with their discovery.
The CODATA recommended value is −1.75882001076(53) × 10¹¹ C/kg. CODATA refers to this as the electron charge-to-mass quotient, but ratio is still commonly used.
There are two other common ways of measuring the charge-to-mass ratio of an electron, apart from Thomson and Dunnington's methods.
The magnetron method: Using a GRD7 Valve (Ferranti valve), electrons are expelled from a hot tungsten-wire filament towards an anode. The electron is then deflected using a solenoid. From the current in the solenoid and the current in the Ferranti Valve, e/m can be calculated.
Fine beam tube method: A heater heats a cathode, which emits electrons. The electrons are accelerated through a known potential, so the velocity of the electrons is known. The beam path can be seen when the electrons are accelerated through a helium (He) gas. The collisions between the electrons and the helium gas produce a visible trail. A pair of Helmholtz coils produces a uniform and measurable magnetic field at right angles to the electron beam. This magnetic field deflects the electron beam in a circular path. By measuring the accelerating potential (volts), the current (amps) to the Helmholtz coils, and the radius of the electron beam, e/m can be calculated.
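Combining the accelerating potential, coil current, and beam radius via the relation derived above, Q/m = 2V/(B²r²), gives e/m directly. The sketch below uses the standard Helmholtz-coil field formula; all numerical inputs are illustrative values for a typical teaching apparatus, not measured data.

```python
# e/m from a fine beam tube: e/m = 2 V / (B^2 r^2), with the Helmholtz-coil
# field B = (4/5)^(3/2) * mu0 * n * I / R. All input values are illustrative.
import math

MU0 = 4e-7 * math.pi            # vacuum permeability, T*m/A

def helmholtz_B(n_turns, current_a, radius_m):
    return (4.0 / 5.0) ** 1.5 * MU0 * n_turns * current_a / radius_m

def e_over_m(volts, b_tesla, beam_radius_m):
    return 2.0 * volts / (b_tesla ** 2 * beam_radius_m ** 2)

B = helmholtz_B(n_turns=130, current_a=1.5, radius_m=0.15)   # assumed setup
print(f"B = {B:.3e} T, e/m = {e_over_m(250.0, B, 0.05):.3e} C/kg")
```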
Zeeman Effect
The charge-to-mass ratio of an electron may also be measured with the Zeeman effect, which gives rise to energy splittings in the presence of a magnetic field B:

$\Delta E = g_J\, \frac{e \hbar}{2 m_e}\, B\, m_j .$

Here mj are quantum integer values ranging from −j to j, with j as the eigenvalue of the total angular momentum operator J, with

$\mathbf{J} = \mathbf{L} + \mathbf{S},$

where S is the spin operator with eigenvalue s and L is the angular momentum operator with eigenvalue l. gJ is the Landé g-factor, calculated as

$g_J = 1 + \frac{j(j+1) + s(s+1) - l(l+1)}{2 j (j+1)} .$

The shift in energy is also given in terms of frequency υ and wavelength λ as

$\Delta E = h\, \Delta\nu = h c\, \frac{\Delta \lambda}{\lambda^{2}} .$
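As a quick numeric check, the sketch below evaluates the Landé g-factor formula above for a few common spectroscopic terms; the chosen term values are standard textbook examples, not data from this article.

```python
# Lande g-factor g_J = 1 + [j(j+1) + s(s+1) - l(l+1)] / (2 j (j+1)),
# evaluated for a few common terms as a check of the formula above.
def lande_g(j, s, l):
    return 1.0 + (j * (j + 1) + s * (s + 1) - l * (l + 1)) / (2.0 * j * (j + 1))

print(lande_g(j=0.5, s=0.5, l=0))   # 2.0      (2S1/2: pure spin)
print(lande_g(j=1.5, s=0.5, l=1))   # 1.333... (2P3/2)
print(lande_g(j=0.5, s=0.5, l=1))   # 0.666... (2P1/2)
```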
Measurements of the Zeeman effect commonly involve the use of a Fabry–Pérot interferometer, with light from a source (placed in a magnetic field) being passed between two mirrors of the interferometer. If δD is the change in mirror separation required to bring the mth-order ring of wavelength into coincidence with that of wavelength λ, and ΔD brings the ring of wavelength λ into coincidence with the mth-order ring, then
It follows then that
Rearranging, it is possible to solve for the charge-to-mass ratio of an electron as
See also
Gyromagnetic ratio
Thomson (unit)
References
Bibliography
IUPAP Red Book SUNAMCO 87-1 "Symbols, Units, Nomenclature and Fundamental Constants in Physics" (does not have an online version)
Symbols Units and Nomenclature in Physics IUPAP-25, E.R. Cohen & P. Giacomo, Physics 146A (1987) 1–68
External links
BIPM SI brochure
AIP style manual
NIST on units and manuscript check list
Physics Today's instructions on quantities and units
Physical quantities
Mass spectrometry
Metrology
Ratios | Mass-to-charge ratio | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,331 | [
"Physical phenomena",
"Physical quantities",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Quantity",
"Mass",
"Ratios",
"Arithmetic",
"Mass spectrometry",
"Physical properties",
"Matter"
] |
3,957,700 | https://en.wikipedia.org/wiki/Tritiated%20water | Tritiated water is a radioactive form of water in which the usual protium atoms are replaced with tritium atoms. In its pure form it may be called tritium oxide (T2O or 3H2O) or super-heavy water. Pure T2O is a colorless liquid, and it is corrosive due to self-radiolysis. Diluted, tritiated water is mainly H2O plus some HTO (3HOH). It is also used as a tracer for water transport studies in life-science research. Furthermore, since it naturally occurs in minute quantities, it can be used to determine the age of various water-based liquids, such as vintage wines.
The name super-heavy water helps distinguish the tritiated material from heavy water, which contains deuterium instead.
Applications
Tritiated water can be used to measure an organism's total body water (TBW). Unlike doubly labeled water, this method relies on scintillation counting. Tritiated water distributes itself into all body compartments relatively quickly. The concentration of tritiated water in urine is assumed to be similar to the concentration of tritiated water in the body. TBW is determined from the following dilution relation:

$\mathrm{TBW} = \frac{\text{administered tritium activity}}{\text{tritium activity per unit volume of body water at equilibrium}} .$
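The dilution relation is a one-line computation; the sketch below encodes it with illustrative activity values, which are invented for the example rather than clinical figures.

```python
# Isotope-dilution estimate of total body water (TBW): the administered
# tritium activity divided by its equilibrium concentration in body water.
# The numbers below are illustrative, not clinical values.
def total_body_water(dose_bq, equilibrium_conc_bq_per_l):
    return dose_bq / equilibrium_conc_bq_per_l   # litres

print(total_body_water(dose_bq=3.7e6, equilibrium_conc_bq_per_l=8.8e4))  # ~42 L
```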
Health risks
Tritium is radioactive and a low energy beta emitter.
While HTO is produced naturally by cosmic ray interactions in the stratosphere, it is also produced by human activities and can increase local concentrations and be considered an air and water pollutant. Anthropogenic sources of tritiated water include nuclear weapons testing, nuclear power plants, nuclear reprocessing and consumer products such as self-illuminating watches and signs.
HTO has a short biological half-life in the human body of 7 to 14 days, which both reduces the total effects of single-incident ingestion and precludes long-term bioaccumulation of HTO from the environment. The biological half-life of tritiated water in the human body, which is a measure of body water turnover, varies with the season. Studies on the biological half-life of occupational radiation workers for free water tritium in a coastal region of Karnataka, India, show that the biological half-life in the winter season is twice that of the summer season.
If tritium exposure is suspected or known, drinking uncontaminated water will help replace the tritium from the body. Increasing sweating, urination or breathing can help the body expel water and thereby the tritium contained in it. However, care should be taken that neither dehydration nor a depletion of the body's electrolytes results as the health consequences of those things (particularly in the short term) can be more severe than those of tritium exposure.
References
Nuclear materials
Forms of water
Body water
Tritium | Tritiated water | [
"Physics",
"Chemistry"
] | 576 | [
"Phases of matter",
"Materials",
"Forms of water",
"Nuclear materials",
"Matter"
] |
3,958,742 | https://en.wikipedia.org/wiki/List%20of%20building%20materials | This is a list of building materials.
Many types of building materials are used in the construction industry to create buildings and structures. These categories of materials and products are used by architects and construction project managers to specify the materials and methods used for building projects.
Some building materials like cold rolled steel framing are considered modern methods of construction, over the traditionally slower methods like blockwork and timber.
Catalogs
Catalogs distributed by architectural product suppliers are typically organized into these groups.
Industry standards
The Construction Specifications Institute maintains the following industry standards:
MasterFormat 50 standard divisions of building materials - 2004 edition (current in 2009)
16 Divisions Original 16 divisions of building materials
See also
Category: Building materials
Alternative natural materials
Glass in green buildings
Green building and wood
List of commercially available roofing material
Red List building materials
Sources
Building Materials: Dangerous Properties of Products in MasterFormat Divisions 7 and 9 - H. Leslie Simmons, Richard J. Lewis, Richard J. Lewis (Sr.) - Google Books
Building Materials - P.C. Varghese - Google Books
Architectural Building Materials - Salvan, George S. - Google Books
Durability of Building Materials and Components 8: Service Life and Asset Management - Michael A. Lacasse, Dana J. Vanier - Google Books
Durability of Building Materials and Components - J. M. Baker - Google Books
Understanding Green Building Materials - Traci Rose Rider, Stacy Glass, Jessica McNaughton - Google Books
Heat-Air-Moisture Transport: Measurements on Building Materials - Phālgunī Mukhopādhyāẏa, M. K. Kumaran - Google Books
External links
Building materials
Building materials | List of building materials | [
"Physics",
"Engineering"
] | 330 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Architecture lists",
"Matter",
"Architecture"
] |
3,959,396 | https://en.wikipedia.org/wiki/Biotin%20carboxyl%20carrier%20protein | Biotin carboxyl carrier protein (BCCP) refers to proteins containing a biotin attachment domain that carry biotin and carboxybiotin throughout the ATP-dependent carboxylation by biotin-dependent carboxylases. The biotin carboxyl carrier protein is an Acetyl CoA subunit that allows for Acetyl CoA to be catalyzed and converted to malonyl-CoA. More specifically, BCCP catalyzes the carboxylation of the carrier protein to form an intermediate. Then the carboxyl group is transferred by the transcacrboxylase to form the malonyl-CoA. This conversion is an essential step in the biosynthesis of fatty acids. In the case of E. coli Acetyl-CoA carboxylase, the BCCP is a separate protein known as accB (). On the other hand, in Haloferax mediterranei, propionyl-CoA carboxylase, the BCCP pccA () is fused with biotin carboxylase.
The biosynthesis of fatty acids in plants, such as triacylglycerol, is vital to the plant's overall health because it allows for accumulation of seed oil. The biosynthesis that is catalyzed by BCCP usually takes place in the chloroplast of plant cells. The biosynthesis performed by the BCCP protein allows for the transfer of CO2 within active sites of the cell.
The biotin carboxyl carrier protein carries approximately 1 mol of biotin per 22,000 g of protein.
There is not much research on BCCPs at the moment. However, a recent study on plant genomics found that Brassica BCCPs might play a key role in abiotic and biotic stress responses, meaning that these proteins may relay messages to the rest of the plant body after it has been exposed to extreme conditions that disrupt the plant's homeostasis.
Synthesis of Malonyl-CoA
The synthesis of Malonyl-CoA consists of two half reactions. The first being the carboxylation of biotin with bicarbonate and the second being the transfer of the CO2 group to acetyl-CoA from carboxybiotin to allow for the formation of malonyl-CoA. Two different protein subassemblies, along with BCCP, are required for this two step reaction to be successful: biotin carboxylase (BC) and carboxyltransferase (CT). BCCP contains the biotin cofactor which is covalently bound to a lysine residue.
In fungi, mammals, and plant cytosols, all three of these components (BCCP, BC, and CT) exist on one polypeptide chain. However, most studies of this protein have been conducted on the E. coli form of the enzyme, where all three components exist as three separate complexes rather than being united on one polypeptide chain.
Structure
The first report of the BCCP structure was made by biochemists F. K. Athappilly and W. A. Hendrickson in 1995. It can be thought of as a long β-hairpin structure, with four pairs of antiparallel β-strands that wrap around a central hydrophobic core. The biotinylation motif Met-Lys-Met is located at the tip of the β-hairpin structure. Rotations around the CαCβ bond of this Lys residue contribute to the swinging-arm model. The connection to the rest of the enzyme at the N-terminus of BCCP core is located at the opposite end of the structure from the biotin moiety. Rotations around this region contribute to the swinging-domain model, and the N1′ atom of biotin is ~ 40 Å from this pivot point. This gives a range of ~ 80 Å for the swinging-domain model, and the BC–CT active site distances observed so far are between 40 and 80 Å. In addition, the linker before the BCCP core in the holoenzyme could also be flexible, which would give further reach for the biotin N1′ atom.
The structures of biotin-accepting domains from E. coli BCCP-87 and the 1.3S subunit of P. shermanii TC were determined by both X-ray crystallography and nuclear magnetic resonance studies. (Athappilly and Hendrickson, 1995; Roberts et al., 1999; Reddy et al., 1998). These produced essentially the same structures that are structurally related to the lipoyl domains of 2-oxo acid dehydrogenase multienzyme complexes (Brocklehurst and Perham, 1993; Dardel et al., 1993), which similarly undergo an analogous post-translational modification. These domains form a flattened β-barrel structure comprising two four-stranded β-sheets with the N- and C-terminal residues close together at one end of the structure. At the other end of the molecule, the biotinyl- or lipoyl-accepting lysine resides on a highly exposed, tight hairpin loop between β4 and β5 strands. The structure of the domain is stabilized by a core of hydrophobic residues, which are important structural determinants. Conserved glycine residues occupy β-turns linking the β-strands.
The structure of the biotin-accepting domain of BCCP-87 contains a seven-amino-acid insertion common to certain prokaryotic acetyl-CoA carboxylases but not present in other biotin domains (Chapman-Smith and Cronan, 1999). This region of the peptide adopts a thumb structure between the β2 and β3 strands and, interestingly, forms direct contacts with the biotin moiety in both the crystal and solution structures (Athappilly and Hendrickson, 1995; Roberts et al., 1999). It has been proposed that this thumb may function as a mobile lid for either, or possibly both, the biotin carboxylase or carboxyltransferase active sites in the biotin-dependent enzyme (Cronan, 2001). Such a lid could help prevent solvation of the active sites, thereby aiding the transfer of CO2 from carboxybiotin to acetyl-CoA. Secondly, the thumb is required for dimerization of BCCP, necessary for the formation of the active acetyl-CoA carboxylase complex (Cronan, 2001). Finally, the thumb functions to inhibit the aberrant lipoylation of the target lysine by lipoyl protein ligase (Reche and Perham, 1999). Removal of the thumb by mutagenesis rendered BCCP-87 a favorable substrate for lipoylation but abolished biotinylation (Reche and Perham, 1999). The thumb structure, however, is not a highly conserved feature among biotin domains. Many biotin-dependent enzymes do not contain this insertion, including all five mammalian enzymes. However, it appears that the interactions between biotin and protein might be a conserved feature and important for catalysis, as similar contacts have been observed in the "thumbless" domains from P. shermanii transcarboxylase (Jank et al., 2002) and the biotinyl/lipoyl attachment protein of B. subtilis (Cui et al., 2006). The significance of this requires further investigation, but it is possible that the mechanism employed by the biotin enzymes may involve noncovalent interactions between the protein and the prosthetic group.
References
Coenzymes
Enzymes
Metabolism
Proteins | Biotin carboxyl carrier protein | [
"Chemistry",
"Biology"
] | 1,599 | [
"Biomolecules by chemical classification",
"Coenzymes",
"Organic compounds",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Proteins",
"Metabolism"
] |
40,081,998 | https://en.wikipedia.org/wiki/Variant%20surface%20glycoprotein | Variant surface glycoprotein (VSG) is a ~60kDa protein which densely packs the cell surface of protozoan parasites belonging to the genus Trypanosoma. This genus is notable for their cell surface proteins. They were first isolated from Trypanosoma brucei in 1975 by George Cross. VSG allows the trypanosomatid parasites to evade the mammalian host's immune system by extensive antigenic variation. They form a 12–15 nm surface coat. VSG dimers make up ~90% of all cell surface protein and ~10% of total cell protein. For this reason, these proteins are highly immunogenic and an immune response raised against a specific VSG coat will rapidly kill trypanosomes expressing this variant. However, with each cell division there is a possibility that the progeny will switch expression to change the VSG that is being expressed. VSG has no prescribed biochemical activity.
The parasite has a large cellular repertoire of antigenically distinct VSGs (~1500–2000 complete genes and partial pseudogenes) located in telomeric and subtelomeric arrays (on megabase chromosomes or minichromosomes). VSGs are expressed from a bloodstream expression site (BES, ES) in a polycistron by RNA polymerase I (recruited to a ribosomal-type promoter) with other ES-associated genes (ESAGs), of which the transferrin receptor (Tfr: ESAG6, ESAG7) is one. Only one VSG gene is expressed at a time, as only one of the ~15 ES is active in a cell. VSG expression is 'switched' by homologous recombination of a silent basic copy gene from an array (directed by homology) into the active telomerically-located expression site. During this transition, trypanosomes simultaneously display both pre- and post-switch VSGs on their surface. This coat replacement process is critical for the survival of recently switched cells because initial VSGs remain targets for the escalating host antibody response. Mosaic VSG genes can be created by homologous recombination of a partial VSG gene from an array. This partial gene may replace any portion of the resident VSG gene, creating a new mosaic VSG. VSG half-life measurements suggest that initial VSGs may persist on the surface of genetically switched trypanosomes for several days. It remains unclear whether the regulation of VSG switching is purely stochastic or whether environmental stimuli affect switching frequency. The fact that switching occurs in vitro suggests that there is at least some host-independent, stochastic element to the process.
The antigenic variation causes cyclical waves of parasitemia, which is one of the characteristics of human African trypanosomiasis. The cyclical process take 5–8 days. This occurs because a diverse range of coats expressed by the trypanosome population means that the immune system is always one step behind: it takes several days for an immune response against a given VSG to develop, giving the population time to diversify as individuals undergo further switching events. The repetition of this process prevents the extinction of the infecting trypanosome population, allowing chronic persistence of parasites in the host and enhancing opportunities for transmission.
In Trypanosoma brucei
In Trypanosoma brucei, the cell surface is covered by a dense coat of ~5 × 10⁶ VSG dimers, ~90% of all cell surface protein and ~10% of total cell protein.
The properties of the VSG coat that enable immune evasion are:
Shielding – the dense nature of the VSG coat (VSG proteins pack shoulder-to-shoulder) prevents the immune system of the mammalian host from accessing the plasma membrane or any other parasitic invariant surface epitopes (such as ion channels, transporters, receptors etc.). The coat is uniform, made up of millions of copies of the same molecule; therefore, VSG is the only part of the trypanosome that the immune system can recognize.
Periodic antigenic variation – the VSG coat undergoes frequent stochastic genetic modification—'switching'—allowing variants expressing a new VSG coat to escape the specific immune response raised against the previous coat. This antigenic variation creates cyclical waves of parasitemia characteristic of Human African Trypanosomiasis.
Antigen 'cleaning' and VSG recycling—VSG is efficiently recycled through the trypanosome flagellar pocket, allowing antibodies to be 'cleaned' from VSG before re-incorporation back into the cellular membrane. Importantly, VSGs recognized and bound by antibodies are selectively pushed toward the flagellar pocket at a quicker rate than unidentified VSG; in this scenario, the antibody acts as a 'sail', which quickens the process of VSG being brought to the area of recycling.
The VSGs from T. brucei are attached to the plasma membrane via a covalent attachment to two glycosyl-phosphatidylinositol (GPI) anchors (one per monomer), which directs its forward-trafficking from the ER to the flagellar pocket for incorporation into the membrane, as predicted by the GPI valence hypothesis.
VSGs are replaced by an equally dense coat of procyclins when the parasite differentiates into the procyclic form in the tsetse fly midgut. There is a very fast inhibition of VSG gene transcription which occurs as soon as the temperature is lowered.
Expression
The source of VSG variability during infection is a large 'archive' of VSG genes present in the T. brucei genome. Some of these are full-length, intact genes; others are pseudogenes (typically with frameshift mutations, premature stop codons, or fragmentation). Expression of an antigenically different VSG can occur by simply switching to a different full-length VSG gene by Expression Site switching (switching which ES is active). In addition, chimeric or 'mosaic' VSG genes can be generated by combining segments from more than one silent VSG gene. The formation of mosaic VSGs allows the (partial) expression of pseudogene VSGs, which can constitute the major portion of the VSG archive, and can contribute directly to antigenic variation, vastly increasing the trypanosome's capacity for immune evasion and posing a major problem for vaccine development.
VSG genes can be kept silent and switched on at any given time. The expressed VSG is always located in an Expression Site (ES), which are specialised expression loci found at the telomeres of some of the large and intermediate chromosomes. Each ES is a polycistronic unit, containing a number of Expression Site-Associated Genes (ESAGs) all expressed along with the active VSG. While multiple ES exist, only a single one is ever active at one time. A number of mechanisms appear to be involved in this process, but the exact nature of the silencing is still unclear.
The expressed VSG can be switched either by activating a different expression site (and thus changing to express the VSG in that site), or by changing the VSG gene in the active site to a different variant. The genome contains many copies of VSG genes, both on minichromosomes and in repeated sections in the interior of the chromosomes. These are generally silent, typically with omitted sections or premature stop codons, but are important in the evolution of new VSG genes. It is estimated up to 10% of the T.brucei genome may be made up of VSG genes or pseudogenes. Any of these genes can be moved into the active site by recombination for expression. Again, the exact mechanisms that control this are unclear, but the process seems to rely on DNA repair machinery and a process of homologous recombination.
The bloodstream expression site (BES), or telomeric expression site, is used for exchanging variant surface glycoproteins when in host's blood stream to escape the complement system. BESs are polymorphic in size and structure but reveal a surprisingly conserved architecture in the context of extensive recombination. Very small BESs do exist and many functioning BESs do not contain the full complement of expression site associated genes (ESAGs). There is a collection of an estimated 20-30 sites, each being active at a time. Active VSG expression sites are depleted of nucleosomes.
The gene repertoires in T. brucei have diverged to become strain-specific.
The variant surface glycoprotein genes of T. brucei have been classified into two groups depending upon whether or not duplication of the genes is observed when they are expressed.
Secretory trafficking
Trypanosoma have a simple, polarized membrane transport system consisting of a single ER, lysosome, and Golgi apparatus.
VSG is first transcribed as a polycistron and then undergoes trypanosomatid-specific poly-adenylation and trans-splicing directed by polypyrimidine tracts. Because there is no transcriptional control, the VSG 3'UTR is important for its RNA stability (most importantly, the 8mer and 14mer). VSG is then transcribed on membrane-bound polysomes, and the appearance of the N-terminal signal sequence directs VSG to the ER. VSG is thereby co-translationally transported into the ER lumen, rapidly N-glycosylated (on asn-x-ser/thr sites) and GPI anchored at the ω site by a transamination reaction (removing of the C-term hydrophobic 17 or 23 aa GPI anchoring sequence). The ω site is always Ser (usually in 17 aa signal sequence peptides), Asp (usually in 23 aa signal sequence peptides), or Asn. Also, the number of N-glycosylation sites per VSG may vary (usually 1-3 N-glycans). VSG MITat.1.5 is glycosylated at all three potential N-glycosylation sites.
VSG then undergoes the calreticulin/calnexin folding cycle (calnexin is absent in Trypanosoma brucei), where it is transiently monoglucosylated and deglucosylated, and interacts with various ER chaperone proteins, such as BiP, in order to fold correctly. VSG efficiently folds and dimerizes (suggesting intrinsically favorable folding) and is transported through the Golgi to the flagellar pocket for incorporation into the cell membrane.
Importantly, following incorporation into the cellular membrane, VSG may later be recycled through the flagellar pocket and sorted back to the cell surface. VSG is not turned over by lysosomal or proteasomal canonical degradation pathways, but is instead lost from the cell by specific cleavage of its GPI anchor by GPI-specific PLC.
Structure
VSG genes are hugely variable at the sequence (primary) level, but variants are thought to have strongly conserved structural (tertiary) features, based on two determined 3-dimensional structures and conservation of 2-dimensional sequence motifs (descending and ascending alpha-helices that make up the dimerization interface), allowing them to perform a similar shielding function. VSGs are made up of N terminal domain of around 300–350 amino acids with low sequence homology (13–30% identity), and a more conserved C terminal domain of ~100 amino acids. N-terminal domains are grouped into classes A-C depending on their cysteine patterns. C-term domains are grouped by sequence homology into classes I-III, with apparently no restriction on which N-term classes they can pair with to form a full VSG. To dimerize, VSG N-terminal domains form a bundle of four alpha helices directed by hydrophobic interactions, around which hang smaller structural features (five smaller helices and three beta-sheets).
VSG is anchored to the cell membrane via a glycophosphatidylinositol (GPI) anchor—a noncovalent linkage from the C-terminus which directs its forward trafficking from the ER to the membrane. This GPI anchor is specifically cleaved by GPI Phospholipase C, cleaving the membrane-form VSG, and allowing VSG protein and portion of the GPI anchor to be lost into the extracellular milieu as soluble VSG (sVSG, which is can be recognized as Cross-Reacting Determinant, or CRD), while retaining the two 1,2-dimyristolglycerol chains in the membrane.
Antigenic variation
VSG is highly immunogenic, and an immune response raised against a specific VSG coat will rapidly kill trypanosomes expressing this variant. Antibody-mediated trypanosome killing can also be observed in vitro by a complement-mediated lysis assay. However, with each cell division there is a possibility that one or both of the progeny will switch expression to change the VSG that is being expressed. The frequency of VSG switching has been measured to be approximately 0.1% per division, though switching rates do differ in culture vs. in vivo. As T. brucei populations can peak at a size of 1011 within a host this rapid rate of switching ensures that the parasite population is constantly diverse. A diverse range of coats expressed by the trypanosome population means that the immune system is always one step behind: it takes several days for an immune response against a given VSG to develop, giving the population time to diversify as individuals undergo further switching events. Reiteration of this process prevents extinction of the infecting trypanosome population, allowing chronic persistence of parasites in the host, enhancing opportunities for transmission. The clinical effect of this cycle is successive 'waves' of parasitaemia (trypanosomes in the blood).
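The population dynamics described here can be caricatured numerically. The sketch below simulates a dividing population in which each division switches VSG with probability ~0.1% (the rate quoted above) and the host clears a variant a fixed delay after first encountering it; the growth rate, clearance strength, and delay are invented for illustration, and the model is a toy, not a fit to real infection data.

```python
# Toy model of antigenic variation: parasites divide daily, switch coat
# with probability ~0.1% per division (rate from the text), and the host
# clears a variant a fixed delay after it first appears. Growth rate,
# clearance factor, and delay are assumed values for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_variants, switch_p, immune_delay = 30, 1e-3, 6   # delay in days (assumed)

pop = np.zeros(n_variants)
pop[0] = 1e3
first_seen = np.full(n_variants, np.inf)

for day in range(1, 61):
    newly = (pop >= 1.0) & np.isinf(first_seen)
    first_seen[newly] = day
    pop[(day - first_seen) >= immune_delay] *= 0.05   # antibody-mediated killing
    pop *= 2.0                                        # one division per day
    switchers = pop * switch_p
    pop -= switchers
    pop[rng.integers(0, n_variants, size=1)] += switchers.sum()
    if day % 6 == 0:
        print(f"day {day:2d}: parasitemia ~ {pop.sum():9.3e}")
```

Even this crude model reproduces the qualitative picture above: the dominant variant expands, is cleared once the immune response catches up, and a previously rare switched variant seeds the next wave.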
In other trypanosomes
Variable surface glycoproteins are also found in other Trypanosoma species.
In Trypanosoma equiperdum, a parasite causing the covering sickness in horses, these proteins allow the parasite to efficiently evade the host animal's immune system. These VSGs allow the organism to constantly manipulate and change the surface structure of its proteins, which means it is constantly being presented to the immune system as a new foreign organism and this prevents the body from mounting a large enough immune response to eradicate the disease. In this sense, Trypanosoma equiperdum is a very efficient organism; it may infect fewer species than other diseases, but it infects and survives very efficiently within its specified hosts. The VSG proteins in T. equiperdum are also phosphorylated.
A VSG gene from Trypanosoma evansi, a parasite that causes a form of surra in animals, has been cloned in Escherichia coli. The expressed protein is immunoreactive with all the sera combinations. The animals immunized with whole cell lysate or recombinant protein show similar antibody reactions in ELISA (enzyme-linked immunosorbent assay) and CATT (card agglutination test for trypanosomiasis). The variable surface glycoprotein RoTat 1.2 PCR can be used as a specific diagnostic tool for the detection of T. evansi infections.
The smallest VSG protein (40 kDa in size) to date (1996) has been found in Trypanosoma vivax, which bears little carbohydrate.
In Trypanosoma congolense, in vitro analyses of the incorporated sugars after hydrolysis of the glycoprotein showed that glucosamine and mannose are utilized in the biosynthesis of the carbohydrate moiety directly whereas galactose was converted possibly to other intermediates before being incorporated into the antigen. The unglycosylated VSG with a molecular weight of 47 kDa had completely lost its size heterogeneity.
See also
Amastin, another surface (trans-membrane) glycoprotein in trypanosomatid parasites
Coat protein (disambiguation)
Glycocalyx
List of MeSH codes (D23)
List of MeSH codes (D12.776.395)
List of MeSH codes (D12.776.543)
References
External links
www.icp.ucl.ac.be
Kinetoplastid proteins
Glycoproteins
Parasitic excavates | Variant surface glycoprotein | [
"Chemistry"
] | 3,452 | [
"Glycoproteins",
"Glycobiology"
] |
40,084,006 | https://en.wikipedia.org/wiki/Smart%20lock | A smart lock is an electromechanical lock that is designed to perform locking and unlocking operations on a door when it receives a prompt via an electronic keypad, biometric sensor, access card, Bluetooth, or Wi-FI from a registered mobile device. These locks are called smart locks because they use advanced technology and Internet communication to enable easier access for users and enhanced security from intruders. The main components of the smart lock include the physical lock, the key (which can be electronic, digitally encrypted, or a virtual key to provide keyless entry), a secure Bluetooth or Wi-Fi connection, and a management mobile app. Smart locks may also monitor access and send alerts in response to the different events it monitors, as well as other critical events related to the status of the device. Smart locks can be considered part of a smart home.
Most smart locks are installed on mechanical locks (simple types of locks, including deadbolts) and they physically upgrade the ordinary lock. Recently, smart locking controllers have also appeared at the market.
Smart locks, like traditional locks, need two main parts to work: the lock and the key. In the case of these electronic locks, the key is not a physical key but a smartphone, or a special key fob or keycard configured explicitly for this purpose, which wirelessly performs the authentication needed to automatically unlock the door.
Smart locks allow users to grant access to a third party by means of a virtual key. This key can be sent to the recipient smartphone over standard messaging protocols such as e-mail or SMS, or via a dedicated application. Once this key is received, the recipient will be able to unlock the smart lock using their mobile device during the timeframe previously specified by the sender.
Certain smart locks include a built-in Wi-Fi connection that allows for monitoring features such as access notifications or cameras to show the person requesting access. Some smart locks work with a smart doorbell to allow the user to see who and when someone is at a door. Many smart locks now also feature biometric features, such as fingerprint sensors. Biometrics are becoming increasingly popular because they offer more security than passwords alone. This is because they use unique physical characteristics rather than stored information.
Smart locks may use Bluetooth Low Energy and SSL to communicate, encrypting communications using 128/256-bit AES.
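As an illustration of the kind of encryption involved, the sketch below encrypts a hypothetical unlock command with 256-bit AES-GCM using the Python cryptography library. The message format, key handling, and "lock-42" identifier are invented for the example; real smart locks define their own protocols on top of BLE or SSL.

```python
# Encrypting a hypothetical "unlock" command with 256-bit AES-GCM.
# The message format and key management are invented for illustration;
# real devices define their own protocols on top of BLE/SSL.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # shared during device pairing
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # must be unique per message
command = b'{"action": "unlock", "user": "alice", "ts": 1700000000}'
ciphertext = aesgcm.encrypt(nonce, command, b"lock-42")

# The lock decrypts and authenticates the command before actuating.
assert aesgcm.decrypt(nonce, ciphertext, b"lock-42") == command
```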
Industry smart lock
Industrial smart locks (passive electronic locks) are a branch of the smart lock field. Like smart locks, they are an iterative development of mechanical locks. However, the application areas of industrial smart locks are not smart homes but fields with extremely high requirements for key management, such as communications, power utilities, water utilities, public safety, transportation, and data centers.
Industrial smart locks have three main components: locks, keys, and a management system. Again, the key is not a physical key but a special electronic key, and unlocking authority must be assigned in advance: through the management system, the administrator sets the user and the unlock date and time period for the key. Whenever the user unlocks or locks the lock, the record is saved in the key, and unlocking records can then be tracked through the management software.
At the same time, industrial smart locks can also assign permissions remotely through a mobile app.
Security
Due to the inherent complexity of digital and wireless technologies, it can be difficult for the end user to confirm or refute the security claims of various product offerings on the market. The devices may also gather personal information; representations by the vendors involved concerning the care and handling of this information is also difficult for the end user to verify.
See also
Door loop, a method for providing electric cabling to a door
References
External links
Smart devices
Locks (security device) | Smart lock | [
"Technology"
] | 778 | [
"Home automation",
"Smart devices"
] |
40,084,415 | https://en.wikipedia.org/wiki/Drug-induced%20QT%20prolongation | QT prolongation is a measure of delayed ventricular repolarisation, which means the heart muscle takes longer than normal to recharge between beats. It is an electrical disturbance which can be seen on an electrocardiogram (ECG). Excessive QT prolongation can trigger tachycardias such as torsades de pointes (TdP). QT prolongation is an established side effect of antiarrhythmics, but can also be caused by a wide range of non-cardiac medicines, including antibiotics, antidepressants, antihistamines, opioids, and complementary medicines. On an ECG, the QT interval represents the summation of action potentials in cardiac muscle cells, which can be caused by an increase in inward current through sodium or calcium channels, or a decrease in outward current through potassium channels. By binding to and inhibiting the “rapid” delayed rectifier potassium current protein, certain drugs are able to decrease the outward flow of potassium ions and extend the length of phase 3 myocardial repolarization, resulting in QT prolongation.
Background
A QT interval is a value that is measured on an electrocardiogram. Measurements begin from the start of the Q wave to the end of the T wave. The value is an indication of the time it takes for a ventricle from the beginning of a contraction to the end of relaxation. The value for a normal QT interval is similar in males and females from birth up to adolescence. During infancy, a normal QTc is defined as . Before puberty, the 99th percentile of QTc values is 460 milliseconds. After puberty, this value increases to 470 milliseconds in males and 480 milliseconds in females.
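The raw QT interval is usually corrected for heart rate before being compared with thresholds like those above. The sketch below applies Bazett's formula, QTc = QT/√RR, which is a common correction assumed here for illustration rather than specified in this article, and compares the result with the post-pubertal limits quoted above.

```python
# Heart-rate-corrected QT using Bazett's formula, QTc = QT / sqrt(RR),
# with RR in seconds. Bazett's correction is one common choice, assumed
# here for illustration; the thresholds are the post-pubertal values above.
import math

def qtc_bazett(qt_ms, heart_rate_bpm):
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_s)

def is_prolonged(qtc_ms, sex):
    limit = 470.0 if sex == "male" else 480.0
    return qtc_ms > limit

qtc = qtc_bazett(qt_ms=400.0, heart_rate_bpm=75.0)
print(f"QTc = {qtc:.0f} ms, prolonged: {is_prolonged(qtc, 'female')}")
```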
Torsades de pointes (TdP) is an arrhythmia. More specifically, it is one form of a polymorphic ventricular tachycardia that presents with a long QT interval. Diagnosis is made by electrocardiogram (ECG), which shows rapid irregular QRS complexes. The term "torsades de pointes" is translated from French as "twisting of the peaks" because the complexes appear to undulate, or twist around, the EKG baseline. TdP can be acquired by inheritance of a congenital long QT syndrome, or more commonly from the ingestion of a pharmacologic drug. During TdP episodes, patients have a heart rate of 200 to 250 beats/minute, which may present as palpitations or syncope. TdP often self-resolves, however, it may lead to ventricular fibrillation and cause sudden cardiac death.
Risk factors
Although it is difficult to predict which individuals will be affected from drug-induced long QT syndrome, there are general risk factors that can be associated with the use of certain medications.
Generally, as the dose of a drug increases, the risk of QT prolongation increases as well. In addition, factors such as rapid infusion, concurrent use of more than one drug known to prolong QT interval, diuretic treatment, electrolyte derangements (hypokalemia, hypomagnesemia, or hypocalcemia), advanced age, bradyarrhythmias, and female sex have all been shown to be risk factors for developing drug-induced QT prolongation. TdP has been shown to occur up to three times more often in female patients compared with males, likely as a result of post-pubertal hormonal influence on cardiac ion channels. The QTc interval is longer in females, as well as having a stronger response to IKr-blocking agents. In males, the presence of testosterone upregulates IKr channels and therefore decreases QT interval. Stated otherwise, estrogens prolong the QT interval, while androgens shorten it and decrease the response to IKr-blocking agents.
Structural heart disease, such as heart failure, myocardial infarction, and left ventricular hypertrophy, is also a risk factor. Diuretic-induced hypokalemia and/or hypomagnesemia in patients treated for heart failure can induce proarrhythmia. The ischemia that results from myocardial infarction also induces QT prolongation.
Drugs that cause QT prolongation
The main groups of drugs that can cause QT prolongation are antiarrhythmic medications, psychiatric medications, and antibiotics. Other drugs include antivirals and antifungals.
Antiarrhythmic agents
Source:
Class IA
Class IA antiarrhythmic drugs work by blocking sodium and potassium channels. Blocking sodium channels tends to shorten the action potential duration, while blocking potassium channels prolongs the action potential. At low to normal drug concentrations, the potassium channel blocking activity takes precedence over the sodium channel blocking activity.
Disopyramide
Procainamide
Propafenone
Quinidine
Because of the predominance of the potassium blocking activity, TdP is seen more frequently with therapeutic levels of quinidine. Sodium channel blockade becomes dominant at higher, supratherapeutic levels, which does not lead to QT prolongation and TdP.
Class III
Class III antiarrhythmic drugs are potassium channel blockers that cause QT prolongation and are associated with TdP.
Amiodarone
Amiodarone works in many ways. It blocks sodium, potassium, and calcium channels, as well as alpha and beta adrenergic receptors. Because of its multiple actions, amiodarone causes QT prolongation but TdP is rarely observed.
Dofetilide
Ibutilide
Ibutilide differs from other class III antiarrhythmic agents in that it activates the slow, delayed inward sodium channels rather than inhibiting outward potassium channels.
Sotalol
Sotalol has beta-blocking activity. Approximately 2 to 7 percent of patients taking at least 320 mg/day experience proarrhythmia, most often in the form of TdP. The risks and effects are dose-dependent.
Psychiatric medications
Psychiatric medications include antipsychotics and antidepressants that have been shown to lengthen the QT interval and induce TdP, especially when given intravenously or in higher concentrations.
Typical antipsychotics
Chlorpromazine
Haloperidol
Haloperidol functions by blocking the KCNH2 (hERG) channel, the same channel blocked by other drugs that induce LQTS. Patients taking haloperidol are at higher risk if they also have electrolyte abnormalities (such as hypokalemia and/or hypomagnesemia), congenital LQTS, cardiac abnormalities, or hypothyroidism, or if they are concurrently taking other medications known to lengthen the QT interval.
Sulpiride
Thioridazine (especially high risk; withdrawn by the manufacturer for this precise reason)
Atypical antipsychotics
Amisulpride
Quetiapine
Quetiapine overdoses can cause QT prolongation in patients with cardiac risk factors.
Risperidone
Risperidone can cause mild QT prolongation, but there are no specific drug warnings associated with it.
Sertindole
Ziprasidone
SSRIs
An ECG is recommended before patients are prescribed SSRI agents citalopram and escitalopram if the prescribed dose is above 40 mg or 20 mg per day, respectively.
Fluoxetine
Paroxetine
Sertraline
SNRIs
Venlafaxine
Tricyclic antidepressants
Amitriptyline
Desipramine
Doxepin
Imipramine
Antibiotics
Macrolides
Azithromycin
Clarithromycin
Erythromycin
When taken alone, erythromycin has been shown to cause both QT prolongation and TdP. Erythromycin also inhibits the CYP3A enzyme, so patients who have low CYP3A activity and are concurrently taking other medications, such as disopyramide, are at increased risk of QT prolongation and TdP.
Fluoroquinolones
Ciprofloxacin
Levofloxacin
Moxifloxacin
Other agents
Chloroquine
Cisapride
Domperidone
Famotidine
Foscarnet
Hydroxychloroquine
Ibogaine
Ketoconazole
Methadone
Octreotide
Ondansetron
Tacrolimus
Tamoxifen
Pathophysiology
IKr blockade
On EKG, the QT interval represents the summation of action potentials in cardiac muscle cells. QT prolongation therefore results from action potential prolongation, which can be caused by an increase in inward current through sodium or calcium channels, or a decrease in outward current through potassium channels. By binding to and inhibiting the “rapid” delayed rectifier potassium current protein, IKr, which is encoded by the hERG gene, certain drugs are able to decrease the outward flow of potassium ions and extend the length of phase 3 myocardial repolarization, which is reflected as QT prolongation.
Diagnosis
Most patients with drug-induced QT prolongation are asymptomatic and are diagnosed solely by EKG in association with a history of using medications known to cause QT prolongation. A minority of patients are symptomatic and typically present with one or more signs of arrhythmia, such as lightheadedness, syncope, or palpitations. If the arrhythmia persists, patients may experience sudden cardiac arrest.
Management
Treatment requires identifying and removing any causative medications and correcting any underlying electrolyte abnormalities. While TdP often self-resolves, cardioversion may be indicated if patients become hemodynamically unstable, as evidenced by signs such as hypotension, altered mental status, chest pain, or heart failure. Intravenous magnesium sulfate has been proven to be highly effective for both the treatment and prevention of TdP.
Managing patients with TdP is dependent on the patient's stability. Vital signs, level of consciousness, and current symptoms are used to assess stability. Patients who are stable should be managed by removing the underlying cause and correcting electrolyte abnormalities, especially hypokalemia. An EKG should be obtained, a cardiac monitor should be attached, IV access should be established, supplemental oxygen should be given, and blood samples should be sent for appropriate studies. Patients should be continually re-evaluated for signs of deterioration until the TdP resolves. In addition to correcting the electrolyte abnormalities, magnesium given intravenously has also been shown to be helpful. Magnesium sulfate, given as a 2 g IV bolus mixed with D5W, can be administered over a period of 15 minutes in patients without cardiac arrest. Atrial pacing or administering isoproterenol can normalize the heart rate.
Unstable patients exhibit signs of chest pain, hypotension, elevated heart rate, and/or heart failure. Patients who develop cardiac arrest will be pulseless and unconscious. Defibrillation and resuscitation are indicated in these cases. Patients with cardiac arrest should be given IV magnesium sulfate over a period of two minutes. After diagnosing and treating the cause of LQTS, it is also important to take a thorough history and perform EKG screening. Immediate family members should also be screened for inherited and congenital causes of drug-induced QT syndrome.
Incidence
There is no definitive estimate of the incidence of drug-induced QT prolongation, as most data come from case reports or small observational studies. Although QT interval prolongation is one of the most common reasons for drug withdrawal from the market, the overall incidence of drug-induced QT prolongation is difficult to estimate. One study in France estimated that between 5% and 7% of reports of ventricular tachycardia, ventricular fibrillation, or sudden cardiac death were in fact due to drug-induced QT prolongation and torsades de pointes. An observational study from the Netherlands showed that 3.1% of patients who experienced sudden cardiac death were also using a QT-prolonging drug.
See also
Long QT syndrome
Torsades de pointes
References
Further reading
Cardiac arrhythmia
Drug-induced diseases | Drug-induced QT prolongation | [
"Chemistry"
] | 2,586 | [
"Drug-induced diseases",
"Drug safety"
] |
40,084,830 | https://en.wikipedia.org/wiki/Runoff%20footprint | A runoff footprint is the total surface runoff that a site produces over the course of a year. According to the United States Environmental Protection Agency (EPA), stormwater is "rainwater and melted snow that runs off streets, lawns, and other sites". Urbanized areas with high concentrations of impervious surfaces like buildings, roads, and driveways produce large volumes of runoff, which can lead to flooding, sewer overflows, and poor water quality. Since soil in urban areas can be compacted and have a low infiltration rate, the surface runoff estimated in a runoff footprint is not just from impervious surfaces, but also from pervious areas including yards. The total runoff is a measure of the site’s contribution to stormwater issues in an area, especially in urban areas with sewer overflows. Completing a runoff footprint for a site allows property owners to understand which areas on their site produce the most runoff and which scenarios of stormwater green solutions, like rain barrels and rain gardens, are most effective in mitigating this runoff and its costs to the community.
Significance
The runoff footprint is the stormwater equivalent to the carbon/energy footprint. When homeowners or business owners complete an energy audit or carbon footprint, they understand how they are consuming energy and learn how this consumption can be reduced through energy efficiency measures. Correspondingly, the runoff footprint allows someone to calculate their baseline annual runoff and assess what the impact of ideal stormwater green solutions would be for their site. Since the passage of the Clean Water Act in 1972, the EPA has monitored and regulated stormwater issues in urban areas. Municipalities across the United States are now required to upgrade sanitary and stormwater systems to meet EPA mandates. The total cost for these upgrades across the United States exceeds $3000 billion. The stormwater runoff from every property in an area can contribute to the overall stormwater issues including overflows and water pollution. Stormwater runoff carries nonpoint source pollution which is a leading cause of water quality issues.
By completing a runoff footprint, homeowners and business owners can consider how stormwater green solutions can reduce runoff on-site. Stormwater green solutions (also called green infrastructure) use "vegetation, soils, and natural processes to manage water and create healthier urban environments. At the scale of a city or county, green infrastructure refers to the patchwork of natural areas that provides habitat, flood protection, cleaner air, and cleaner water. At the scale of a neighborhood or site, green infrastructure refers to stormwater management systems that mimic nature by soaking up and storing water". Stormwater green solutions include bioswales (directional rain gardens), cisterns, green roofs, permeable pavement, rain barrels, and rain gardens. According to the EPA, onsite stormwater green solutions or low-impact developments (LIDs) can significantly reduce runoff and costly stormwater/sewer infrastructure upgrades.
Stormwater green solutions can also reduce energy consumption. Treating and pumping water is an energy-intensive activity. According to the River Network, the U.S. consumes at least 521 million MWh a year for water-related purposes, equivalent to 13% of the nation’s electricity consumption. Potable water must be treated and then pumped to the consumer. Wastewater is treated before being discharged. In areas with combined sewer systems or old separate sewer systems with high inflow and infiltration, stormwater is also treated at the wastewater treatment facilities. By capturing stormwater runoff onsite in rain barrels and cisterns, the consumption of potable water for irrigation and its corresponding energy impact can be reduced. The reduction of runoff from all types of stormwater green solutions reduces the stormwater that may end up at the wastewater treatment facility in areas with combined sewer systems or old separate sewers.
Completing a runoff footprint
There are a number of methods available to complete a runoff footprint. The simplest methods involve using a runoff coefficient, which according to the State Water Resources Control Board of California is "a dimensionless coefficient relating the amount of runoff to the amount of precipitation received. It is a larger value for areas with low infiltration and high runoff (pavement, steep gradient), and lower for permeable, well vegetated areas (forest, flat land)." The runoff coefficient for each surface type on a site can be multiplied by that surface's area and by the annual precipitation to generate a rough runoff footprint. If the runoff coefficient and areas of proposed stormwater green solutions like rain gardens and bioswales for the site are known, the reduction in overall runoff from these improvements can be estimated.
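As a concrete illustration of the coefficient method just described, the sketch below sums coefficient × area × precipitation over a site's surfaces. It is a minimal Python sketch; the surface areas, coefficient values, and precipitation figure are illustrative assumptions, not engineering guidance.

```python
GALLONS_PER_CUBIC_FOOT = 7.48

def runoff_footprint(surfaces, annual_precip_inches):
    """Annual runoff in gallons: sum of coefficient * area * rainfall.
    surfaces is a list of (area_sqft, runoff_coefficient) pairs."""
    total_cuft = sum(area * coeff * (annual_precip_inches / 12.0)
                     for area, coeff in surfaces)
    return total_cuft * GALLONS_PER_CUBIC_FOOT

# Illustrative site: roof, driveway, and a compacted urban lawn.
site = [(1200, 0.95), (600, 0.85), (3000, 0.20)]
baseline = runoff_footprint(site, annual_precip_inches=40.0)

# Scenario: a rain garden cuts the lawn's effective coefficient to 0.05.
scenario = runoff_footprint([(1200, 0.95), (600, 0.85), (3000, 0.05)], 40.0)
print(f"baseline: {baseline:,.0f} gal/yr; with rain garden: {scenario:,.0f} gal/yr")
```

Comparing the baseline against a modified-coefficient scenario mirrors how the more detailed calculators described below assess green-solution scenarios.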
More accurate runoff footprint tools exist. By using computer modeling and detailed weather data, complex runoff footprints can be generated easily. The amounts of pollution in the stormwater runoff can be estimated, and the effects of combinations of stormwater green solutions can be assessed. The James River Association of central Virginia provides an online tool where property owners in the James River watershed can generate a site-specific runoff pollution report. MyRunoff.org provides an online runoff footprint calculator for property owners across the United States to estimate their baseline runoff and the reduction from different scenarios of rain barrels and rain gardens. The EPA launched the National Stormwater Calculator in July 2013, a desktop application for Windows that allows users to model the annual impact of a range of stormwater green solutions.
References
External links
MyRunoff.org's Runoff Footprint Calculator
EPA's National Stormwater Calculator
James River Association Runoff Calculator
Water supply
Water pollution
Water and the environment
Environmental engineering
Hydrology and urban planning
Landscape
Sustainable urban planning | Runoff footprint | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,138 | [
"Hydrology",
"Chemical engineering",
"Water pollution",
"Civil engineering",
"Hydrology and urban planning",
"Environmental engineering",
"Water supply"
] |
40,087,321 | https://en.wikipedia.org/wiki/Quadrisecant | In geometry, a quadrisecant or quadrisecant line of a space curve is a line that passes through four points of the curve. This is the largest possible number of intersections that a generic space curve can have with a line, and for such curves the quadrisecants form a discrete set of lines. Quadrisecants have been studied for curves of several types:
Knots and links in knot theory, when nontrivial, always have quadrisecants, and the existence and number of quadrisecants has been studied in connection with knot invariants including the minimum total curvature and the ropelength of a knot.
The number of quadrisecants of a non-singular algebraic curve in complex projective space can be computed by a formula derived by Arthur Cayley.
Quadrisecants of arrangements of skew lines touch subsets of four lines from the arrangement. They are associated with ruled surfaces and the Schläfli double six configuration.
Definition and motivation
A quadrisecant is a line that intersects a curve, surface, or other set in four distinct points. It is analogous to a secant line, a line that intersects a curve or surface in two points; and a trisecant, a line that intersects a curve or surface in three points.
Compared to secants and trisecants, quadrisecants are especially relevant for space curves, because they have the largest possible number of intersection points of a line with a generic curve. In the plane, a generic curve can be crossed arbitrarily many times by a line; for instance, small generic perturbations of the sine curve are crossed infinitely often by the horizontal axis. In contrast, if an arbitrary space curve is perturbed by a small distance to make it generic, there will be no lines through five or more points of the perturbed curve. Nevertheless, any quadrisecants of the original space curve will remain present nearby in its perturbation. For generic space curves, the quadrisecants form a discrete set of lines. In contrast, when trisecants occur, they form continuous families of lines.
One explanation for this phenomenon is visual: looking at a space curve from far away, the space of such points of view can be described as a two-dimensional sphere, one point corresponding to each direction. Pairs of strands of the curve may appear to cross from all of these points of view, or from a two-dimensional subset of them. Three strands will form a triple crossing when the point of view lies on a trisecant, and four strands will form a quadruple crossing from a point of view on a quadrisecant. Each constraint that the crossing of a pair of strands lies on another strand reduces the number of degrees of freedom by one (for a generic curve), so the points of view on trisecants form a one-dimensional (continuously infinite) subset of the sphere, while the points of view on quadrisecants form a zero-dimensional (discrete) subset. C. T. C. Wall writes that the fact that generic space curves are crossed at most four times by lines is "one of the simplest theorems of the kind", a model case for analogous theorems on higher-dimensional transversals.
Depending on the properties of the curve, it may have no quadrisecants, finitely many, or infinitely many. These considerations make it of interest to determine conditions for the existence of quadrisecants, or to find bounds on their number in various special cases, such as knotted curves, algebraic curves, or arrangements of lines.
For special classes of curves
Knots and links
In three-dimensional Euclidean space, every nontrivial tame knot or link has a quadrisecant. Originally established in the case of knotted polygons and smooth knots by Erika Pannwitz, this result was extended to knots in suitably general position and links with nonzero linking number, and later to all nontrivial tame knots and links.
Pannwitz proved more strongly that, for a locally flat disk having the knot as its boundary, the number of singularities of the disk can be used to construct a lower bound on the number of distinct quadrisecants. The existence of at least one quadrisecant follows from the fact that any such disk must have at least one singularity. Morton and Mond conjectured that the number of distinct quadrisecants of a given knot is always at least $n(n-1)/2$, where $n$ is the crossing number of the knot. Counterexamples to this conjecture have since been discovered.
Two-component links have quadrisecants in which the points on the quadrisecant appear in alternating order between the two components, and nontrivial knots have quadrisecants in which the four points, ordered cyclically as on the knot, appear in order along the quadrisecant. The existence of these alternating quadrisecants can be used to derive the Fáry–Milnor theorem, a lower bound on the total curvature of a nontrivial knot. Quadrisecants have also been used to find lower bounds on the ropelength of knots.
G. T. Jin and H. S. Kim conjectured that, when a knotted curve $K$ has finitely many quadrisecants, $K$ can be approximated with an equivalent polygonal knot with its vertices at the points where the quadrisecants intersect $K$, in the same order as they appear on $K$. However, their conjecture is false: in fact, for every knot type, there is a realization for which this construction leads to a self-intersecting polygon, and another realization where this construction produces a knot of a different type.
It has been conjectured that every wild knot has an infinite number of quadrisecants.
Algebraic curves
Arthur Cayley derived a formula for the number of quadrisecants of an algebraic curve in three-dimensional complex projective space, as a function of its degree and genus. For a curve of degree $d$ and genus $g$, the number of quadrisecants is
$$\frac{(d-2)(d-3)^2(d-4)}{12}-\frac{g\,(d^2-7d+13-g)}{2}.$$
This formula assumes that the given curve is non-singular; adjustments may be necessary if it has singular points.
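A minimal Python sketch that evaluates Cayley's formula as reconstructed above; the sample degree/genus pairs are chosen purely for illustration.

```python
from fractions import Fraction

def cayley_quadrisecants(d, g):
    """Quadrisecant count of a non-singular space curve of degree d
    and genus g, per Cayley's formula as stated above."""
    term1 = Fraction((d - 2) * (d - 3)**2 * (d - 4), 12)
    term2 = Fraction(g * (d**2 - 7*d + 13 - g), 2)
    return term1 - term2

# Sanity checks: a twisted cubic (d=3, g=0) and an elliptic quartic
# (d=4, g=1) admit no quadrisecants; both evaluate to 0 here.
print(cayley_quadrisecants(3, 0))  # 0
print(cayley_quadrisecants(4, 1))  # 0
print(cayley_quadrisecants(6, 0))  # 6, for a generic rational sextic
```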
Skew lines
In three-dimensional Euclidean space, every set of four skew lines in general position has either two quadrisecants (also in this context called transversals) or none. Any three of the four lines determine a hyperboloid, a doubly ruled surface in which one of the two sets of ruled lines contains the three given lines, and the other ruling consists of trisecants to the given lines. If the fourth of the given lines pierces this surface, it has two points of intersection, because the hyperboloid is defined by a quadratic equation. The two trisecants of the ruled surface, through these two points, form two quadrisecants of the given four lines. On the other hand, if the fourth line is disjoint from the hyperboloid, then there are no quadrisecants. In spaces with complex number coordinates rather than real coordinates, four skew lines always have exactly two quadrisecants.
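The hyperboloid construction above can be tested numerically. The following Python sketch (assuming NumPy) fits the quadric through three points sampled on each of the first three lines, then counts the real intersections of the fourth line with that quadric; the line coordinates are illustrative, and tangent or otherwise degenerate configurations are ignored in this generic sketch.

```python
import numpy as np

def quadric_through_lines(lines):
    """Fit a quadric x^T Q x = 0 (homogeneous coordinates) through
    three points sampled on each of three given lines; a quadric
    meeting a line in three points contains the whole line."""
    rows = []
    for p, d in lines:                      # line = (point p, direction d)
        for t in (-1.0, 0.0, 1.0):
            x, y, z = p + t * d
            rows.append([x*x, y*y, z*z, 1.0, x*y, x*z, y*z, x, y, z])
    # The null space of the 9x10 system gives the quadric coefficients.
    _, _, vt = np.linalg.svd(np.array(rows))
    c = vt[-1]
    return np.array([[c[0],   c[4]/2, c[5]/2, c[7]/2],
                     [c[4]/2, c[1],   c[6]/2, c[8]/2],
                     [c[5]/2, c[6]/2, c[2],   c[9]/2],
                     [c[7]/2, c[8]/2, c[9]/2, c[3]]])

def count_transversals(l1, l2, l3, l4):
    """Count real intersections (0 or 2) of the fourth line with the
    doubly ruled quadric through the first three."""
    Q = quadric_through_lines([l1, l2, l3])
    p, d = l4
    A = np.append(p, 1.0)                   # homogeneous point on l4
    B = np.append(d, 0.0)                   # homogeneous direction of l4
    a, b, c = B @ Q @ B, 2.0 * (A @ Q @ B), A @ Q @ A
    return 2 if b*b - 4.0*a*c > 0 else 0

# Illustrative, assumed-generic skew lines (point, direction):
l1 = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
l2 = (np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0]))
l3 = (np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0, 0.0]))
l4 = (np.array([2.0, -1.0, 0.0]), np.array([0.0, 1.0, 1.0]))
print(count_transversals(l1, l2, l3, l4))   # prints 2 or 0
```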
The quadrisecants of sets of lines play an important role in the construction of the Schläfli double six, a configuration of twelve lines intersecting each other in 30 crossings. If five lines $a_1,\dots,a_5$ are given in three-dimensional space, such that all five are intersected by a common line $b_6$ but are otherwise in general position, then each of the five quadruples of the lines $a_i$ has a second quadrisecant $b_i$, and the five lines $b_1,\dots,b_5$ formed in this way are all intersected by a common line $a_6$. These twelve lines and the 30 intersection points form the double six.
An arrangement of complex lines with a given number of pairwise intersections, and otherwise skew, may be interpreted as a degenerate algebraic curve whose degree is the number of lines and whose genus is determined by the number of intersections, and Cayley's aforementioned formula can be used to count its quadrisecants. The same result as this formula can also be obtained by classifying the quadruples of lines by their intersections, counting the number of quadrisecants for each type of quadruple, and summing over all quadruples of lines in the given set.
References
Knot theory
Algebraic geometry | Quadrisecant | [
"Mathematics"
] | 1,690 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
40,088,410 | https://en.wikipedia.org/wiki/Michael%20M.%20Khonsari | Michael Khonsari is the Dow Chemical Endowed Chair, Professor, and Director of the Center for Rotating Machinery (CeRoM) at the Department of Mechanical Engineering, Louisiana State University, and a Fellow of the ASME.
Khonsari received his BS, MS and PhD degrees in Mechanical Engineering from the University of Texas at Austin. His research is in the field of tribology, in particular the application of thermodynamic methods to tribology. Khonsari has received the ASME Burt L. Newkirk Award, the STLE Presidential Award, the Alcoa Foundation Award, and the William Kepler Whiteford Faculty Fellow award from the University of Pittsburgh. In 2014, he received the Mayo D. Hersey Award from the ASME. He is the author of several books. Among his notable students is Michael Lovell, the current Chancellor of the University of Wisconsin-Milwaukee.
Until 2022, he was Editor-in-Chief of the Journal of Tribology published by the American Society of Mechanical Engineers.
References
Louisiana State University faculty
Cockrell School of Engineering alumni
Living people
Year of birth missing (living people)
Tribologists | Michael M. Khonsari | [
"Materials_science"
] | 230 | [
"Tribology",
"Tribologists"
] |
40,089,667 | https://en.wikipedia.org/wiki/Publications%20of%20the%20Astronomical%20Society%20of%20Australia | Publications of the Astronomical Society of Australia is a peer-reviewed scientific journal covering all aspects of astrophysics and astronomy. The editor-in-chief is Ivo Rolf Seitenzahl (University of New South Wales).
The journal was established at the inaugural meeting of the newly formed Astronomical Society of Australia on 30 November 1966 as the Proceedings of the Astronomical Society of Australia, with the first volume going into print in March 1967. It was run by a single editor, Dick McGee, until 1989, when an editorial board was established. Up to 1994, its primary purpose was to publish papers presented at the Annual General Meeting of the ASA, although historical papers and book reviews were also considered for publication.
Starting with Volume 12, published in 1994, the name was changed to the current Publications of the Astronomical Society of Australia, reflecting a wider remit towards publishing general astronomy research papers. PASA was first published electronically in 1996 under a partnership with CSIRO Publishing. Since 2013 it has been published by Cambridge University Press on behalf of the Astronomical Society of Australia.
Past Editors
Past Editors of PASA include:
Paul Wild (Australian scientist) (1967-1969)
Richard "Dick" McGee (1971-1988)
Richard Hunstead (1989-1991)
Ravi Sood (1992-1993)
Jenny Nicholls (1994-1995)
Michelle Storey (1996-2000)
Russell Cannon (2001)
John Lattanzio (2002-2008)
Bryan Gaensler (2009-2014)
Daniel Price (2015-2017)
Melanie Johnston-Hollitt (2018-2019)
Stas Shabala (2019-2022)
Ivo Seitenzahl (2022-)
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 5.571.
References
External links
Astronomy journals
Astrophysics journals
English-language journals
Academic journals established in 1995
Cambridge University Press academic journals
Continuous journals | Publications of the Astronomical Society of Australia | [
"Physics",
"Astronomy"
] | 389 | [
"Astronomy journals",
"Works about astronomy",
"Astronomy stubs",
"Astrophysics journals",
"Astrophysics",
"Astronomy journal stubs"
] |
30,744,059 | https://en.wikipedia.org/wiki/Advances%20in%20Difference%20Equations | Advances in Difference Equations is a peer-reviewed mathematics journal covering research on difference equations, published by Springer Open.
The journal was established in 2004 and publishes articles on theory, methodology, and application of difference and differential equations. Originally published by Hindawi Publishing Corporation, the journal was acquired by Springer Science+Business Media in early 2011. The editors-in-chief are Ravi Agarwal, Martin Bohner, and Elena Braverman.
Abstracting and indexing
The journal is abstracted and indexed by the Science Citation Index Expanded, Current Contents/Physical, Chemical & Earth Sciences, and Zentralblatt MATH. According to the Journal Citation Reports, the journal has a 2021 impact factor of 2.803. From July 1, the journal has been transitioning to a new title that opens its scope to broader developments in the theory and applications of models. Under the new title, Advances in Continuous and Discrete Models: Theory and Modern Applications, the journal covers developments in machine learning, data-driven modeling, differential equations, numerical analysis, scientific computing, control, optimization, and computing.
References
External links
Algebra journals
Academic journals established in 2004
English-language journals
Springer Science+Business Media academic journals
Open access journals | Advances in Difference Equations | [
"Mathematics"
] | 247 | [
"Algebra journals",
"Algebra"
] |
30,747,790 | https://en.wikipedia.org/wiki/Cationic%20polymerization | In polymer chemistry, cationic polymerization is a type of chain growth polymerization in which a cationic initiator transfers charge to a monomer, which then becomes reactive. This reactive monomer goes on to react similarly with other monomers to form a polymer.
The types of monomers necessary for cationic polymerization are limited to alkenes with electron-donating substituents and heterocycles. Similar to anionic polymerization reactions, cationic polymerization reactions are very sensitive to the type of solvent used. Specifically, the ability of a solvent to form free ions will dictate the reactivity of the propagating cationic chain.
Cationic polymerization is used in the production of polyisobutylene (used in inner tubes) and poly(N-vinylcarbazole) (PVK).
Monomers
Monomer scope for cationic polymerization is limited to two main types: alkene and heterocyclic monomers. Cationic polymerization of both types of monomers occurs only if the overall reaction is thermodynamically favorable. In the case of alkenes, this is due to isomerization of the monomer double bond; for heterocycles, this is due to release of monomer ring strain and, in some cases, isomerization of repeating units. Monomers for cationic polymerization are nucleophilic and form a stable cation upon polymerization.
Alkenes
Cationic polymerization of olefin monomers occurs with olefins that contain electron-donating substituents. These electron-donating groups make the olefin nucleophilic enough to attack electrophilic initiators or growing polymer chains. At the same time, these electron-donating groups attached to the monomer must be able to stabilize the resulting cationic charge for further polymerization. Reactivity decreases from heteroatom-substituted olefins to alkyl- and aryl-substituted ones. Note, however, that the reactivity of the carbenium ion formed is the opposite of the monomer reactivity.
Heterocyclic monomers
Heterocyclic monomers that are cationically polymerized are lactones, lactams and cyclic amines. Upon addition of an initiator, cyclic monomers go on to form linear polymers. The reactivity of heterocyclic monomers depends on their ring strain. Monomers with large ring strain, such as oxirane, are more reactive than 1,3-dioxepane which has considerably less ring strain. Rings that are six-membered and larger are less likely to polymerize due to lower ring strain.
Synthesis
Initiation
Initiation is the first step in cationic polymerization. During initiation, a carbenium ion is generated from which the polymer chain is made. The counterion should be non-nucleophilic, otherwise the reaction is terminated instantaneously. There are a variety of initiators available for cationic polymerization, and some of them require a coinitiator to generate the needed cationic species.
Classical protic acids
Strong protic acids can be used to form a cationic initiating species. High concentrations of the acid are needed in order to produce sufficient quantities of the cationic species. The counterion (A−) produced must be weakly nucleophilic so as to prevent early termination due to combination with the protonated alkene. Common acids used are phosphoric, sulfuric, fluoro-, and triflic acids. Only low molecular weight polymers are formed with these initiators.
Lewis acids/Friedel-Crafts catalysts
Lewis acids are the most common compounds used for initiation of cationic polymerization. The more popular Lewis acids are SnCl4, AlCl3, BF3, and TiCl4. Although these Lewis acids alone are able to induce polymerization, the reaction occurs much faster with a suitable cation source. The cation source can be water, alcohols, or even a carbocation donor such as an ester or an anhydride. In these systems the Lewis acid is referred to as a coinitiator while the cation source is the initiator. Upon reaction of the initiator with the coinitiator, an intermediate complex is formed which then goes on to react with the monomer unit. The counterion produced by the initiator-coinitiator complex is less nucleophilic than that of the Brønsted acid A− counterion. Halogens, such as chlorine and bromine, can also initiate cationic polymerization upon addition of the more active Lewis acids.
Carbenium ion salts
Stable carbenium ions are used to initiate chain growth of only the most reactive alkenes and are known to give well defined structures. These initiators are most often used in kinetic studies due to the ease of measuring the disappearance of the carbenium ion absorbance. Common carbenium ions are trityl and tropylium cations.
Ionizing radiation
Ionizing radiation can form a radical-cation pair that can then react with a monomer to start cationic polymerization. Control of the radical-cation pairs is difficult and often depends on the monomer and reaction conditions. Formation of radical and anionic species is often observed.
Propagation
Propagation proceeds by addition of monomer to the active species, i.e. the carbenium ion. The monomer is added to the growing chain in a head-to-tail fashion; in the process, the cationic end group is regenerated to allow for the next round of monomer addition.
Effect of temperature
The temperature of the reaction has an effect on the rate of propagation. The overall activation energy for polymerization ($E$) is based upon the activation energies for the initiation ($E_i$), propagation ($E_p$), and termination ($E_t$) steps:
$$E = E_i + E_p - E_t$$
Generally, $E_t$ is larger than the sum of $E_i$ and $E_p$, meaning the overall activation energy is negative. When this is the case, a decrease in temperature leads to an increase in the rate of propagation. The converse is true when the overall activation energy is positive.
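To see the consequence numerically, the sketch below applies the Arrhenius factor $\exp(-E/RT)$ with an assumed overall activation energy of −20 kJ/mol; the value is illustrative, not taken from the source.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_factor(E, T):
    """exp(-E/RT); the pre-exponential factor cancels when comparing
    the same reaction at two temperatures."""
    return math.exp(-E / (R * T))

E_overall = -20e3  # assumed overall activation energy, J/mol (negative)
for T in (190.0, 230.0, 273.0, 298.0):
    ratio = arrhenius_factor(E_overall, T) / arrhenius_factor(E_overall, 298.0)
    print(f"T = {T:.0f} K: rate is {ratio:.1f}x the rate at 298 K")
```

With a negative overall activation energy, the computed rate grows as the temperature falls, matching the statement above.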
Chain length is also affected by temperature. Low reaction temperatures, in the range of 170–190 K, are preferred for producing longer chains. This comes as a result of the activation energy for termination and other side reactions being larger than the activation energy for propagation. As the temperature is raised, the energy barrier for the termination reaction is overcome, causing shorter chains to be produced during the polymerization process.
Effect of solvent and counterion
The solvent and the counterion (the gegen ion) have a significant effect on the rate of propagation. The counterion and the carbenium ion can have different associations according to intimate ion pair theory; ranging from a covalent bond, tight ion pair (unseparated), solvent-separated ion pair (partially separated), and free ions (completely dissociated).
The association is strongest as a covalent bond and weakest when the pair exists as free ions. In cationic polymerization, the ions tend to be in equilibrium between an ion pair (either tight or solvent-separated) and free ions. The more polar the solvent used in the reaction, the better the solvation and separation of the ions. Since free ions are more reactive than ion pairs, the rate of propagation is faster in more polar solvents.
The size of the counterion is also a factor. A smaller counterion, with a higher charge density, will have stronger electrostatic interactions with the carbenium ion than will a larger counterion which has a lower charge density. Further, a smaller counterion is more easily solvated by a polar solvent than a counterion with low charge density. The result is increased propagation rate with increased solvating capability of the solvent.
Termination
Termination generally occurs by unimolecular rearrangement with the counterion. In this process, an anionic fragment of the counterion combines with the propagating chain end. This not only inactivates the growing chain, but it also terminates the kinetic chain by reducing the concentration of the initiator-coinitiator complex.
Chain transfer
Chain transfer can take place in two ways. One method of chain transfer is hydrogen abstraction from the active chain end to the counterion. In this process, the growing chain is terminated, but the initiator-coinitiator complex is regenerated to initiate more chains.
The second method involves hydrogen abstraction from the active chain end to the monomer. This terminates the growing chain and also forms a new active carbenium ion-counterion complex which can continue to propagate, thus keeping the kinetic chain intact.
Cationic ring-opening polymerization
Cationic ring-opening polymerization follows the same mechanistic steps of initiation, propagation, and termination. However, in this polymerization reaction, the monomer units are cyclic in comparison to the resulting polymer chains which are linear. The linear polymers produced can have low ceiling temperatures, hence end-capping of the polymer chains is often necessary to prevent depolymerization.
Kinetics
The rate of propagation and the degree of polymerization can be determined from an analysis of the kinetics of the polymerization. The reaction equations for initiation, propagation, termination, and chain transfer can be written in the general form:
$$I^+ + M \xrightarrow{k_i} M^+$$
$$M_x^+ + M \xrightarrow{k_p} M_{x+1}^+$$
$$M_x^+ \xrightarrow{k_t} M_x$$
$$M_x^+ + M \xrightarrow{k_{tr}} M_x + M^+$$
in which $I^+$ is the initiator, $M$ is the monomer, $M^+$ is the propagating center, and $k_i$, $k_p$, $k_t$, and $k_{tr}$ are the rate constants for initiation, propagation, termination, and chain transfer, respectively. For simplicity, counterions are not shown in the above reaction equations and only chain transfer to monomer is considered. The resulting rate equations are as follows, where brackets denote concentrations:
$$R_i = k_i[I^+][M]$$
$$R_p = k_p[M^+][M]$$
$$R_t = k_t[M^+]$$
$$R_{tr} = k_{tr}[M^+][M]$$
Assuming steady-state conditions, i.e. the rate of initiation = rate of termination:
$$[M^+] = \frac{k_i}{k_t}[I^+][M]$$
This equation for $[M^+]$ can then be used in the equation for the rate of propagation:
$$R_p = \frac{k_i k_p}{k_t}[I^+][M]^2$$
From this equation, it is seen that propagation rate increases with increasing monomer and initiator concentration.
The degree of polymerization, $\overline{X}_n$, can be determined from the rates of propagation and termination:
$$\overline{X}_n = \frac{R_p}{R_t} = \frac{k_p}{k_t}[M]$$
If chain transfer rather than termination is dominant, the equation for $\overline{X}_n$ becomes
$$\overline{X}_n = \frac{R_p}{R_{tr}} = \frac{k_p}{k_{tr}}$$
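A minimal Python sketch that evaluates the steady-state relations above; all rate constants and concentrations are illustrative placeholders, not measured values.

```python
def cationic_kinetics(ki, kp, kt, ktr, I_conc, M_conc):
    """Evaluate the steady-state relations derived above;
    [M+] follows from setting Ri = Rt."""
    M_plus = ki * I_conc * M_conc / kt       # [M+] = (ki/kt)[I+][M]
    Rp  = kp * M_plus * M_conc               # = (ki*kp/kt)[I+][M]^2
    Rt  = kt * M_plus
    Rtr = ktr * M_plus * M_conc
    xn_termination = Rp / Rt                 # Xn = (kp/kt)[M]
    xn_transfer    = Rp / Rtr                # Xn = kp/ktr
    return Rp, xn_termination, xn_transfer

# Illustrative (assumed) rate constants and concentrations:
Rp, xn_t, xn_tr = cationic_kinetics(ki=1e-2, kp=1e3, kt=1e-1,
                                    ktr=1e-1, I_conc=1e-3, M_conc=1.0)
print(f"Rp = {Rp:.1e} mol/(L*s); Xn = {xn_t:.0f} (termination), "
      f"{xn_tr:.0f} (chain transfer)")
```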
Living polymerization
In 1984, Higashimura and Sawamoto reported the first living cationic polymerization for alkyl vinyl ethers. This type of polymerization has allowed for the control of well-defined polymers. A key characteristic of living cationic polymerization is that termination is essentially eliminated, thus the cationic chain growth continues until all monomer is consumed.
Commercial applications
The largest commercial application of cationic polymerization is in the production of polyisobutylene (PIB) products which include polybutene and butyl rubber. These polymers have a variety of applications from adhesives and sealants to protective gloves and pharmaceutical stoppers. The reaction conditions for the synthesis of each type of isobutylene product vary depending on the desired molecular weight and what type(s) of monomer(s) is used. The conditions most commonly used to form low molecular weight (5–10 × 10⁴ Da) polyisobutylene are initiation with AlCl3, BF3, or TiCl4 at a temperature range of −40 to 10 °C. These low molecular weight polyisobutylene polymers are used for caulking and as sealants. High molecular weight PIBs are synthesized at much lower temperatures of −100 to −90 °C and in a polar medium of methylene chloride. These polymers are used to make uncrosslinked rubber products and are additives for certain thermoplastics. Another characteristic of high molecular weight PIB is its low toxicity which allows it to be used as a base for chewing gum. The main chemical companies that produce polyisobutylene are Esso, ExxonMobil, and BASF.
Butyl rubber, in contrast to PIB, is a copolymer in which the monomers isobutylene (~98%) and isoprene (2%) are polymerized in a process similar to high molecular weight PIBs. Butyl rubber polymerization is carried out as a continuous process with AlCl3 as the initiator. Its low gas permeability and good resistance to chemicals and aging make it useful for a variety of applications such as protective gloves, electrical cable insulation, and even basketballs. Large scale production of butyl rubber started during World War II, and roughly 1 billion pounds/year are produced in the U.S. today.
Polybutene is another copolymer, containing roughly 80% isobutylene and 20% other butenes (usually 1-butene). The production of these low molecular weight polymers (300–2500 Da) is done within a large range of temperatures (−45 to 80 °C) with AlCl3 or BF3. Depending on the molecular weight of these polymers, they can be used as adhesives, sealants, plasticizers, additives for transmission fluids, and a variety of other applications. These materials are low-cost and are made by a variety of different companies including BP Chemicals, Esso, and BASF.
Other polymers formed by cationic polymerization are homopolymers and copolymers of polyterpenes, such as pinenes (plant-derived products), that are used as tackifiers. In the field of heterocycles, 1,3,5-trioxane is copolymerized with small amounts of ethylene oxide to form the highly crystalline polyoxymethylene plastic. Also, the homopolymerization of alkyl vinyl ethers is achieved only by cationic polymerization.
References
Polymerization reactions | Cationic polymerization | [
"Chemistry",
"Materials_science"
] | 2,870 | [
"Polymerization reactions",
"Polymer chemistry"
] |
30,747,791 | https://en.wikipedia.org/wiki/High-refractive-index%20polymer | A high-refractive-index polymer (HRIP) is a polymer that has a refractive index greater than 1.50.
Such materials are required for anti-reflective coating and photonic devices such as light emitting diodes (LEDs) and image sensors. The refractive index of a polymer is based on several factors which include polarizability, chain flexibility, molecular geometry and the polymer backbone orientation.
As of 2004, the highest refractive index for a polymer was 1.76. Substituents with high molar fractions or high-n nanoparticles in a polymer matrix have been introduced to increase the refractive index in polymers.
Properties
Refractive index
A typical polymer has a refractive index of 1.30–1.70, but a higher refractive index is often required for specific applications. The refractive index is related to the molar refractivity, structure and weight of the monomer. In general, high molar refractivity and low molar volumes increase the refractive index of the polymer.
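This relationship can be made quantitative via the Lorentz–Lorenz equation (listed under See also), which links $n$ to the ratio of molar refraction $R$ to molar volume $V$ through $(n^2-1)/(n^2+2) = R/V$. A minimal Python sketch, with illustrative repeat-unit values rather than measured data:

```python
import math

def refractive_index(molar_refraction, molar_volume):
    """Solve the Lorentz-Lorenz relation (n^2 - 1)/(n^2 + 2) = R/V for n."""
    ratio = molar_refraction / molar_volume
    if not 0 <= ratio < 1:
        raise ValueError("R/V must lie in [0, 1) for a real solution")
    return math.sqrt((1 + 2 * ratio) / (1 - ratio))

# Illustrative values: raising molar refraction at fixed molar volume
# raises n, as stated above.
for R_m in (25.0, 35.0, 45.0):        # cm^3/mol, assumed repeat-unit values
    print(R_m, round(refractive_index(R_m, molar_volume=100.0), 3))
```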
Optical properties
Optical dispersion is an important property of an HRIP. It is characterized by the Abbe number. A high refractive index material will generally have a small Abbe number, or a high optical dispersion. A low birefringence has been required along with a high refractive index for many applications. It can be achieved by using different functional groups in the initial monomer to make the HRIP. Aromatic monomers both increase refractive index and decrease the optical anisotropy and thus the birefringence.
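Since the paragraph above quantifies dispersion through the Abbe number, a small sketch may help; it uses the standard definition $V_D=(n_D-1)/(n_F-n_C)$ at the F (486.1 nm), D (589.3 nm), and C (656.3 nm) spectral lines. The sample indices are illustrative assumptions.

```python
def abbe_number(n_D, n_F, n_C):
    """Abbe number V_D = (n_D - 1)/(n_F - n_C): the D-line index over
    the index spread between the F and C lines. A lower V_D means
    higher optical dispersion."""
    return (n_D - 1.0) / (n_F - n_C)

# Illustrative high-n polymer: even a small F-to-C index spread yields
# a low Abbe number, i.e. high dispersion, as described above.
print(round(abbe_number(n_D=1.74, n_F=1.76, n_C=1.73), 1))  # ~24.7
```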
A high clarity (optical transparency) is also desired in a high refractive index polymer. The clarity is dependent on the refractive indexes of the polymer and of the initial monomer.
Thermal stability
When looking at thermal stability, the typical variables measured include the glass transition, initial decomposition temperature, degradation temperature and the melting temperature range. The thermal stability can be measured by thermogravimetric analysis and differential scanning calorimetry. Polyesters are considered thermally stable, with a degradation temperature of 410 °C. The decomposition temperature varies with the substituent attached to the monomer used in the polymerization; thus, longer alkyl substituents result in lower thermal stability.
Solubility
Most applications favor polymers which are soluble in as many solvents as possible. Highly refractive polyesters and polyimides are soluble in common organic solvents such as dichloromethane, methanol, hexanes, acetone and toluene.
Synthesis
The synthesis route depends on the HRIP type. The Michael polyaddition is used for a polyimide because it can be carried out at room temperature and can be used for step-growth polymerization. This synthesis was first achieved with polyimidothioethers, resulting in optically transparent polymers with a high refractive index. Polycondensation reactions are also commonly used to make high refractive index polymers, such as polyesters and polyphosphonates.
Types
High refractive indices have been achieved either by introducing substituents with high molar refractions (intrinsic HRIPs) or by combining high-n nanoparticles with polymer matrixes (HRIP nanocomposites).
Intrinsic HRIP
Sulfur-containing substituents including linear thioether and sulfone, cyclic thiophene, thiadiazole and thianthrene are the most commonly used groups for increasing refractive index of a polymer. Polymers with sulfur-rich thianthrene and tetrathiaanthracene moieties exhibit n values above 1.72, depending on the degree of molecular packing.
Halogen elements, especially bromine and iodine, were the earliest components used for developing HRIPs. In 1992, Gaudiana et al. reported a series of polymethylacrylate compounds containing lateral brominated and iodinated carbazole rings. They had refractive indices of 1.67–1.77 depending on the components and numbers of the halogen substituents. However, recent applications of halogen elements in microelectronics have been severely limited by the WEEE directive and RoHS legislation adopted by the European Union to reduce potential pollution of the environment.
Phosphorus-containing groups, such as phosphonates and phosphazenes, often exhibit high molar refractivity and optical transmittance in the visible light region. Polyphosphonates have high refractive indices due to the phosphorus moiety even if they have chemical structures analogous to polycarbonates. Shaver et al. reported a series of polyphosphonates with varying backbones, reaching the highest refractive index reported for polyphosphonates at 1.66. In addition, polyphosphonates exhibit good thermal stability and optical transparency; they are also suitable for casting into plastic lenses.
Organometallic components result in HRIPs with good film forming ability and relatively low optical dispersion. Polyferrocenylsilanes and polyferrocenes containing phosphorus spacers and phenyl side chains show unusually high n values (n=1.74 and n=1.72). They might be good candidates for all-polymer photonic devices because of their intermediate optical dispersion between organic polymers and inorganic glasses.
HRIP nanocomposite
Hybrid techniques which combine an organic polymer matrix with highly refractive inorganic nanoparticles can result in high n values. The factors affecting the refractive index of a high-n nanocomposite include the characteristics of the polymer matrix, the nanoparticles, and the hybrid technology between the inorganic and organic components. The refractive index of a nanocomposite can be estimated as $n_{\text{comp}} = n_p\phi_p + n_{\text{org}}\phi_{\text{org}}$, where $n_{\text{comp}}$, $n_p$ and $n_{\text{org}}$ stand for the refractive indices of the nanocomposite, nanoparticle and organic matrix, respectively, while $\phi_p$ and $\phi_{\text{org}}$ represent the volume fractions of the nanoparticles and organic matrix, respectively. The nanoparticle load is also important in designing HRIP nanocomposites for optical applications, because excessive concentrations increase the optical loss and decrease the processability of the nanocomposites. The choice of nanoparticles is often influenced by their size and surface characteristics. In order to increase optical transparency and reduce Rayleigh scattering of the nanocomposite, the diameter of the nanoparticle should be below 25 nm. Direct mixing of nanoparticles with the polymer matrix often results in the undesirable aggregation of nanoparticles; this is avoided by modifying their surface. The most commonly used nanoparticles for HRIPs include TiO2 (anatase, n=2.45; rutile, n=2.70), ZrO2 (n=2.10), amorphous silicon (n=4.23), PbS (n=4.20) and ZnS (n=2.36). Polyimides have high refractive indexes and thus are often used as the matrix for high-n nanoparticles. The resulting nanocomposites exhibit a tunable refractive index ranging from 1.57 to 1.99.
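A minimal Python sketch of the volume-fraction estimate above, using the rutile TiO2 index quoted in this section; the matrix index of 1.66 and the target value are illustrative assumptions.

```python
def composite_index(n_particle, n_matrix, phi_particle):
    """Volume-fraction-weighted estimate from the relation above."""
    return n_particle * phi_particle + n_matrix * (1 - phi_particle)

def loading_for_target(n_particle, n_matrix, n_target):
    """Particle volume fraction needed to reach a target index."""
    return (n_target - n_matrix) / (n_particle - n_matrix)

# Illustrative: rutile TiO2 (n = 2.70) in an assumed matrix (n = 1.66).
print(round(composite_index(2.70, 1.66, phi_particle=0.20), 3))  # ~1.868
phi = loading_for_target(2.70, 1.66, n_target=1.90)
print(f"{phi:.2f} volume fraction needed for n = 1.90")          # ~0.23
```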
Applications
Image sensors
A microlens array is a key component of optoelectronics, optical communications, CMOS image sensors and displays. Polymer-based microlenses are easier to make and are more flexible than conventional glass-based lenses. The resulting devices use less power, are smaller in size and are cheaper to produce.
Lithography
Another application of HRIPs is in immersion lithography, which as of 2009 was a new technique for circuit manufacturing that uses both photoresists and high-refractive-index fluids. The photoresist needs an n value greater than 1.90. It has been shown that non-aromatic, sulfur-containing HRIPs are the best materials for an optical photoresist system.
LEDs
Light-emitting diodes (LEDs) are a common solid-state light source. High-brightness LEDs (HBLEDs) are often limited by the relatively low light extraction efficiency due to the mismatch of the refractive indices between the LED material (GaN, n=2.5) and the organic encapsulant (epoxy or silicone, n=1.5). Higher light outputs can be achieved by using an HRIP as the encapsulant.
See also
Refractive index
Refractometer
Abbe number
Optoelectronics
Polarizability
Birefringence
Lorentz-Lorenz equation
Dispersion
Optical anisotropy
Nanocomposite
Image sensor
Immersion lithography
Organic light emitting diode (OLED)
References
Further reading
Optical materials
Polymers | High-refractive-index polymer | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,830 | [
"Materials",
"Optical materials",
"Polymer chemistry",
"Polymers",
"Matter"
] |
30,747,793 | https://en.wikipedia.org/wiki/Plasma%20polymerization | Plasma polymerization (or glow discharge polymerization) uses plasma sources to generate a gas discharge that provides energy to activate or fragment gaseous or liquid monomer, often containing a vinyl group, in order to initiate polymerization. Polymers formed from this technique are generally highly branched and highly cross-linked, and adhere to solid surfaces well. The biggest advantage to this process is that polymers can be directly attached to a desired surface while the chains are growing, which reduces steps necessary for other coating processes such as grafting. This is very useful for pinhole-free coatings of 100 picometers to 1-micrometer thickness with solvent insoluble polymers.
Introduction
As early as the 1870s, "polymers" formed by this process were known, but these polymers were initially thought of as undesirable byproducts associated with electric discharge, and little attention was given to their properties. It was not until the 1960s that the properties of these polymers were found to be useful. It was found that flawless thin polymeric coatings could be formed on metals, although for very thin films (<10 nm) this has recently been shown to be an oversimplification. By selecting the monomer type and the energy density per monomer, known as the Yasuda parameter, the chemical composition and structure of the resulting thin film can be varied over a wide range. These films are usually inert, adhesive, and have low dielectric constants. Some common monomers polymerized by this method include styrene, ethylene, methacrylate, and pyridine, to name a few. The 1970s brought about many advances in plasma polymerization, including the polymerization of many different types of monomers. The mechanisms of deposition, however, were largely ignored until more recently. Since then, most attention devoted to plasma polymerization has been in the field of coatings, but because it is difficult to control polymer structure, it has limited applications.
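The Yasuda parameter is commonly expressed as W/FM, the discharge power W divided by the product of monomer flow rate F and molecular weight M, i.e. the energy input per unit mass of monomer. A minimal Python sketch; the sccm-to-molar conversion and the operating values are illustrative assumptions.

```python
def yasuda_parameter(power_W, flow_sccm, molar_mass_g):
    """W/FM in J/kg: discharge power per unit monomer mass flow rate.
    Flow is converted from sccm to mol/s via the ideal-gas molar
    volume at standard conditions (~22,400 cm^3/mol)."""
    mol_per_s = flow_sccm / 22400.0 / 60.0        # sccm -> mol/s
    mass_flow_kg_s = mol_per_s * molar_mass_g / 1000.0
    return power_W / mass_flow_kg_s

# Illustrative conditions: 50 W discharge, 10 sccm of styrene (104 g/mol).
print(f"{yasuda_parameter(50.0, 10.0, 104.0):.2e} J/kg")
```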
Basic operating mechanism
Glow discharge
Plasma consists of a mixture of electrons, ions, radicals, neutrals, and photons. Some of these species are in local thermodynamic equilibrium, while others are not. Even for simple gases like argon, this mixture can be complex. For plasmas of organic monomers, the complexity can rapidly increase as some components of the plasma fragment while others interact and form larger species. Glow discharge is a technique in which free electrons gain energy from an electric field and then lose that energy through collisions with neutral molecules in the gas phase. This produces many chemically reactive species, which in turn lead to a plasma polymerization reaction. The electric discharge process used for plasma polymerization is the "low-temperature plasma" method, because higher temperatures cause degradation. Such plasmas are formed by a direct-current, alternating-current, or radio-frequency generator.
Types of reactors
There are a few designs for the apparatus used in plasma polymerization. One is the bell (static) type, in which monomer gas is fed into the reaction chamber but does not flow through it; the gas enters and polymerizes without removal. This reactor has internal electrodes, and polymerization commonly takes place on the cathode side. All devices contain a thermostatic bath, used to regulate temperature, and a vacuum, used to regulate pressure.
Operation: The monomer gas enters the bell-type reactor as a gaseous species and is put into the plasma state by the electrodes; the plasma may consist of radicals, anions, and cations. These monomers then polymerize on the cathode surface, or on another surface placed in the apparatus, by the mechanisms discussed in detail below. The deposited polymers then propagate off the surface and form growing chains of seemingly uniform consistency.
Another popular reactor type is the flow-through reactor (continuous flow reactor), which also has internal electrodes, but this reactor allows monomer gas to flow through the reaction chamber as its name implies, which should give a more even coating for polymer film deposition. It has the advantage that more monomer keeps flowing into the reactor to deposit more polymer. It has the disadvantage of forming what is called "tail flame", which is when polymerization extends into the vacuum line.
A third popular type of reactor is the electrodeless reactor. An RF coil wrapped around the glass apparatus, driven by a radio-frequency generator, forms the plasma inside the housing without the use of direct electrodes (see inductively coupled plasma). The polymer can then be deposited as it is pushed through the RF coil toward the vacuum end of the apparatus. This has the advantage that no polymer builds up on the electrode surface, which is desirable when polymerizing onto other surfaces.
A fourth type of system growing in popularity is the atmospheric-pressure plasma system, which is useful for depositing thin polymer films. This system bypasses the requirements for special hardware involving vacuums, which then makes it favorable for integrated industrial use. It has been shown that polymers formed at atmospheric pressure can have similar properties for coatings as those found in low-pressure systems.
Physical process characteristics
The formation of plasma for polymerization depends on many of the following. An electron energy of 1–10 eV is required, with electron densities of 10⁹ to 10¹² per cubic centimeter, to form the desired plasma state. The formation of a low-temperature plasma is important: the electron temperature is not equal to the gas temperature, with a ratio Te/Tg of 10 to 100, so that the process can occur at near-ambient temperatures. This is advantageous because polymers degrade at high temperatures; if a high-temperature plasma were used, the polymers would degrade after formation or would never be formed. This entails non-equilibrium plasmas, which means that charged monomer species have more kinetic energy than neutral monomer species and transfer energy to a substrate instead of to an uncharged monomer.
Kinetics
The kinetic rate of these reactions depends mostly on the monomer, which must be gaseous or vaporized. However, other parameters are important as well, such as power, pressure, flow rate, frequency, electrode gap, and reactor configuration. At low flow rates, the rate usually depends only on the number of reactive species present for polymerization, whereas at high flow rates it depends on the time spent in the reactor. Therefore, the maximum rate of polymerization occurs somewhere in the middle.
The fastest reactions tend to be in the order of triple-bonded > double-bonded > single-bonded molecules, and lower molecular weight molecules are faster than higher ones. So acetylene is faster than ethylene, and ethylene is faster than propene, etc. The molecular weight factor in polymer deposition is dependent on the monomer flow rate, in which a higher molecular weight monomer, typically near 200 g/mol, needs a much higher flow rate of 15 g/cm², whereas lower molecular weights around 50 g/mol require a flow rate of only 5 g/cm². A heavy monomer, therefore, needs a faster flow, and would likely lead to increased pressures, decreasing polymerization rates.
Increased pressure tends to decrease polymerization rates, reducing uniformity of deposition, since uniformity is controlled by constant pressure. This is one reason that high-pressure or atmospheric-pressure plasmas are not usually used in favor of low-pressure systems. At pressures greater than 1 torr, oligomers are formed on the electrode surface, and monomers on the surface can dissolve them, giving a low degree of polymerization and forming an oily substance. At low pressures, the reactive surfaces are low in monomer, which facilitates the growth of high molecular weight polymers.
The rate of polymerization depends on input power, until power saturation occurs and the rate becomes independent of it. A narrower electrode gap also tends to increase polymerization rates because a higher electron density per unit area is formed. Polymerization rates also depend on the type of apparatus used for the process. In general, increasing the frequency of alternating current glow discharge up to about 5 kHz increases the rate due to the formation of more free radicals. After this frequency, the inertial effects of colliding monomers inhibit polymerization. This forms the first plateau for polymerization frequencies. A second maximum in frequency occurs at 6 MHz, where side reactions are overcome again and the reaction occurs through free radicals diffused from plasma to the electrodes, at which point a second plateau is obtained. These parameters differ slightly for each monomer and must be optimized in-situ.
Synthetic routes
Plasma contains many species such as ions, free radicals, and electrons, so it is important to consider which of them contributes most to the polymerization process. The first process suggested, by Westwood et al., was cationic polymerization, since in a direct-current system polymerization occurs mainly on the cathode. However, further investigation has led to the belief that the mechanism is more of a radical polymerization process, since radicals tend to be trapped in the films and termination can be overcome by reinitiation of oligomers. Other kinetic studies also appear to support this theory.
However, since the mid-1990s, several papers focusing on the formation of highly functionalized plasma polymers have postulated a more significant role for cations, particularly where the plasma sheath is collisionless. The assumption that the plasma ion density is low, and consequently that the ion flux to surfaces is low, has been challenged by pointing out that the ion flux is determined according to the Bohm sheath criterion, i.e., the ion flux is proportional to the square root of the electron temperature and not RT.
In polymerization, both gas-phase and surface reactions occur, but the mechanism differs between high and low frequencies. At high frequencies, polymerization occurs via reactive intermediates, whereas at low frequencies it happens mainly on surfaces. As polymerization occurs, the pressure inside a closed chamber decreases, since gas-phase monomers are converted to solid polymer. Ablation occurs by gas formation during polymerization. Polymerization itself proceeds along two pathways, plasma-state or plasma-induced processes, both of which lead to the deposited polymer.
Polymers can be deposited on many substrates other than the electrode surfaces, such as glass, other organic polymers, or metals, when either a surface is placed in front of the electrodes, or placed in the middle between them. The ability for them to build off of electrode surfaces is likely to be an electrostatic interaction, while on other surfaces covalent attachment is possible.
Polymerization is likely to take place through either ionic and/or radical processes, which are initiated by the plasma formed from the glow discharge. The classic view presented by Yasuda, based upon the thermal initiation of Parylene polymerization, is that there are many propagating species present at any given time, as shown in Figure 3. This figure shows two different pathways by which the polymerization may take place. The first pathway is a monofunctionalization process, which bears resemblance to a standard free-radical polymerization mechanism (M•), although with the caveat that the reactive species may be ionic and not necessarily radical. The second pathway refers to a difunctional mechanism, which for example may contain a cationic and a radical propagating center on the same monomer (•M•). A consequence is that the 'polymer' can grow in multiple directions by multiple pathways off one species, such as a surface or another monomer. This possibility led Yasuda to term the mechanism a very rapid step-growth polymerization. In the diagram, Mx refers to the original monomer molecule or any of its many dissociation products, such as chlorine, fluorine, and hydrogen. The M• species refers to those that are activated and capable of participating in reactions to form new covalent bonds. The •M• species refers to an activated difunctional monomer species. The subscripts i, j, and k show the sizes of the different species involved. Even though radicals represent the activated species here, any ion or radical could take part in the polymerization. As can be seen, plasma polymerization is a very complex process, with many parameters affecting everything from rate to chain length.
Selection or the favoring of one particular pathway can be achieved by altering the plasma parameters. For example, pulsed plasma with selected monomers appears to favor much more regular polymer structures and it has been postulated these grow by a mechanism akin to (radical) chain growth in the plasma off-time.
Common monomers/polymers
Monomers
As can be seen in the monomer table, many simple monomers are readily polymerized by this method, but most must be small, ionizable species, since they have to be able to enter the plasma state. Though monomers with multiple bonds polymerize readily, multiple bonds are not a requirement: ethane, silicones, and many others polymerize as well.
Other stipulations exist. Yasuda et al. studied 28 monomers and found that those containing aromatic groups, silicon, olefinic groups, or nitrogen (NH, NH2, CN) were readily polymerizable, while those containing oxygen, halides, aliphatic hydrocarbons, and cyclic hydrocarbons decomposed more readily. The latter compounds undergo more ablation or side reactions, which inhibit stable polymer formation. It is also possible to incorporate N2, H2O, and CO into copolymers of styrene.
Plasma polymers can be thought of as a type of graft polymer since they are grown off of a substrate. These polymers are known to form nearly uniform surface deposition, which is one of their desirable properties. Polymers formed from this process often cross-link and form branches due to the multiple propagating species present in the plasma. This often leads to very insoluble polymers, which gives an advantage to this process, since hyperbranched polymers can be deposited directly without solvent.
Polymers
Common polymers include: polythiophene, polyhexafluoropropylene, polytetramethyltin, polyhexamethyldisiloxane, polytetramethyldisiloxane, polypyridine, polyfuran, and poly-2-methyloxazoline.
The following are listed in order of decreasing rate of polymerization: polystyrene, polymethyl styrene, polycyclopentadiene, polyacrylate, polymethyl acrylate, polymethyl methacrylate, polyvinyl acetate, polyisoprene, polyisobutene, and polyethylene.
Nearly all polymers created by this method have excellent appearance, are clear, and are significantly cross-linked. Linear polymers are not formed readily by plasma polymerization methods based on propagating species. Many other polymers could be formed by this method.
General characteristics of plasma polymers
The properties of plasma polymers differ greatly from those of conventional polymers. While both types are dependent on the chemical properties of the monomer, the properties of plasma polymers depend more greatly on the design of the reactor and the chemical and physical characteristics of the substrate on which the plasma polymer is deposited. The location within the reactor where the deposition occurs also affects the resultant polymer's properties. In fact, by using plasma polymerization with a single monomer and varying the reactor, substrate, etc. a variety of polymers, each having different physical and chemical properties, can be prepared. The large dependence of the polymer features on these factors makes it difficult to assign a set of basic characteristics, but a few common properties that set plasma polymers apart from conventional polymers do exist.
The most significant difference between conventional polymers and plasma polymers is that plasma polymers do not contain regular repeating units. Due to the number of different propagating species present at any one time, as discussed above, the resultant polymer chains are highly branched and are randomly terminated with a high degree of cross-linking. An example of a proposed structure for plasma-polymerized ethylene, demonstrating a large extent of cross-linking and branching, is shown in Figure 4.
All plasma polymers contain free radicals as well. The amount of free radicals present varies between polymers and is dependent on the chemical structure of the monomer. Because the formation of the trapped free radicals is tied to the growth mechanism of the plasma polymers, the overall properties of the polymers directly correlate to the number of free radicals.
Plasma polymers also contain internal stress. If a thick layer (e.g. 1 µm) of a plasma polymer is deposited on a glass slide, the plasma polymer will buckle and frequently crack. The curling is attributed to an internal stress formed in the plasma polymer during the polymer deposition. The degree of curling is dependent on the monomer as well as the conditions of the plasma polymerization.
Most plasma polymers are insoluble and infusible. These properties are due to the large amount of cross-linking in the polymers, previously discussed. Consequently, the kinetic path length for these polymers must be sufficiently long, so these properties can be controlled to a point.
The permeabilities of plasma polymers also differ greatly from those of conventional polymers. Because of the absence of large-scale segmental mobility and the high degree of cross-linking within the polymers, the permeation of small molecules does not strictly follow the typical mechanisms of "solution-diffusion" or molecular-level sieve for such small permeants. The permeability characteristics of plasma polymers fall between these two ideal cases.
A final common characteristic of plasma polymers is their adhesion ability. The specifics of the adhesion ability of a given plasma polymer, such as the thickness and characteristics of the surface layer, are again particular to that polymer, and few generalizations can be made.
Advantages and disadvantages
Plasma polymerization offers several advantages over other polymerization methods in general. The most significant advantage of plasma polymerization is its ability to produce polymer films of organic compounds that do not polymerize under normal chemical polymerization conditions. Nearly all monomers, even saturated hydrocarbons and organic compounds without a polymerizable structure such as a double bond, can be polymerized with this technique.
A second advantage is the ease of application of the polymers as coatings versus conventional coating processes. While coating a substrate with conventional polymers requires several steps, plasma polymerization accomplishes all these in essentially a single step. This leads to a cleaner and 'greener' synthesis and coating process since no solvent is needed during the polymer preparation and no cleaning of the resultant polymer is needed either. Another 'green' aspect of the synthesis is that no initiator is needed for the polymer preparation since reusable electrodes cause the reaction to proceed. The resultant polymer coatings also have several advantages over typical coatings. These advantages include being nearly pinhole-free, highly dense, and the thickness of the coating can easily be varied.
There are also several disadvantages relating to plasma polymerization versus conventional methods. The most significant disadvantage is the high cost of the process. A vacuum system is required for the polymerization, significantly increasing the set-up price.
Another disadvantage is the complexity of plasma processes. Because of this complexity, it is not easy to achieve good control over the chemical composition of the surface after modification. The influence of process parameters on the chemical composition of the resultant polymer means it can take a long time to determine the optimal conditions. The complexity of the process also makes it impractical to predict what the resultant polymer will look like, unlike conventional polymers, whose structure can be readily determined from the monomer.
Applications
The advantages offered by plasma polymerization have resulted in substantial research on the applications of these polymers. The vastly different chemical and mechanical properties offered by polymers formed with plasma polymerization mean they can be applied to countless different systems. Applications including adhesion, composite materials, protective coatings, printing, membranes, biomedical applications, and water purification have all been studied.
Of particular interest since the 1980s has been the deposition of functionalized plasma polymer films. For example, functionalized films are used as a means of improving biocompatibility for biological implants and for producing super-hydrophobic coatings. They have also been extensively employed in biomaterials for cell attachment, protein binding, and anti-fouling surfaces. Through the use of low-power, low-pressure plasma, high functional retention can be achieved, which has led to substantial improvements in the biocompatibility of some products, a simple example being the development of extended-wear contact lenses. Due to these successes, the huge potential of functional plasma polymers is slowly being realized by workers in previously unrelated fields such as water treatment and wound management. Emerging technologies such as nanopatterning, 3D scaffolds, micro-channel coating, and microencapsulation are now also utilizing functionalized plasma polymers, areas for which traditional polymers are often unsuitable.
A significant area of research has been on the use of plasma polymer films as permeation membranes. The permeability characteristics of plasma polymers deposited on porous substrates are different than usual polymer films. The characteristics depend on the deposition and polymerization mechanism. Plasma polymers as membranes for separation of oxygen and nitrogen, ethanol and water, and water vapor permeation have all been studied. The application of plasma-polymerized thin films as reverse osmosis membranes has received considerable attention as well. Yasuda et al. have shown that membranes prepared by plasma polymerization of nitrogen-containing monomers can yield up to 98% salt rejection with a flux of 6.4 gallons/ft2 per day. Further research has shown that varying the monomers of the membrane offers other properties as well, such as chlorine resistance.
Plasma-polymerized films have also found electrical applications. Given that plasma polymers frequently contain many polar groups, which form when the radicals react with oxygen in the air during the polymerization process, the plasma polymers were expected to be good dielectric materials in thin-film form. Studies have shown that plasma polymers generally do have higher dielectric constants. Some plasma polymers have been applied as chemical sensing devices due to their electrical properties. Plasma polymers have been studied as chemical sensors for humidity, propane, and carbon dioxide, amongst others. Thus far, issues with instability against aging and humidity have limited their commercial applications.
The application of plasma polymers as coatings has also been studied. Plasma polymers formed from tetramethoxysilane have been studied as protective coatings and have been shown to increase the hardness of polyethylene and polycarbonate. The use of plasma polymers to coat plastic lenses is increasing in popularity. Plasma depositions can easily coat curved materials with good uniformity, such as those of bifocals. The different plasma polymers used can be not only scratch resistant but also hydrophobic leading to anti-fogging effects.
Plasma polymer surfaces with tunable wettability and reversibly switchable pH-responsiveness have shown promising prospects due to their unique property in applications, such as drug delivery, biomaterial engineering, oil/water separation processes, sensors, and biofuel cells.
References
Chemical processes
Plasma processing
Polymerization reactions
Chemical vapor deposition | Plasma polymerization | [
"Chemistry",
"Materials_science"
] | 4,791 | [
"Chemical processes",
"nan",
"Polymer chemistry",
"Chemical process engineering",
"Chemical vapor deposition",
"Polymerization reactions"
] |
33,262,435 | https://en.wikipedia.org/wiki/FRW/CFT%20duality | In physics, the Friedmann–Robertson–Walker/conformal field theory-duality or FRW/CFT duality is a conjectured duality for Friedmann–Robertson–Walker metric inspired by the AdS/CFT correspondence. It assumes that the cosmological constant is exactly zero, which is only the case for models with exact unbroken supersymmetry. Because the energy density does not approach zero as we approach spatial infinity, the metric is not asymptotically flat. This is not an asymptotically cold solution.
Overview
In eternal inflation, our universe passes through a series of phase transitions with progressively lower cosmological constant. Our current phase has a tiny positive cosmological constant, which is conjectured to be metastable in string theory. It is possible our universe might tunnel into a supersymmetric phase with an exactly zero cosmological constant. In fact, any given region in eternal inflation will eventually terminate in a phase with exactly zero or negative cosmological constant. The phases with negative cosmological constant end in a Big Crunch. Stephen Shenker and Leonard Susskind called this the census taker's hat.
The conformal compactification of the terminal phase has a Penrose diagram whose future null infinity is shaped like a hat. A Euclidean Liouville quantum field theory is assumed to reside there. The null coordinate corresponds to the running of the renormalization group.
The terminal phase has an ever-expanding FRW metric in which the average energy density goes to zero.
References
Quantum gravity | FRW/CFT duality | [
"Physics"
] | 317 | [
"Unsolved problems in physics",
"Quantum mechanics",
"Quantum gravity",
"Relativity stubs",
"Theory of relativity",
"Physics beyond the Standard Model",
"Quantum physics stubs"
] |
33,264,144 | https://en.wikipedia.org/wiki/Cyborg%20anthropology | Cyborg anthropology is a discipline that studies the interaction between humanity and technology from an anthropological perspective. The discipline offers novel insights on new technological advances and their effect on culture and society.
History
Donna Haraway’s 1984 "A Cyborg Manifesto" was the first widely-read academic text to explore the philosophical and sociological ramifications of the cyborg. A sub-focus group within the American Anthropological Association's annual meeting in 1992 presented a paper entitled "Cyborg Anthropology", which cites Haraway's "Manifesto". The group described cyborg anthropology as the study of how humans define humanness in relationship to machines, as well as the study of science and technology as activities that can shape and be shaped by culture. This includes studying the ways that all people, including those who are not scientific experts, talk about and conceptualize technology. The sub-group was closely related to STS and the Society for the Social Studies of Science. More recently, Amber Case has been responsible for explicating the concept of Cyborg Anthropology to the general public. She believes that a key aspect of cyborg anthropology is the study of networks of information among humans and technology.
Many academics have helped develop cyborg anthropology, and many more who have never heard the term are today conducting research that may be considered cyborg anthropology, particularly research regarding technologically advanced prosthetics and how they can influence an individual's life. A 2014 summary of holistic American anthropology's intersections with cyborg concepts (whether explicit or not) by Joshua Wells explained how the information-rich and culture-laden ways in which humans imagine, construct, and use tools may extend the cyborg concept through the human evolutionary lineage. Amber Case generally tells people that the actual number of self-described cyborg anthropologists is "about seven". The Cyborg Anthropology Wiki, overseen by Case, aims to make the discipline as accessible as possible, even to people who do not have a background in anthropology.
Methodology
Cyborg anthropology uses traditional methods of anthropological research like ethnography and participant observation, accompanied by statistics, historical research, and interviews. By nature it is a multidisciplinary study; cyborg anthropology can include aspects of science and technology studies, cybernetics, feminist theory, and more. It primarily focuses on how people use discourse about science and technology in order to make these meaningful in their lives.
'Cyborg' origins and meaning
The word cyborg was originally coined in a 1960 paper about space exploration; the term is short for cybernetic organism. A cyborg is traditionally defined as a system with both organic and inorganic parts. In the narrowest sense of the word, cyborgs are people with mechanical body parts. These cyborg parts may be restorative technologies that help a body function where the organic system has failed, like pacemakers, insulin pumps, and bionic limbs, or enhancement technologies that improve the human body beyond its natural state. In the broadest sense, any human interaction with technology could qualify as cyborg. Most cyborg anthropologists lean towards the latter view of the cyborg; some, like Amber Case, even claim that humans are already cyborgs because people's daily lives and sense of self are so intertwined with technology. Haraway's "Cyborg Manifesto" suggests that technologies like virtual avatars, artificial insemination, sexual reassignment surgery, and artificial intelligence might make dichotomies of sex and gender irrelevant, even nonexistent. She goes on to say that other human distinctions (like life and death, human and machine, virtual and real) may similarly disappear in the wake of the cyborg.
Digital vs. cyborg anthropology
Digital anthropology is concerned with how digital advances are changing how people live their lives, as well as consequent changes to how anthropologists do ethnography and to a lesser extent how digital technology can be used to represent and undertake research. Cyborg anthropology also looks at disciplines like genetics and nanotechnology, which are not strictly digital. Cybernetics/informatics covers the range of cyborg advances better than the label digital.
Key concepts and research
Actor–network theory
Questions of subjectivity, agency, actors, and structures have always been of interest in social and cultural anthropology. In cyborg anthropology the question of what type of cybernetic system constitutes an actor/subject becomes all the more important. Is it the actual technology that acts on humanity (the Internet), the general techno-culture (Silicon Valley), government sanctions (net neutrality), specific innovative humans (Steve Jobs), or some combination of these elements? Some academics believe that only humans have agency and technology is an object humans act upon, while others argue that humans have no agency and culture is entirely shaped by material and technological conditions. Actor-network theory (ANT), proposed by Bruno Latour, is a theory that helps scholars understand how these elements work together to shape techno-cultural phenomena. Latour suggests that actors and the subjects they act on are parts of larger networks of mutual interaction and feedback loops. Humans and technology both have the agency to shape one another. ANT best describes the way cyborg anthropology approaches the relationship between humans and technology. Similarly, Wells explains how new forms of networked political expression such as the Pirate Party movement and free and open-source software philosophies are generated from human reliance on information technologies in all walks of life.
Artificial intelligence
Researchers like Kathleen Richardson have conducted ethnographic research on the humans who build and interact with artificial intelligence. Recently, Stuart Geiger, a PhD student at the University of California, Berkeley, suggested that robots may be capable of creating a culture of their own, which researchers could study with ethnographic methods. Anthropologists have reacted to Geiger with skepticism because, according to Geiger, they believe that culture is specific to living creatures and ethnography limited to human subjects.
Posthumanism
The most basic definition of anthropology is the study of humans. However, cyborgs, by definition, describe something that is not entirely an organic human. Moreover, limiting a discipline to the study of humans may become difficult as technology increasingly allows humans to transcend the normal conditions of organic life. The prospect of a posthuman condition calls into question the nature and necessity of a field focused on studying humans.
Sociologist of technology Zeynep Tufekci argues that any symbolic expression of ourselves, even the most ancient cave painting, can be considered "posthuman" because it exists outside of our physical bodies. To her, this means that the human and the "posthuman" have always existed alongside one another, and anthropology has always concerned itself with the posthuman as well as the human. Neil L. Whitehead and Michael Wesch point out that the concern that posthumanism will decenter the human in anthropology ignores the discipline's long history of engaging with the unhuman (like spirits and demons that humans believe in) and the culturally "subhuman" (like marginalized groups within a society). By contrast, Wells, taking a deep-time perspective, points out the ways that tool-centric and technologically communicated values and ethics typify the human condition, and that cross-cultural and ethnological trends in conceptions of lifeways, power dynamics, and definitions of humanity often incorporate information-rich technological symbology.
Notable figures
Manfred E. Clynes
Nathan S. Kline
Amber Case
Sherry Turkle
Sharon Traweek
Lucien Castaing-Taylor
Allucquere Rosanne Stone
See also
Transhumanism
Robotics
Posthumanization
Digital humanities
References
Further reading
Defining aging in cyborgs
Case, Amber. "The Cell Phone and its Technosocial Sites of Engagement." Thesis for Lewis and Clark College. 2007.
Anthropology
Cybernetics
Actor-network theory | Cyborg anthropology | [
"Technology",
"Biology"
] | 1,602 | [
"Actor-network theory",
"Science and technology studies",
"Cyborgs"
] |
33,265,912 | https://en.wikipedia.org/wiki/Iodine%20monobromide | Iodine monobromide is an interhalogen compound with the formula IBr. It is a dark red solid that melts near room temperature. Like iodine monochloride, IBr is used in some types of iodometry. It serves as a source of I+. Its Lewis acid properties are compared with those of ICl and I2 in the ECW model. It can form CT adducts with Lewis donors.
Iodine monobromide is formed when iodine and bromine are combined in a chemical reaction:
I2 + Br2 → 2 IBr
References
Iodine compounds
Interhalogen compounds
Diatomic molecules
Bromides | Iodine monobromide | [
"Physics",
"Chemistry"
] | 137 | [
"Inorganic compounds",
"Molecules",
"Interhalogen compounds",
"Oxidizing agents",
"Salts",
"Inorganic compound stubs",
"Bromides",
"Diatomic molecules",
"Matter"
] |
33,268,859 | https://en.wikipedia.org/wiki/Superfluid%20vacuum%20theory | Superfluid vacuum theory (SVT), sometimes known as the BEC vacuum theory, is an approach in theoretical physics and quantum mechanics where the fundamental physical vacuum (non-removable background) is considered as a superfluid or as a Bose–Einstein condensate (BEC).
The microscopic structure of this physical vacuum is currently unknown and is a subject of intensive studies in SVT. An ultimate goal of this research is to develop scientific models that unify quantum mechanics (which describes three of the four known fundamental interactions) with gravity. This would make SVT a candidate theory of quantum gravity, describing all known interactions in the Universe, at both microscopic and astronomic scales, as different manifestations of the same entity, the superfluid vacuum.
History
The concept of a luminiferous aether as a medium sustaining electromagnetic waves was discarded after the advent of the special theory of relativity, as the presence of the concept alongside special relativity results in several contradictions; in particular, aether having a definite velocity at each spacetime point will exhibit a preferred direction. This conflicts with the relativistic requirement that all directions within a light cone are equivalent.
However, as early as 1951 P.A.M. Dirac published two papers where he pointed out that we should take into account quantum fluctuations in the flow of the aether.
His arguments involve the application of the uncertainty principle to the velocity of aether at any spacetime point, implying that the velocity will not be a well-defined quantity. In fact, it will be distributed over various possible values. At best, one could represent the aether by a wave function representing the perfect vacuum state for which all aether velocities are equally probable.
Inspired by Dirac's ideas, K. P. Sinha, C. Sivaram and E. C. G. Sudarshan published in 1975 a series of papers that suggested a new model for the aether according to which it is a superfluid state of fermion and anti-fermion pairs, describable by a macroscopic wave function.
They noted that particle-like small fluctuations of superfluid background obey the Lorentz symmetry, even if the superfluid itself is non-relativistic.
Nevertheless, they decided to treat the superfluid as the relativistic matter – by putting it into the stress–energy tensor of the Einstein field equations.
This did not allow them to describe relativistic gravity as a small fluctuation of the superfluid vacuum, as subsequent authors have noted.
Since then, several theories have been proposed within the SVT framework. They differ in how the structure and properties of the background superfluid must look.
In absence of observational data which would rule out some of them, these theories are being pursued independently.
Relation to other concepts and theories
Lorentz and Galilean symmetries
According to the approach, the background superfluid is assumed to be essentially non-relativistic whereas the Lorentz symmetry is not an exact symmetry of Nature but rather the approximate description valid only for small fluctuations.
An observer who resides inside such vacuum and is capable of creating or measuring the small fluctuations would observe them as relativistic objects – unless their energy and momentum are sufficiently high to make the Lorentz-breaking corrections detectable.
If the energies and momenta are below the excitation threshold then the superfluid background behaves like the ideal fluid, therefore, the Michelson–Morley-type experiments would observe no drag force from such aether.
Further, in the theory of relativity the Galilean symmetry (pertinent to our macroscopic non-relativistic world) arises as the approximate one – when particles' velocities are small compared to speed of light in vacuum.
In SVT one does not need to go through Lorentz symmetry to obtain the Galilean one – the dispersion relations of most non-relativistic superfluids are known to obey the non-relativistic behavior at large momenta.
To summarize, the fluctuations of the vacuum superfluid behave like relativistic objects at "small" momenta (a.k.a. the "phononic limit") and like non-relativistic ones at large momenta.
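A standard illustration of such a crossover, offered here as an analogy from laboratory superfluids rather than as the dispersion law of any specific SVT model, is the Bogoliubov spectrum of a weakly interacting Bose–Einstein condensate:

$$E(p) = \sqrt{c_s^2\,p^2 + \left(\frac{p^2}{2m}\right)^{2}},$$

where $c_s$ is the sound velocity and $m$ the constituent mass. At small momenta $E(p) \approx c_s\,p$, the "relativistic" phonon regime with $c_s$ playing the role of the invariant speed, while at large momenta the spectrum approaches the non-relativistic free-particle form $E(p) \approx p^2/2m$.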
The yet unknown nontrivial physics is believed to be located somewhere between these two regimes.
Relativistic quantum field theory
In the relativistic quantum field theory the physical vacuum is also assumed to be some sort of non-trivial medium to which one can associate certain energy.
This is because the concept of absolutely empty space (or "mathematical vacuum") contradicts the postulates of quantum mechanics.
According to QFT, even in absence of real particles the background is always filled by pairs of creating and annihilating virtual particles.
However, a direct attempt to describe such medium leads to the so-called ultraviolet divergences.
In some QFT models, such as quantum electrodynamics, these problems can be "solved" using the renormalization technique, namely, replacing the diverging physical values by their experimentally measured values.
In other theories, such as the quantum general relativity, this trick does not work, and reliable perturbation theory cannot be constructed.
According to SVT, this is because in the high-energy ("ultraviolet") regime the Lorentz symmetry starts to fail, so theories that depend on it cannot be regarded as valid for all scales of energies and momenta.
Correspondingly, while the Lorentz-symmetric quantum field models are obviously a good approximation below the vacuum-energy threshold, in its close vicinity the relativistic description becomes more and more "effective" and less and less natural since one will need to adjust the expressions for the covariant field-theoretical actions by hand.
Curved spacetime
According to general relativity, gravitational interaction is described in terms of spacetime curvature using the mathematical formalism of differential geometry.
This was supported by numerous experiments and observations in the regime of low energies. However, the attempts to quantize general relativity led to various severe problems, therefore, the microscopic structure of gravity is still ill-defined.
There may be a fundamental reason for this: the degrees of freedom of general relativity may be based on entities that are only approximate and effective. The question of whether general relativity is an effective theory has been raised for a long time.
According to SVT, the curved spacetime arises as the small-amplitude collective excitation mode of the non-relativistic background condensate.
The mathematical description of this is similar to the fluid–gravity analogy, which is also used in analog gravity models.
Thus, relativistic gravity is essentially a long-wavelength theory of the collective modes whose amplitude is small compared to the background one.
Outside this requirement the curved-space description of gravity in terms of the Riemannian geometry becomes incomplete or ill-defined.
Cosmological constant
The notion of the cosmological constant makes sense in a relativistic theory only, therefore, within the SVT framework this constant can refer at most to the energy of small fluctuations of the vacuum above a background value, but not to the energy of the vacuum itself. Thus, in SVT this constant does not have any fundamental physical meaning, and related problems such as the vacuum catastrophe, simply do not occur in the first place.
Gravitational waves and gravitons
According to general relativity, the conventional gravitational wave is:
(1) a small fluctuation of curved spacetime which
(2) has been separated from its source and propagates independently.
Superfluid vacuum theory brings into question the possibility that a relativistic object possessing both of these properties exists in nature.
Indeed, according to the approach, the curved spacetime itself is the small collective excitation of the superfluid background, therefore, the property (1) means that the graviton would be in fact the "small fluctuation of the small fluctuation", which does not look like a physically robust concept (as if somebody tried to introduce small fluctuations inside a phonon, for instance).
As a result, it may be not just a coincidence that in general relativity the gravitational field alone has no well-defined stress–energy tensor, only the pseudotensor one.
Therefore, property (2) cannot be completely justified in a theory with exact Lorentz symmetry, which general relativity is.
However, SVT does not a priori forbid the existence of non-localized wave-like excitations of the superfluid background, which might be responsible for the astrophysical phenomena currently attributed to gravitational waves, such as the Hulse–Taylor binary. Such excitations, though, cannot be correctly described within the framework of a fully relativistic theory.
Mass generation and Higgs boson
The Higgs boson is the spin-0 particle that has been introduced in electroweak theory to give mass to the weak bosons. The origin of mass of the Higgs boson itself is not explained by electroweak theory. Instead, this mass is introduced as a free parameter by means of the Higgs potential, which thus makes it yet another free parameter of the Standard Model. Within the framework of the Standard Model (or its extensions) the theoretical estimates of this parameter's value are possible only indirectly and results differ from each other significantly. Thus, the usage of the Higgs boson (or any other elementary particle with predefined mass) alone is not the most fundamental solution of the mass generation problem but only its reformulation ad infinitum.
Another known issue of the Glashow–Weinberg–Salam model is the wrong sign of the mass term in the (unbroken) Higgs sector for energies above the symmetry-breaking scale.
While SVT does not explicitly forbid the existence of the electroweak Higgs particle, it has its own idea of the fundamental mass generation mechanism – elementary particles acquire mass due to the interaction with the vacuum condensate, similarly to the gap generation mechanism in superconductors or superfluids.
Although this idea is not entirely new (one could recall, for instance, the relativistic Coleman–Weinberg approach), SVT gives the symmetry-breaking relativistic scalar field the meaning of describing small fluctuations of the background superfluid, which can be interpreted as an elementary particle only under certain conditions. In general, two scenarios are allowed:
Higgs boson exists: in this case SVT provides the mass generation mechanism which underlies the electroweak one and explains the origin of mass of the Higgs boson itself;
Higgs boson does not exist: then the weak bosons acquire mass by directly interacting with the vacuum condensate.
Thus, the Higgs boson, even if it exists, would be a by-product of the fundamental mass generation phenomenon rather than its cause.
Also, some versions of SVT favor a wave equation based on the logarithmic potential rather than on the quartic one. The former potential has not only the Mexican-hat shape, necessary for the spontaneous symmetry breaking, but also some other features which make it more suitable for the vacuum's description.
Logarithmic BEC vacuum theory
In this model the physical vacuum is conjectured to be strongly-correlated quantum Bose liquid whose ground-state wavefunction is described by the logarithmic Schrödinger equation. It was shown that the relativistic gravitational interaction arises as the small-amplitude collective excitation mode whereas relativistic elementary particles can be described by the particle-like modes in the limit of low energies and momenta.
The essential difference of this theory from others is that in the logarithmic superfluid the maximal velocity of fluctuations is constant in the leading (classical) order. This allows one to fully recover the relativity postulates in the "phononic" (linearized) limit.
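For reference, the logarithmic Schrödinger equation has the general form (a sketch of the commonly quoted form; coefficients and notation vary between papers):

$$i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\,\nabla^2\psi + V(\mathbf{x})\,\psi - b\,\psi\,\ln\!\frac{|\psi|^2}{\rho_0},$$

where $b$ sets the strength of the logarithmic nonlinearity and $\rho_0$ is a reference density. The logarithmic term replaces the quartic (Gross–Pitaevskii-type) nonlinearity mentioned above.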
The proposed theory has many observational consequences.
They are based on the fact that at high energies and momenta the behavior of the particle-like modes eventually becomes distinct from the relativistic one – they can reach the speed of light limit at finite energy.
Among other predicted effects are superluminal propagation and vacuum Cherenkov radiation.
The theory advocates a mass generation mechanism which is supposed to replace or alter the electroweak Higgs one.
It was shown that masses of elementary particles can arise as a result of interaction with the superfluid vacuum, similarly to the gap generation mechanism in superconductors. For instance, a photon propagating in the average interstellar vacuum acquires a tiny mass, estimated to be about 10⁻³⁵ electronvolts.
One can also derive an effective potential for the Higgs sector which is different from the one used in the Glashow–Weinberg–Salam model, yet it yields the mass generation and it is free of the imaginary-mass problem appearing in the conventional Higgs potential.
See also
Analog gravity
Acoustic metric
Casimir vacuum
Dilatonic quantum gravity
Hawking radiation
Induced gravity
Logarithmic Schrödinger equation
Hořava–Lifshitz gravity
Sonic black hole
Vacuum energy
Hydrodynamic quantum analogs
Fluid solution
Vacuum solution (general relativity)
Notes
References
Physics beyond the Standard Model
Superfluidity
Theories of gravity | Superfluid vacuum theory | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,739 | [
"Physical phenomena",
"Phase transitions",
"Theoretical physics",
"Unsolved problems in physics",
"Phases of matter",
"Superfluidity",
"Exotic matter",
"Condensed matter physics",
"Particle physics",
"Theories of gravity",
"Physics beyond the Standard Model",
"Matter",
"Fluid dynamics"
] |
43,021,100 | https://en.wikipedia.org/wiki/Electrostatic%E2%80%93pneumatic%20activation | Electrostatic–pneumatic activation is an actuation method for shaping thin membranes for microelectromechanical and microoptoelectromechanical systems. This method benefits from operation at high speed and low power consumption. It can also cause large deflection on thin membranes. Electrostatic-pneumatic MEMS devices usually consist of two membranes with a sealed cavity in between. One membrane-calling actuator deflects into the cavity by electrostatic pressure to compress air and increase air pressure. Elevated pressure pushes the other membrane and causes a dome shape. With direct electrostatic actuation on the membrane, a concave shape is achieved.
This method is used in MEMS deformable mirrors to create convex and concave mirrors. Electrostatic–pneumatic actuation can double the maximum displacement of a thin membrane compared to electrostatic actuation alone.
Moreover, mechanical advantage is possible with electrostatic–pneumatic actuation. Since the cavity is filled with air, however, the mechanical amplification is lower than in hydraulic machinery, which uses a non-compressible fluid.
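A rough sketch of the pneumatic coupling, assuming isothermal compression of an ideal gas (the cavity geometry and numerical values below are hypothetical, not taken from any particular device):

```python
# Sketch: pressure rise in a sealed cavity when the actuator membrane
# deflects inward, assuming isothermal compression (Boyle's law: p*V = const).

def cavity_pressure(p0, v0, dv):
    """Pressure after the cavity volume shrinks by dv (isothermal)."""
    return p0 * v0 / (v0 - dv)

p0 = 101_325.0  # initial cavity pressure, Pa (ambient, assumed)
v0 = 1.0e-12    # sealed cavity volume, m^3 (hypothetical ~1 nL cavity)
dv = 0.05e-12   # volume swept by the actuator deflection, m^3 (5% of v0)

p1 = cavity_pressure(p0, v0, dv)
print(f"pressure rise: {p1 - p0:.0f} Pa")  # ~5,300 Pa for a 5% volume change
```

Because the trapped gas is compressible, the transmitted pressure grows less than proportionally with actuator displacement, which is one way to see why the amplification falls short of an incompressible hydraulic system.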
References
Mirrors
Microtechnology | Electrostatic–pneumatic activation | [
"Materials_science",
"Engineering"
] | 232 | [
"Materials science",
"Microtechnology"
] |
43,021,118 | https://en.wikipedia.org/wiki/Gordonia%20%28bacterium%29 | Gordonia is a genus of gram-positive, aerobic, catalase-positive bacterium in the Actinomycetota, closely related to the Rhodococcus, Mycobacterium, Skermania, and Nocardia genera. Gordonia bacteria are aerobic, non-motile, and non-sporulating. Gordonia is from the same lineage that includes Mycobacterium tuberculosis.
The genus was discovered by Tsukamura in 1971 and named after American bacteriologist Ruth Gordon. Many species are often found in the soil, while other species have been isolated from aquatic environments. Some species have been associated with problems like sludge bulking and foaming in wastewater treatment plants. Gordonia species are rarely known to cause infections in humans.
Some pathogenic instances of Gordonia have been reported to cause skin and soft-tissue infections, including bacteremia and cutaneous infections. Though infections are generally treated with antibiotics, surgical procedures are sometimes used to contain them. Some investigations have found 28 °C to be the ideal temperature for the growth of Gordonia bacteria. Gordonia species often have a high G-C base-pair content in their DNA, ranging from 63% to 69%; G-C content is generally positively correlated with the melting temperature of DNA.
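As a rough illustration of these quantities (a minimal sketch: the example sequence is hypothetical, and the Wallace rule used for the melting-temperature estimate is a crude approximation valid only for short oligonucleotides, not whole genomes):

```python
# Sketch: G-C content of a DNA sequence and a rough melting-temperature
# estimate via the Wallace rule, Tm ≈ 2(A+T) + 4(G+C) in degrees Celsius.

def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Wallace-rule melting temperature (short oligos only), in Celsius."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

primer = "ATGGCGCACGCTGGGAGAAC"  # hypothetical 20-mer
print(f"GC content: {gc_content(primer):.0%}")  # 65%
print(f"Tm estimate: {wallace_tm(primer)} °C")  # GC-rich -> higher Tm
```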
Some species of Gordonia, such as Gordonia rubripertincta, produce colonies that have a bright orange or orange-red color.
Some strains of Gordonia have recently garnered interest in the biotechnology industry due to their ability to degrade environmental pollutants.
Cases of pathogenicity
Gordonia bronchialis has occasionally shown pathogenicity, infecting sternal wounds from surgery. However, since G. bronchialis infections can present with minimal and mild symptoms, few reports of G. bronchialis infections have been documented.
Gordonia can infect immunocompetent and immunocompromised individuals.
Environmental applications
Gordonia species are able to degrade various environmental pollutants, toxins, and other compounds that cannot ordinarily be biodegraded. Two common materials, natural and synthetic isoprene rubber (cis-1,4-polyisoprene), can be biodegraded and used as carbon and energy sources by Gordonia. Gordonia are commonly detected in activated-sludge wastewater treatment plants, where they, along with other mycolic-acid-containing actinomycetes, are well-known contributors to sludge-foaming issues that impede biomass settling and process performance.
Gordonia as a bacteriophage host
Gordonia species are also being studied as hosts to bacteriophages, or bacteria-parasitizing viruses. Because of their relatedness to Mycobacterium, Gordonia were used as hosts in the SEA-PHAGES project, greatly contributing to the number of isolated Gordonia phages. According to the Actinobacteriophage Database PhagesDb.org, more than 2,806 Gordonia-infecting bacteriophages had been identified as of April 26, 2023. Research with bacteriophages parasitizing Gordonia and other genera can be used to develop bacteriophage therapies for drug-resistant human, animal, and plant bacterial infections; contamination prevention in food-processing facilities; targeted gene delivery; and more.
Species
Gordonia comprises the following species:
G. aichiensis corrig. (Tsukamura 1983) Klatte et al. 1994
G. alkaliphila Cha and Cha 2013
G. alkanivorans Kummer et al. 1999
G. amarae corrig. (Lechevalier and Lechevalier 1974) Klatte et al. 1994
G. amicalis Kim et al. 2000
G. araii Kageyama et al. 2006
G. asplenii Suriyachadkun et al. 2021
"G. australis" Schneider et al. 2008
G. bronchialis corrig. (Tsukamura 1971) Stackebrandt et al. 1989
G. caeni Srinivasan et al. 2012
G. cholesterolivorans Drzyzga et al. 2009
G. crocea Tamura et al. 2020
G. defluvii Soddell et al. 2006
G. desulfuricans Kim et al. 1999
G. didemni de Menezes et al. 2016
G. effusa Kageyama et al. 2006
G. hankookensis Park et al. 2009
G. hirsuta corrig. Klatte et al. 1996
G. hongkongensis Tsang et al. 2016
G. humi Kämpfer et al. 2011
G. hydrophobica corrig. Bendinger et al. 1995
G. insulae Kim et al. 2020
G. iterans Kang et al. 2014
"G. jacobaea" De Miguel et al. 2000
G. jinghuaiqii Zhang et al. 2021
G. jinhuaensis Li et al. 2014
G. lacunae Le Roes et al. 2009
G. malaquae Yassin et al. 2007
G. mangrovi Xie et al. 2020
G. namibiensis Brandão et al. 2002
G. neofelifaecis Liu et al. 2011
G. oryzae Muangham et al. 2019
G. otitidis Iida et al. 2005
G. paraffinivorans Xue et al. 2003
G. phosphorivorans Kämpfer et al. 2013
G. phthalatica Jin et al. 2017
G. polyisoprenivorans Linos et al. 1999
"G. pseudoamarae" Batinovic et al. 2021
G. rhizosphera Takeuchi and Hatano 1998
G. rubripertincta corrig. (Hefferan 1904) Stackebrandt et al. 1989
G. sediminis Sangkanu et al. 2019
G. shandongensis Luo et al. 2007
G. sihwensis Kim et al. 2003
G. sinesedis Maldonado et al. 2003
G. soli Shen et al. 2006
G. spumicola Tamura et al. 2020
G. sputi corrig. (Tsukamura and Yano 1985) Stackebrandt et al. 1989
G. terrae corrig. (Tsukamura 1971) Stackebrandt et al. 1989
"G. terrea" Stobdan et al. 2008
G. westfalica Linos et al. 2002
G. zhaorongruii Zhang et al. 2021
See also
Unicellular organism
Gram-positive bacteria
Gordonia sp. nov. Q8
References
External links
Gordonia at BacDive - the Bacterial Diversity Metadatabase
Mycobacteriales
Bacteria genera
Soil biology | Gordonia (bacterium) | [
"Biology"
] | 1,427 | [
"Soil biology"
] |
43,023,680 | https://en.wikipedia.org/wiki/3%2C4-Methylenedioxypropiophenone | 3,4-Methylenedioxypropiophenone, also known as 3,4-(Methylenedioxy)phenyl-1-propanone (MDP1P), is a phenylpropanoid found in some plants of the genus Piper and is an isomer of 3,4-methylenedioxyphenyl-2-propanone (MDP2P).
Natural occurrence
Studies of various chemotypes of Piper marginatum have found this compound to be either the dominant constituent of the plant's essential oil or absent from it altogether. Of 22 samples collected from South America, specimens from the following regions had the greatest amount of the chemical by dry leaf mass: Manaus (0.35%), Melgaço (0.348%), Belterra (0.33%), Monte Alegre (0.241 to 0.266%), and Alta Floresta (0.123%).
Uses
MDP1P can be used as a precursor in the synthesis of methylone and various other substituted methylenedioxyphenethylamine derivatives. It can be prepared via a Grignard reaction between ethylmagnesium bromide and piperonylonitrile.
Legal status
United States
MDP1P is not a scheduled drug at the federal level in the United States nor is it on the DEA list of chemicals.
Florida
"3,4-methylenedioxy-propiophenone" along with "2-Bromo-3,4-Methylenedioxypropiophenone" and "3,4-methylenedioxy-propiophenone-2-oxime" are Schedule I controlled substances in the state of Florida making them illegal to buy, sell, or possess in Florida.
References
Benzodioxoles
Phenylpropanoids
Aromatic ketones | 3,4-Methylenedioxypropiophenone | [
"Chemistry"
] | 400 | [
"Biomolecules by chemical classification",
"Phenylpropanoids"
] |
35,925,580 | https://en.wikipedia.org/wiki/Thiele%20modulus | The Thiele modulus was developed by Ernest Thiele in his paper 'Relation between catalytic activity and size of particle' in 1939. Thiele reasoned that a large enough particle has a reaction rate so rapid that diffusion forces can only carry the product away from the surface of the catalyst particle. Therefore, only the surface of the catalyst would experience any reaction.
The Thiele Modulus was developed to describe the relationship between diffusion and reaction rates in porous catalyst pellets with no mass transfer limitations. This value is generally used to measure the effectiveness factor of pellets.
The Thiele modulus is represented by different symbols in different texts, but is defined in Hill as $h_T$.
Overview
The derivation of the Thiele modulus (from Hill) begins with a material balance on the catalyst pore. For a first-order irreversible reaction in a straight cylindrical pore of radius $r$ and length $L$ at steady state, diffusion into a slice of the pore balances diffusion out plus consumption by reaction on the pore wall:

$$\pi r^2 D \left(\frac{dC}{dx}\right)_{x+\Delta x} - \pi r^2 D \left(\frac{dC}{dx}\right)_{x} = 2\pi r\,\Delta x\,k_1 C,$$

where $D$ is a diffusivity constant, and $k_1$ is the rate constant.
Then, turning the equation into a differential by dividing by $\Delta x$ and taking the limit as $\Delta x$ approaches 0,

$$\frac{d^2 C}{dx^2} = \left(\frac{2k_1}{rD}\right) C.$$
This differential equation is solved with the following boundary conditions:

$$C = C_0 \quad \text{at} \quad x = 0$$

and

$$\frac{dC}{dx} = 0 \quad \text{at} \quad x = L,$$

where the first boundary condition indicates a constant external concentration on one end of the pore and the second boundary condition indicates that there is no flow out of the other end of the pore.
Plugging in these boundary conditions, we have

$$\frac{C}{C_0} = \frac{\cosh\left[h_T\,(1 - x/L)\right]}{\cosh h_T}.$$

The coefficient of $C$ on the right side of the differential equation, multiplied by $L^2$, represents the square of the Thiele modulus, which we now see rises naturally out of the material balance. The Thiele modulus for a first-order reaction is therefore:

$$h_T = L\sqrt{\frac{2k_1}{rD}}.$$
From this relation it is evident that with large values of $h_T$, the rate term dominates and the reaction is fast, while slow diffusion limits the overall rate. Smaller values of the Thiele modulus represent slow reactions with fast diffusion.
Other forms
Other order reactions may be solved in a similar manner to the above. The results are listed below for irreversible reactions in straight cylindrical pores.

Second-order reaction

$$h_2 = L\sqrt{\frac{2 k_2 C_0}{r D}}$$

Zero-order reaction

$$h_0 = L\sqrt{\frac{2 k_0}{r D\, C_0}}$$
Effectiveness Factor
The effectiveness factor η relates the diffusive reaction rate with the rate of reaction in the bulk stream.
For a first-order reaction in a slab geometry, this is:

$$\eta = \frac{\tanh h_T}{h_T}.$$
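A minimal numerical sketch of these relations (the function names and example values are illustrative assumptions, not taken from Hill):

```python
import math

def thiele_first_order(L, k1, r, D):
    """First-order Thiele modulus for a straight cylindrical pore,
    h_T = L * sqrt(2*k1/(r*D))."""
    return L * math.sqrt(2.0 * k1 / (r * D))

def effectiveness_slab(h_T):
    """Effectiveness factor for a first-order reaction in slab geometry,
    eta = tanh(h_T)/h_T."""
    return math.tanh(h_T) / h_T

# Hypothetical values: pore length L (m), surface rate constant k1 (m/s),
# pore radius r (m), diffusivity D (m^2/s).
L, k1, r, D = 1e-4, 1e-9, 1e-8, 1e-9

h_T = thiele_first_order(L, k1, r, D)
print(f"h_T = {h_T:.2f}, eta = {effectiveness_slab(h_T):.3f}")
# Large h_T -> eta ~ 1/h_T (diffusion-limited); small h_T -> eta -> 1.
```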
References
Catalysis | Thiele modulus | [
"Chemistry"
] | 452 | [
"Catalysis",
"Chemical kinetics"
] |
35,925,743 | https://en.wikipedia.org/wiki/Maximum%20time%20interval%20error | Maximum time interval error (MTIE) is the maximum error committed by a clock under test in measuring a time interval for a given period of time. It is used to specify clock stability requirements in telecommunications standards. MTIE measurements can be used to detect clock instability that can cause data loss on a communications channel.
Measurement
A given dataset (clock waveform) is first compared to some reference, and the phase error (usually measured in nanoseconds) is calculated for an observation interval. This phase error is known as time interval error (TIE). MTIE is a function of the observation interval: a window of that length is moved across the dataset, and each time the peak-to-peak distance between the largest and smallest TIE in that window is noted. This distance varies as the window moves, being maximal for some window position. That maximal distance is the MTIE for the given observation interval.
Plotting MTIE against the duration of the observation interval gives a chart useful for characterizing the stability of the clock.
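A minimal sketch of this computation (the brute-force sliding window shown here scales quadratically with the record length, so production implementations use faster algorithms; the function name and sample TIE record are illustrative assumptions):

```python
def mtie(tie, window):
    """Maximum time interval error for one observation interval.

    tie    : TIE samples (e.g., in ns) taken at a fixed sampling rate.
    window : observation interval, expressed as a number of samples.
    """
    worst = 0.0
    for start in range(len(tie) - window + 1):
        chunk = tie[start:start + window]
        # Peak-to-peak TIE within this window position.
        worst = max(worst, max(chunk) - min(chunk))
    return worst

# Hypothetical TIE record (ns). MTIE never decreases as the window grows.
samples = [0.0, 1.2, 0.8, -0.5, 2.0, 1.1, -1.4, 0.3]
for w in (2, 4, 8):
    print(f"window={w} samples: MTIE={mtie(samples, w):.1f} ns")
```

In telecommunications standards, the resulting curve is compared against an MTIE mask specified for the relevant clock class.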
See also
Allan variance
Clock drift
Instantaneous phase
Jitter
Phase noise
Plesiochronous digital hierarchy
Time deviation
References
Measurement | Maximum time interval error | [
"Physics",
"Mathematics"
] | 229 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
34,790,475 | https://en.wikipedia.org/wiki/Phragmen%E2%80%93Brouwer%20theorem | In topology, the Phragmén–Brouwer theorem, introduced by Lars Edvard Phragmén and Luitzen Egbertus Jan Brouwer, states that if X is a normal connected locally connected topological space, then the following two properties are equivalent:
If A and B are disjoint closed subsets whose union separates X, then either A or B separates X.
X is unicoherent, meaning that if X is the union of two closed connected subsets, then their intersection is connected or empty.
The theorem remains true with the weaker condition that A and B be separated.
References
García-Maynez, A. and Illanes, A. ‘A survey of multicoherence’, An. Inst. Autonoma Mexico 29 (1989) 17–67.
Wilder, R. L. Topology of manifolds, AMS Colloquium Publications, Volume 32. American Mathematical Society, New York (1949).
Theorems in topology
Trees (topology) | Phragmen–Brouwer theorem | [
"Mathematics"
] | 209 | [
"Theorems in topology",
"Topology",
"Mathematical problems",
"Mathematical theorems",
"Trees (topology)"
] |
34,794,974 | https://en.wikipedia.org/wiki/Picard%E2%80%93Lefschetz%20theory | In mathematics, Picard–Lefschetz theory studies the topology of a complex manifold by looking at the critical points of a holomorphic function on the manifold. It was introduced by Émile Picard for complex surfaces in his book , and extended to higher dimensions by . It is a complex analog of Morse theory that studies the topology of a real manifold by looking at the critical points of a real function. extended Picard–Lefschetz theory to varieties over more general fields, and Deligne used this generalization in his proof of the Weil conjectures.
Picard–Lefschetz formula
The Picard–Lefschetz formula describes the monodromy at a critical point.
Suppose that f is a holomorphic map from a (k+1)-dimensional projective complex manifold to the projective line P1. Also suppose that all critical points are non-degenerate and lie in different fibers, with images x1,...,xn in P1. Pick any other point x in P1. The fundamental group π1(P1 – {x1, ..., xn}, x) is generated by loops wi going around the points xi, and to each point xi there corresponds a vanishing cycle δi in the homology Hk(Yx) of the fiber Yx at x. Note that this is the middle homology since the fibre has complex dimension k, hence real dimension 2k.
The monodromy action of π1(P1 – {x1, ..., xn}, x) on Hk(Yx) is described as follows by the Picard–Lefschetz formula. (The action of monodromy on other homology groups is trivial.) The monodromy action of a generator wi of the fundamental group on δ ∈ Hk(Yx) is given by

$$w_i(\delta) = \delta + (-1)^{(k+1)(k+2)/2}\,\langle \delta, \delta_i \rangle\,\delta_i,$$

where δi is the vanishing cycle of xi and ⟨·,·⟩ denotes the intersection form. This formula appears implicitly for k = 2 (without the explicit coefficients of the vanishing cycles δi) in Picard's book; Lefschetz gave the explicit formula in all dimensions.
Example
Consider the projective family of hyperelliptic curves of genus $g$ defined by

$$y^2 = f_t(x),$$

where $t$ is the parameter and $f_t$ is a polynomial of degree $2g+1$ with distinct roots depending on $t$. The family acquires a double-point degeneration whenever two roots of $f_t$ collide. Since the curve is a connected sum of $g$ tori, the intersection form on $H_1$ of a generic curve is the block-diagonal matrix

$$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}^{\oplus g},$$

so we can easily compute the Picard–Lefschetz formula around a degeneration on $H_1$. Suppose that $a_i, b_i$ are the 1-cycles from the $i$-th torus and $\delta$ is the vanishing cycle of a given degeneration. If the $i$-th torus contains the vanishing cycle, then, since $k = 1$ and the sign $(-1)^{(k+1)(k+2)/2} = -1$, the Picard–Lefschetz formula reads

$$w(a_i) = a_i - \langle a_i, \delta \rangle\,\delta, \qquad w(b_i) = b_i - \langle b_i, \delta \rangle\,\delta.$$

Otherwise it is the identity map.
See also
Lefschetz pencil
References
Algebraic geometry | Picard–Lefschetz theory | [
"Mathematics"
] | 554 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
37,317,764 | https://en.wikipedia.org/wiki/Minicharged%20particle | Minicharged particles (or milli-charged particles) are a proposed type of subatomic particle. They are charged, but with a tiny fraction of the charge of the electron. They weakly interact with matter. Minicharged particles are not part of the Standard Model. One proposal to detect them involved photons tunneling through an opaque barrier in the presence of a perpendicular magnetic field, the rationale being that a pair of oppositely charged minicharged particles are produced that curve in opposite directions, and recombine on the other side of the barrier reproducing the photon again.
Minicharged particles would result in vacuum magnetic dichroism and would cause energy loss in microwave cavities. Photons from the cosmic microwave background would be dissipated by galactic-scale magnetic fields if minicharged particles existed, so this effect could be observable. In fact, the observed dimming of remote supernovae that was used to support dark energy could also be explained by the formation of minicharged particles.
Tests of Coulomb's law can be applied to set bounds on minicharged particles.
References
Hypothetical particles
Dark matter | Minicharged particle | [
"Physics",
"Astronomy"
] | 229 | [
"Dark matter",
"Hypothetical particles",
"Unsolved problems in astronomy",
"Concepts in astronomy",
"Unsolved problems in physics",
"Subatomic particles",
"Particle physics",
"Exotic matter",
"Particle physics stubs",
"Physics beyond the Standard Model",
"Matter"
] |
37,319,629 | https://en.wikipedia.org/wiki/Genetic%20engineering%20techniques | Genetic engineering techniques allow the modification of animal and plant genomes. Techniques have been devised to insert, delete, and modify DNA at multiple levels, ranging from a specific base pair in a specific gene to entire genes. There are a number of steps that are followed before a genetically modified organism (GMO) is created. Genetic engineers must first choose what gene they wish to insert, modify, or delete. The gene must then be isolated and incorporated, along with other genetic elements, into a suitable vector. This vector is then used to insert the gene into the host genome, creating a transgenic or edited organism.
The ability to genetically engineer organisms is built on years of research and discovery on gene function and manipulation. Important advances included the discovery of restriction enzymes, DNA ligases, and the development of polymerase chain reaction and sequencing.
Added genes are often accompanied by promoter and terminator regions as well as a selectable marker gene. The added gene may itself be modified to make it express more efficiently. This vector is then inserted into the host organism's genome. For animals, the gene is typically inserted into embryonic stem cells, while in plants it can be inserted into any tissue that can be cultured into a fully developed plant.
Tests are carried out on the modified organism to ensure stable integration, inheritance and expression. First generation offspring are heterozygous, requiring them to be inbred to create the homozygous pattern necessary for stable inheritance. Homozygosity must be confirmed in second generation specimens.
Early techniques randomly inserted the genes into the genome. Advances allow targeting specific locations, which reduces unintended side effects. Early targeting techniques relied on meganucleases and zinc finger nucleases. Since 2009, more accurate systems that are easier to implement have been developed. Transcription activator-like effector nucleases (TALENs) and the Cas9-guideRNA system (adapted from CRISPR) are the two most common.
History
Many different discoveries and advancements led to the development of genetic engineering. Human-directed genetic manipulation began with the domestication of plants and animals through artificial selection in about 12,000 BC. Various techniques were developed to aid in breeding and selection. Hybridization was one way rapid changes in an organism's genetic makeup could be introduced. Crop hybridization most likely first occurred when humans began growing genetically distinct individuals of related species in close proximity. Some plants were able to be propagated by vegetative cloning.
Genetic inheritance was first discovered by Gregor Mendel in 1865, following experiments crossing peas. In 1928 Frederick Griffith proved the existence of a "transforming principle" involved in inheritance, which was identified as DNA in 1944 by Oswald Avery, Colin MacLeod, and Maclyn McCarty. Frederick Sanger developed a method for sequencing DNA in 1977, greatly increasing the genetic information available to researchers.
After discovering the existence and properties of DNA, tools had to be developed that allowed it to be manipulated. In 1970 Hamilton Smith's lab discovered restriction enzymes, enabling scientists to isolate genes from an organism's genome. DNA ligases, which join broken DNA together, had been discovered earlier, in 1967. By combining the two enzymes it became possible to "cut and paste" DNA sequences to create recombinant DNA. Plasmids, discovered in 1952, became important tools for transferring information between cells and replicating DNA sequences. Polymerase chain reaction (PCR), developed by Kary Mullis in 1983, allowed small sections of DNA to be amplified (replicated) and aided identification and isolation of genetic material.
As well as manipulating DNA, techniques had to be developed for its insertion into an organism's genome. Griffith's experiment had already shown that some bacteria had the ability to naturally take up and express foreign DNA. Artificial competence was induced in Escherichia coli in 1970 by treating the cells with calcium chloride solution (CaCl2). Transformation using electroporation was developed in the late 1980s, increasing the efficiency and bacterial range. In 1907 a bacterium that caused plant tumors, Agrobacterium tumefaciens, had been discovered. In the early 1970s it was found that this bacterium inserted its DNA into plants using a Ti plasmid. By removing the genes in the plasmid that caused the tumor and adding in novel genes, researchers were able to infect plants with A. tumefaciens and let the bacterium insert their chosen DNA into the genomes of the plants.
Choosing target genes
The first step is to identify the target gene or genes to insert into the host organism. This is driven by the goal for the resultant organism. In some cases only one or two genes are affected. For more complex objectives entire biosynthetic pathways involving multiple genes may be involved. Once found, genes and other genetic information from a wide range of organisms can be inserted into bacteria for storage and modification, creating genetically modified bacteria in the process. Bacteria are cheap, easy to grow, clonal, multiply quickly, relatively easy to transform and can be stored at −80 °C almost indefinitely. Once a gene is isolated it can be stored inside the bacteria, providing an unlimited supply for research.
Genetic screens can be carried out to determine potential genes, followed by other tests that identify the best candidates. A simple screen involves randomly mutating DNA with chemicals or radiation and then selecting those that display the desired trait. For organisms where mutation is not practical, scientists instead look for individuals among the population who present the characteristic through naturally occurring mutations. Processes that look at a phenotype and then try to identify the gene responsible are called forward genetics. The gene then needs to be mapped by comparing the inheritance of the phenotype with known genetic markers. Genes that are close together are likely to be inherited together.
Another option is reverse genetics. This approach involves targeting a specific gene with a mutation and then observing what phenotype develops. The mutation can be designed to inactivate the gene or only allow it to become active under certain conditions. Conditional mutations are useful for identifying genes that are normally lethal if non-functional. As genes with similar functions share similar sequences (homologous) it is possible to predict the likely function of a gene by comparing its sequence to that of well-studied genes from model organisms. The development of microarrays, transcriptomes and genome sequencing has made it much easier to find desirable genes.
The bacterium Bacillus thuringiensis was first discovered in 1901 as the causative agent in the death of silkworms. Due to these insecticidal properties, the bacterium was used as a biological insecticide, developed commercially in 1938. The cry proteins were discovered to provide the insecticidal activity in 1956, and by the 1980s, scientists had successfully cloned the gene that encodes this protein and expressed it in plants. The gene that provides resistance to the herbicide glyphosate was found after seven years of searching in bacteria living in the outflow pipe of a Monsanto RoundUp manufacturing facility. In animals, the majority of genes used are growth hormone genes.
Gene manipulation
All genetic engineering processes involve the modification of DNA. Traditionally DNA was isolated from the cells of organisms. Later, genes came to be cloned from a DNA segment after the creation of a DNA library or artificially synthesised. Once isolated, additional genetic elements are added to the gene to allow it to be expressed in the host organism and to aid selection.
Extraction from cells
First the cell must be gently opened, exposing the DNA without causing too much damage to it. The methods used vary depending on the type of cell. Once it is open, the DNA must be separated from the other cellular components. A ruptured cell contains proteins and other cell debris. By mixing with phenol and/or chloroform, followed by centrifuging, the nucleic acids can be separated from this debris into an upper aqueous phase. This aqueous phase can be removed and further purified if necessary by repeating the phenol-chloroform steps. The nucleic acids can then be precipitated from the aqueous solution using ethanol or isopropanol. Any RNA can be removed by adding a ribonuclease that will degrade it. Many companies now sell kits that simplify the process.
Gene isolation
The gene researchers are looking to modify (known as the gene of interest) must be separated from the extracted DNA. If the sequence is not known then a common method is to break the DNA up with a random digestion method. This is usually accomplished using restriction enzymes (enzymes that cut DNA). A partial restriction digest cuts only some of the restriction sites, resulting in overlapping DNA fragments. The DNA fragments are put into individual plasmid vectors and grown inside bacteria. Once inside a bacterium, the plasmid is copied as the cell divides. To determine if a useful gene is present in a particular fragment, the DNA library is screened for the desired phenotype. If the phenotype is detected then it is possible that the bacterium contains the target gene.
If the gene does not have a detectable phenotype or a DNA library does not contain the correct gene, other methods must be used to isolate it. If the position of the gene can be determined using molecular markers then chromosome walking is one way to isolate the correct DNA fragment. If the gene shows close homology to a known gene in another species, then it could be isolated by searching for genes in the library that closely match the known gene.
For known DNA sequences, restriction enzymes that cut the DNA on either side of the gene can be used. Gel electrophoresis then sorts the fragments according to length. Some gels can separate sequences that differ by a single base pair. The DNA can be visualised by staining it with ethidium bromide and photographing under UV light. A marker with fragments of known lengths can be laid alongside the DNA to estimate the size of each band. The DNA band at the correct size should contain the gene, and it can be excised from the gel. Another technique to isolate genes of known sequences involves the polymerase chain reaction (PCR). PCR is a powerful tool that can amplify a given sequence, which can then be isolated through gel electrophoresis. Its effectiveness drops with larger genes and it has the potential to introduce errors into the sequence.
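The fragment logic of a digest is easy to sketch in code. The following is a toy in-silico digest in Python: the input sequence is invented, GAATTC is the EcoRI recognition site, and sticky-end offsets are ignored for simplicity.

```python
import re

def digest(sequence: str, site: str = "GAATTC") -> list[str]:
    """Cut the sequence at the start of every occurrence of the site."""
    cut_positions = [m.start() for m in re.finditer(site, sequence)]
    starts = [0] + cut_positions
    ends = cut_positions + [len(sequence)]
    return [sequence[s:e] for s, e in zip(starts, ends)]

dna = "ATGGAATTCCGTTAGCGAATTCAAAGGC"
fragments = digest(dna)
# Fragment lengths are what a gel separates; the band at the expected
# size would then be excised to recover the gene.
print([len(f) for f in fragments])  # [3, 13, 12]
```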
It is possible to artificially synthesise genes. Some synthetic sequences are available commercially, forgoing many of these early steps.
Modification
The gene to be inserted must be combined with other genetic elements in order for it to work properly. The gene can be modified at this stage for better expression or effectiveness. As well as the gene to be inserted most constructs contain a promoter and terminator region as well as a selectable marker gene. The promoter region initiates transcription of the gene and can be used to control the location and level of gene expression, while the terminator region ends transcription. A selectable marker, which in most cases confers antibiotic resistance to the organism it is expressed in, is used to determine which cells are transformed with the new gene. The constructs are made using recombinant DNA techniques, such as restriction digests, ligations and molecular cloning.
Inserting DNA into the host genome
Once the gene is constructed it must be stably integrated into the genome of the target organism or exist as extrachromosomal DNA. There are a number of techniques available for inserting the gene into the host genome and they vary depending on the type of organism targeted. In multicellular eukaryotes, if the transgene is incorporated into the host's germline cells, the resulting host cell can pass the transgene to its progeny. If the transgene is incorporated into somatic cells, the transgene cannot be inherited.
Transformation
Transformation is the direct alteration of a cell's genetic components by passing the genetic material through the cell membrane. About 1% of bacteria are naturally able to take up foreign DNA, but this ability can be induced in other bacteria. Stressing the bacteria with a heat shock or electroporation can make the cell membrane permeable to DNA that may then be incorporated into the genome or exist as extrachromosomal DNA. Typically the cells are incubated in a solution containing divalent cations (often calcium chloride) under cold conditions, before being exposed to a heat pulse (heat shock). Calcium chloride partially disrupts the cell membrane, which allows the recombinant DNA to enter the host cell. It is suggested that exposing the cells to divalent cations in cold conditions may change or weaken the cell surface structure, making it more permeable to DNA. The heat pulse is thought to create a thermal imbalance across the cell membrane, which forces the DNA to enter the cells through either cell pores or the damaged cell wall. Electroporation is another method of promoting competence. In this method the cells are briefly shocked with an electric field of 10–20 kV/cm, which is thought to create holes in the cell membrane through which the plasmid DNA may enter. After the electric shock, the holes are rapidly closed by the cell's membrane-repair mechanisms. Taken-up DNA can either integrate into the bacterial genome or, more commonly, exist as extrachromosomal DNA.
In plants the DNA is often inserted using Agrobacterium-mediated recombination, taking advantage of the Agrobacterium T-DNA sequence that allows natural insertion of genetic material into plant cells. Plant tissue is cut into small pieces and soaked in a fluid containing suspended Agrobacterium. The bacteria will attach to many of the plant cells exposed by the cuts. The bacterium uses conjugation to transfer a DNA segment called T-DNA from its plasmid into the plant. The transferred DNA is piloted to the plant cell nucleus and integrated into the host plant's genomic DNA. The plasmid T-DNA is integrated semi-randomly into the genome of the host cell.
By modifying the plasmid to express the gene of interest, researchers can insert their chosen gene stably into the plant's genome. The only essential parts of the T-DNA are its two small (25 base pair) border repeats, at least one of which is needed for plant transformation. The genes to be introduced into the plant are cloned into a plant transformation vector that contains the T-DNA region of the plasmid. An alternative method is agroinfiltration.
Another method used to transform plant cells is biolistics, where particles of gold or tungsten are coated with DNA and then shot into young plant cells or plant embryos. Some genetic material enters the cells and transforms them. This method can be used on plants that are not susceptible to Agrobacterium infection and also allows transformation of plant plastids. Plant cells can also be transformed using electroporation, which uses an electric shock to make the cell membrane permeable to plasmid DNA. Due to the damage caused to the cells and DNA, the transformation efficiency of biolistics and electroporation is lower than that of agrobacterial transformation.
Transfection
Transformation has a different meaning in relation to animals, indicating progression to a cancerous state, so the process used to insert foreign DNA into animal cells is usually called transfection. There are many ways to directly introduce DNA into animal cells in vitro. Often these cells are stem cells that are used for gene therapy. Chemical-based methods use natural or synthetic compounds to form particles that facilitate the transfer of genes into cells. These synthetic vectors have the ability to bind DNA and accommodate large genetic transfers. One of the simplest methods involves using calcium phosphate to bind the DNA and then exposing it to cultured cells. The solution, along with the DNA, is encapsulated by the cells. Liposomes and polymers can be used as vectors to deliver DNA into cultured animal cells. Positively charged liposomes bind with DNA, while polymers can be designed to interact with DNA. They form lipoplexes and polyplexes respectively, which are then taken up by the cells. Other techniques include using electroporation and biolistics. In some cases, transfected cells may stably integrate external DNA into their own genome; this process is known as stable transfection.
To create transgenic animals the DNA must be inserted into viable embryos or eggs. This is usually accomplished using microinjection, where DNA is injected through the cell's nuclear envelope directly into the nucleus. Superovulated fertilised eggs are collected at the single cell stage and cultured in vitro. When the pronuclei from the sperm head and egg are visible through the protoplasm the genetic material is injected into one of them. The oocyte is then implanted in the oviduct of a pseudopregnant animal. Another method is Embryonic Stem Cell-Mediated Gene Transfer. The gene is transfected into embryonic stem cells and then they are inserted into mouse blastocysts that are then implanted into foster mothers. The resulting offspring are chimeric, and further mating can produce mice fully transgenic with the gene of interest.
Transduction
Transduction is the process by which foreign DNA is introduced into a cell by a virus or viral vector. Genetically modified viruses can be used as viral vectors to transfer target genes to another organism in gene therapy. First the virulent genes are removed from the virus and the target genes are inserted instead. The sequences that allow the virus to insert the genes into the host organism must be left intact. Popular virus vectors are developed from retroviruses or adenoviruses. Other viruses used as vectors include lentiviruses, pox viruses and herpes viruses. The type of virus used will depend on the cells targeted and whether the DNA is to be altered permanently or temporarily.
Regeneration
As often only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. In plants this is accomplished through the use of tissue culture. Each plant species has different requirements for successful regeneration. If successful, the technique produces an adult plant that contains the transgene in every cell. In animals it is necessary to ensure that the inserted DNA is present in the embryonic stem cells. Offspring can be screened for the gene. All offspring from the first generation are heterozygous for the inserted gene and must be inbred to produce a homozygous specimen. Bacteria consist of a single cell and reproduce clonally so regeneration is not necessary. Selectable markers are used to easily differentiate transformed from untransformed cells.
Cells that have been successfully transformed with the DNA contain the marker gene, while those not transformed will not. By growing the cells in the presence of an antibiotic or chemical that selects or marks the cells expressing that gene, it is possible to separate modified from unmodified cells. Another screening method involves a DNA probe that sticks only to the inserted gene. These markers are usually present in the transgenic organism, although a number of strategies have been developed that can remove the selectable marker from the mature transgenic plant.
Confirmation
Finding that a recombinant organism contains the inserted genes is not usually sufficient to ensure that they will be appropriately expressed in the intended tissues. Further testing using PCR, Southern hybridization, and DNA sequencing is conducted to confirm that an organism contains the new gene. These tests can also confirm the chromosomal location and copy number of the inserted gene. Once confirmed methods that look for and measure the gene products (RNA and protein) are also used to assess gene expression, transcription, RNA processing patterns and expression and localization of protein product(s). These include northern hybridisation, quantitative RT-PCR, Western blot, immunofluorescence, ELISA and phenotypic analysis. When appropriate, the organism's offspring are studied to confirm that the transgene and associated phenotype are stably inherited.
Gene insertion targeting
Traditional methods of genetic engineering generally insert the new genetic material randomly within the host genome. This can impair or alter other genes within the organism. Methods were developed that insert the new genetic material into specific sites within an organism's genome. Early methods that targeted genes at certain sites within a genome relied on homologous recombination (HR). By creating DNA constructs that contain a template matching the targeted genome sequence, it is possible that the HR processes within the cell will insert the construct at the desired location. Using this method on embryonic stem cells led to the development of transgenic mice with targeted genes knocked out. It has also been possible to knock in genes or alter gene expression patterns.
If a vital gene is knocked out it can prove lethal to the organism. In order to study the function of these genes, site-specific recombinases (SSR) were used. The two most common types are the Cre-LoxP and Flp-FRT systems. Cre recombinase is an enzyme that removes DNA by site-specific recombination between binding sequences known as Lox-P sites. The Flp-FRT system operates in a similar way, with the Flp recombinase recognizing FRT sequences. By crossing an organism containing the recombinase sites flanking the gene of interest with an organism that expresses the SSR under control of tissue-specific promoters, it is possible to knock out or switch on genes only in certain cells. This has also been used to remove marker genes from transgenic animals. Further modifications of these systems allowed researchers to induce recombination only under certain conditions, allowing genes to be knocked out or expressed at desired times or stages of development.
Genome editing uses artificially engineered nucleases that create specific double-stranded breaks at desired locations in the genome. The breaks are subject to cellular DNA repair processes that can be exploited for targeted gene knock-out, correction or insertion at high frequencies. If a donor DNA containing the appropriate sequence (homologies) is present, then new genetic material containing the transgene will be integrated at the targeted site with high efficiency by homologous recombination. There are four families of engineered nucleases: meganucleases, ZFNs, transcription activator-like effector nucleases (TALEN), and the CRISPR/Cas system (clustered regularly interspaced short palindromic repeats/CRISPR-associated protein, e.g. CRISPR/Cas9). Among the four types, TALEN and CRISPR/Cas are the two most commonly used. Recent advances have looked at combining multiple systems to exploit the best features of both (e.g. megaTALs, which are a fusion of a TALE DNA binding domain and a meganuclease). Recent research has also focused on developing strategies to create gene knock-outs or corrections without creating double-stranded breaks (base editors).
Meganucleases and Zinc finger nucleases
Meganucleases were first used in 1988 in mammalian cells. Meganucleases are endodeoxyribonucleases that function as restriction enzymes with long recognition sites, making them more specific to their target site than other restriction enzymes. This increases their specificity and reduces their toxicity as they will not target as many sites within a genome. The most studied meganucleases are the LAGLIDADG family. While meganucleases are still quite susceptible to off-target binding, which makes them less attractive than other gene editing tools, their smaller size makes them attractive, particularly from the perspective of viral vectorization.
Zinc-finger nucleases (ZFNs), used for the first time in 1996, are typically created through the fusion of zinc-finger domains and the FokI nuclease domain. ZFNs thus have the ability to cleave DNA at target sites. By engineering the zinc finger domain to target a specific site within the genome, it is possible to edit the genomic sequence at the desired location. ZFNs have a greater specificity, but still hold the potential to bind to non-specific sequences. While a certain amount of off-target cleavage is acceptable for creating transgenic model organisms, it might not be optimal for all human gene therapy treatments.
TALEN and CRISPR
Access to the code governing the DNA recognition by transcription activator-like effectors (TALE) in 2009 opened the way to the development of a new class of efficient TAL-based gene editing tools. TALEs, proteins secreted by the Xanthomonas plant pathogen, bind with great specificity to genes within the plant host and initiate transcription of the genes helping infection. Engineering TALEs by fusing the DNA binding core to the FokI nuclease catalytic domain allowed creation of a new tool of designer nucleases, the TALE nuclease (TALEN). They have one of the greatest specificities of all the current engineered nucleases. Due to the presence of repeat sequences, they are difficult to construct through standard molecular biology procedures and rely on more complicated methods such as Golden Gate cloning.
In 2011, another major breakthrough technology was developed based on CRISPR/Cas (clustered regularly interspaced short palindromic repeat/CRISPR-associated protein) systems that function as an adaptive immune system in bacteria and archaea. The CRISPR/Cas system allows bacteria and archaea to fight against invading viruses by cleaving viral DNA and inserting pieces of that DNA into their own genome. The organism then transcribes this DNA into RNA and combines this RNA with Cas9 proteins to make double-stranded breaks in the invading viral DNA. The RNA serves as a guide RNA to direct the Cas9 enzyme to the correct spot in the virus DNA. By pairing Cas proteins with a designed guide RNA, CRISPR/Cas9 can be used to induce double-stranded breaks at specific points within DNA sequences. The break gets repaired by cellular DNA repair enzymes, creating a small insertion/deletion type mutation in most cases. Targeted DNA repair is possible by providing a donor DNA template that represents the desired change and that is (sometimes) used for double-strand break repair by homologous recombination. It was later demonstrated that CRISPR/Cas9 can edit human cells in a dish. Although the early generation lacks the specificity of TALEN, the major advantage of this technology is the simplicity of the design. It also allows multiple sites to be targeted simultaneously, allowing the editing of multiple genes at once. CRISPR/Cpf1 is a more recently discovered system that requires a different guide RNA and creates distinctive double-stranded breaks (leaving overhangs when cleaving the DNA) compared to CRISPR/Cas9.
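The targeting rule just described (a 20-nucleotide guide sequence adjacent to an NGG PAM for Cas9) can be sketched as a simple sequence scan. The example below is illustrative only: the input sequence is invented, and real guide design also weighs off-target risk, GC content and secondary structure.

```python
import re

def find_guides(sequence: str) -> list[tuple[str, str]]:
    """Return (protospacer, PAM) pairs for every NGG PAM on this strand."""
    # The lookahead makes overlapping candidate sites visible.
    pattern = re.compile(r"(?=([ACGT]{20})([ACGT]GG))")
    return [(m.group(1), m.group(2)) for m in pattern.finditer(sequence)]

seq = "TTGACCTGAAGCTTGGCATCGATACCGTAGCTAGGCTTACGGATCC"
for protospacer, pam in find_guides(seq):
    print(protospacer, pam)
```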
CRISPR/Cas9 is efficient at gene disruption. The creation of HIV-resistant babies by Chinese researcher He Jiankui is perhaps the most famous example of gene disruption using this method. It is far less effective at gene correction. Methods of base editing are under development, in which a “nuclease-dead” Cas9 endonuclease or a related enzyme is used for gene targeting while a linked deaminase enzyme makes a targeted base change in the DNA. The most recent refinement of CRISPR/Cas9 is called Prime Editing. This method links a reverse transcriptase to an RNA-guided engineered nuclease that only makes single-strand cuts but no double-strand breaks. It replaces the portion of DNA next to the cut by the successive action of nuclease and reverse transcriptase, introducing the desired change from an RNA template.
See also
List of genetic engineering software: software to code the genetic modifications
Mutagenesis (molecular biology technique)
References
Genetic engineering | Genetic engineering techniques | [
"Chemistry",
"Engineering",
"Biology"
] | 5,690 | [
"Biological engineering",
"Genetic engineering",
"Molecular biology"
] |
24,631,354 | https://en.wikipedia.org/wiki/Tri-rated%20cable | Tri-rated cable is a high temperature, flame retardant electrical wire designed for use inside electrical equipment.
Tri-rated cable meets the requirements of three different international standards: BS 6231, UL 758, and CSA 22.2. Combining three standards in one product makes tri-rated cable suitable for use in equipment that is required to meet both North American and British wiring regulations.
Construction
Tri-rated cables are constructed with a flexible stranded copper conductor (Class 5 according to IEC 60228), and insulation of heat resistant PVC. Tri-rated cable is manufactured in a variety of insulation colours, including brown, orange, yellow, pink, and dark blue.
Uses
Tri-rated cable is designed for use in switch, control, relay and instrumentation panels of power circuits, and as internal connectors in rectifier equipment, motor starters and motor controllers.
Standards
BS 6231
BS 6231 is a British Standard, last revised in 2006 by the BSI Group. This standard specifies the performance and construction requirements of electrical cables that are single-core, non-sheathed, PVC-insulated and rated 600/1000 V. Wire meeting the requirements of type CK of this standard is used as tri-rated wire. This standard specifies a nominal insulation thickness of 0.8 mm for wire sizes from 0.5 mm² to 6 mm². Larger wire sizes are required to have thicker insulation.
Wire that is approved to BS 6231 might not carry the UL and CSA ratings, if, for example, the wire is not suitable for use at the higher 105 °C temperature that is specified for those ratings. In that case, the wire is not tri-rated.
According to UL 758, the maximum operating temperature of tri-rated cable is 105 °C. British Standard BS 6231 requires only a maximum operating temperature of 90 °C for continuous use. UL and CSA give tri-rated cable a voltage rating of 600 V, while it is rated at 1000 V in the BS 6231 standard.
UL 758
UL 758 is a standard maintained by UL LLC, principally for the U.S. market. This standard covers "Appliance Wiring Material" ("AWM"), including single-insulated conductors and individual insulated conductors. Wire meeting the requirements of UL Style 1015 is considered tri-rated wire. UL Style 1015 specifies wire sizes from 28 to 9 AWG, with a maximum temperature of 105 °C, a voltage rating of 600 V, and PVC insulation with a thickness of .
CSA 22.2
CSA 22.2 Part 210 covers "Appliance Wiring Material".
CSA 22.2 Part 127 covers "Equipment and lead wires".
See also
Electrical wiring
Power cable
References
Further reading
G. Stokes, 'A Practical Guide to the Wiring Regulations. Third Edition,' Oxford, 2002
Electrical wiring | Tri-rated cable | [
"Physics",
"Engineering"
] | 576 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |
24,631,911 | https://en.wikipedia.org/wiki/List%20of%20physical%20properties%20of%20glass | This is a list of some physical properties of common glasses. Unless otherwise stated, the technical glass compositions and many experimentally determined properties are taken from one large study. Unless stated otherwise, the properties of fused silica (quartz glass) and germania glass are derived from the SciGlass glass database by forming the arithmetic mean of all the experimental values from different authors (in general more than 10 independent sources for quartz glass and Tg of germanium oxide glass).
The list is not exhaustive.
References
Materials science | List of physical properties of glass | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 105 | [
"Applied and interdisciplinary physics",
"Glass",
"Unsolved problems in physics",
"Materials science",
"Homogeneous chemical mixtures",
"nan",
"Amorphous solids"
] |
24,632,716 | https://en.wikipedia.org/wiki/Miriam%20Rafailovich | Miriam Rafailovich (born October 29, 1953) is an American materials engineering researcher. She is the director of the Garcia Materials Research Science and Engineering Center at Stony Brook University as well as former co-director of the Chemical and Molecular Engineering program at Stony Brook University. Her publications focus mainly on nanoscale materials engineering, including nanofibers, supercritical carbon dioxide, and biodegradable polymers.
Early life and education
Miriam Rafailovich received a Bachelor of Science degree from Brooklyn College in 1974 and a Ph.D. in nuclear physics from SUNY Stony Brook in 1980. She speaks English, Romanian, Hebrew, Yiddish, French, and German.
Personal life
Rafailovich is married to Jonathan C. Sokolov, who is also a polymer engineering researcher.
Published works
Rafailovich has had over 140 papers published between 1975 and 2001.
She has co-edited the following publications:
Women in Chemistry and Physics (Eds. Louise S. Grinstein, Miriam H. Rafailovich, Rose K. Rose; Greenwood Press, 1993)
Polymer International Volume 49, Issue 5, 2000 Eds. M.H. Rafailovich, S.A. Schwarz
High Performance Polymers 2001, Eds. B. Hsaio, M.H. Rafailovich
Patents:
"Patterning Method to Produce Nanoscale Magnetic Structures Using Polymer Self Assembly": We demonstrate that it is possible to produce a magnetic nanopattern on a substrate by using a self assembled co-polymer film as a mask. The scale of the pattern can be selected to range from several nanometers to micrometres. The pattern can be produced on any arbitrary magnetic film or interface and hence the method is applicable to metal multi-layers produced using ultra high vacuum or MBE. The use of this method to produce a working Giant Magneto Resistance device is demonstrated. Developers: Richard J. Gambino, Miriam Rafailovich, Jonathan Sokolov, Shaoming Zhu (1997).
"A Compatibilizer for Immiscible Polymer Blends". Developers: Miriam Rafailovich, Jonathan Sokolov, Benjamin Chu, Benjamin Hsiao. A procedure was developed for the universal compatibilization of polymer blend thin films using surface-functionalized exfoliated clays. The clays are non-specific, and compatibilization of general multi-component systems is possible (2000).
Awards and honors
1987: Ruth E Recu Chair, Weizmann Institute of Science
1997: Fellow, Division of High Polymer Physics
1997: Outstanding Stony Brook Scientist
References
External links
Miriam Rafailovich
1953 births
Living people
Brooklyn College alumni
American materials scientists
Polymer scientists and engineers
Stony Brook University faculty
Women materials scientists and engineers
Fellows of the American Physical Society | Miriam Rafailovich | [
"Materials_science",
"Technology"
] | 554 | [
"Women materials scientists and engineers",
"Materials scientists and engineers",
"Women in science and technology"
] |
24,633,026 | https://en.wikipedia.org/wiki/Essam%20E.%20Khalil | Essam Eldin Khalil Hassan Khalil is an Egyptian mechanical engineer. Khalil is a professor in the mechanical power department at Cairo University. He is the author or co-author of numerous international research publications in the HVAC field. He has many years of experience delivering courses on air conditioning to university and college students, building managers and maintenance staff in both the industrial and commercial sectors in Egypt, the Arab countries and worldwide.
He has been selected by various universities and international organisations to lecture to graduate and post graduate level engineers, managers, supervisors and operating personnel on the subjects of HVAC design and optimisation, HVAC system management, energy utilization, waste heat recovery, plant management and other related subjects.
Khalil is an active fellow of ASME, AIAA and ASHRAE, and is an ASHRAE distinguished lecturer on two topics: ventilation of the tombs of the Valley of the Kings and the design of air-conditioning systems for surgical operating theatres. The tombs include those of King Tutankhamen, Ramses VII, Amenhotep, Horemheb, Ramses IV and V, as well as Bay. His work also includes the design of the air conditioning of the Hanging Church in Cairo.
Khalil is also the chairman of National HVAC Committee in Egypt, member of the National Energy Code Committee of Egypt and the chair of HVAC sub-group. He is a registered HVAC consultant and the president of the Arab Air Conditioning Code Committee. Khalil is the Convenor of ISO TC205 WG2 (Design of Energy Efficient Buildings) and is an active member of ISO TC163 Committee. He is the Chairman of Consulting Engineering Bureau, CEB.
Biography
Academic accomplishments
Khalil received his M.Sc. degree in Mechanical Engineering from Cairo University in December 1973. In February 1977 he received his Ph.D. degree in Mechanical Engineering from the University of London, Imperial College of Science and Technology, UK.
In 1977, the same year he completed his Ph.D., Khalil acquired the Diploma of Imperial College at the University of London and a postdoctoral fellowship at Imperial College, London, with the support of the Harwell Atomic Energy Research Establishment, United Kingdom. Khalil has published more than 700 papers on mechanical engineering.
Continual contribution
Research
Khalil is a productive contributor to research in the fields of combustion, thermal power and heat transfer. In addition to eight books, he has had more than 360 papers published, some in journals and some presented at symposia.
Design consultancy
Khalil is a registered consultant who has contributed to major projects, including ventilation of the tombs of the Valley of the Kings, theatres and cinemas of Egypt, the Parliament of Egypt, studios, more than 64 large hospitals, 15 luxury hotels and more than 14 other buildings, as well as fire-fighting and detection design, hot water systems, laundry systems, kitchen systems, electric power supply works, and light-current and sound systems in factories, institutes and other facilities.
Awards and honors
Khalil achieved a number of awards and honors including:
Cairo University Award of Excellence, May 2015
Cairo University Award of Excellence, April 2014
ASME Egypt Achievement Award, April 2013
University of Wisconsin, Milwaukee Distinguished Lecture Award, January 2013
Cairo University Award of Excellence, January 2013
ASME 2012 James Harry Potter Gold Medal, 2012
Cairo University Award of Excellence, July 2012
ASHRAE 2011 Distinguished Services Award, 2011
AIAA 2011 Sustained Service Award, 2011
ASHRAE 2010 Regional Award of Merit, 2010
AIAA Energy Systems Award, 2010
Cairo University Award of Excellence, July 2010
ASME/George Westinghouse Gold Medal, 2009
ASHRAE Fellow Award, 2009
ASHRAE Chapter Service Award, October 2009
ASHRAE Presidential Award of Excellence, Sustainability Activities, October 2009
Cairo University Award of Excellence, April 2009
AIAA Fellow Award, 2008
Best Paper Award, AIAA, IECEC, July 2008
Cairo University Award of Excellence, June 2007
Member of L’Institut D’Egypte, April 2007
Cairo University Award of Excellence, April 2006
Best Paper Award, AIAA, IECEC, August 2005
ASME Fellow Award 2003
ESME Fellow Award, 1991
National Award for Scientific Achievement in Engineering Sciences (1981)
Decoration of Science and Arts from the Egyptian President, 1st Order, 1981
Books published in the field of Mechanical Engineering
FLOW, MIXING & HEAT TRANSFER IN FURNACES (WITH K.H. KHALIL & F. M. ELMAHALLAWY) H.M.T. SERIES-VOLUMES 2, PERGAMON PRESS. JUNE-1978.
HEAT & FLUID IN POWER SYSTEM COMPONENTS (WITH A.M. RESK & M. M.KAMEL)
H.M.T. SERIES VOLUME 3, PERGAMON PRESS. NOVEMBER- 1979.
MODELING OF FURNACES & COMBUSTORS. ABACUS PRESS, 1ST ED 1983.
LASER TECHNOLOGY, GEBO, Egypt, 1987 (In Arabic)
POWER PLANT DESIGN. ABACUS PRESS, GORDON & BREECH 1990.
ENERGY FUTURE, ACADEMIC BOOKSHOP PUBLISHERS, 1999 (In Arabic)
WATER DESALINATION, ACADEMIC BOOKSHOP PUBLISHERS, 1999 (In Arabic)
Types and Performance of Pumps and Compressors, UNESCO_Ellos, 2012
Air Conditioning Of Hospitals And Healthcare Facilities, Lap Lambert Publishing, 2012,
Air Distribution in Buildings, Taylor & Francis, CRC Press, 2013,
Boiler Furnace Design, Lap Lambert Publishing, 2013,
Energy Efficiency in the Urban Environment, with (Heba Khalil), Taylor & Francis, CRC Press, 2015, ,
References
External links
ENERGY EFFICIENT DESALINATION TECHNOLOGY DEVELOPMENT IN EGYPTIAN INDUSTRIES
AIR-CONDITIONING SYSTEMS’ DEVELOPMENTS IN HOSPITALS: COMFORT, AIR QUALITY, AND ENERGY UTILIZATION
CFD APPLICATIONS FOR THE PRESERVATION OF THE TOMBS OF THE VALLEY OF KINGS, LUXOR
Heat Transfer Characteristics of Turbulent Flames in Furnaces and Combustion Chambers
Predictions of Energy Losses in Furnaces under Transient Conditions
https://web.archive.org/web/20110811062409/http://aiaa.org/pdf/inside/AIAA_HA_Brochure.pdf
20th-century Egyptian engineers
21st-century Egyptian engineers
1948 births
Cairo University alumni
Academic staff of Cairo University
Egyptian mechanical engineers
Living people
Fellows of ASHRAE | Essam E. Khalil | [
"Engineering"
] | 1,246 | [
"Building engineering",
"Fellows of ASHRAE"
] |
24,633,672 | https://en.wikipedia.org/wiki/Journal%20of%20Materials%20Science%3A%20Materials%20in%20Medicine | The Journal of Materials Science: Materials in Medicine is a peer-reviewed scientific journal published by Springer Science+Business Media. It is an offshoot of the Journal of Materials Science, focusing specifically on materials in medicine and dentistry. The founding editor-in-chief was William Bonfield; the current editor-in-chief is Luigi Ambrosio (National Research Council (CNR), Naples, Italy).
According to the Journal Citation Reports, the Journal of Materials Science: Materials in Medicine has a 2020 impact factor of 3.896.
Scope
The journal's content focusses on the development of synthetic and natural materials for orthopaedic, maxillofacial, cardiovascular, neurological, ophthalmic and dental applications. Further, biocompatibility studies, nanomedicine, studies on regenerative medicine, computer modelling, and other advanced experimental methodologies are included.
References
External links
European Society of Biomaterials
English-language journals
Academic journals established in 1990
Springer Science+Business Media academic journals
Materials science journals
Monthly journals | Journal of Materials Science: Materials in Medicine | [
"Materials_science",
"Engineering"
] | 212 | [
"Materials science journals",
"Materials science"
] |
24,634,481 | https://en.wikipedia.org/wiki/Biomedical%20Microdevices | Biomedical Microdevices is a bimonthly peer-reviewed scientific journal covering applications of Bio-MEMS (Microelectromechanical systems) and biomedical nanotechnology. It is published by Springer Science+Business Media and the editors-in-chief are Alessandro Grattoni (Houston Methodist Research Institute) and Arum Han (Texas A&M University).
Abstracting and indexing
The journal is abstracted and indexed in a number of bibliographic databases, including the Science Citation Index Expanded.
According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.783.
References
External links
English-language journals
Academic journals established in 1998
Biomedical engineering journals
Bimonthly journals
Springer Science+Business Media academic journals | Biomedical Microdevices | [
"Engineering",
"Biology"
] | 138 | [
"Biological engineering",
"Bioengineering stubs",
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
24,635,433 | https://en.wikipedia.org/wiki/Balanced%20Boolean%20function | In mathematics and computer science, a balanced Boolean function is a Boolean function whose output yields as many 0s as 1s over its input set. This means that for a uniformly random input string of bits, the probability of getting a 1 is 1/2.
Examples
Examples of balanced Boolean functions are the majority function, the "dictatorship function" that copies the first bit of its input to the output, and the parity check function that produces the exclusive or of the input bits.
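For small n, balance can be verified by brute force over all 2^n inputs. A minimal Python sketch (the helper names are invented here, and the approach is practical only for small n) checks the three examples above:

```python
from itertools import product

def is_balanced(f, n):
    """Enumerate all 2**n inputs and check that exactly half map to 1."""
    outputs = [f(bits) for bits in product((0, 1), repeat=n)]
    return 2 * sum(outputs) == len(outputs)

majority = lambda bits: int(sum(bits) > len(bits) // 2)   # balanced for odd n
dictatorship = lambda bits: bits[0]                        # copies the first bit
parity = lambda bits: sum(bits) % 2                        # exclusive or of all bits

for f in (majority, dictatorship, parity):
    print(is_balanced(f, 5))                               # True, True, True
print(is_balanced(lambda bits: bits[0] & bits[1], 5))      # False: AND is biased
```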
If f is a bent function on n bits, and t is any nonzero vector of n bits, then the function that maps x to f(x) + t·x (with the dot product taken mod 2) is balanced. The bent functions are exactly the functions for which this is true, for all nonzero choices of t.
The dictatorship function can be evaluated after examining only a single bit of the input, but that bit must always be examined. Benjamini, Schramm, and Wilson describe a more complex example based on percolation theory with the property that a randomized Las Vegas algorithm can compute the function exactly while ensuring that the probability of reading any particular input bit is small, roughly inversely proportional to the square root of the number of bits.
Application
Balanced Boolean functions are used in cryptography, where being balanced is one of "the most important criteria for cryptographically strong Boolean functions". If a function is not balanced, it will have a statistical bias, making it subject to cryptanalysis such as the correlation attack.
References
Boolean algebra | Balanced Boolean function | [
"Mathematics"
] | 298 | [
"Boolean algebra",
"Fields of abstract algebra",
"Mathematical logic"
] |
24,635,472 | https://en.wikipedia.org/wiki/Bert%20Meijer | Egbert (Bert) Willem Meijer (born 1955 in Groningen) is a Dutch organic chemist, known for his work in the fields of supramolecular chemistry, materials chemistry and polymer chemistry. Meijer, who is distinguished professor of Molecular Sciences at Eindhoven University of Technology (TU/e) and Academy Professor of the Royal Netherlands Academy of Arts and Sciences, is considered one of the founders of the field of supramolecular polymer chemistry. Meijer is a prolific author, sought-after academic lecturer and recipient of multiple awards in the fields of organic and polymer chemistry.
Education
After attending secondary school in Appingedam where he graduated in 1972, Meijer received his education in Organic Chemistry at the University of Groningen. He obtained his MSc degree in 1978, and subsequently his PhD degree under supervision of Professor Hans Wijnberg in 1982. Meijer graduated summa cum laude with his thesis on 'Chemiluminescence in action: syntheses, properties, and applications of 1,2-dioxetanes'.
Career
Meijer started his career in 1982 at the Philips Research Laboratories in Eindhoven as a research scientist in Molecular Materials. In 1989 he moved to DSM Research in Geleen to become head of the department for New Materials. In 1991 Meijer was installed as full professor of Organic Chemistry at the department for Chemistry & Chemical Engineering of Eindhoven University of Technology (TU/e) and in 1999 at the department for Biomedical Engineering at the same university. Since 2004 Bert Meijer has been a distinguished university professor of Molecular Sciences at TU/e, where he founded the Institute for Complex Molecular Systems and served as its Scientific Director from 2008 until 2018. Meijer has been adjunct professor of Macromolecular Chemistry at Radboud University Nijmegen since 1994 and distinguished visiting professor at the University of California, Santa Barbara since 2008. In 2014 Bert Meijer was inducted as Academy Professor of the Royal Netherlands Academy of Arts and Sciences.
Contributions to research
Meijer's research focuses on supramolecular systems with special properties and functions. It is founded on the principles of synthetic and organic chemistry to find solutions to challenges in materials science and the life sciences. Meijer is recognized as a pioneer in the field of supramolecular materials, being one of the first chemists to explore and develop functional supramolecular polymers as a new class of materials. Via advanced molecular design and synthesis he has realized systems in which monomeric units self-assemble into long supramolecular polymeric chains, resulting in materials displaying unique dynamic properties that were thought to be exclusively reserved for macromolecules. His new class of supramolecular structures thus led to an adjusted definition of Staudinger's description of what polymers are.
Meijer's career took off with breakthrough results in dendrimer chemistry including a dendritic box and super-amphiphiles (being the first examples of polymersomes). He synthesized poly(propylene imine) dendrimers that are now produced at commercially relevant quantities (at multiple kilogram scale) worldwide. His dendrimers form the basic compound of a phosphate binder currently used in the clinic. Meijer also developed novel semiconducting polymers with high electron mobilities. His exploration of the combination of chirality and mesoscopic morphology in these polymers led to the fabrication of an LED that emits circularly polarized light. Many years later the insights into chiral semiconductors were used to optimize the water splitting in a photoelectrochemical cell.
Meijer's discovery of ureidopyrimidinone-based supramolecular polymers is a landmark in supramolecular chemistry. He designed a simple quadruple hydrogen-bonded building block that is self-complementary and exhibits a very large association constant. Bringing two of these units together with a spacer resulted in a supramolecular polymer with unprecedented properties. Depending on the conditions applied, on the one hand it possesses all the properties of macromolecules, both in solution and in the solid state, while on the other hand it displays the dynamic nature of organic molecules tied together via non-covalent bonds. Today, the concept of supramolecular polymers is investigated in many international academic and industrial laboratories. The Meijer lab has successfully started the company SupraPolix, offering a supramolecular polymer platform as a key component in several applications, including glue (as superflow elastomers), cosmetics and regenerative medicine of heart valves, for which clinical trials are underway by the Dutch/Swiss company Xeltis.
Following up on his discovery, Meijer has unraveled the mechanisms behind chemical self-assembly and has proven that supramolecular polymerizations can be classified, based on their mechanism, in a way similar to conventional polymerizations. Current research in the Meijer lab focuses on complex multi-component supramolecular polymer systems and their assembly behaviour. The potential use of supramolecular polymers as mimics of biological tissue is also explored, using a modular approach that allows their dynamics to be easily adjusted to external stimuli.
Achievements and awards
Scientific output and research management
Meijer has published over 600 peer-reviewed research articles and reviews, which have been cited more than 100,000 times, yielding an h-index of well over 142. He has guided 100+ PhD students, and more than 25 of his former group members now hold tenured academic professorships worldwide. Meijer holds over 20 patents and co-founded the companies SyMO-Chem, a professional contract research company (2000), and SupraPolix, focusing on supramolecular polymers (2003).
Since 2006 Meijer has been the chairman of the International Scientific Advisory Board of DSM; in 2008 he founded the Institute for Complex Molecular Systems at Eindhoven University of Technology, and he has chaired the 27-million-euro national Dutch research program on 'Functional Molecular Systems' since 2012. In 2017 Meijer was appointed member of the Board of Trustees of Leiden University. Meijer also serves as a member of advisory or editorial boards of over 10 scientific journals, including Advanced Materials (since 1992), Angewandte Chemie (since 1998), Chemical Science (since 2010) and the Journal of the American Chemical Society (since 2010).
Academic invitations and memberships
Meijer has obtained visiting professorships and was invited to named lectureships at many universities. He was visiting professor at the universities of Leuven, Belgium (1995), Illinois (1998), Bordeaux (2007), Zhejiang (2008) and California, Santa Barbara (2008). He is currently a Humboldt visiting professor at the Free University Berlin (until 2024). He has been, amongst others, Bayer distinguished lecturer (Cornell, New York, 1998), Glaxo-Wellcome lecturer (Sheffield, 1998), Rohm & Haas lecturer (Berkeley, 2002), Xerox Distinguished Lecturer of Canada (Toronto and Montreal, 2004), Melville Lecturer (Cambridge University, 2006), Mordecai and Rivka Rubin Lecturer (Technion, Israel, 2015), Aldrich Lecturer (Stanford University, USA, 2015) and Eastman Lecturer (North Carolina, 2016). He gave the Van 't Hoff Centennial lecture at the Royal Netherlands Academy of Arts and Sciences in 2001, the Carothers Lecture at Dupont Wilmington in 2005 and provided the keynote science lecture at Lowlands University 2009, as part of the Lowlands music and culture festival (Flevopolder, The Netherlands). In 2019, he was the Saul Winstein Lecturer at the University of California, Los Angeles and he is the 2020 Robert Robinson lecturer at the University of Oxford.
Meijer is an elected member of the Royal Netherlands Academy of Arts and Sciences (KNAW, since 2003) and the Royal Holland Society of Sciences and Humanities (KHMW, since 1997), the Deutsche Akademie der Technikwissenschaften since 2012), the Nordrhein-Westfälische Akademie der Wissenschaften und der Künste (since 2014), and the Academia Europaea (since 2012). He is Honorary Fellow of the Chemical Research Society of India (since 2012) and Fellow of the American Association for the Advancement of Science (since 2015). Meijer is furthermore elected as an Honorary Member of the Royal Netherlands Chemical Society (KNCV, since 2018). In 2019, he was elected as a member of the European Academy of Sciences and as International Honorary Member of the American Academy of Arts and Sciences.
Awards
Meijer has received numerous prominent awards, including the Gold Medal of the Royal Netherlands Chemical Society (1993), the Spinoza Award of the Netherlands Organisation for Scientific Research (2001), the ACS Award in Polymer Chemistry (2006) and the AkzoNobel Science Award (2010). In 2010 he received an ERC Advanced Research Grant and he was awarded the Wheland Medal of the University of Chicago and the International Award of the Society of Polymer Science of Japan. In 2012 the ACS presented him the Arthur C. Cope Scholar Award. In 2013 Meijer held the Solvay International Chair in Chemistry, and in 2014 he won the Belgium Polymer Group Award and the Prelog Medal of ETH Zürich. That same year he received the Academy Professor Prize of the Netherlands Academy of Arts and Sciences, granted as a lifetime achievement award. In 2017 Meijer was installed Doctorem Honoris Causa at the University of Mons (Belgium) and awarded the Forschungspreis of the Humboldt Foundation (Germany) and the Nagoya Gold Medal Award in Organic Chemistry (Japan). In 2018, he was awarded a second ERC Advanced Research Grant and the Chirality Medal of the Società Chimica Italiana (Princeton, 2018). In 2019, Bert Meijer was installed Doctorem Honoris Causa at the Free University Berlin. In 2020, he received the title of Commander of the Order of the Netherlands Lion, the prestigious Dutch order of chivalry founded by King William I in 1815. On September 12, 2022, the German Chemical Society GDCh presented him the Hermann Staudinger Prize 2022 for his 'outstanding and creative contributions to the field of supramolecular polymer chemistry'.
Personal life
Meijer was born in 1955 as the oldest son of Roelof Meijer and Winy Meijer–de Wit (both civil servants). In 1979 he married Iektje Oosterbeek with whom he has two sons: Roger Meijer (1985), CTO at Paylogic in Amsterdam, and Wieger Meijer (1988), who is an architect in Sydney, Australia.
See also
Subi Jacob George
References
External links
Meijer Research Group
Institute for Complex Molecular Systems
Bert Meijer profile at TU/e
1955 births
Living people
20th-century Dutch chemists
Academic staff of the Eindhoven University of Technology
Members of the Royal Netherlands Academy of Arts and Sciences
Scientists from Groningen (city)
Spinoza Prize winners
University of Groningen alumni
21st-century Dutch chemists
Polymer scientists and engineers
Academic staff of Radboud University Nijmegen | Bert Meijer | [
"Chemistry",
"Materials_science"
] | 2,289 | [
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry"
] |
24,638,351 | https://en.wikipedia.org/wiki/Carbon%20tetroxide | Carbon tetroxide or oxygen carbonate (in its C2v isomer) is a highly unstable oxide of carbon with the formula CO4. It was proposed as an intermediate in the O-atom exchange between carbon dioxide (CO2) and oxygen (O2) at high temperatures. The C2v isomer, which is 138 kJ mol−1 more stable than the D2d isomer, was first detected in electron-irradiated ices of carbon dioxide via infrared spectroscopy.
The isovalent carbon tetrasulfide CS4 is also known from inert gas matrix. It has D2d symmetry with the same atomic arrangement as CO4 (D2d).
References
Oxocarbons | Carbon tetroxide | [
"Chemistry"
] | 138 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
24,639,265 | https://en.wikipedia.org/wiki/Six-dimensional%20space | Six-dimensional space is any space that has six dimensions, six degrees of freedom, and that needs six pieces of data, or coordinates, to specify a location in this space. There are an infinite number of these, but those of most interest are simpler ones that model some aspect of the environment. Of particular interest is six-dimensional Euclidean space, in which 6-polytopes and the 5-sphere are constructed. Six-dimensional elliptical space and hyperbolic spaces are also studied, with constant positive and negative curvature.
Formally, six-dimensional Euclidean space, $\mathbb{R}^6$, is generated by considering all real 6-tuples as 6-vectors in this space. As such it has the properties of all Euclidean spaces, so it is linear, has a metric and a full set of vector operations. In particular the dot product between two 6-vectors is readily defined and can be used to calculate the metric. 6 × 6 matrices can be used to describe transformations such as rotations that keep the origin fixed.
More generally, any space that can be described locally with six coordinates, not necessarily Euclidean ones, is six-dimensional. One example is the surface of the 6-sphere, S6. This is the set of all points in seven-dimensional space (Euclidean) that are a fixed distance from the origin. This constraint reduces the number of coordinates needed to describe a point on the 6-sphere by one, so it has six dimensions. Such non-Euclidean spaces are far more common than Euclidean spaces, and in six dimensions they have far more applications.
Geometry
6-polytope
A polytope in six dimensions is called a 6-polytope. The most studied are the regular polytopes, of which there are only three in six dimensions: the 6-simplex, 6-cube, and 6-orthoplex. A wider family are the uniform 6-polytopes, constructed from fundamental symmetry domains of reflection, each domain defined by a Coxeter group. Each uniform polytope is defined by a ringed Coxeter–Dynkin diagram. The 6-demicube is a unique polytope from the D6 family, and the 221 and 122 polytopes from the E6 family.
5-sphere
The 5-sphere, or hypersphere in six dimensions, is the five-dimensional surface equidistant from a point. It has symbol S5, and the equation for the 5-sphere, radius r, centre the origin, is
$x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 + x_6^2 = r^2.$
The volume of six-dimensional space bounded by this 5-sphere is
$V_6 = \frac{\pi^3 r^6}{6} \approx 5.16771\, r^6,$
which is about 0.0807 of the volume of the smallest 6-cube that contains the 5-sphere.
6-sphere
The 6-sphere, or hypersphere in seven dimensions, is the six-dimensional surface equidistant from a point. It has symbol S6, and the equation for the 6-sphere, radius r, centre the origin, is
$x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 + x_6^2 + x_7^2 = r^2.$
The volume of the space bounded by this 6-sphere is
$V_7 = \frac{16\pi^3 r^7}{105} \approx 4.72477\, r^7,$
which is about 0.0369 of the volume of the smallest 7-cube that contains the 6-sphere.
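Both volume formulas are instances of the general n-ball volume $V_n = \pi^{n/2} r^n / \Gamma(n/2 + 1)$. A minimal Python sketch (illustrative, not from the article's sources) reproducing the constants quoted above:

from math import pi, gamma

def ball_volume(n: int, r: float = 1.0) -> float:
    # Volume of the n-ball of radius r: pi^(n/2) / Gamma(n/2 + 1) * r^n
    return pi ** (n / 2) / gamma(n / 2 + 1) * r ** n

v6 = ball_volume(6)          # bounded by the 5-sphere: pi^3/6
print(v6, v6 / 2 ** 6)       # 5.16771..., ~0.0807 of the 6-cube of side 2r

v7 = ball_volume(7)          # bounded by the 6-sphere: 16*pi^3/105
print(v7, v7 / 2 ** 7)       # 4.72477..., ~0.0369 of the 7-cube of side 2r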
Applications
Transformations in three dimensions
In three dimensional space a rigid transformation has six degrees of freedom, three translations along the three coordinate axes and three from the rotation group SO(3). Often these transformations are handled separately as they have very different geometrical structures, but there are ways of dealing with them that treat them as a single six-dimensional object.
Screw theory
In screw theory angular and linear velocity are combined into one six-dimensional object, called a twist. A similar object called a wrench combines forces and torques in six dimensions. These can be treated as six-dimensional vectors that transform linearly when changing frame of reference. Translations and rotations cannot be done this way, but are related to a twist by exponentiation.
Phase space
Phase space is a space made up of the position and momentum of a particle, which can be plotted together in a phase diagram to highlight the relationship between the quantities. A general particle moving in three dimensions has a phase space with six dimensions, too many to plot but they can be analysed mathematically.
Rotations in four dimensions
The rotation group in four dimensions, SO(4), has six degrees of freedom. This can be seen by considering the 4 × 4 matrix that represents a rotation: as it is an orthogonal matrix, it is determined, up to a change in sign, by e.g. the six elements above the main diagonal. But this group is not linear, and it has a more complex structure than the other applications seen so far.
Another way of looking at this group is with quaternion multiplication. Every rotation in four dimensions can be achieved by multiplying by a pair of unit quaternions, one before and one after the vector. These quaternions are unique, up to a change in sign for both of them, and generate all rotations when used this way, so the product of their groups, S3 × S3, is a double cover of SO(4), which must have six dimensions.
Although the space we live in is considered three-dimensional, there are practical applications for four-dimensional space. Quaternions, one of the ways to describe rotations in three dimensions, consist of a four-dimensional space. Rotations between quaternions, for interpolation, for example, take place in four dimensions. Spacetime, which has three space dimensions and one time dimension is also four-dimensional, though with a different structure to Euclidean space.
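As a minimal illustration (a Python sketch with example quaternions chosen for demonstration, not taken from any cited source), a 4-vector v, viewed as a quaternion, is rotated by the pair (p, q) via the Hamilton product as v → p v q:

import math

def qmul(a, b):
    # Hamilton product of quaternions a = (w, x, y, z) and b = (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate4(p, q, v):
    # SO(4) rotation of the 4-vector v determined by the unit quaternions (p, q)
    return qmul(qmul(p, v), q)

t = math.pi / 4
p = (math.cos(t), math.sin(t), 0.0, 0.0)
q = (math.cos(t), 0.0, math.sin(t), 0.0)
v = (1.0, 0.0, 0.0, 0.0)

print(rotate4(p, q, v))
# (p, q) and (-p, -q) give the same rotation -- the double cover of SO(4):
print(rotate4(tuple(-c for c in p), tuple(-c for c in q), v))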
Electromagnetism
In electromagnetism, the electromagnetic field is generally thought of as being made of two things, the electric field and magnetic field. They are both three-dimensional vector fields, related to each other by Maxwell's equations. A second approach is to combine them in a single object, the six-dimensional electromagnetic tensor, a tensor- or bivector-valued representation of the electromagnetic field. Using this, Maxwell's equations can be condensed from four equations into a particularly compact single equation:
$\partial \mathbf{F} = \mathbf{J},$
where $\mathbf{F}$ is the bivector form of the electromagnetic tensor, $\mathbf{J}$ is the four-current and $\partial$ is a suitable differential operator.
String theory
In physics string theory is an attempt to describe general relativity and quantum mechanics with a single mathematical model. Although it is an attempt to model our universe it takes place in a space with more dimensions than the four of spacetime that we are familiar with. In particular a number of string theories take place in a ten-dimensional space, adding an extra six dimensions. These extra dimensions are required by the theory, but as they cannot be observed are thought to be quite different, perhaps compactified to form a six-dimensional space with a particular geometry too small to be observable.
Since 1997 another string theory has come to light that works in six dimensions. Little string theories are non-gravitational string theories in five and six dimensions that arise when considering limits of ten-dimensional string theory.
Theoretical background
Bivectors in four dimensions
A number of the above applications can be related to each other algebraically by considering the real, six-dimensional bivectors in four dimensions. These can be written $\Lambda^2\mathbb{R}^4$ for the set of bivectors in Euclidean space or $\Lambda^2\mathbb{R}^{3,1}$ for the set of bivectors in spacetime. The Plücker coordinates are bivectors in $\Lambda^2\mathbb{R}^4$ while the electromagnetic tensor discussed in the previous section is a bivector in $\Lambda^2\mathbb{R}^{3,1}$. Bivectors can be used to generate rotations in either $\mathbb{R}^4$ or $\mathbb{R}^{3,1}$ through the exponential map (e.g. applying the exponential map of all bivectors in $\Lambda^2\mathbb{R}^4$ generates all rotations in $\mathbb{R}^4$). They can also be related to general transformations in three dimensions through homogeneous coordinates, which can be thought of as modified rotations in $\mathbb{R}^4$.
The bivectors arise from sums of all possible wedge products between pairs of 4-vectors. They therefore have $\binom{4}{2} = 6$ components, and can be written most generally as
$\mathbf{B} = B_{12}\mathbf{e}_{12} + B_{13}\mathbf{e}_{13} + B_{14}\mathbf{e}_{14} + B_{23}\mathbf{e}_{23} + B_{24}\mathbf{e}_{24} + B_{34}\mathbf{e}_{34}.$
They are the first bivectors that cannot all be generated by products of pairs of vectors. Those that can are simple bivectors and the rotations they generate are simple rotations. Other rotations in four dimensions are double and isoclinic rotations and correspond to non-simple bivectors that cannot be generated by a single wedge product.
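A small Python sketch (with illustrative input vectors) of the six components B_ij = a_i b_j − a_j b_i (i < j) of a simple bivector a ∧ b:

from itertools import combinations

def wedge(a, b):
    # Components B_ij = a_i*b_j - a_j*b_i (i < j) of the simple bivector a ^ b
    return {(i + 1, j + 1): a[i]*b[j] - a[j]*b[i] for i, j in combinations(range(4), 2)}

print(wedge([1, 0, 0, 0], [0, 1, 0, 0]))   # six components; only B_12 = 1 is nonzero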
6-vectors
6-vectors are simply the vectors of six-dimensional Euclidean space. Like other such vectors they are linear, and can be added, subtracted and scaled as in other dimensions. Rather than use letters of the alphabet, higher dimensions usually use suffixes to designate dimensions, so a general six-dimensional vector can be written $\mathbf{a} = (a_1, a_2, a_3, a_4, a_5, a_6)$. Written like this the six basis vectors are $\mathbf{e}_1$, $\mathbf{e}_2$, $\mathbf{e}_3$, $\mathbf{e}_4$, $\mathbf{e}_5$ and $\mathbf{e}_6$.
Of the vector operators the cross product cannot be used in six dimensions; instead, the wedge product of two 6-vectors results in a bivector with 15 dimensions. The dot product of two vectors is
$\mathbf{a} \cdot \mathbf{b} = a_1b_1 + a_2b_2 + a_3b_3 + a_4b_4 + a_5b_5 + a_6b_6.$
It can be used to find the angle between two vectors and the norm,
$|\mathbf{a}| = \sqrt{\mathbf{a} \cdot \mathbf{a}} = \sqrt{a_1^2 + a_2^2 + a_3^2 + a_4^2 + a_5^2 + a_6^2}.$
This can be used for example to calculate the diagonal of a 6-cube; with one corner at the origin, edges aligned to the axes and side length 1, the opposite corner could be at $(1, 1, 1, 1, 1, 1)$, the norm of which is
$\sqrt{1^2 + 1^2 + 1^2 + 1^2 + 1^2 + 1^2} = \sqrt{6} \approx 2.4495,$
which is the length of the vector and so of the diagonal of the 6-cube.
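A minimal Python sketch of these operations (the example vectors are arbitrary):

import math

a = [1, 2, 3, 4, 5, 6]
b = [6, 5, 4, 3, 2, 1]

dot = sum(ai * bi for ai, bi in zip(a, b))            # a.b = a1*b1 + ... + a6*b6
norm_a = math.sqrt(sum(ai * ai for ai in a))          # |a| = sqrt(a.a)
norm_b = math.sqrt(sum(bi * bi for bi in b))
angle = math.acos(dot / (norm_a * norm_b))            # angle between a and b

diag = math.sqrt(sum([1] * 6))                        # diagonal of the unit 6-cube
print(dot, norm_a, angle, diag)                       # 56, 9.539..., ..., 2.449489...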
Gibbs bivectors
In 1901 J.W. Gibbs published a work on vectors that included a six-dimensional quantity he called a bivector. It consisted of two three-dimensional vectors in a single object, which he used to describe ellipses in three dimensions. It has fallen out of use as other techniques have been developed, and the name bivector is now more closely associated with geometric algebra.
Footnotes
References
Dimension
dimensional space | Six-dimensional space | [
"Physics"
] | 1,977 | [
"Geometric measurement",
"Dimension",
"Physical quantities",
"Theory of relativity"
] |
24,641,580 | https://en.wikipedia.org/wiki/Speakeasy%20%28computational%20environment%29 | Speakeasy was a numerical computing interactive environment also featuring an interpreted programming language. It was initially developed for internal use at the Physics Division of Argonne National Laboratory by the theoretical physicist Stanley Cohen. He eventually founded Speakeasy Computing Corporation to make the program available commercially.
Speakeasy is a very long-lasting numerical package. In fact, the original version of the environment was built around a core dynamic data repository called "Named storage" developed in the early 1960s, while the most recent version was released in 2006.
Speakeasy was aimed to make the computational work of the physicists at the Argonne National Laboratory easier.
History
Speakeasy was initially conceived to work on mainframes (the only kind of computers at that time), and was subsequently ported to new platforms (minicomputers, personal computers) as they became available. The porting of the same code to different platforms was made easier by using Mortran metalanguage macros to handle system dependencies and compiler deficiencies and differences. Speakeasy is currently available on several platforms: PCs running Windows, macOS, Linux, departmental computers and workstations running several flavors of Linux, AIX or Solaris.
Speakeasy was also among the first interactive numerical computing environments, having been implemented in such a way on a CDC 3600 system, and later on IBM machines running TSO, which was in beta-testing at the Argonne National Laboratory at the time. By 1984 it was available on Digital Equipment Corporation's VAX systems.
Almost since the beginning (as dynamic linking became available in operating systems), Speakeasy has featured the capability of expanding its operational vocabulary using separate modules, dynamically linked to the core processor as they are needed. For that reason such modules were called "linkules" (LINKable-modULES). They are functions with a generalized interface, which can be written in FORTRAN or in C.
The independence of each of the new modules from the others and from the main processor is of great help in improving the system, and was especially so in the old days.
This easy way of expanding the functionality of the main processor was often exploited by users to develop their own specialized packages. Besides the programs, functions and subroutines the user can write in Speakeasy's own interpreted language, linkules add functionality carried out with the performance typical of compiled programs.
Among the packages developed by the users, one of the most important is "Modeleasy", originally developed as "FEDeasy" in the early 1970s at the research department of the Federal Reserve Board of Governors in Washington, D.C. Modeleasy implements special objects and functions for the estimation and simulation of large econometric models. Its evolution led eventually to its distribution as an independent product.
Syntax
The symbol :_ (colon+underscore) is both the Speakeasy logo and the prompt of the interactive session.
The dollar sign is used for delimiting comments; the ampersand is used to continue a statement on the following physical line, in which case the prompt becomes :& (colon+ampersand); a semicolon can separate statements written on the same physical line.
As its name suggests, Speakeasy was aimed to expose a syntax as friendly as possible to the user, and as close as possible to the spoken language. The best example of that is given by the set of commands for reading/writing data from/to the permanent storage. E.g. (the language keywords are in upper case to clarify the point):
:_ GET my_data FROM LIBRARY my_project
:_ KEEP my_data AS a_new_name_for_mydata IN LIBRARY other_project
Variables (i.e. Speakeasy objects) are given names up to 255 characters long when the LONGNAME option is ON, and up to 8 characters otherwise (for backward compatibility). They are dynamically typed, depending on the value assigned to them.
:_ a=1
:_ whatis a
A is a REAL SCALAR.
:_ a="now a character array"
:_ whatis a
A is a 21 element CHARACTER ARRAY.
Arguments of functions are usually not required to be surrounded by parentheses or separated by commas, provided that the context remains clear and unambiguous. For example:
:_ sin(grid(-pi,pi,pi/32)) $ fully specified syntax
can be written :
:_ sin grid(-pi,pi,pi/32) $ the argument of function sin is not surrounded by parenthesis
or even
:_ sin grid(-pi pi pi/32) $ the arguments of function grid can be separated by spaces
Many other syntax simplifications are possible; for example, to define an object named 'a' valued to a ten-element array of zeroes, one can write any of the following statements:
:_ a=array(10:0,0,0,0,0,0,0,0,0,0)
:_ a=0,0,0,0,0,0,0,0,0,0
:_ a=0 0 0 0 0 0 0 0 0 0
:_ a=ints(10)*0
:_ a=10:
Speakeasy is a vector-oriented language: giving a structured argument to a function of a scalar, the result is usually an object with the same structure as the argument, in which each element is the result of the function applied to the corresponding element of the argument. In the example given above, the result of the function sin applied to the array (let us call it x) generated by the function grid is the array answer whose element answer(i) equals sin(x(i)) for each i from 1 to noels(x) (the number of elements of x). In other words, the statement
:_ a=sin(grid(-pi pi pi/32))
is equivalent to the following fragment of program:
x=grid(-pi pi pi/32) $ generates an array of real numbers from -pi to pi, stepping by pi/32
for i = 1,noels(x) $ loops on the elements of x
a(i) = sin(x(i)) $ evaluates the i-th element of a
next i $ increment the loop index
The vector-oriented statements avoid the need to write programs for such loops, and execute much faster than them.
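For comparison only (this is Python with NumPy, not Speakeasy syntax; a rough sketch of the same vectorized idiom):

import numpy as np

x = np.arange(-np.pi, np.pi + np.pi/64, np.pi/32)  # plays the role of grid(-pi, pi, pi/32)
a = np.sin(x)                                      # element-wise, as in a = sin grid(-pi pi pi/32)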
Work area and objects
By the very first statement of the session, the user can define the size of the "named storage" (or "work area", or "allocator"), which is allocated once and for all at the beginning of the session. Within this fixed-size work area, the Speakeasy processor dynamically creates and destroys the work objects as needed. A user-tunable garbage collection mechanism is provided to maximize the size of the free block in the work area, packing the defined objects in the low end or in the high end of the allocator. At any time, the user can ask about used or remaining space in the work area.
:_ SIZE 100M $ very first statement: the work area will be 100MB
:_ SIZE $ returns the size of the work area in the current session
:_ SPACELEFT $ returns the amount of data storage space currently unused
:_ SPACENOW $ returns the amount of data storage space currently used
:_ SPACEPEAK $ returns the maximum amount of data storage space used in the current session
Raw object orientation
Within reasonable conformity and compatibility constraints, the Speakeasy objects can be operated on using the same algebraic syntax.
From this point of view, and considering the dynamic and structured nature of the data held in the "named storage", it is possible to say that Speakeasy implemented from the beginning a very raw form of operator overloading, and a pragmatic approach to some features of what was later called "object-oriented programming", although it did not evolve further in that direction.
The object families
Speakeasy provides a set of predefined "families" of data objects: scalars, arrays (up to 15 dimensions), matrices, sets, and time series.
The elemental data can be of kind real (8 bytes), complex (2×8 bytes), character-literal or name-literal (matrix elements can be real or complex; time series values can only be real).
Missing values
For time series processing, five types of missing values are provided. They are denoted by N.A. (not available), N.C. (not computable) and N.D. (not defined), along with N.B. and N.E., whose meaning is not predetermined and is left available to the linkule developer. They are internally represented by specific (and very small) numeric values, acting as codes.
All the time series operations take care of the presence of missing values, propagating them appropriately in the results.
Depending on a specific setting, missing values can be represented by the above notation, by a question mark symbol, or a blank (useful in tables). When used in input the question mark is interpreted as an N.A. missing value.
:_ b=timeseries(1,2,3,4 : 2010 1 4)
:_ b
B (A Time Series with 4 Components)
1 2 3 4
:_ b(2010 3) = ?
:_ showmval qmark
:_ b
B (A Time Series with 4 Components)
1 2 ? 4
:_ 1/b
1/B (A Time Series with 4 Components)
1 .5 ? .25
:_ showmval explain
:_ b
B (A Time Series with 4 Components)
1 2 N.A. 4
:_ 1/b
1/B (A Time Series with 4 Components)
1 .5 N.C. .25
In numerical objects other than time series, the concept of "missing values" is meaningless, and numerical operations on them use the actual numeric values regardless of whether they correspond to "missing value codes" or not (although "missing value codes" can be input and shown as such).
:_ 1+?
1+? = 1.00
:_ 1/?
1/? = 5.3033E36
:_ 1*?
1*? = ?
Note that, in other contexts, a question mark may have a different meaning: for example, when used as the first (and possibly only) character of a command line, it signifies a request to show more pieces of a long error message (which ends with a "+" symbol).
:_ a=array(10000,10000:)
ARRAY(10000,10000:) In line "A=ARRAY(10000,10000:)" Too much data.+
:_ ?
Allocator size must be at least 859387 kilobytes.+
:_ ?
Use FREE to remove no longer needed data
or
use CHECKPOINT to save allocator for later restart.+
:_ ?
Use NAMES to see presently defined names.
Use SIZE & RESTORE to restart with a larger allocator.
:_ ?
NO MORE INFORMATION AVAILABLE.
Logical values
Some support is provided for logical values, relational operators (the Fortran syntax can be used) and logical expressions.
Logical values are actually stored as numeric values, with 0 meaning false and non-zero (1 on output) meaning true.
:_ a = 1 2 3 4 5
:_ b = 1 3 2 5 4
:_ a>b
A>B (A 5 Component Array)
0 0 1 0 1
:_ a<=b
A<=B (A 5 Component Array)
1 1 0 1 0
:_ a.eq.b
A.EQ.B (A 5 Component Array)
1 0 0 0 0
:_ logical(2) $ this changes the way logical values are shown
:_ a>b; a<=b; a.eq.b
A>B (A 5 Component Array)
F F T F T
A<=B (A 5 Component Array)
T T F T F
A.EQ.B (A 5 Component Array)
T F F F F
Programming
Special objects such as "PROGRAM", "SUBROUTINE" and "FUNCTION" objects (collectively referred to as procedures) can be defined for automating operations. Another way to run several instructions with a single command is to store them in a use-file and make the processor read them by means of the USE command.
Use-files
"USEing" a use-file is the simplest way for performing several instruction with minimal typed input. (This operation roughly corresponds to what "source-ing" a file is in other scripting languages.)
A use-file is an alternate input source to the standard console and can contain all the commands a user can input by the keyboard (hence no multi-line flow control construct is allowed). The processor reads and executes use-files one line at a time.
Use-file execution can be concatenated but not nested, i.e. the control does not return to the caller at the completion of the called use-file.
Procedures
Full programming capability is achieved using "procedures". They are actually Speakeasy objects, which must be defined in the work area to be executed. An option is available to have procedures automatically retrieved and loaded from the external storage as they are needed.
Procedures can contain any of the execution flow control constructs available in the Speakeasy programming language.
Programs
A program can be run simply invoking its name or using it as the argument of the command EXECUTE. In the latter case, a further argument can identify a label from which the execution will begin.
Speakeasy programs differ from the other procedures in being executed at the same scoping "level" they are referenced from; hence they have full visibility of all the objects defined at that level, and all the objects created during their execution will be left there for subsequent use. For that reason no argument list is needed.
Subroutines and functions
Subroutines and Functions are executed at a new scoping level, which is removed when they finish. The communication with the calling scoping level is carried out through the argument list (in both directions). This implements data hiding, i.e. objects created within a Subroutine or a Function are not visible to other Subroutines and Functions except through argument lists.
A global level is available for storing objects which must be visible from within any procedure, e.g. the procedures themselves.
Functions differ from Subroutines because they also return a functional value; references to them can be part of a more complex statement and are replaced by the returned functional value when the statement is evaluated.
To some extent, Speakeasy Subroutines and Functions are very similar to the Fortran procedures of the same names.
Flow control
An IF-THEN-ELSE construct is available for conditional execution and two forms of FOR-NEXT construct are provided for looping.
A "GO TO label" statement is provided for jumping, while a Fortran-like computed GO TO statement can be used fort multiple branching.
An ON ERROR mechanism, with several options, provides a means for error handling.
Linkule writing
Linkules are functions usually written in Fortran (or, unsupportedly, in C). With the aid of Mortran or C macros and an API library, they can interface with the Speakeasy work area, retrieving, defining and manipulating any Speakeasy object.
Most of the Speakeasy operational vocabulary is implemented via linkules. They can be statically linked to the core engine, or dynamically loaded as they are needed, provided they are properly compiled as shared objects (Unix) or DLLs (Windows).
Notes
External links
Official website (via Wayback Machine)
The Econometric Modeling & Computing Corporation web site. (via Wayback Machine)
An interesting conversation with Stan Cohen.
Data analysis software
Mathematical software
Physics software
Proprietary cross-platform software
Numerical analysis software for Linux
Numerical analysis software for macOS
Numerical analysis software for Windows
Computer algebra system software for Windows
Computer algebra system software for macOS
Computer algebra system software for Linux
Array programming languages
Numerical programming languages
Numerical linear algebra
Statistical programming languages
Simulation programming languages
Programming languages created in 1964 | Speakeasy (computational environment) | [
"Physics",
"Mathematics"
] | 3,386 | [
"Mathematical software",
"Physics software",
"Computational physics"
] |
24,642,384 | https://en.wikipedia.org/wiki/Venus%20for%20Men | The Venus for Men, previously sold as the Venus 2000, is a self-actuated masturbation aid for men that applies sexual stimulation using a mechanism outwardly similar to a milking machine. The machine works with or without an erection. Metro has described it as a sex toy "for the serious onanist". In addition to masturbation, the machine may also be used for orgasm control practices such as edging or forced orgasm.
Creation
The Venus II was invented by Rick Gellert, with the assistance of Valentin Tsitrin, a Russian engineer. Gellert has written:
...It seems male desire for sexual activity ranges from needing an orgasm once in many weeks to wanting several in a day. [...] after hundreds of variations, I developed a product unique enough so that I was awarded patent #5501650.
The device was marketed as Venus II from October 1993 to April 1998. Gellert and Tsitrin presented their device to Abco Research Associates in 1993 (also manufacturer of the Sybian), and Abco helped them launch and market it. As a result, Abco became the primary marketer of the Venus II. In April 1997, Abco purchased the Venus II patent along with all manufacturing and marketing rights. This led to the launch of an improved version, Venus 2000.
In 2014, Abco modified the name of the product to Venus by Sybian. They also refer to the product as Venus For Men.
Mechanics
The design of the device was covered by U. S. patent 5501650. It consists of a main box linked to a cylindrical "receiver" that fits over the penis by a connecting hose. The "receiver" superficially resembles the teat cup of a milking machine, and contains an inner and outer chamber separated by a cylindrical flexible rubber liner. Only the outer chamber is linked to the main box, with the inner chamber open at one end, ready for the insertion of the penis. Unlike a milking machine, the other end of the Venus 2000's inner chamber is not connected to a suction hose, but instead covered by a cap that contains a one-way valve that leads to the open air.
The main box contains a gearmotor which drives a reciprocating diaphragm. Air moves to and from the outer chamber of the "receiver" via the hose. The device works by sucking in the shaft of the user's penis when the tip is placed against the receiver's opening before activation, using the partial vacuum created by the removal of air from the inner liner via the one-way valve at the closed end of the receiver. The amount of air in the system is adjustable and determines the stroke length. Most users can adjust it to move the full length of the shaft. A significant amount of personal lubricant is needed to be added within the liner for the machine to operate correctly.
The system is controlled during operation by two small control boxes. One of them is an electrical control box, with a speed adjustment knob, attached to the main box by an electrical cable. The other is a pneumatic control box, linked by a rubber tube to the diaphragm chamber, and has two push-buttons, one to add air to the system, and the other to remove air from the system. The final adjustment possible is an internal adjustment that can be used to change the amplitude of the pumping motion, but this is not available in operation, as adjusting it requires the main box of the machine to be dismantled and reassembled.
For best operation, all the components of the "receiver" need to be sized for the penile dimensions of the user. To this end, the plastic receiver bodies are available in a variety of tube lengths and diameters, and the rubber liner material also comes in a variety of diameters.
Specifications
Dimensions: 6" high, 8" wide and 9-1/2" long and weighs 11 pounds.
Gearmotor: The unit is powered by a 1/16 HP – 15:1 ratio gearmotor made by Bodine Electric Co.
Diaphragm: Air is moved by a specially designed and molded diaphragm. It has the ability to push and pull air at a high speed.
Personalized stroke-length adjustment: The Venus 2000 has an internal adjustment point with 5 possible settings that controls the amount of airflow. The unit setting is based on the size of receiver used.
See also
Sex machine
Breeding mount
References
External links
U.S. patent #5501650, "Automated masturbatory device"
As of October 20, 2009, this article uses content from Venusformen.com, which is licensed under the CC-By-SA and GFDL (see here). All relevant terms must be followed.
American inventions
Male sex toys
Machine sex
Ejaculation inducing devices | Venus for Men | [
"Physics",
"Technology"
] | 995 | [
"Physical systems",
"Machines",
"Machine sex"
] |
24,643,069 | https://en.wikipedia.org/wiki/EnglishRussia.com | EnglishRussia was a popular photoblog focusing on unusual aspects of Russian or former-Soviet culture. In 2007 Technorati rated it the 155th most popular website out of 94 million on its search engine. It was created by a Russian software technician and is currently more popular in America than in Russia.
The publication experienced issues throughout its history. The Facebook page was hacked, and between June 13 and June 14, 2009, the design of the website changed, becoming more "clean", without a logo. The privacy policy was written in Portuguese and the ads were controlled by a Brazilian company. With these changes, many old pages lost their pictures and some articles were no longer readable. Emails to the site's address go unanswered. The official Twitter account and Facebook page no longer exist.
As of 2023, the site appears defunct, with its last content posted on May 1, 2022.
It has been mentioned by many media sources, newspapers or websites such as The St. Petersburg Times, Softpedia, and The Daily Telegraph.
Footnotes
External links
Photoblogs
Russian websites | EnglishRussia.com | [
"Technology"
] | 218 | [
"Computing stubs",
"World Wide Web stubs"
] |
21,659,403 | https://en.wikipedia.org/wiki/Sequenom | Sequenom, Inc. is an American company based in San Diego, California. It develops enabling molecular technologies and highly sensitive laboratory genetic tests for noninvasive prenatal testing (NIPT). Sequenom's wholly owned subsidiary, Sequenom Center for Molecular Medicine (SCMM), offers multiple clinical molecular genetics tests to patients, including MaterniT21 PLUS, a noninvasive prenatal test for trisomy 21, trisomy 18, and trisomy 13, and the SensiGene Fetal RHD genotyping test.
The company went public via an initial public offering in 2000. In June 2014 the company sold its biosciences unit to Agena Bioscience for up to $35.8 million. In July 2016, it was announced that diagnostic and testing giant LabCorp would acquire Sequenom, paying $2.40 for every outstanding share of Sequenom stock. The acquisition was completed in September 2016.
Competition
Companies also offering non-invasive prenatal genetic testing include Ariosa, Ravgen, Illumina (Verinata Health), PerkinElmer and Natera (The Panorama Prenatal Test). Other companies and universities that are working towards developing non-invasive prenatal testing include Stanford University.
Patent litigation
In January 2012, Sequenom entered a patent battle with competing companies, Ariosa and Natera, accusing them of infringing the '540 patent. The cases are Sequenom Inc. v. Natera Inc. 12-cv-0184, Sequenom v. Ariosa Diagnostics Inc., 12-cv-0189, U.S. District Court, Southern District of California (San Diego), and Ariosa v. Sequenom.
Verinata Health and Stanford University later filed suit against Sequenom in a dispute over the 'Quake patent'. Verinata claims that Sequenom's lawyers sent it a letter in 2010 alleging that "'the practice of non-invasive prenatal diagnostics, including diagnosis of the Down Syndrome and other genetic disorders, using cell-free nucleic acids in a sample of maternal blood infringes' the '540 patent, as well as the claims of a pending United States Patent Application." The '540 patent was invented by Isis Ltd. and expires in 2017.
Stanford University owns the Quake patents and licensing rights; Verinata is its exclusive licensee.
In April 2012, Sequenom acquired two pending patents from Helicos Biosciences. In consideration for the sale and transfer of the purchased assets, Sequenom paid Helicos $1.3 million. The Helicos patent applications (US Patent application 12/709,057 and 12/727,824) cover methods for detecting fetal nucleic acids and diagnosing fetal abnormalities.
In July 2012, the United States District Court denied Sequenom's motion for a preliminary injunction against Ariosa Diagnostics.
In August 2013, The Court of Appeals for the Federal Circuit vacated the District Court decision and remanded that case to the District Court.
In the Ariosa litigation, the District Court (N.D.Cal.) held that the '540 patent was invalid because it claimed a natural phenomenon, the presence of cell-free fetal DNA fragments in maternal blood. On June 13, 2015, the CAFC affirmed the District Court's judgment. Finally, on December 2, 2015, the Federal Circuit declined to rehear en banc.
SEQureDx scandal
In 2009, Sequenom Center for Molecular Medicine (SCMM) was expected to launch the SEQureDx prenatal screening tests for Down syndrome and Rhesus D. Subsequent investigation revealed significant flaws in the studies of the test's effectiveness. As a result, the board of directors of Sequenom fired CEO Harry Stylli, senior vice president of research and development Elizabeth Dragon and three other employees after a probe discovered that the company had failed to adequately supervise its Down syndrome test. CFO Paul Hawran also resigned. Board chairman Harry F. Hixson Jr. was named interim CEO and director Ronald M. Lindsay was appointed to replace Dragon. Dragon has since been charged by the Securities and Exchange Commission (SEC) because she "lied to the public about the accuracy of Sequenom's prenatal screening test for Down syndrome". She died on February 26, 2011.
In 2010, Sequenom paid $14 million to settle a shareholder class-action lawsuit that arose from the errors in the development of the Down syndrome test. Sequenom executives are under investigation by the SEC for insider trading before announcement of problems with the test.
On September 1, 2011, Sequenom entered into a cease-and-desist order with SEC.
MaterniT21 PLUS
MaterniT21 PLUS is Sequenom Center for Molecular Medicine's prenatal test for trisomy 21 (Down syndrome), trisomy 18 (Edwards syndrome) and trisomy 13 (Patau syndrome). The test operates by sampling cell-free DNA in the mother's blood, which contains some DNA from the fetus. The proportions of DNA from sequences from chromosome 21, 18, or 13 can indicate whether the fetus has trisomy in that chromosome. In a randomized controlled trial of 1,696 pregnancies at high risk for Down syndrome, the test correctly identified 98.6% of the actual cases of Down syndrome (209 out of 212), with a false positive rate of 0.2% (3 of 1471 pregnancies without Down); the test gave no result in 0.8% of the cases tested (13 of 1696).
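The trial figures quoted above correspond to the following simple rates (a Python sketch using only the counts given in this paragraph):

detected, cases = 209, 212            # Down syndrome cases correctly identified
false_pos, unaffected = 3, 1471       # false positives among unaffected pregnancies
no_result, tested = 13, 1696          # tests returning no result

print(detected / cases)               # ~0.986 -> 98.6% sensitivity
print(false_pos / unaffected)         # ~0.002 -> 0.2% false positive rate
print(no_result / tested)             # ~0.008 -> 0.8% no-call rate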
The primary advantage of MaterniT21 PLUS over the other major high accuracy tests for Down syndrome, Amniocentesis and Chorionic villus sampling, is that MaterniT21 PLUS is noninvasive. Because amniocentesis and chorionic villus sampling are invasive, they have a chance of causing miscarriage.
History
On August 4, 2011, Sequenom said it would call its new blood test for Down syndrome in pregnancy MaterniT21 when the product went on sale in the United States.
On August 11, 2011, Sequenom announced a European licensing agreement with LifeCodexx. The companies agreed to collaborate in the development and launch of a trisomy 21 laboratory-developed test and other aneuploidy testing in Germany, Austria, Switzerland, and Liechtenstein, with the potential for additional launches in other countries. Under the initial five-year licensing agreement, Sequenom granted LifeCodexx licenses to key patent rights, including European Patent EP0994963B1 and pending application EP2183693A1, that enable the development and commercialization of a non-invasive aneuploidy test utilizing circulating cell-free fetal DNA in maternal plasma.
On October 24, 2011 the International Society for Prenatal Diagnosis (ISPD) issued a rapid response statement in response to the launch of Sequenom's non-invasive trisomy 21 (MaterniT21) test.
On October 17, 2011 Sequenom announced that a clinical validation study leading to the introduction of the MaterniT21 LDT had been published in the journal Genetics in Medicine. On October 17, 2011 Sequenom Center for Molecular Medicine announced the launch of MaterniT21 Noninvasive Prenatal Test for Down Syndrome.
MassARRAY Analyzer 4
Sequenom Oncomap Version 3 – "core" set interrogates ~450 mutations in 35 genes. An "extended" set interrogates ~700 mutations in 113 genes.
Sequenom OncoCarta(OncoMap) identifies 396 unique "druggable" or "actionable" mutations in 33 cancer genes. In total, 417 mutations are identified.
MassARRAY spectrometry is more sensitive than PreTect HPV-Proofer and Consensus PCR for type-specific detection of high-risk oncogenic human papillomavirus genotypes in cervical cancer.
iPLEX ADME PGx Panel on MassARRAY System
On October 4, 2011 Sequenom introduced iPLEX ADME PGx Panel on MassARRAY System, developed to genotype polymorphisms in genes associated with drug absorption, distribution, metabolism, and excretion (ADME). This Research Use Only (RUO) panel contains a set of pre-designed single nucleotide polymorphisms (SNP), insertions and deletions (INDELS) and copy number variation (CNV) assays for use in the investigation of variants with demonstrated relevance to drug metabolism. After detection on the MassARRAY (RUO) system, a proprietary software solution is then used to score and qualify polymorphisms to create a unique haplotype report.
References
External links
Sequenom's Company Website
Biotechnology companies of the United States
Genomics companies
Microarrays
Companies formerly listed on the Nasdaq
2016 mergers and acquisitions
Companies based in San Diego
2000 initial public offerings | Sequenom | [
"Chemistry",
"Materials_science",
"Biology"
] | 1,914 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Bioinformatics",
"Molecular biology techniques"
] |
21,660,626 | https://en.wikipedia.org/wiki/Cisgenesis | Cisgenesis is a product designation for a category of genetically engineered plants. A variety of classification schemes have been proposed that order genetically modified organisms based on the nature of introduced genotypical changes, rather than the process of genetic engineering.
Cisgenesis (etymology: cis = same side; and genesis = origin) is one term for organisms that have been engineered using a process in which genes are artificially transferred between organisms that could otherwise be conventionally bred. Genes are only transferred between closely related organisms. Nucleic acid sequences must be isolated and introduced using the same technologies that are used to produce transgenic organisms, making cisgenesis similar in nature to transgenesis. The term was first introduced in 2000 by Henk J. Schouten and Henk Jochemsen, and was used in 2004 in a PhD thesis by Jan Schaart of Wageningen University, discussing making strawberries less susceptible to Botrytis cinerea.
In Europe, currently, this process is governed by the same laws as transgenesis. While researchers at Wageningen University in the Netherlands feel that this should be changed and regulated in the same way as conventionally bred plants, other scientists, writing in Nature Biotechnology, have disagreed. In 2012 the European Food Safety Authority (EFSA) issued a report with their risk assessment of cisgenic and intragenic plants. They compared the hazards associated with plants produced by cisgenesis and intragenesis with those obtained either by conventional plant breeding techniques or transgenesis. The EFSA concluded that "similar hazards can be associated with cisgenic and conventionally bred plants, while novel hazards can be associated with intragenic and transgenic plants."
Cisgenesis has been applied to transfer of natural resistance genes to the devastating disease Phytophthora infestans in potato and scab (Venturia inaequalis) in apple.
Cisgenesis and transgenesis use artificial gene transfer, which results in less extensive change to an organism's genome than mutagenesis, which was widely used before genetic engineering was developed.
Some people believe that cisgenesis should not face as much regulatory oversight as genetic modification created through transgenesis as it is possible, if not practical, to transfer alleles among closely related species even by traditional crossing. The primary biological advantage of cisgenesis is that it does not disrupt favorable heterozygous states, particularly in asexually propagated crops such as potato, which do not breed true to seed. One application of cisgenesis is to create blight resistant potato plants by transferring known resistance loci wild genotypes into modern, high yielding varieties.
The Dutch government has proposed to exclude cisgenic plants from the European GMO Regulation, in view of the safety of cisgenic plants compared to classically bred plants, and their contribution to durable food production.
Related classification scheme
A related classification scheme has been proposed by Kaare Nielsen.
References
Genetic engineering | Cisgenesis | [
"Chemistry",
"Engineering",
"Biology"
] | 588 | [
"Biological engineering",
"Genetic engineering",
"Molecular biology"
] |
31,717,950 | https://en.wikipedia.org/wiki/Kinetic%20scheme | In physics, chemistry and related fields, a kinetic scheme is a network of states and connections between them representing a dynamical process. Usually a kinetic scheme represents a Markovian process, while for non-Markovian processes generalized kinetic schemes are used. Figure 1 illustrates a kinetic scheme.
A Markovian kinetic scheme
Mathematical description
A kinetic scheme is a network (a directed graph) of distinct states (although repetition of states may occur, depending on the system), where each pair of states i and j is associated with directional rates $A_{ij}$ (and $A_{ji}$). It is described with a master equation: a first-order differential equation for the probability vector $\mathbf{P}(t)$ of a system to occupy each one of its states at time t, where element $P_i(t)$ represents state i. Written in matrix form, this states: $\dot{\mathbf{P}}(t) = \mathbf{A}\,\mathbf{P}(t)$, where $\mathbf{A}$ is the matrix of connections (rates) $A_{ij}$.
In a Markovian kinetic scheme the connections are constant with respect to time (and the jumping-time probability density function for any state i is an exponential, with a rate equal to the sum of all the exiting connections).
When detailed balance exists in a system, the relation $A_{ij}P_j^{\mathrm{eq}} = A_{ji}P_i^{\mathrm{eq}}$ holds for every pair of connected states i and j, where $P^{\mathrm{eq}}$ denotes the equilibrium occupation probabilities. The result represents the fact that any closed loop in a Markovian network in equilibrium does not have a net flow.
The matrix $\mathbf{A}$ can also represent birth and death, meaning that probability is injected into (birth) or taken from (death) the system, in which case the process is not in equilibrium. These terms are different from a birth–death process, which is simply a linear kinetic scheme.
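As a minimal numerical sketch (Python, with illustrative rates chosen here rather than taken from any particular system), the master equation for a three-state Markovian scheme can be propagated using the formal solution $\mathbf{P}(t) = e^{\mathbf{A}t}\,\mathbf{P}(0)$:

import numpy as np
from scipy.linalg import expm

k12, k21 = 1.0, 0.5   # k_ij denotes the rate from state j to state i (example values)
k23, k32 = 0.3, 0.2

A = np.array([[-k21,          k12,   0.0],
              [ k21, -(k12 + k32),   k23],
              [ 0.0,          k32,  -k23]])
# Each column of A sums to zero, so total probability is conserved.

P0 = np.array([1.0, 0.0, 0.0])        # start in state 1
for t in (0.0, 1.0, 10.0, 100.0):
    P = expm(A * t) @ P0              # P(t) = exp(A t) P(0)
    print(t, P, P.sum())              # P.sum() remains 1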
Specific Markovian kinetic schemes
A birth–death process is a linear one-dimensional Markovian kinetic scheme.
Michaelis–Menten kinetics are a type of a Markovian kinetic scheme when solved with the steady state assumption for the creation of intermediates in the reaction pathway.
Generalizations of Markovian kinetic schemes
A kinetic scheme with time-dependent rates: When the connections depend on the actual time (i.e. the matrix $\mathbf{A}$ depends on time, $\mathbf{A} \to \mathbf{A}(t)$), the process is not Markovian, and the master equation obeys $\dot{\mathbf{P}}(t) = \mathbf{A}(t)\,\mathbf{P}(t)$. A reason for time-dependent rates is, for example, a time-dependent external field applied to a Markovian kinetic scheme (thus making the process non-Markovian).
A semi-Markovian kinetic scheme: When the connections represent multi-exponential jumping-time probability density functions, the process is semi-Markovian, and the equation of motion is an integro-differential equation termed the generalized master equation: $\dot{\mathbf{P}}(t) = \int_0^t \mathbf{K}(t-\tau)\,\mathbf{P}(\tau)\,d\tau$, where $\mathbf{K}$ is the memory kernel.
An example of such a process is a reduced-dimensions form.
The Fokker–Planck equation: when expanding the master equation of the kinetic scheme in a continuous space coordinate, one finds the Fokker–Planck equation.
See also
Markov process
Continuous-time Markov process
Master equation
Detailed balance
Graph theory
Semi-Markov process
State transition system
References
Biophysics
Physical chemistry
Theoretical physics
Statistical mechanics
Stochastic processes
Dynamical systems | Kinetic scheme | [
"Physics",
"Chemistry",
"Mathematics",
"Biology"
] | 582 | [
"Applied and interdisciplinary physics",
"Theoretical physics",
"Biophysics",
"Mechanics",
"nan",
"Statistical mechanics",
"Physical chemistry",
"Dynamical systems"
] |
31,720,794 | https://en.wikipedia.org/wiki/ProtCID | The Protein Common Interface Database (ProtCID) is a database of similar protein-protein interfaces in crystal structures of homologous proteins.
Its main goal is to identify and cluster homodimeric and heterodimeric interfaces observed in multiple crystal forms of homologous proteins. Such interfaces, especially of non-identical proteins or protein complexes, have been associated with biologically relevant interactions.
A common interface in ProtCID indicates chain-chain or domain-domain interactions that occur in different crystal forms. All protein sequences of known structure in the Protein Data Bank (PDB) are assigned a ”Pfam chain architecture”, which denotes the ordered Pfam assignments for that sequence, e.g. (Pkinase) or (Cyclin_N)_(Cyclin_C). Homodimeric interfaces in all crystals that contain particular domain or chain architectures are compared, regardless of whether there are other protein types in the crystals. All interfaces between two different Pfam domains or Pfam architectures in all PDB entries that contain them are also compared (e.g., (Pkinase) and (Cyclin_N)_(Cyclin_C) ). For both homodimers and heterodimers, the interfaces are clustered into common interfaces based on a similarity score.
ProtCID reports the number of crystal forms that contain a common interface, the number of PDB entries, the number of PDB and PISA biological assembly annotations that contain the same interface, the average surface area, and the minimum sequence identity of proteins that contain the interface. ProtCID provides an independent check on publicly available annotations of biological interactions for PDB entries.
ProtCID also contains interface clusters between protein domains and peptides, nucleic acids, and ligands.
See also
Protein-protein interaction
References
External links
http://dunbrack2.fccc.edu/protcid
Protein Interfaces, Surfaces and Assemblies (PISA)
http://www.rcsb.org
Protein databases
Systems biology
Crystallographic databases
Protein structure | ProtCID | [
"Chemistry",
"Materials_science",
"Biology"
] | 432 | [
"Crystallographic databases",
"Crystallography",
"Structural biology",
"Protein structure",
"Systems biology"
] |
31,723,231 | https://en.wikipedia.org/wiki/Cantharidic%20acid | Cantharidic acid is a selective inhibitor of PP2A (protein phosphatase 2A) and PP1 (protein phosphatase 1).
It is the hydrolysis product of cantharidin.
See also
Endothall
References
Phosphatase inhibitors | Cantharidic acid | [
"Chemistry"
] | 58 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
38,754,240 | https://en.wikipedia.org/wiki/Hydrothermal%20liquefaction | Hydrothermal liquefaction (HTL) is a thermal depolymerization process used to convert wet biomass, and other macromolecules, into crude-like oil and renewable chemicals under moderate temperature and high pressure. The crude-like oil has high energy density, with a lower heating value of 33.8-36.9 MJ/kg and 5-20 wt% oxygen. The process has also been called hydrous pyrolysis.
The reaction usually involves homogeneous and/or heterogeneous catalysts to improve the quality of products and yields. Carbon and hydrogen of an organic material, such as biomass, peat or low-ranked coals (lignite) are thermo-chemically converted into hydrophobic compounds with low viscosity and high solubility. Depending on the processing conditions, the fuel can be used as produced for heavy engines, including marine and rail or upgraded to transportation fuels, such as diesel, gasoline or jet-fuels.
The process may be significant in the creation of fossil fuels. Simple heating without water, anhydrous pyrolysis has long been considered to take place naturally during the catagenesis of kerogens to fossil fuels. In recent decades it has been found that water under pressure causes more efficient breakdown of kerogens at lower temperatures than without it. The carbon isotope ratio of natural gas also suggests that hydrogen from water has been added during creation of the gas.
History
As early as the 1920s, the concept of using hot water and alkali catalysts to produce oil out of biomass was proposed. In 1939, U.S. patent 2,177,557 described a two-stage process in which a mixture of water, wood chips, and calcium hydroxide is heated in the first stage at temperatures in a specified range, with the pressure "higher than that of saturated steam at the temperature used." This produces "oils and alcohols", which are collected. The materials are then subjected in a second stage to what is called "dry distillation", which produces "oils and ketones". Temperatures and pressures for this second stage are not disclosed.
These processes were the foundation of later HTL technologies that attracted research interest especially during the 1970s oil embargo. It was around that time that a high-pressure (hydrothermal) liquefaction process was developed at the Pittsburgh Energy Research Center (PERC) and later demonstrated (at the 100 kg/h scale) at the Albany Biomass Liquefaction Experimental Facility at Albany, Oregon, US. In 1982, Shell Oil developed the HTU™ process in the Netherlands. Other organizations that have previously demonstrated HTL of biomass include Hochschule für Angewandte Wissenschaften Hamburg, Germany, SCF Technologies in Copenhagen, Denmark, EPA’s Water Engineering Research Laboratory, Cincinnati, Ohio, USA, and Changing World Technology Inc. (CWT), Philadelphia, Pennsylvania, USA. Today, technology companies such as Licella/Ignite Energy Resources (Australia), Arbios Biotech, a Licella/Canfor joint venture, Altaca Energy (Turkey), Circlia Nordic (Denmark), Steeper Energy (Denmark, Canada) continue to explore the commercialization of HTL. Construction has begun in Teesside, UK, for a catalytic hydrothermal liquefaction plant that aims to process 80,000 tonnes per year of mixed plastic waste by 2022.
Chemical reactions
In hydrothermal liquefaction processes, long carbon-chain molecules in biomass are thermally cracked and oxygen is removed in the form of H2O (dehydration) and CO2 (decarboxylation). These reactions result in the production of a bio-oil with a high H/C ratio. Simplified descriptions of dehydration and decarboxylation reactions can be found in the literature (e.g. Asghari and Yoshida (2006) and Snåre et al. (2007)).
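Schematically, and only as generic templates (R denotes an arbitrary hydrocarbon chain; these are illustrative textbook forms, not reactions taken from the cited papers), the two deoxygenation routes can be written:

decarboxylation: R–COOH → R–H + CO2
dehydration: R–CH2–CH2–OH → R–CH=CH2 + H2O

Both routes remove oxygen from the feedstock, which is what raises the H/C ratio of the resulting bio-oil relative to the raw biomass.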
Process
Most applications of hydrothermal liquefaction operate at temperatures of 250-550 °C and high pressures of 5-25 MPa, often with catalysts, for 20–60 minutes, although higher or lower temperatures can be used to optimize gas or liquid yields, respectively. At these temperatures and pressures, the water present in the biomass becomes either subcritical or supercritical, depending on the conditions, and acts as a solvent, reactant, and catalyst to facilitate the reaction of biomass to bio-oil.
The exact conversion of biomass to bio-oil is dependent on several variables:
Feedstock composition
Temperature and heating rate
Pressure
Solvent
Residence time
Catalysts
Feedstock
Theoretically, any biomass can be converted into bio-oil using hydrothermal liquefaction regardless of water content, and many different biomasses have been tested, from forestry and agricultural residues, sewage sludge and food-processing wastes to emerging non-food biomass such as algae. The composition of cellulose, hemicellulose, protein, and lignin in the feedstock influences the yield and quality of the oil from the process.
Zhang et al., at the University of Illinois, report on a hydrous pyrolysis process in which swine manure is converted to oil by heating the swine manure and water in the presence of carbon monoxide in a closed container. For that process they report that a minimum temperature is required to convert the swine manure to oil, and that temperatures above a certain point reduce the amount of oil produced. The Zhang et al. process produces pressures of about 7 to 18 MPa (1000 to 2600 psi; 69 to 178 atm), with higher temperatures producing higher pressures. Zhang et al. used a retention time of 120 minutes for the reported study, but report that at higher temperatures a time of less than 30 minutes results in significant production of oil.
Barbero-López et al., at the University of Eastern Finland, tested the use of spent mushroom substrate and tomato plant residues as feedstock for hydrothermal liquefaction. They focused on the hydrothermal liquids produced, which are rich in many different constituents, and found that these are potential antifungals against several fungi causing decay in wood, while their ecotoxicity was lower than that of the commercial Cu-based wood preservative. The effectiveness of the antifungal activity of the hydrothermal liquids varied mostly with liquid concentration and strain sensitivity, while the different feedstocks did not have such a significant effect.
A commercialized process using hydrous pyrolysis (see the article Thermal depolymerization) was used by Changing World Technologies, Inc. (CWT) and its subsidiary Renewable Environmental Solutions, LLC (RES) to convert turkey offal. It is a two-stage process: a first stage to convert the turkey offal to hydrocarbons and a second stage to crack the oil into light hydrocarbons. Adams et al. report only that the first-stage heating is "under pressure"; Lemley, in a non-technical article on the CWT process, reports for the first stage (conversion) a pressure of about 600 psi, with a time for the conversion of "usually about 15 minutes". For the second stage (cracking), Lemley reports a higher operating temperature.
Temperature and heating rate
Temperature plays a major role in the conversion of biomass to bio-oil. The temperature of the reaction determines the depolymerization of the biomass to bio-oil, as well as the repolymerization into char. While the ideal reaction temperature is dependent on the feedstock used, temperatures above ideal lead to an increase in char formation and eventually increased gas formation, while lower than ideal temperatures reduce depolymerization and overall product yields.
Similarly to temperature, the rate of heating plays a critical role in the production of the different phase streams, due to the prevalence of secondary reactions at non-optimum heating rates. Secondary reactions become dominant in heating rates that are too low, leading to the formation of char. While high heating rates are required to form liquid bio-oil, there is a threshold heating rate and temperature where liquid production is inhibited and gas production is favored in secondary reactions.
Pressure
Pressure (along with temperature) determines the super- or subcritical state of solvents as well as overall reaction kinetics and the energy inputs required to yield the desirable HTL products (oil, gas, chemicals, char etc.).
Residence Time
Hydrothermal liquefaction is a fast process, resulting in low residence times for depolymerization to occur. Typical residence times are measured in minutes (15 to 60 minutes); however, the residence time is highly dependent on the reaction conditions, including feedstock, solvent ratio and temperature. As such, optimization of the residence time is necessary to ensure a complete depolymerization without allowing further reactions to occur.
Catalysts
While water acts as a catalyst in the reaction, other catalysts can be added to the reaction vessel to optimize the conversion. Previously used catalysts include water-soluble inorganic compounds and salts, including KOH and Na2CO3, as well as transition metal catalysts using nickel, palladium, platinum and ruthenium supported on either carbon, silica or alumina. The addition of these catalysts can lead to an oil yield increase of 20% or greater, due to the catalysts converting the protein, cellulose, and hemicellulose into oil. This ability for catalysts to convert biomaterials other than fats and oils to bio-oil allows for a wider range of feedstock to be used.
Environmental Impact
Biofuels that are produced through hydrothermal liquefaction are carbon neutral, meaning that no net carbon emissions are produced when burning the biofuel. The plant materials used to produce bio-oils use photosynthesis to grow, and as such consume carbon dioxide from the atmosphere. The burning of the biofuels produced releases carbon dioxide into the atmosphere, but this is nearly completely offset by the carbon dioxide consumed in growing the plants, resulting in a release of only 15-18 g of CO2 per kWh of energy produced. This is substantially lower than the release rate of fossil fuel technologies, which can range from 955 g/kWh (coal) and 813 g/kWh (oil) to 446 g/kWh (natural gas). Recently, Steeper Energy announced that the carbon intensity (CI) of its Hydrofaction™ oil is 15 g CO2eq/MJ according to the GHGenius model (version 4.03a), while that of diesel fuel is 93.55 g CO2eq/MJ.
Hydrothermal liquefaction is a clean process that doesn't produce harmful compounds, such as ammonia, NOx, or SOx. Instead the heteroatoms, including nitrogen, sulfur, and chlorine, are converted into harmless byproducts such as N2 and inorganic acids that can be neutralized with bases.
Comparison with pyrolysis and other biomass to liquid technologies
The HTL process differs from pyrolysis as it can process wet biomass and produce a bio-oil that contains approximately twice the energy density of pyrolysis oil. Pyrolysis is a related process to HTL, but biomass must be processed and dried in order to increase the yield. The presence of water in pyrolysis drastically increases the heat of vaporization of the organic material, increasing the energy required to decompose the biomass. Typical pyrolysis processes require a water content of less than 40% to suitably convert the biomass to bio-oil. This requires considerable pretreatment of wet biomass such as tropical grasses, which contain a water content as high as 80-85%, and even further treatment for aquatic species, which can contain higher than 90% water content.
The HTL oil can contain up to 80% of the feedstock carbon content (single pass). HTL oil has good potential to yield bio-oil with "drop-in" properties that can be directly distributed in existing petroleum infrastructure.
The energy returned on energy invested (EROEI) of these processes is uncertain and/or has not been measured. Furthermore, products of hydrous pyrolysis might not meet current fuel standards. Further processing may be required to produce fuels.
See also
Gasification
Pyrolysis
Thermal decomposition
Thermal depolymerization
References
External links
A Possible Deep-Basin High-Rank Gas Machine Via Water Organic-Matter Redox Reactions, Leigh C. Price
Surreptitiously converting dead matter into oil and coal - Water, Water Everywhere, Science News, February 20, 1993, Elizabeth Pennisi
Hydrogen isotope systematics of thermally generated natural gases, Chris Clayton
Organic reactions
Chemical processes
Industrial processes
Biodegradable waste management
Waste treatment technology | Hydrothermal liquefaction | [
"Chemistry",
"Engineering"
] | 2,654 | [
"Water treatment",
"Biodegradable waste management",
"Organic reactions",
"Chemical processes",
"Biodegradation",
"nan",
"Environmental engineering",
"Chemical process engineering",
"Waste treatment technology"
] |
38,754,853 | https://en.wikipedia.org/wiki/Initial%20and%20final%20state%20radiation | In quantum field theory, initial and final state radiation refers to certain kinds of radiative emissions that are not due to particle annihilation. It is important in experimental and theoretical studies of interactions at particle colliders.
Explanation of initial and final states
Particle accelerators and colliders produce collisions (interactions) of particles (like the electron or the proton). In the terminology of the quantum state, the colliding particles form the Initial State. In the collision, particles can be annihilated and/or exchanged, producing possibly different sets of particles, the Final States. The Initial and Final States of the interaction are related through the so-called scattering matrix (S-matrix).
The probability amplitude for a transition of a quantum system from the initial state having state vector $|i\rangle$ to the final state vector $|f\rangle$ is given by the scattering matrix element

$$S_{fi} = \langle f\,|\,S\,|\,i\rangle,$$

where $S$ is the S-matrix.
Electron-positron annihilation example
The electron-positron annihilation interaction

$$e^{+} + e^{-} \to 2\gamma$$

has a contribution from the second-order Feynman diagram shown adjacent:
In the initial state (at the bottom; early time) there is one electron (e−) and one positron (e+) and in the final state (at the top; late time) there are two photons (γ).
Other states are possible. For example, at LEP, $e^{+}e^{-} \to e^{+}e^{-}$ or $e^{+}e^{-} \to \mu^{+}\mu^{-}$ are processes where the initial state is an electron and a positron colliding to produce an electron and a positron or two muons of opposite charge: the final states.
Phenomenology
In the case of initial-state radiation, one of the incoming particles emits radiation (such as a photon) before the interaction with the others, which reduces the beam energy prior to the momentum transfer; in the case of final-state radiation, the scattered particles emit radiation after the momentum transfer has already occurred, so the energy of the outgoing particles is reduced.
In analogy with bremsstrahlung, if the radiation is electromagnetic it is sometimes called beam-strahlung; similarly, there can be gluon-strahlung (as shown in the Feynman figure with the gluon) in the case of QCD.
Computational issues
In these simple cases, no automatic calculation software packages are needed, and the analytical expression for the cross-section can easily be derived, at least in the lowest approximation: the Born approximation, also called the leading order or the tree level (as the Feynman diagrams have only trunk and branches, no loops). Interactions at higher energies, however, open a large spectrum of possible final states and consequently increase the number of processes to compute.
The calculation of probability amplitudes in theoretical particle physics requires the use of rather large and complicated integrals over a large number of variables. These integrals do, however, have a regular structure, and may be represented graphically as Feynman diagrams. A Feynman diagram is a contribution of a particular class of particle paths, which join and split as described by the diagram. More precisely, and technically, a Feynman diagram is a graphical representation of a perturbative contribution to the transition amplitude or correlation function of a quantum mechanical or statistical field theory. Within the canonical formulation of quantum field theory, a Feynman diagram represents a term in the Wick's expansion of the perturbative S-matrix. Alternatively, the path integral formulation of quantum field theory represents the transition amplitude as a weighted sum of all possible histories of the system from the initial to the final state, in terms of either particles or fields. The transition amplitude is then given as the matrix element of the S-matrix between the initial and the final states of the quantum system.
References
External links
Initial and final state radiation in Z production, A Quantum Diaries Survivor.
Beam-Beam Interaction, D. Schulte
ISR and Beamstrahlung
Quantum field theory | Initial and final state radiation | [
"Physics"
] | 793 | [
"Quantum field theory",
"Quantum mechanics"
] |
38,758,064 | https://en.wikipedia.org/wiki/Order-3%20apeirogonal%20tiling | In geometry, the order-3 apeirogonal tiling is a regular tiling of the hyperbolic plane. It is represented by the Schläfli symbol {∞,3}, having three regular apeirogons around each vertex. Each apeirogon is inscribed in a horocycle.
The order-2 apeirogonal tiling represents an infinite dihedron in the Euclidean plane as {∞,2}.
Images
Each apeirogon face is circumscribed by a horocycle, which looks like a circle in a Poincaré disk model, internally tangent to the projective circle boundary.
Uniform colorings
Like the Euclidean hexagonal tiling, there are 3 uniform colorings of the order-3 apeirogonal tiling, each from different reflective triangle group domains:
Symmetry
The dual to this tiling represents the fundamental domains of [(∞,∞,∞)] (*∞∞∞) symmetry. There are 15 small index subgroups (7 unique) constructed from [(∞,∞,∞)] by mirror removal and alternation. A mirror can be removed if all of its branch orders are even; removing it cuts the neighboring branch orders in half. Removing two mirrors leaves a half-order gyration point where the removed mirrors met. In these images fundamental domains are alternately colored black and white, and mirrors exist on the boundaries between colors. The symmetry can be doubled as ∞∞2 symmetry by adding a mirror bisecting the fundamental domain. Dividing a fundamental domain by 3 mirrors creates a ∞32 symmetry.
A larger subgroup, [(∞,∞,∞*)], index 8, is constructed as (∞*∞∞) with gyration points removed and becomes (*∞∞).
Related polyhedra and tilings
This tiling is topologically related as a part of sequence of regular polyhedra with Schläfli symbol {n,3}.
See also
Tilings of regular polygons
List of uniform planar tilings
List of regular polytopes
Hexagonal tiling honeycomb, similar {6,3,3} honeycomb in H3.
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Apeirogonal tilings
Hyperbolic tilings
Isogonal tilings
Isohedral tilings
Order-3 tilings
Regular tilings | Order-3 apeirogonal tiling | [
"Physics"
] | 504 | [
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Isohedral tilings",
"Symmetry"
] |
38,759,671 | https://en.wikipedia.org/wiki/Truncated%20order-3%20apeirogonal%20tiling | In geometry, the truncated order-3 apeirogonal tiling is a uniform tiling of the hyperbolic plane with a Schläfli symbol of t{∞,3}.
Dual tiling
The dual tiling, the infinite-order triakis triangular tiling, has face configuration V3.∞.∞.
Related polyhedra and tiling
This hyperbolic tiling is topologically related as a part of sequence of uniform truncated polyhedra with vertex configurations (3.2n.2n), and [n,3] Coxeter group symmetry.
See also
List of uniform planar tilings
Tilings of regular polygons
Uniform tilings in hyperbolic plane
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Apeirogonal tilings
Hyperbolic tilings
Isogonal tilings
Order-3 tilings
Truncated tilings
Uniform tilings | Truncated order-3 apeirogonal tiling | [
"Physics"
] | 209 | [
"Truncated tilings",
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Uniform tilings",
"Symmetry"
] |
38,760,393 | https://en.wikipedia.org/wiki/Haplotype%20estimation | In genetics, haplotype estimation (also known as "phasing") refers to the process of statistical estimation of haplotypes from genotype data. The most common situation arises when genotypes are collected at a set of polymorphic sites from a group of individuals. For example in human genetics, genome-wide association studies collect genotypes in thousands of individuals at between 200,000-5,000,000 SNPs using microarrays. Haplotype estimation methods are used in the analysis of these datasets and allow genotype imputation of alleles from reference databases such as the HapMap Project and the 1000 Genomes Project.
Genotypes and haplotypes
Genotypes measure the unordered combination of alleles at each locus, whereas haplotypes represent the genetic information on multiple loci that have been inherited together from an individual's parents. Theoretically, the number of possible haplotypes equals the product of the allele numbers of each locus in consideration. In particular, most SNPs are bi-allelic; therefore, when considering n heterozygous bi-allelic loci, there are 2^(n−1) possible pairs of haplotypes that could underlie the genotypes. For example, when considering two bi-allelic loci A and B (n = 2), of which the alleles are a1 and a2, b1 and b2, respectively, we will have the following haplotypes: a1_b1, a1_b2, a2_b1, and a2_b2 ("_" indicates that the alleles are on the same chromosome).
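As an illustrative sketch (not taken from any particular phasing package), the following Python snippet enumerates the haplotype pairs consistent with a genotype that is heterozygous at n bi-allelic sites, confirming the 2^(n−1) count:

```python
from itertools import product

def compatible_haplotype_pairs(n):
    """Enumerate unordered haplotype pairs consistent with a genotype
    that is heterozygous (alleles 0 and 1) at all n bi-allelic sites.

    Each haplotype is a tuple of 0/1 alleles; its partner must carry the
    complementary allele at every heterozygous site.
    """
    pairs = set()
    for hap in product((0, 1), repeat=n):
        partner = tuple(1 - allele for allele in hap)
        pairs.add(frozenset((hap, partner)))  # unordered pair
    return pairs

for n in range(1, 6):
    print(n, len(compatible_haplotype_pairs(n)))  # prints 1, 2, 4, 8, 16 = 2^(n-1)
```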
Haplotype estimation methods
Many statistical methods have been proposed for estimation of haplotypes. Some of the earliest approaches used a simple multinomial model in which each possible haplotype consistent with the sample was given an unknown frequency parameter and these parameters were estimated with an Expectation–maximization algorithm. These approaches were only able to handle small numbers of sites at once, although sequential versions were later developed, specifically the SNPHAP method.
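For concreteness, here is a minimal sketch of such a multinomial EM approach in Python (in the spirit of early EM-based methods, not the actual SNPHAP implementation; the function names are illustrative). Genotypes are coded per site as 0, 1, or 2 copies of the alternate allele:

```python
from collections import defaultdict
from itertools import product

def compatible_pairs(genotype):
    """All ordered haplotype pairs (h1, h2) consistent with a genotype."""
    choices = {0: [(0, 0)], 2: [(1, 1)], 1: [(0, 1), (1, 0)]}
    pairs = []
    for combo in product(*(choices[g] for g in genotype)):
        h1 = tuple(c[0] for c in combo)
        h2 = tuple(c[1] for c in combo)
        pairs.append((h1, h2))
    return pairs

def em_haplotype_frequencies(genotypes, n_iter=100):
    """Estimate haplotype frequencies by EM over a multinomial model."""
    pair_lists = [compatible_pairs(g) for g in genotypes]
    haps = {h for pairs in pair_lists for p in pairs for h in p}
    freq = {h: 1.0 / len(haps) for h in haps}    # uniform start
    for _ in range(n_iter):
        counts = defaultdict(float)
        for pairs in pair_lists:                 # E-step: posterior over pairs
            weights = [freq[h1] * freq[h2] for h1, h2 in pairs]
            total = sum(weights)
            for (h1, h2), w in zip(pairs, weights):
                counts[h1] += w / total
                counts[h2] += w / total
        norm = sum(counts.values())              # M-step: renormalise counts
        freq = {h: c / norm for h, c in counts.items()}
    return freq

# toy example: three individuals typed at two SNPs
print(em_haplotype_frequencies([(1, 1), (0, 1), (2, 1)]))
```

Because the number of compatible pairs grows exponentially with the number of heterozygous sites, this direct enumeration only works for small numbers of loci, which is exactly the limitation noted above.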
The most accurate and widely used methods for haplotype estimation utilize some form of hidden Markov model (HMM) to carry out inference. For a long time PHASE was the most accurate method. PHASE was the first method to utilize ideas from coalescent theory concerning the joint distribution of haplotypes. This method used a Gibbs sampling approach in which each individual's haplotypes were updated conditional upon the current estimates of haplotypes from all other samples. Approximations to the distribution of a haplotype conditional upon a set of other haplotypes were used for the conditional distributions of the Gibbs sampler. PHASE was used to estimate the haplotypes from the HapMap Project. PHASE was limited by its speed and was not applicable to datasets from genome-wide association studies.
The fastPHASE and BEAGLE methods introduced haplotype cluster models applicable to GWAS-sized datasets. Subsequently, the IMPUTE2 and MaCH methods were introduced; these were similar to the PHASE approach but much faster. These methods iteratively update the haplotype estimates of each sample conditional upon a subset of K haplotype estimates of other samples. IMPUTE2 introduced the idea of carefully choosing which subset of haplotypes to condition on to improve accuracy. Accuracy increases with K, but at a quadratic computational cost.
The SHAPEIT1 method made a major advance by introducing a linear complexity method that operates only on the space of haplotypes consistent with an individual’s genotypes. The HAPI-UR method subsequently proposed a very similar method. SHAPEIT2 combines the best features of SHAPEIT1 and IMPUTE2 to improve efficiency and accuracy.
See also
List of haplotype estimation and genotype imputation software
imputation: predict missing genotypes using known haplotypes
References
Genetics techniques | Haplotype estimation | [
"Engineering",
"Biology"
] | 806 | [
"Genetics techniques",
"Genetic engineering"
] |
38,763,807 | https://en.wikipedia.org/wiki/Infinite-order%20square%20tiling | In geometry, the infinite-order square tiling is a regular tiling of the hyperbolic plane. It has Schläfli symbol of {4,∞}. All vertices are ideal, located at "infinity", seen on the boundary of the Poincaré hyperbolic disk projection.
Uniform colorings
There is a half symmetry form, seen with alternating colors:
Symmetry
This tiling represents the mirror lines of *∞∞∞∞ symmetry. The dual to this tiling defines the fundamental domains of (*2∞) orbifold symmetry.
Related polyhedra and tiling
This tiling is topologically related as a part of sequence of regular polyhedra and tilings with vertex figure (4n).
See also
Square tiling
Uniform tilings in hyperbolic plane
List of regular polytopes
References
External links
Hyperbolic and Spherical Tiling Gallery
Hyperbolic tilings
Infinite-order tilings
Isogonal tilings
Isohedral tilings
Regular tilings
Square tilings | Infinite-order square tiling | [
"Physics"
] | 201 | [
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Isohedral tilings",
"Symmetry"
] |
38,764,006 | https://en.wikipedia.org/wiki/Snub%20tetraapeirogonal%20tiling | In geometry, the snub tetraapeirogonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of sr{∞,4}.
Images
Drawn in chiral pairs, with edges missing between black triangles:
Related polyhedra and tiling
The snub tetraapeirogonal tiling is last in an infinite series of snub polyhedra and tilings with vertex figure 3.3.4.3.n.
See also
Square tiling
Tilings of regular polygons
List of uniform planar tilings
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
Chiral figures
Hyperbolic tilings
Infinite-order tilings
Isogonal tilings
Snub tilings
Uniform tilings | Snub tetraapeirogonal tiling | [
"Physics",
"Chemistry"
] | 201 | [
"Snub tilings",
"Isogonal tilings",
"Tessellation",
"Chirality",
"Hyperbolic tilings",
"Uniform tilings",
"Chiral figures",
"Symmetry"
] |
38,768,125 | https://en.wikipedia.org/wiki/Shallow%20minor | In graph theory, a shallow minor or limited-depth minor is a restricted form of a graph minor in which the subgraphs that are contracted to form the minor have small diameter. Shallow minors were introduced by , who attributed their invention to Charles E. Leiserson and Sivan Toledo.
Definition
One way of defining a minor of an undirected graph G is by specifying a subgraph H of G, and a collection of disjoint subsets Si of the vertices of G, each of which forms a connected induced subgraph Hi of H. The minor has a vertex vi for each subset Si, and an edge vivj whenever there exists an edge from Si to Sj that belongs to H.
In this formulation, a d-shallow minor (alternatively called a shallow minor of depth d) is a minor that can be defined in such a way that each of the subgraphs Hi has radius at most d, meaning that it contains a central vertex ci that is within distance d of all the other vertices of Hi. Note that this distance is measured by hop count in Hi, and because of that it may be larger than the distance in G.
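To make the definition concrete, here is a small hypothetical sketch using the networkx library that checks whether a proposed collection of branch sets certifies a d-shallow minor; the function name is illustrative, not from any standard package:

```python
import networkx as nx

def is_d_shallow_minor_model(G, branch_sets, minor_edges, d):
    """Check that disjoint vertex sets of G certify a d-shallow minor.

    branch_sets: list of vertex sets S_i; minor_edges: edges (i, j) of the
    claimed minor. Each S_i must induce a connected subgraph of radius <= d
    (distances measured inside the subgraph, as the definition requires),
    and each minor edge must be witnessed by a G-edge between the sets.
    """
    seen = set()
    for S in branch_sets:
        if seen & set(S):                      # sets must be disjoint
            return False
        seen |= set(S)
        H = G.subgraph(S)
        if not nx.is_connected(H) or nx.radius(H) > d:
            return False
    for i, j in minor_edges:
        Si, Sj = branch_sets[i], branch_sets[j]
        if not any(G.has_edge(u, v) for u in Si for v in Sj):
            return False
    return True

# example: a 4-cycle contracted to a single edge (each set has radius <= 1)
G = nx.cycle_graph(4)
print(is_d_shallow_minor_model(G, [{0, 1}, {2, 3}], [(0, 1)], d=1))  # True
```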
Special cases
Shallow minors of depth zero are the same thing as subgraphs of the given graph. For sufficiently large values of d (including all values at least as large as the number of vertices), the d-shallow minors of a given graph coincide with all of its minors.
Classification of graph families
Nešetřil and Ossona de Mendez use shallow minors to partition the families of finite graphs into two types. They say that a graph family F is somewhere dense if there exists a finite value of d for which the d-shallow minors of graphs in F consist of every finite graph. Otherwise, they say that a graph family is nowhere dense. This terminology is justified by the fact that, if F is a nowhere dense class of graphs, then (for every ε > 0) the n-vertex graphs in F have O(n1 + ε) edges; thus, the nowhere dense graphs are sparse graphs.
A more restrictive type of graph family, described similarly, are the graph families of bounded expansion. These are graph families for which there exists a function f such that the ratio of edges to vertices in every d-shallow minor is at most f(d). If this function exists and is bounded by a polynomial, the graph family is said to have polynomial expansion.
Separator theorems
As Plotkin, Rao, and Smith showed, graphs with excluded shallow minors can be partitioned analogously to the planar separator theorem for planar graphs. In particular, if the complete graph Kh is not a d-shallow minor of an n-vertex graph G, then there exists a subset S of G, with size O(dh2 log n + n/d), such that each connected component of G\S has at most 2n/3 vertices. The result is constructive: there exists a polynomial time algorithm that either finds such a separator, or a d-shallow Kh minor.
As a consequence they showed that every minor-closed graph family obeys a separator theorem almost as strong as the one for planar graphs.
Plotkin et al. also applied this result to the partitioning of finite element method meshes in higher dimensions; for this generalization, shallow minors are necessary, as (with no depth restriction) the family of meshes in three or more dimensions has all graphs as minors. These results have since been extended to a broader class of high-dimensional graphs.
More generally, a hereditary graph family has a separator theorem where the separator size is a sublinear power of n if and only if it has polynomial expansion.
Notes
References
.
.
.
.
.
Graph minor theory | Shallow minor | [
"Mathematics"
] | 744 | [
"Mathematical relations",
"Graph minor theory",
"Graph theory"
] |
1,466,225 | https://en.wikipedia.org/wiki/Allee%20effect | The Allee effect is a phenomenon in biology characterized by a correlation between population size or density and the mean individual fitness (often measured as per capita population growth rate) of a population or species.
History and background
Although the concept of Allee effect had no title at the time, it was first described in the 1930s by its namesake, Warder Clyde Allee. Through experimental studies, Allee was able to demonstrate that goldfish have a greater survival rate when there are more individuals within the tank. This led him to conclude that aggregation can improve the survival rate of individuals, and that cooperation may be crucial in the overall evolution of social structure. The term "Allee principle" was introduced in the 1950s, a time when the field of ecology was heavily focused on the role of competition among and within species. The classical view of population dynamics stated that due to competition for resources, a population will experience a reduced overall growth rate at higher density and increased growth rate at lower density. In other words, individuals in a population would be better off when there are fewer individuals around due to a limited amount of resources (see logistic growth). However, the concept of the Allee effect introduced the idea that the reverse holds true when the population density is low. Individuals within a species often require the assistance of another individual for more than simple reproductive reasons in order to persist. The most obvious example of this is observed in animals that hunt for prey or defend against predators as a group.
Definition
The generally accepted definition of Allee effect is positive density dependence, or the positive correlation between population density and individual fitness. It is sometimes referred to as "undercrowding" and it is analogous (or even considered synonymous by some) to "depensation" in the field of fishery sciences. Listed below are a few significant subcategories of the Allee effect used in the ecology literature.
Component vs. demographic Allee effects
The component Allee effect is the positive relationship between any measurable component of individual fitness and population density. The demographic Allee effect is the positive relationship between the overall individual fitness and population density.
The distinction between the two terms lies on the scale of the Allee effect: the presence of a demographic Allee effect suggests the presence of at least one component Allee effect, while the presence of a component Allee effect does not necessarily result in a demographic Allee effect. For example, cooperative hunting and the ability to more easily find mates, both influenced by population density, are component Allee effects, as they influence individual fitness of the population. At low population density, these component Allee effects would add up to produce an overall demographic Allee effect (increased fitness with higher population density). When population density reaches a high number, negative density dependence often offsets the component Allee effects through resource competition, thus erasing the demographic Allee effect. Allee effects might occur even at high population density for some species.
Strong vs. weak Allee effects
The strong Allee effect is a demographic Allee effect with a critical population size or density. The weak Allee effect is a demographic Allee effect without a critical population size or density.
The distinction between the two terms is based on whether or not the population in question exhibits a critical population size or density. A population exhibiting a weak Allee effect will possess a reduced per capita growth rate (directly related to individual fitness of the population) at lower population density or size. However, even at this low population size or density, the population will always exhibit a positive per capita growth rate. Meanwhile, a population exhibiting a strong Allee effect will have a critical population size or density under which the population growth rate becomes negative. Therefore, when the population density or size falls below this threshold, the population will be destined for extinction without any further aid. A strong Allee effect is often easier to demonstrate empirically using time series data, as one can pinpoint the population size or density at which per capita growth rate becomes negative.
Mechanisms
Due to its definition as the positive correlation between population density and average fitness, the mechanisms for which an Allee effect arises are therefore inherently tied to survival and reproduction. In general, these Allee effect mechanisms arise from cooperation or facilitation among individuals in the species. Examples of such cooperative behaviors include better mate finding, environmental conditioning, and group defense against predators. As these mechanisms are more-easily observable in the field, they tend to be more commonly associated with the Allee effect concept. Nevertheless, mechanisms of Allee effect that are less conspicuous such as inbreeding depression and sex ratio bias should be considered as well.
Ecological mechanism
Although numerous ecological mechanisms for Allee effects exist, the list of most commonly cited facilitative behaviors that contribute to Allee effects in the literature include: mate limitation, cooperative defense, cooperative feeding, and environmental conditioning. While these behaviors are classified in separate categories, they can overlap and tend to be context dependent (will operate only under certain conditions – for example, cooperative defense will only be useful when there are predators or competitors present).
Mate limitation: Mate limitation refers to the difficulty of finding a compatible and receptive mate for sexual reproduction at lower population size or density. This is generally a problem encountered by species that utilize passive reproduction and possess low mobility, such as plankton, plants and sessile invertebrates. For example, wind-pollinated plants would have a lower fitness in sparse populations due to the lower likelihood of pollen successfully landing on a conspecific.
Cooperative defense: Another possible benefit of aggregation is to protect against predation by group anti-predator behavior. Many species exhibit higher rates of predator vigilance behavior per individual at lower density. This increased vigilance might result in less time and energy spent on foraging, thus reducing the fitness of an individual living in smaller groups. One striking example of such shared vigilance is exhibited by meerkats. Meanwhile, other species move in synchrony to confuse and avoid predators, such as schools of sardines and flocks of starlings. The confusion effect that this herding behavior would have on predators will be more effective when more individuals are present.
Cooperative feeding: Certain species also require group foraging in order to survive. As an example, species that hunt in packs, such as the African wild dogs, would not be able to locate and capture prey as efficiently in smaller groups.
Environmental conditioning / habitat alteration: Environmental conditioning generally refers to the mechanism in which individuals work together in order to improve their immediate or future environment for the benefit of the species. This alteration could involve changes in both abiotic (temperature, turbulence, etc.) or biotic (toxins, hormones, etc.) environmental factors. Pacific salmon present a potential case of such component Allee effects, where the density of spawning individuals can affect the survivability of the following generations. Spawning salmon carry marine nutrients they acquired from the ocean as they migrate to freshwater streams to reproduce, which in turn fertilize the surrounding habitat when they die, thus creating a more suitable habitat for the juveniles that would hatch in the following months. While compelling, this case of environmental conditioning by salmon has not been rigorously supported by empirical evidence.
Human induced
Classic economic theory predicts that human exploitation of a population is unlikely to result in species extinction because the escalating costs to find the last few individuals will exceed the fixed price one achieves by selling the individuals on the market. However, when rare species are more desirable than common species, prices for rare species can exceed high harvest costs. This phenomenon can create an "anthropogenic" Allee effect where rare species go extinct but common species are sustainably harvested. The anthropogenic Allee effect has become a standard approach for conceptualizing the threat of economic markets on endangered species. However, the original theory was posited using a one-dimensional analysis of a two-dimensional model. It turns out that a two-dimensional analysis yields an Allee curve in human-exploiter and biological-population space, and that this curve separating species destined to extinction vs. persistence can be complicated. Even very high population sizes can potentially pass through the originally proposed Allee thresholds on predestined paths to extinction.
Genetic mechanisms
Declines in population size can result in a loss of genetic diversity, and owing to genetic variation's role in the evolutionary potential of a species, this could in turn result in an observable Allee effect. As a species' population becomes smaller, its gene pool will be reduced in size as well. One possible outcome from this genetic bottleneck is a reduction in fitness of the species through the process of genetic drift, as well as inbreeding depression. This overall fitness decrease of a species is caused by an accumulation of deleterious mutations throughout the population. Genetic variation within a species could range from beneficial to detrimental. Nevertheless, in a smaller sized gene pool, there is a higher chance of a stochastic event in which deleterious alleles become fixed (genetic drift). While evolutionary theory states that expressed deleterious alleles should be purged through natural selection, purging would be most efficient only at eliminating alleles that are highly detrimental or harmful. Mildly deleterious alleles such as those that act later in life would be less likely to be removed by natural selection, and conversely, newly acquired beneficial mutations are more likely to be lost by random chance in smaller genetic pools than larger ones.
Although the long-term population persistence of several species with low genetic variation has recently prompted debate on the generality of inbreeding depression, there is a variety of empirical evidence for genetic Allee effects. One such case was observed in the endangered Florida panther (Puma concolor coryi). The Florida panther experienced a genetic bottleneck in the early 1990s where the population was reduced to ≈25 adult individuals. This reduction in genetic diversity was correlated with defects that include lower sperm quality, abnormal testosterone levels, cowlicks, and kinked tails. In response, a genetic rescue plan was put in motion and several female pumas from Texas were introduced into the Florida population. This action quickly led to the reduction in the prevalence of the defects previously associated with inbreeding depression. Although the timescale for this inbreeding depression is larger than that of the more immediate Allee effects, it has significant implications for the long-term persistence of a species.
Demographic stochasticity
Demographic stochasticity refers to variability in population growth arising from sampling random births and deaths in a population of finite size. In small populations, demographic stochasticity will decrease the population growth rate, causing an effect similar to the Allee effect, which will increase the risk of population extinction. Whether or not demographic stochasticity can be considered a part of the Allee effect is somewhat contentious, however. The most current definition of the Allee effect considers the correlation between population density and mean individual fitness. Therefore, random variation resulting from birth and death events would not be considered part of the Allee effect, as the increased risk of extinction is not a consequence of the changing fates of individuals within the population.
Meanwhile, when demographic stochasticity results in fluctuations of sex ratios, it arguably reduces the mean individual fitness as population declines. For example, a fluctuation in small population that causes a scarcity in one sex would in turn limit the access of mates for the opposite sex, decreasing the fitness of the individuals within the population. This type of Allee effect will likely be more prevalent in monogamous species than polygynous species.
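As a toy illustration (not a model from the Allee literature), a linear birth-death process makes the extinction risk concrete: with per capita birth rate b and death rate d (b > d), the probability of eventual extinction starting from n0 individuals is (d/b)^n0, which is appreciable only for small populations. A short Python simulation of the embedded jump chain confirms this:

```python
import random

def goes_extinct(n0, b=1.0, d=0.8, n_cap=1000):
    """Jump chain of a linear birth-death process: each event is a birth
    with probability b/(b+d), else a death. Reaching n_cap is treated as
    escape (extinction afterwards is vanishingly unlikely when b > d)."""
    n = n0
    p_birth = b / (b + d)
    while 0 < n < n_cap:
        n += 1 if random.random() < p_birth else -1
    return n == 0

random.seed(1)
for n0 in (1, 2, 5, 10):
    est = sum(goes_extinct(n0) for _ in range(2000)) / 2000
    print(n0, round(est, 3), round((0.8 / 1.0) ** n0, 3))  # simulated vs (d/b)^n0
```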
Effects on range-expanding populations
Demographic and mathematical studies demonstrate that the existence of an Allee effect can reduce the speed of range expansion of a population and can even prevent biological invasions.
Recent results based on spatio-temporal models show that the Allee effect can also promote genetic diversity in expanding populations. These results counteract commonly held notions that the Allee effect possesses net adverse consequences. Reducing the growth rate of the individuals ahead of the colonization front simultaneously reduces the speed of colonization and enables a diversity of genes coming from the core of the population to remain on the front. The Allee effect also affects the spatial distribution of diversity. Whereas spatio-temporal models which do not include an Allee effect lead to a vertical pattern of genetic diversity (i.e., a strongly structured spatial distribution of genetic fractions), those including an Allee effect lead to a "horizontal pattern" of genetic diversity (i.e., an absence of genetic differentiation in space).
Mathematical models
A simple mathematical example of an Allee effect is given by the cubic growth model

$$\frac{dN}{dt} = rN\left(\frac{N}{A}-1\right)\left(1-\frac{N}{K}\right),$$

where the population has a negative growth rate for $0 < N < A$, and a positive growth rate for $A < N < K$ (assuming $0 < A < K$).
This is a departure from the logistic growth equation

$$\frac{dN}{dt} = rN\left(1-\frac{N}{K}\right),$$

where
N = population size;
r = intrinsic rate of increase;
K = carrying capacity;
A = critical point; and
dN/dt = rate of increase of the population.
After dividing both sides of the equation by the population size N, the left-hand side represents the per capita population growth rate. In the logistic model this rate depends on the population size N and decreases with increasing N throughout the entire range of population sizes. In contrast, when there is an Allee effect, the per capita growth rate increases with increasing N over some lower range of population sizes.
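A short numerical sketch (with illustrative parameter values only) shows the threshold behavior of the strong Allee model above: trajectories started below the critical point A collapse to extinction, while those started above it approach the carrying capacity K.

```python
def simulate(n0, r=0.1, A=20.0, K=100.0, dt=0.01, t_end=400.0):
    """Euler integration of dN/dt = r*N*(N/A - 1)*(1 - N/K)."""
    n = n0
    for _ in range(int(t_end / dt)):
        n += dt * r * n * (n / A - 1.0) * (1.0 - n / K)
    return n

for n0 in (10.0, 30.0):
    print(n0, round(simulate(n0), 2))  # 10 -> ~0 (extinction), 30 -> ~100 (K)
```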
The Allee effect can be explicitly modeled using birth and death rates. For instance, the equation

$$\frac{dN}{dt} = N\,\bigl(B(N) - D(N)\bigr)$$

has a locally stable equilibrium at $N = 0$ when $D(0) > B(0)$. Here, $B(N)$ and $D(N)$ represent the per capita birth and death rates, respectively. This formulation is especially useful when demographic data is employed to identify parameters or when extending the model to stochastic differential equations.
Spatio-temporal models can take the Allee effect into account as well. A simple example is given by the reaction-diffusion model

$$\frac{\partial N}{\partial t} = D\,\Delta N + rN\left(\frac{N}{A}-1\right)\left(1-\frac{N}{K}\right),$$

where
D = diffusion coefficient; and
$\Delta$ = the one-dimensional Laplace operator.
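A minimal finite-difference sketch of this reaction-diffusion model (explicit scheme, illustrative parameters, zero-flux boundaries; for real work an established PDE solver would be preferable) shows a population patch started above the threshold A spreading outward as a front:

```python
import numpy as np

# discretize dN/dt = D * d2N/dx2 + r*N*(N/A - 1)*(1 - N/K) on [0, L]
L, nx, D, r, A, K = 100.0, 201, 1.0, 0.1, 20.0, 100.0
dx = L / (nx - 1)
dt = 0.2 * dx**2 / D            # step size respecting the diffusion stability limit
N = np.zeros(nx)
N[90:111] = 80.0                # initial population patch above the threshold A

for _ in range(2000):           # integrate to t = 100
    lap = np.zeros_like(N)
    lap[1:-1] = (N[2:] - 2 * N[1:-1] + N[:-2]) / dx**2
    lap[0] = 2 * (N[1] - N[0]) / dx**2       # zero-flux (Neumann) boundaries
    lap[-1] = 2 * (N[-2] - N[-1]) / dx**2
    N = N + dt * (D * lap + r * N * (N / A - 1.0) * (1.0 - N / K))

print(round(N.max(), 1), round(N[0], 1))  # interior near K, far edges still near 0
```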
When a population is made up of small sub-populations additional factors to the Allee effect arise.
If the sub-populations are subject to different environmental variations (i.e. separated enough that a disaster could occur at one sub-population site without affecting the other sub-populations) but still allow individuals to travel between sub-populations, then the individual sub-populations are more likely to go extinct than the total population. In the case of a catastrophic event decreasing numbers at a sub-population, individuals from another sub-population site may be able to repopulate the area.
If all sub-populations are subject to the same environmental variations (i.e. if a disaster affected one, it would affect them all) then fragmentation of the population is detrimental to the population and increases extinction risk for the total population. In this case, the species receives none of the benefits of a small sub-population (loss of the sub-population is not catastrophic to the species as a whole) and all of the disadvantages (inbreeding depression, loss of genetic diversity and increased vulnerability to environmental instability) and the population would survive better unfragmented.
Allee principles of aggregation
Clumping results due to individuals aggregating in response to: local habitat or landscape differences, daily and seasonal weather changes, reproductive processes, or as the result of social attractions.
References
Further reading
External links
Berryman, AA (1997). Underpopulation (Allee) effects, Entomology Department, Washington State University. Retrieved 19 May 2008.
Allee effect, Warner College of Natural Resources, Colorado State University. Retrieved 19 May 2008.
Stephens, PA, Sutherland, WJ and Freckleton, RP (1999). "What is the Allee effect?" (summary), Oikos, 87, 185–90, at Evolutionary Biology Group, Department of Zoology, University of Oxford. Updated 22 November 2005. Retrieved 19 May 2008
Classics: the Allee effect
Population dynamics
Ethology
Mathematical and theoretical biology | Allee effect | [
"Mathematics",
"Biology"
] | 3,225 | [
"Behavior",
"Mathematical and theoretical biology",
"Applied mathematics",
"Behavioural sciences",
"Ethology"
] |
1,466,952 | https://en.wikipedia.org/wiki/Advanced%20glycation%20end-product | Advanced glycation end products (AGEs) are proteins or lipids that become glycated as a result of exposure to sugars. They are a bio-marker implicated in aging and the development, or worsening, of many degenerative diseases, such as diabetes, atherosclerosis, chronic kidney disease, and Alzheimer's disease.
Dietary sources
Animal-derived foods that are high in fat and protein are generally AGE-rich and are prone to further AGE formation during cooking. However, only low molecular weight AGEs are absorbed through diet, and vegetarians have been found to have higher concentrations of overall AGEs compared to non-vegetarians. Therefore, it is unclear whether dietary AGEs contribute to disease and aging, or whether only endogenous AGEs (those produced in the body) matter. This does not rule out a negative dietary influence on AGEs, but it implies that dietary AGEs may deserve less attention than other aspects of diet that lead to elevated blood sugar levels and the formation of AGEs.
Effects
AGEs affect nearly every type of cell and molecule in the body and are thought to be one factor in aging and some age-related chronic diseases. They are also believed to play a causative role in the vascular complications of diabetes mellitus.
AGEs arise under certain pathologic conditions, such as oxidative stress due to hyperglycemia in patients with diabetes. AGEs play a role as proinflammatory mediators in gestational diabetes as well.
In the context of cardiovascular disease, AGEs can induce crosslinking of collagen, which can cause vascular stiffening and entrapment of low-density lipoprotein particles (LDL) in the artery walls. AGEs can also cause glycation of LDL which can promote its oxidation. Oxidized LDL is one of the major factors in the development of atherosclerosis. Finally, AGEs can bind to RAGE (receptor for advanced glycation end products) and cause oxidative stress as well as activation of inflammatory pathways in vascular endothelial cells.
In other diseases
AGEs have been implicated in Alzheimer's Disease, cardiovascular disease, and stroke. The mechanism by which AGEs induce damage is through a process called cross-linking that causes intracellular damage and apoptosis. They form photosensitizers in the crystalline lens, which has implications for cataract development. Reduced muscle function is also associated with AGEs.
Pathology
AGEs have a range of pathological effects, such as:
Increased vascular permeability.
Increased arterial stiffness.
Inhibition of vascular dilation by interfering with nitric oxide.
Oxidizing LDL.
Binding to cells—including macrophages, endothelial cells, and mesangial cells—to induce the secretion of a variety of cytokines.
Enhanced oxidative stress.
Hemoglobin-AGE levels are elevated in diabetic individuals, and other AGE proteins have been shown in experimental models to accumulate with time, increasing from 5- to 50-fold over periods of 5–20 weeks in the retina, lens and renal cortex of diabetic rats. The inhibition of AGE formation reduced the extent of nephropathy in diabetic rats. Therefore, substances that inhibit AGE formation may limit the progression of disease and may offer new tools for therapeutic intervention in AGE-mediated disease.
AGEs have specific cellular receptors; the best-characterized are those called RAGE. The activation of cellular RAGE on endothelium, mononuclear phagocytes, and lymphocytes triggers the generation of free radicals and the expression of inflammatory gene mediators. Such increases in oxidative stress lead to the activation of the transcription factor NF-κB and promote the expression of NF-κB regulated genes that have been associated with atherosclerosis.
Reactivity
Proteins are usually glycated through their lysine residues. In humans, histones in the cell nucleus are richest in lysine, and therefore form the glycated protein N(6)-Carboxymethyllysine (CML).
A receptor nicknamed RAGE, from receptor for advanced glycation end products, is found on many cells, including endothelial cells, smooth muscle cells, and cells of the immune system from tissues such as lung, liver, and kidney. This receptor, when binding AGEs, contributes to age- and diabetes-related chronic inflammatory diseases such as atherosclerosis, asthma, arthritis, myocardial infarction, nephropathy, retinopathy, periodontitis and neuropathy. The pathogenesis of this process is hypothesized to involve activation of the nuclear factor kappa B (NF-κB) pathway following AGE binding. NF-κB controls several genes which are involved in inflammation. AGEs can be detected and quantified using bioanalytical and immunological methods.
Clearance
In clearance, or the rate at which a substance is removed or cleared from the body, it has been found that the cellular proteolysis of AGEs—the breakdown of proteins—produces AGE peptides and "AGE free adducts" (AGE adducts bound to single amino acids). These latter, after being released into the plasma, can be excreted in the urine.
Nevertheless, the resistance of extracellular matrix proteins to proteolysis renders their advanced glycation end products less conducive to being eliminated. While the AGE free adducts are released directly into the urine, AGE peptides are endocytosed by the epithelial cells of the proximal tubule and then degraded by the endolysosomal system to produce AGE amino acids. It is thought that these acids are then returned to the kidney's inside space, or lumen, for excretion.
AGE free adducts are the major form through which AGEs are excreted in urine, with AGE-peptides occurring to a lesser extent but accumulating in the plasma of patients with chronic kidney failure.
Larger, extracellularly derived AGE proteins cannot pass through the basement membrane of the renal corpuscle and must first be degraded into AGE peptides and AGE free adducts. Peripheral macrophages as well as liver sinusoidal endothelial cells and Kupffer cells have been implicated in this process, although the real-life involvement of the liver has been disputed.
Large AGE proteins unable to enter the Bowman's capsule are capable of binding to receptors on endothelial and mesangial cells and to the mesangial matrix. Activation of RAGE induces production of a variety of cytokines, including TNFβ, which mediates an inhibition of metalloproteinase and increases production of mesangial matrix, leading to glomerulosclerosis and decreasing kidney function in patients with unusually high AGE levels.
Although they are the only form suitable for urinary excretion, the breakdown products of AGEs—that is, peptides and free adducts—are more aggressive than the AGE proteins from which they are derived, and they can perpetuate related pathology in diabetic patients, even after hyperglycemia has been brought under control.
Some AGEs have an innate catalytic oxidative capacity, while activation of NAD(P)H oxidase through activation of RAGE and damage to mitochondrial proteins leading to mitochondrial dysfunction can also induce oxidative stress. A 2007 study found that AGEs could significantly increase expression of TGF-β1, CTGF, Fn mRNA in NRK-49F cells through enhancement of oxidative stress, and suggested that inhibition of oxidative stress might underlie the effect of ginkgo biloba extract in diabetic nephropathy. The authors suggested that antioxidant therapy might help prevent the accumulation of AGEs and induced damage. In the end, effective clearance is necessary, and those suffering AGE increases because of kidney dysfunction might require a kidney transplant.
In diabetics who have an increased production of AGEs, kidney damage reduces the subsequent urinary removal of AGEs, forming a positive feedback loop that increases the rate of damage. In a 1997 study, diabetic and healthy subjects were given a single meal of egg white (56 g protein), cooked with or without 100 g of fructose; there was a greater than 200-fold increase in AGE immunoreactivity from the meal with fructose.
Potential therapy
AGEs are the subject of ongoing research. There are three therapeutic approaches: preventing the formation of AGEs, breaking crosslinks after they are formed and preventing their negative effects.
Compounds that have been found to inhibit AGE formation in the laboratory include Vitamin C, Agmatine, benfotiamine, pyridoxamine, alpha-lipoic acid, taurine, pimagedine, aspirin, carnosine, metformin, pioglitazone, and pentoxifylline. Activation of the TRPA-1 receptor by lipoic acid or podocarpic acid has been shown to reduce the levels of AGES by enhancing the detoxification of methylglyoxal, a major precursor of several AGEs.
Studies in rats and mice have found that natural phenols such as resveratrol and curcumin can prevent the negative effects of the AGEs.
Compounds that are thought to break some existing AGE crosslinks include Alagebrium (and related ALT-462, ALT-486, and ALT-946) and N-phenacyl thiazolium bromide. One in vitro study shows that rosmarinic acid outperforms the AGE-breaking potential of ALT-711.
There is, however, no agent known that can break down the most common AGE, glucosepane, which appears to be 10 to 1,000 times more common in human tissue than any other cross-linking AGE.
Some chemicals, on the other hand, like aminoguanidine, might limit the formation of AGEs by reacting with 3-deoxyglucosone.
See also
Glycosylation
Glyoxalase system
Methylglyoxal
Raw foodism
N(6)-Carboxymethyllysine
Lipofuscin
References
Biomolecules
Post-translational modification
Senescence
Advanced glycation end-products | Advanced glycation end-product | [
"Chemistry",
"Biology"
] | 2,104 | [
"Carbohydrates",
"Natural products",
"Biochemistry",
"Gene expression",
"Biochemical reactions",
"Senescence",
"Organic compounds",
"Post-translational modification",
"Cellular processes",
"Biomolecules",
"Molecular biology",
"Structural biology",
"Advanced glycation end-products",
"Metabo... |
1,467,246 | https://en.wikipedia.org/wiki/Solar%20still | A solar still distills water with substances dissolved in it by using the heat of the Sun to evaporate water so that it may be cooled and collected, thereby purifying it. They are used in areas where drinking water is unavailable, so that clean water is obtained from dirty water or from plants by exposing them to sunlight.
Still types include large scale concentrated solar stills and condensation traps. In a solar still, impure water is contained outside the collector, where it is evaporated by sunlight shining through a transparent collector. The pure water vapour condenses on the cool inside surface and drips into a tank.
Distillation replicates the way nature makes rain. The sun's energy heats water to the point of evaporation. As the water evaporates, its vapour rises, condensing into water again as it cools. This process leaves behind impurities, such as salts and heavy metals, and eliminates microbiological organisms. The result is pure (potable) water.
History
Condensation traps have been in use since the pre-Incan peoples inhabited the Andes.
In 1952, the United States military developed a portable solar still for pilots stranded in the ocean. It featured an inflatable floating plastic ball, with a flexible tube in the side. An inner bag hangs from attachment points on the outer bag. Seawater is poured into the inner bag from an opening in the ball's neck. Fresh water is taken out using the side tube. Output ranged from to of fresh water per day. Similar stills are included in some life raft survival kits, though manual reverse osmosis desalinators have mostly replaced them.
Today, a method for gathering water in moisture traps is taught within the Argentinian Army for use by specialist units expected to conduct extended patrols of more than a week's duration in the Andes' arid border areas.
Methods
Pit still
A collector is placed at the bottom of a pit. Branches are placed vertically in the pit. The branches are long enough to extend over the edge of the pit and form a funnel to direct the water into the collector. A lid is then built over this funnel, using more branches, leaves, grasses, etc. Water is collected each morning.
This method relies on the formation of dew or frost on the receptacle, funnel, and lid. The dew that forms collects on and runs down the outside of the funnel and into the receptacle. This water would typically evaporate with the morning sun and thus vanish, but the lid traps the evaporating water and raises the humidity within the trap, reducing the amount of lost water. The shade produced by the lid also reduces the temperature within the trap, which further reduces the rate of water loss to evaporation.
A solar still can be constructed with two–four stones, plastic film or transparent glass, a central weight to make the funnel, and a container for the condensate. Better materials improve efficiency. A single sheet of plastic can replace the branches and leaves. Greater efficiency arises because the plastic is waterproof, preventing water vapour from escaping. The sheet is attached to the ground on all sides with stones or earth. Weighting the centre of the sheet forms the funnel. Condensate runs down it into the receptacle. One study of pit distillation found that angling the lid at a 30-degree angle captured the most water. The optimal water depth was about .
Transpiration
During photosynthesis plants release water through transpiration. Water can be obtained by enclosing a leafy tree branch in clear plastic, capturing water vapour released by the tree. The plastic allows photosynthesis to continue.
In a 2009 study, variations to the angle of plastic and increasing the internal temperature versus the outside temperature improved output volumes.
Unless relieved, the vapour pressure around the branch can rise so high that the leaves can no longer transpire, requiring the water to be removed frequently.
Alternatively, clumps of grass or small bushes can be placed inside the bag. The foliage must be replaced at regular intervals, particularly if the foliage is uprooted.
Efficiency is greatest when the bag receives maximum sunshine. Soft, pulpy roots yield the greatest amount of liquid for the least amount of effort.
Wick
The wick type solar still is a vapour-tight glass-topped box with an angled roof. Water is poured in from the top. It is heated by sunlight and evaporates. It condenses on the underside of the glass and runs into the connecting pipe at the bottom. Wicks separate the water into banks to increase surface area. The more wicks, the more heat reaches the water.
To aid in absorbing more heat, wicks can be blackened. Glass absorbs less heat than plastic at higher temperatures, although glass is not as flexible.
A plastic net can catch the water before it falls into the container and give it more time to heat.
Additives
When distilling brine or other polluted water, adding a dye can increase the amount of solar radiation absorbed.
Reverse still
A reverse still uses the temperature difference between solar-heated ambient air and the device to condense ambient water vapour. One such device produces water without external power. It features an inverted cone on top to deflect ambient heat in the air, and to keep sunlight off the upper surface of the box. This surface is a sheet of glass coated with multiple layers of a polymer and silver.
It reflects sunlight to reduce surface heating. Residual heat that is not reflected is reemitted in a specific (infrared) wavelength so that it passes through the atmosphere into space. The box can be as much as 15 °C (27 °F) cooler than the ambient temperature. That stimulates condensation, which gathers on the ceiling. This ceiling is coated in a superhydrophobic material, so that the condensate forms into droplets and falls into a collector. A test system yielded approximately 1.3 L/m2 (0.28 gal/ft2) of water per day.
Efficiency
Condensation traps are sources for extending or supplementing existing water sources or supplies. A trap measuring in diameter by deep yields around per day.
Urinating into the pit before adding the receptacle allows some of the urine's water content to be recovered.
A pit still may be too inefficient as a survival still, because of the energy/water required for construction. In desert environments water needs can exceed per day for a person at rest, while still production may average only . Several days of water collection may be required to equal the water lost during construction.
Applications
Remote sites
Solar stills are used in cases where rain, piped, or well water is impractical, such as in remote homes or during power outages. In subtropical hurricane target areas that can lose power for days, solar distillation can provide an alternative source of clean water.
Solar-powered desalination systems can be installed in remote locations where there is little or no infrastructure or energy grid. Solar stills are affordable, eco-friendly, and considered an effective method compared with other conventional distillation techniques; they are especially effective for supplying fresh water to islanders. This makes them ideal for use in rural areas or developing countries where access to clean water is limited.
Survival
Solar stills have been used by ocean-stranded pilots and included in life raft emergency kits.
Using a condensation trap to distill urine will remove the urea and salt, recycling the body's water.
Wastewater treatment
Solar stills have also been used for the treatment of municipal wastewater, the dewatering of sewage sludge as well as for olive mill wastewater management.
See also
Concentrated solar still
Desalination
Freshwater
Mária Telkes
Solar cooker
Solar water disinfection
Steven Callahan
Watermaker
Wikiversity:Solar Seawater Still
References
Patents
External links
Making a solar still
Solar distillation
Distillation
Water technology
Water treatment
Solar power
Survival skills | Solar still | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,629 | [
"Separation processes",
"Water treatment",
"Water pollution",
"Distillation",
"Environmental engineering",
"Water technology"
] |
1,468,744 | https://en.wikipedia.org/wiki/Top%20quark%20condensate | In particle physics, the top quark condensate theory (or top condensation) is an alternative to the Standard Model fundamental Higgs field, where the Higgs boson is a composite field, composed of the top quark and its antiquark. The top quark-antiquark pairs are bound together by a new force called topcolor, analogous to the binding of Cooper pairs in a BCS superconductor, or mesons in the strong interactions. The top quark is very heavy, with a measured mass of approximately 174 GeV (comparable to the electroweak scale), and so its Yukawa coupling is of order unity, suggesting the possibility of strong coupling dynamics at high energy scales. This model attempts to explain how the electroweak scale may match the top quark mass.
History
The idea was described by Yoichiro Nambu and subsequently developed by Miransky, Tanabashi, and Yamawaki (1989) and William A. Bardeen, Christopher T. Hill, and Manfred Lindner (1990), who connected the theory to the renormalization group, and improved its predictions.
The renormalization group reveals that top quark condensation is fundamentally based upon the infrared fixed point for the top quark Higgs-Yukawa coupling, proposed by Pendleton and Ross (1981) and Hill. The "infrared" fixed point originally predicted that the top quark would be heavy, contrary to the prevailing view of the early 1980s. Indeed, the top quark was discovered in 1995 at the large mass of 174 GeV. The infrared-fixed point implies that it is strongly coupled to the Higgs boson at very high energies, corresponding to the Landau pole of the Higgs-Yukawa coupling. At this high scale a bound-state Higgs forms, and in the "infrared", the coupling relaxes to its measured value of order unity by the renormalization group. The Standard Model renormalization group fixed point prediction is about 220 GeV, and the observed top mass is roughly 20% lower than this prediction. The simplest top condensation models are now ruled out by the LHC discovery of the Higgs boson at a mass scale of 125 GeV. However, extended versions of the theory, introducing more particles, can be consistent with the observed top quark and Higgs boson masses.
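As a rough illustration of this quasi-fixed-point focusing, the sketch below integrates the commonly quoted one-loop running of the top Yukawa coupling, keeping only the QCD term and holding the strong coupling g3 fixed for simplicity (a real analysis would run the gauge couplings as well, so no mass prediction should be read off these numbers):

```python
import math

def run_yukawa(y_uv, g3=1.2, t_span=35.0, steps=20000):
    """Integrate the (QCD-only) one-loop RG equation
    16*pi^2 * dy/d(ln mu) = y * (9/2 * y^2 - 8 * g3^2)
    downward from a high scale, with t = ln(mu_UV / mu)."""
    y, dt = y_uv, t_span / steps
    for _ in range(steps):
        beta = y * (4.5 * y**2 - 8.0 * g3**2) / (16.0 * math.pi**2)
        y -= dt * beta          # stepping toward the infrared
    return y

for y_uv in (0.5, 1.0, 3.0, 5.0):
    print(y_uv, round(run_yukawa(y_uv), 3))  # all approach ~ (4/3)*g3 = 1.6
```

Regardless of the value assumed at the high scale, the coupling is focused toward the same infrared value, which is the sense in which the fixed point "predicts" the low-energy Yukawa coupling.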
Future
The composite Higgs boson arises "naturally" in Topcolor models, which are extensions of the Standard Model based on a hypothetical force analogous to quantum chromodynamics. To be "natural", that is, without excessive fine-tuning (i.e. to stabilize the Higgs mass against large radiative corrections), the hypothesis requires new physics at a relatively low energy scale. Placing the new physics at 10 TeV, for instance, makes the model predict a top quark significantly heavier than observed (about 600 GeV vs. 171 GeV). Top Seesaw models, also based upon Topcolor, circumvent this difficulty.
The predicted top quark mass would come into improved agreement with the fixed point if there are many additional Higgs scalars beyond the Standard Model. This may indicate a rich spectrum of new composite Higgs fields at energy scales that can be probed with the LHC and its upgrades.
See also
Bose–Einstein condensation
Fermion condensate
Hierarchy problem
Technicolor (physics)
References
Physics beyond the Standard Model | Top quark condensate | [
"Physics"
] | 711 | [
"Unsolved problems in physics",
"Particle physics",
"Physics beyond the Standard Model"
] |
1,469,133 | https://en.wikipedia.org/wiki/Preferred%20number | In industrial design, preferred numbers (also called preferred values or preferred series) are standard guidelines for choosing exact product dimensions within a given set of constraints.
Product developers must choose numerous lengths, distances, diameters, volumes, and other characteristic quantities. While all of these choices are constrained by considerations of functionality, usability, compatibility, safety or cost, there usually remains considerable leeway in the exact choice for many dimensions.
Preferred numbers serve two purposes:
Using them increases the probability of compatibility between objects designed at different times by different people. In other words, it is one tactic among many in standardization, whether within a company or within an industry, and it is usually desirable in industrial contexts (unless the goal is vendor lock-in or planned obsolescence).
They are chosen such that when a product is manufactured in many different sizes, these will end up roughly equally spaced on a logarithmic scale. They therefore help to minimize the number of different sizes that need to be manufactured or kept in stock.
Preferred numbers represent preferences for simple numbers (such as 1, 2, and 5) multiplied by powers of a convenient base, usually 10.
Renard numbers
In 1870 Charles Renard proposed a set of preferred numbers. His system was adopted in 1952 as international standard ISO 3. Renard's system divides the interval from 1 to 10 into 5, 10, 20, or 40 steps, leading to the R5, R10, R20 and R40 scales, respectively. The factor between two consecutive numbers in a Renard series is approximately constant (before rounding), namely the 5th, 10th, 20th, or 40th root of 10 (approximately 1.58, 1.26, 1.12, and 1.06, respectively), which leads to a geometric sequence. This way, the maximum relative error is minimized if an arbitrary number is replaced by the nearest Renard number multiplied by the appropriate power of 10. Example (R5 series): 1.0, 1.6, 2.5, 4.0, 6.3; the next decade continues 10, 16, 25, 40, 63.
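As an illustration of the construction (not the official ISO 3 tables, which apply their own roundings), the Renard sequence and the nearest-preferred-number replacement can be sketched in a few lines of Python; the function names here are ours, chosen for readability:

```python
from math import floor, log10

def renard(steps: int, decade: int = 0) -> list[float]:
    """One decade of the Rn series: `steps` geometric steps starting at
    10**decade. ISO 3 tabulates specially rounded values (R5 uses 1.6,
    2.5, 4.0, 6.3), so this shows only the underlying sequence."""
    return [10 ** (decade + k / steps) for k in range(steps)]

def nearest_renard(x: float, steps: int = 5) -> float:
    """Replace x > 0 by the closest Rn member, measured on a log scale,
    which is the choice that minimises the relative error."""
    decade = floor(log10(x))
    candidates = renard(steps, decade) + [10.0 ** (decade + 1)]
    return min(candidates, key=lambda c: abs(log10(c / x)))

print([round(v, 2) for v in renard(5)])  # [1.0, 1.58, 2.51, 3.98, 6.31]
print(round(nearest_renard(3.0), 2))     # 2.51, i.e. the R5 value 2.5
```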
E series
The E series is another system of preferred numbers. It consists of the E1, E3, E6, E12, E24, E48, E96 and E192 series. Based on some of the existing manufacturing conventions, the International Electrotechnical Commission (IEC) began work on a new international standard in 1948. The first version of this standard, IEC 63 (renamed to IEC 60063 in 2007), was released in 1952.
It works similarly to the Renard series, except that it subdivides the interval from 1 to 10 into 3, 6, 12, 24, 48, 96 or 192 steps. These subdivisions ensure that when some arbitrary value is replaced with the nearest preferred number, the maximum relative error will be on the order of 40%, 20%, 10%, 5%, etc.
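A short sketch (ours, not from IEC 60063) shows both the construction and the error bound; note that the standardized tables deviate from the pure geometric values in a few places, e.g. E24 lists 2.7 where 10^(10/24) ≈ 2.61:

```python
def e_series(n: int) -> list[float]:
    """n geometric steps per decade: the E-series construction."""
    return [round(10 ** (k / n), 2) for k in range(n)]

def max_relative_error(n: int) -> float:
    """Worst case when an arbitrary value is replaced by its nearest
    series member: half a step on the logarithmic scale."""
    return 10 ** (1 / (2 * n)) - 1

print(e_series(12))
# [1.0, 1.21, 1.47, 1.78, 2.15, 2.61, 3.16, 3.83, 4.64, 5.62, 6.81, 8.25]
print(f"E12 worst-case error: {max_relative_error(12):.1%}")  # about 10%
```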
Use of the E series is mostly restricted to electronic parts like resistors, capacitors, inductors and Zener diodes. Commonly produced dimensions for other types of electrical components are either chosen from the Renard series instead or are defined in relevant product standards (for example wires).
1–2–5 series
In applications for which the R5 series provides too fine a graduation, the 1–2–5 series is sometimes used as a cruder alternative. It is effectively an E3 series rounded to one significant digit:
… 0.1 0.2 0.5 1 2 5 10 20 50 100 200 500 1000 …
This series covers a decade (1:10 ratio) in three steps. Adjacent values differ by factors 2 or 2.5. Unlike the Renard series, the 1–2–5 series has not been formally adopted as an international standard. However, the Renard series R10 can be used to extend the 1–2–5 series to a finer graduation.
This series is used to define the scales for graphs and for instruments that display in a two-dimensional form with a graticule, such as oscilloscopes.
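The use for graph scales suggests a small worked example: snapping a raw axis step to the 1–2–5 series, in the spirit of the "nice number" heuristics that plotting code commonly uses (a sketch under that assumption, not any particular library's API):

```python
from math import floor, log10

def nice_step(raw: float) -> float:
    """Snap a positive raw step size upward to the 1-2-5 series."""
    exponent = floor(log10(raw))
    mantissa = raw / 10 ** exponent  # lies in [1, 10)
    for nice in (1, 2, 5, 10):
        if mantissa <= nice:
            return nice * 10 ** exponent

# An axis spanning 0..734 with about 5 ticks gets a step of 200:
print(nice_step(734 / 5))  # 200
```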
The denominations of most modern currencies, notably the euro and sterling, follow a 1–2–5 series. The United States and Canada follow the approximate 1–2–5 series 1, 5, 10, 25, 50 (cents), $1, $2, $5, $10, $20, $50, $100. The 1–2.5–5 series (... 0.1 0.25 0.5 1 2.5 5 10 ...) is also used by currencies derived from the former Dutch gulden (Aruban florin, Netherlands Antillean gulden, Surinamese dollar), some Middle Eastern currencies (Iraqi and Jordanian dinars, Lebanese pound, Syrian pound), and the Seychellois rupee. However, newer notes introduced in Lebanon and Syria due to inflation follow the standard 1–2–5 series instead.
Convenient numbers
In the 1970s the National Bureau of Standards (NBS) defined a set of convenient numbers to ease metrication in the United States. This system of metric values was described as 1–2–5 series in reverse, with assigned preferences for those numbers which are multiples of 5, 2, and 1 (plus their powers of 10), excluding linear dimensions above 100 mm.
Audio frequencies
ISO 266, Acoustics—Preferred frequencies, defines two different series of audio frequencies for use in acoustical measurements. Both series are referred to the standard reference frequency of 1000 Hz, and use the R10 Renard series from ISO 3, with one using powers of 10, and the other related to the definition of the octave as the frequency ratio 1:2.
For example, a set of nominal center frequencies for use in audio tests and audio test equipment, spanning the audible range in one-third-octave steps, is: 20, 25, 31.5, 40, 50, 63, 80, 100, 125, 160, 200, 250, 315, 400, 500, 630, 800, 1000, 1250, 1600, 2000, 2500, 3150, 4000, 5000, 6300, 8000, 10000, 12500, 16000 and 20000 Hz.
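A sketch of the base-10 variant (our code; ISO 266 itself tabulates the rounded nominal values listed above) generates the series from the R10 mantissas:

```python
# Nominal R10 mantissas as used for one-third-octave center frequencies.
R10_NOMINAL = [1.0, 1.25, 1.6, 2.0, 2.5, 3.15, 4.0, 5.0, 6.3, 8.0]

def center_frequencies(lo: float = 20.0, hi: float = 20000.0) -> list[float]:
    """Nominal center frequencies referenced to 1000 Hz: the R10 series
    repeated across decades, clipped to the audible range."""
    freqs = []
    decade = 1
    while 10 ** decade <= hi:
        freqs += [m * 10 ** decade for m in R10_NOMINAL]
        decade += 1
    return [f for f in freqs if lo <= f <= hi]

print(center_frequencies())  # 20.0, 25.0, 31.5, ..., 16000.0, 20000.0
```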
Computer engineering
When dimensioning computer components, the powers of two are frequently used as preferred numbers:
1 2 4 8 16 32 64 128 256 512 1024 ...
Where a finer grading is needed, additional preferred numbers are obtained by multiplying a power of two with a small odd integer:
1 2 4 8 16 32 64 128 256 512 1024 ...
(×3) 3 6 12 24 48 96 192 384 768 1536 3072 ...
(×5) 5 10 20 40 80 160 320 640 1280 2560 5120 ...
(×7) 7 14 28 56 112 224 448 896 1792 3584 7168 ...
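These rows can be generated mechanically; a minimal sketch (the function name is ours):

```python
def binary_preferred(limit: int = 1024) -> list[int]:
    """Preferred sizes up to `limit`: powers of two, refined where
    needed by the small odd multipliers 3, 5 and 7."""
    values = {m * 2 ** k
              for m in (1, 3, 5, 7)
              for k in range(limit.bit_length())}
    return sorted(v for v in values if v <= limit)

print(binary_preferred(64))
# [1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 20, 24, 28, 32, 40, 48, 56, 64]
```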
In computer graphics, widths and heights of raster images are preferred to be multiples of 16, as many compression algorithms (JPEG, MPEG) divide color images into square blocks of that size. Black-and-white JPEG images are divided into 8×8 blocks. Screen resolutions often follow the same principle.
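In practice this preference shows up as padding: encoders round image dimensions up to the block size. A simplified sketch (real codecs handle chroma subsampling and other details):

```python
def pad_to_block(width: int, height: int, block: int = 16) -> tuple[int, int]:
    """Round dimensions up to a multiple of the codec block size."""
    ceil_to = lambda n: -(-n // block) * block  # ceiling division
    return ceil_to(width), ceil_to(height)

print(pad_to_block(641, 479))  # (656, 480)
```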
Preferred aspect ratios also have an important influence here, e.g., 2:1, 3:2, 4:3, 5:3, 5:4, 8:5, 16:9.
Paper documents, envelopes, and drawing pens
Standard metric paper sizes use the square root of two (√2) as the factor between neighbouring dimensions, rounded to the nearest mm (Lichtenberg series, ISO 216). An A4 sheet, for example, has an aspect ratio very close to √2 and an area very close to 1/16 square metre. An A5 is almost exactly half an A4, and has the same aspect ratio. The factor √2 also appears between the standard pen thicknesses for technical drawings in ISO 9175-1: 0.13, 0.18, 0.25, 0.35, 0.50, 0.70, 1.00, 1.40, and 2.00 mm. This way, the right pen size is available to continue a drawing that has been magnified to a different standard paper size.
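The halving rule is easy to reproduce; this sketch (our code) regenerates the A-series dimensions from the two defining constraints, an area of about 1 m² for A0 and an aspect ratio of √2:

```python
def a_series(n_max: int = 8) -> list[tuple[int, int]]:
    """ISO 216 A-series sheet sizes in mm: A0 has area ~1 m^2 and
    aspect ratio sqrt(2); each later size halves the long side,
    taking dimensions down to whole millimetres."""
    w = round(1000 / 2 ** 0.25)  # A0 short side: 840.9 -> 841 mm
    h = round(1000 * 2 ** 0.25)  # A0 long side: 1189.2 -> 1189 mm
    sizes = [(w, h)]
    for _ in range(n_max):
        w, h = h // 2, w         # halve the long side and swap
        sizes.append((w, h))
    return sizes

for i, (w, h) in enumerate(a_series(4)):
    print(f"A{i}: {w} x {h} mm")  # ends with A4: 210 x 297 mm
```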
Photography
In photography, aperture, exposure, and film speed generally follow powers of 2:
The aperture size controls how much light enters the camera. It is measured in f-stops: f/1, f/1.4, f/2, f/2.8, etc. Full f-stops are a factor of the square root of 2 apart. Camera lens settings often advance in steps of one third of a stop, so each step is a sixth root of 2, rounded to two significant digits: 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.5, 2.8, 3.2, 3.5, 4.0, etc. The spacing is referred to as "one-third of a stop". (The rounding is not exact in cases such as 1.2, 3.5, and 5.6, where the marked values differ slightly from the computed sixth-root-of-2 values.)
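The sixth-root-of-2 construction can be checked directly; a sketch (our code) shows where the conventional lens markings depart from the computed values:

```python
def third_stop_f_numbers(steps: int = 13) -> list[float]:
    """f-numbers in one-third-stop increments: each step is 2**(1/6),
    displayed to two significant digits."""
    return [float(f"{2 ** (k / 6):.2g}") for k in range(steps)]

print(third_stop_f_numbers())
# [1.0, 1.1, 1.3, 1.4, 1.6, 1.8, 2.0, 2.2, 2.5, 2.8, 3.2, 3.6, 4.0]
# Lens barrels mark 1.2 and 3.5 where the computation gives 1.3 and 3.6,
# illustrating that the marked roundings are conventional.
```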
The film speed is a measure of the film's sensitivity to light. It is expressed as ISO values such as "ISO 100". An earlier standard, occasionally still in use, uses the term "ASA" rather than "ISO", referring to the (former) American Standards Association. Measured film speeds are rounded to the nearest preferred number from a modified Renard series including 100, 125, 160, 200, 250, 320, 400, 500, 640, 800... This is the same as the R10′ rounded Renard series, except for the use of 6.4 instead of 6.3, and for more aggressive rounding below ISO 16. Film marketed to amateurs, however, uses a restricted series of powers-of-two multiples and submultiples of ISO 100: 25, 50, 100, 200, 400, 800, 1600 and 3200. Some low-end cameras can only reliably read these values from DX-encoded film cartridges because they lack the extra electrical contacts that would be needed to read the complete series. Some digital cameras extend this binary series to values like 12800, 25600, etc. instead of the modified Renard values 12500, 25000, etc.
The shutter speed controls how long the camera lens is open to receive light. It is expressed in fractions of a second, roughly but not exactly based on powers of 2: 1 second, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, 1/500, and 1/1000 of a second.
Retail packaging
In some countries, consumer-protection laws restrict the number of different prepackaged sizes in which certain products can be sold, in order to make it easier for consumers to compare prices.
An example of such a regulation is the European Union directive on the volume of certain prepackaged liquids (75/106/EEC). It restricts the list of allowed wine-bottle sizes to 0.1, 0.25 (1/4), 0.375 (3/8), 0.5 (1/2), 0.75 (3/4), 1, 1.5, 2, 3, and 5 litres. Similar lists exist for several other types of products. They vary and often deviate significantly from any geometric series in order to accommodate traditional sizes when feasible. Adjacent package sizes in these lists typically differ by factors of 3/2 or 4/3, in some cases even 2, 5/3, or some other ratio of two small integers.
See also
Convenient number
Nominal impedance
Nominal size
Preferred metric sizes
References
Further reading
(NB. This 1943 publication already shows a list of new "preferred values of resistance" following what was adopted by the IEC for standardization since 1948 and standardized as the E series of preferred numbers in IEC 63:1952. For comparison, it also lists "old standard resistance values": 50, 75, 100, 150, 200, 250, 300, 350, 400, 450, 500, 600, 750, continuing in similar steps through the kilohm range, and then 1 Meg, 1.5 Meg, 2.0 Meg, 3.0 Meg, 4.0 Meg, 5.0 Meg, 6.0 Meg, 7.0 Meg, 8.0 Meg, 9.0 Meg, and 10.0 Meg.)
(NB. Shows a list of "old standard resistance values" vs. new "preferred values of resistance" following the later standardized E series of preferred numbers.)
Numbers
Industrial design
Logarithmic scales of measurement | Preferred number | [
"Physics",
"Mathematics",
"Engineering"
] | 2,534 | [
"Industrial design",
"Design engineering",
"Physical quantities",
"Quantity",
"Mathematical objects",
"Logarithmic scales of measurement",
"Arithmetic",
"Design",
"Numbers"
] |
1,469,331 | https://en.wikipedia.org/wiki/AU%20Microscopii | AU Microscopii (AU Mic) is a young red dwarf star located away – about 8 times as far as the closest star after the Sun. The apparent visual magnitude of AU Microscopii is 8.73, which is too dim to be seen with the naked eye. It was given this designation because it is in the southern constellation Microscopium and is a variable star. Like β Pictoris, AU Microscopii has a circumstellar disk of dust known as a debris disk and at least two exoplanets, with the presence of an additional two planets being likely.
Stellar properties
AU Mic is a young star, only 22 million years old, which is less than 1% of the age of the Sun. With a stellar classification of M1 Ve, it is a red dwarf star with a physical radius 75% that of the Sun. Although it has about half the Sun's mass, it radiates only 9% as much luminosity as the Sun. This energy is emitted from the star's outer atmosphere at an effective temperature of 3,700 K, giving it the cool orange-red hue of an M-type star. AU Microscopii is a member of the β Pictoris moving group, and may be gravitationally bound to the binary star system AT Microscopii.
AU Microscopii has been observed in every part of the electromagnetic spectrum from radio to X-ray and is known to undergo flaring activity at all these wavelengths. Its flaring behaviour was first identified in 1973. Underlying these random outbursts is a nearly sinusoidal variation in its brightness with a period of 4.865 days. The amplitude of this variation changes slowly with time: the V-band brightness variation was approximately 0.3 magnitudes in 1971; by 1980 it was merely 0.1 magnitudes.
Planetary system
AU Microscopii's debris disk has an asymmetric structure and an inner gap or hole cleared of debris, which has led a number of astronomers to search for planets orbiting AU Microscopii. By 2007, no searches had led to any detections of planets. However, in 2020 the discovery of a Neptune-sized planet was announced, based on transit observations by TESS. The planet's orbital axis is well aligned with the rotation axis of the parent star, with a misalignment of only 5°.
Since 2018, a second planet, AU Microscopii c, was suspected to exist. It was confirmed in December 2020, after additional transit events were documented by the TESS observatory. A 2024 study that measured the Rossiter–McLaughlin effect for planet c found that the planet is possibly misaligned with the star's rotation axis, but was only able to place poor constraints on the projected obliquity λc.
A third planet in the system had been suspected since 2022 based on transit-timing variations, and was "validated" in 2023, although several possible orbital periods for planet d cannot yet be ruled out. This planet has a mass comparable to that of Earth. Radial velocity observations have also found evidence for a fourth, outer planet as of 2023. Observations of the AU Microscopii system with the James Webb Space Telescope were unable to confirm the presence of previously unknown companions.
Debris disk
All-sky observations with the Infrared Astronomy Satellite revealed faint infrared emission from AU Microscopii. This emission is due to a circumstellar disk of dust, which was first resolved at optical wavelengths in 2003 by Paul Kalas and collaborators using the University of Hawaii 2.2-m telescope on Mauna Kea, Hawaii. This large debris disk is seen nearly edge-on from Earth and measures at least 200 AU in radius. At these large distances from the star, the lifetime of dust in the disk exceeds the age of AU Microscopii. The disk has a gas-to-dust mass ratio of no more than 6:1, much lower than the usually assumed primordial value of 100:1. The debris disk is therefore referred to as "gas-poor", as the primordial gas within the circumstellar system has been mostly depleted. The total amount of dust visible in the disk is estimated to be at least one lunar mass, while the larger planetesimals from which the dust is produced are inferred to hold at least six lunar masses.
The spectral energy distribution of AU Microscopii's debris disk at submillimetre wavelengths indicates the presence of an inner hole in the disk extending to 17 AU, while scattered-light images estimate the inner hole to be 12 AU in radius. Combining the spectral energy distribution with the surface brightness profile yields a smaller estimate of the radius of the inner hole, 1–10 AU. The inner part of the disk is asymmetric and shows structure in the inner 40 AU. The inner structure has been compared with that expected if the disk is influenced by larger bodies or has undergone recent planet formation. The surface brightness (brightness per area) of the disk in the near infrared as a function of projected distance from the star follows a characteristic shape. The inner ~15 AU of the disk appear approximately constant in density, and the brightness is more-or-less flat. Around 15 AU the density and surface brightness begin to decrease: first slowly, roughly in proportion to distance as r^(-1.8); then outside about 43 AU the density and brightness drop much more steeply, roughly as r^(-4.7). This "broken power-law" shape is similar to the shape of the profile of β Pic's disk.
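A piecewise power-law sketch of that radial profile (our code; the break radii and slopes are approximate values from the published scattered-light profiles, not exact fits):

```python
def surface_brightness(r_au: float, s0: float = 1.0) -> float:
    """Broken power-law profile: flat inside ~15 AU, falling roughly
    as r**-1.8 out to ~43 AU, then steeply as r**-4.7 beyond."""
    if r_au <= 15:
        return s0
    if r_au <= 43:
        return s0 * (r_au / 15) ** -1.8
    return s0 * (43 / 15) ** -1.8 * (r_au / 43) ** -4.7

for r in (10, 20, 40, 60, 100):
    print(f"{r:>3} AU: {surface_brightness(r):.3g}")
```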
In October 2015 it was reported that astronomers using the Very Large Telescope (VLT) had detected very unusual outward-moving features in the disk. By comparing the VLT images with those taken by the Hubble Space Telescope in 2010 and 2011 it was found that the wave-like structures are moving away from the star at speeds of up to 10 kilometers per second (22,000 miles per hour). The waves farther away from the star seem to be moving faster than those close to it, and at least three of the features are moving fast enough to escape the gravitational pull of the star. Follow-up observations with the SPHERE instrument on the Very Large Telescope were able to confirm the presence of the fast-moving features, and James Webb Space Telescope observations found similar features within the disk in two NIRCam filters; however, these features have not been detected in the radio with Atacama Large Millimeter Array observations. These fast-moving features have been described as "dust avalanches", where dust particles catastrophically collide into planetesimals within the disk.
Methods of observation
AU Mic's disk has been observed at a variety of different wavelengths, each yielding a different type of information about the system. The light from the disk observed at optical wavelengths is stellar light that has reflected (scattered) off dust particles into Earth's line of sight. Observations at these wavelengths utilize a coronagraphic spot to block the bright light coming directly from the star. Such observations provide high-resolution images of the disk. Because light with a wavelength longer than the size of a dust grain is scattered only poorly, comparing images at different wavelengths (visible and near-infrared, for example) gives information about the sizes of the dust grains in the disk.
Optical observations have been made with the Hubble Space Telescope and Keck Telescopes. The system has also been observed at infrared and sub-millimeter wavelengths with the James Clerk Maxwell Telescope, Spitzer Space Telescope, and the James Webb Space Telescope. This light is emitted directly by dust grains as a result of their internal heat (modified blackbody radiation). The disk cannot be resolved at these wavelengths, so such observations are measurements of the amount of light coming from the entire system. Observations at increasingly longer wavelengths give information about dust particles of larger sizes and at larger distances from the star.
Gallery
See also
List of exoplanets discovered in 2020 - AU Microscopii b and c
List of exoplanets discovered in 2023 - AU Microscopii d
References
External links
Microscopium
M-type main-sequence stars
Circumstellar disks
Microscopii, AU
197481
Articles containing video clips
0803
102409
Durchmusterung objects
Planetary systems with two confirmed planets
Beta Pictoris moving group | AU Microscopii | [
"Astronomy"
] | 1,700 | [
"Microscopium",
"Constellations"
] |