Dataset columns: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list)
2,928,775
https://en.wikipedia.org/wiki/Hofstadter%27s%20butterfly
In condensed matter physics, Hofstadter's butterfly is a graph of the spectral properties of non-interacting two-dimensional electrons in a perpendicular magnetic field in a lattice. The fractal, self-similar nature of the spectrum was discovered in the 1976 Ph.D. work of Douglas Hofstadter and is one of the early examples of modern scientific data visualization. The name reflects the fact that, as Hofstadter wrote, "the large gaps [in the graph] form a very striking pattern somewhat resembling a butterfly." The Hofstadter butterfly plays an important role in the theory of the integer quantum Hall effect and the theory of topological quantum numbers. History The first mathematical description of electrons on a 2D lattice, acted on by a perpendicular homogeneous magnetic field, was studied by Rudolf Peierls and his student R. G. Harper in the 1950s. Hofstadter first described the structure in 1976 in an article on the energy levels of Bloch electrons in perpendicular magnetic fields. It gives a graphical representation of the spectrum of Harper's equation at different frequencies. One key aspect of the mathematical structure of this spectrum – the splitting of energy bands for a specific value of the magnetic field, along a single dimension (energy) – had been previously mentioned in passing by Soviet physicist Mark Azbel in 1964 (in a paper cited by Hofstadter), but Hofstadter greatly expanded upon that work by plotting all values of the magnetic field against all energy values, creating the two-dimensional plot that first revealed the spectrum's uniquely recursive geometric properties. Written while Hofstadter was at the University of Oregon, his paper was influential in directing further research. It predicted on theoretical grounds that the allowed energy level values of an electron in a two-dimensional square lattice, as a function of a magnetic field applied perpendicularly to the system, formed what is now known as a fractal set. That is, the distribution of energy levels for small-scale changes in the applied magnetic field recursively repeats patterns seen in the large-scale structure. "Gplot", as Hofstadter called the figure, was described as a recursive structure in his 1976 article in Physical Review B, written before Benoit Mandelbrot's newly coined word "fractal" was introduced in an English text. Hofstadter also discusses the figure in his 1979 book Gödel, Escher, Bach. The structure became generally known as "Hofstadter's butterfly". David J. Thouless and his team discovered that the butterfly's wings are characterized by Chern integers, which provide a way to calculate the Hall conductance in Hofstadter's model. Confirmation In 1997 the Hofstadter butterfly was reproduced in experiments with a microwave guide equipped with an array of scatterers. The similarity between the mathematical description of the microwave guide with scatterers and Bloch's waves in the magnetic field allowed the reproduction of the Hofstadter butterfly for periodic sequences of the scatterers. In 2001, Christian Albrecht, Klaus von Klitzing, and coworkers realized an experimental setup to test Thouless et al.'s predictions about Hofstadter's butterfly with a two-dimensional electron gas in a superlattice potential. In 2013, three separate groups of researchers independently reported evidence of the Hofstadter butterfly spectrum in graphene devices fabricated on hexagonal boron nitride substrates. 
In this instance the butterfly spectrum results from the interplay between the applied magnetic field and the large-scale moiré pattern that develops when the graphene lattice is oriented with near zero-angle mismatch to the boron nitride. In September 2017, John Martinis's group at Google, in collaboration with the Angelakis group at CQT Singapore, published results from a simulation of 2D electrons in a perpendicular magnetic field using interacting photons in 9 superconducting qubits. The simulation recovered Hofstadter's butterfly, as expected. In 2021 the butterfly was observed in twisted bilayer graphene at the second magic angle. Theoretical model In his original paper, Hofstadter considers the following derivation: a charged quantum particle in a two-dimensional square lattice, with a lattice spacing $a$, is described by a periodic Schrödinger equation, under a perpendicular static homogeneous magnetic field restricted to a single Bloch band. For a 2D square lattice, the tight binding energy dispersion relation is $\varepsilon(k_x,k_y)=2E_0\,(\cos k_x a+\cos k_y a)$, where $\varepsilon$ is the energy function, $\mathbf{k}=(k_x,k_y)$ is the crystal momentum, and $E_0$ is an empirical parameter. The magnetic field $\mathbf{B}=\nabla\times\mathbf{A}$, where $\mathbf{A}$ is the magnetic vector potential, can be taken into account by using Peierls substitution, replacing the crystal momentum with the canonical momentum $\hbar\mathbf{k}\to\hat{\mathbf{p}}-q\mathbf{A}$, where $\hat{\mathbf{p}}$ is the particle momentum operator and $q$ is the charge of the particle ($q=-e$ for the electron, $e$ is the elementary charge). For convenience we choose the gauge $\mathbf{A}=(0,Bx,0)$. Using that $e^{ia\hat{k}_x}$ is the translation operator, so that $e^{ia\hat{k}_x}\psi(x,y)=\psi(x+a,y)$, where $\hat{k}_x=-i\,\partial/\partial x$ and $\psi(x,y)$ is the particle's two-dimensional wave function. One can use $\varepsilon(\hat{\mathbf{k}}-q\mathbf{A}/\hbar)$ as an effective Hamiltonian to obtain the following time-independent Schrödinger equation: $2E_0\left[\cos(a\hat{k}_x)+\cos\!\left(a\hat{k}_y-\tfrac{qBa}{\hbar}x\right)\right]\psi(x,y)=E\,\psi(x,y)$. Considering that the particle can only hop between points in the lattice, we write $\psi(na,ma)=\psi_{n,m}$, where $n$ and $m$ are integers. Hofstadter makes the following ansatz: $\psi_{n,m}=e^{i\nu n}g(m)$, where $\nu$ depends on the energy, in order to obtain Harper's equation (also known as the almost Mathieu operator for $\lambda=1$): $g(m+1)+g(m-1)+2\cos(2\pi m\alpha-\nu)\,g(m)=\varepsilon\,g(m)$, where $\varepsilon=E/E_0$ and $\alpha=\Phi/\Phi_0$; $\alpha$ is proportional to the magnetic flux $\Phi=Ba^2$ through a lattice cell and $\Phi_0=h/e$ is the magnetic flux quantum. The flux ratio can also be expressed in terms of the magnetic length $l_B=\sqrt{\hbar/(eB)}$, such that $\alpha=a^2/(2\pi l_B^2)$. Hofstadter's butterfly is the resulting plot of $\varepsilon$ as a function of the flux ratio $\alpha$, where $\varepsilon$ is the set of all possible energies $E/E_0$ that are a solution to Harper's equation. Solutions to Harper's equation and Wannier treatment Due to the cosine function's properties, the pattern is periodic on $\alpha$ with period 1 (it repeats for each quantum of flux per unit cell). The graph in the region of $\alpha$ between 0 and 1 has reflection symmetry in the lines $\alpha=1/2$ and $\varepsilon=0$. Note that $\varepsilon$ is necessarily bounded between -4 and 4. Harper's equation has the particular property that the solutions depend on the rationality of $\alpha$. By imposing periodicity over $m$, one can show that if $\alpha=p/q$ (a rational number), where $p$ and $q$ are coprime integers, there are exactly $q$ energy bands. For large $q$, the energy bands converge to thin energy bands corresponding to the Landau levels. Gregory Wannier showed that by taking into account the density of states, one can obtain a Diophantine equation that describes the system, as $\tfrac{n}{n_0}=t\alpha+s$, where $t$ and $s$ are integers, and $\tfrac{n}{n_0}$ is the density of states at a given $\alpha$. Here $n$ counts the number of states up to the Fermi energy, and $n_0$ corresponds to the levels of the completely filled band (from $\varepsilon=-4$ to $\varepsilon=4$). This equation characterizes all the solutions of Harper's equation. Most importantly, one can derive that when $\alpha$ is an irrational number, there are infinitely many solutions for $\varepsilon$.
The union of all $\varepsilon(\alpha)$ forms a self-similar fractal that is discontinuous between rational and irrational values of $\alpha$. This discontinuity is nonphysical, and continuity is recovered for a finite uncertainty in $\alpha$ or for lattices of finite size. The scale at which the butterfly can be resolved in a real experiment depends on the system's specific conditions. Phase diagram, conductance and topology The phase diagram of electrons in a two-dimensional square lattice, as a function of a perpendicular magnetic field, chemical potential and temperature, has infinitely many phases. Thouless and coworkers showed that each phase is characterized by an integral Hall conductance, where all integer values are allowed. These integers are known as Chern numbers. See also Aubry–André model References Fractals Condensed matter physics 1976 introductions Hall effect
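The spectrum described above is straightforward to generate numerically: for rational flux $\alpha=p/q$, Bloch's theorem reduces Harper's equation to a $q\times q$ matrix whose eigenvalues give the $q$ bands, and sweeping over many rationals and plotting every eigenvalue against $\alpha$ reproduces the butterfly. The following is a minimal, illustrative sketch (plain Python with NumPy/Matplotlib; the function and variable names are ours, and only a few values of the Bloch phases are sampled rather than the full Brillouin zone):

```python
import numpy as np
import matplotlib.pyplot as plt

def harper_spectrum(p, q, n_phase=4):
    """Eigenvalues of the q x q Harper/almost-Mathieu matrix for flux alpha = p/q.

    Bloch reduction of g(m+1) + g(m-1) + 2*cos(2*pi*m*alpha - nu)*g(m) = (E/E0)*g(m),
    with the Bloch phase k across the magnetic unit cell entering the corner elements.
    """
    alpha = p / q
    energies = []
    for nu in np.linspace(0, 2 * np.pi, n_phase, endpoint=False):
        for k in np.linspace(0, 2 * np.pi / q, n_phase, endpoint=False):
            H = np.zeros((q, q), dtype=complex)
            for m in range(q):
                H[m, m] = 2 * np.cos(2 * np.pi * alpha * m - nu)
                H[m, (m + 1) % q] += 1.0
                H[(m + 1) % q, m] += 1.0
            # Replace the wrap-around bond amplitude (1) by the Bloch phase factor
            H[q - 1, 0] += np.exp(1j * q * k) - 1.0
            H[0, q - 1] += np.exp(-1j * q * k) - 1.0
            energies.extend(np.linalg.eigvalsh(H).real)
    return energies

# Sweep rational fluxes p/q and plot all eigenvalues E/E0 against alpha.
points_a, points_e = [], []
for q in range(1, 30):
    for p in range(q + 1):
        if np.gcd(p, q) != 1:
            continue
        for e in harper_spectrum(p, q):
            points_a.append(p / q)
            points_e.append(e)

plt.scatter(points_a, points_e, s=0.1, c="k")
plt.xlabel("flux ratio alpha = Phi / Phi_0")
plt.ylabel("energy E / E_0")
plt.title("Hofstadter butterfly (sketch)")
plt.show()
```

Denser sweeps of q and of the phases sharpen the fractal structure; this coarse version is only meant to show where the self-similar gap pattern comes from.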
Hofstadter's butterfly
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,586
[ "Physical phenomena", "Functions and mappings", "Mathematical analysis", "Hall effect", "Phases of matter", "Electric and magnetic fields in matter", "Fractals", "Mathematical objects", "Materials science", "Electrical phenomena", "Mathematical relations", "Condensed matter physics", "Solid ...
2,930,612
https://en.wikipedia.org/wiki/Roberval%20balance
The Roberval balance is a weighing scale presented to the French Academy of Sciences by the French mathematician Gilles Personne de Roberval in 1669. In this scale, two identical horizontal beams are attached, one directly above the other, to a vertical column, which is attached to a stable base. On each side, both horizontal beams are attached to a vertical beam. The six attachment points are pivots. Two horizontal plates, suitable for placing objects to be weighed, are fixed to the top of the two vertical beams. An arrow on the lower horizontal beam (and perpendicular to it) and a mark on the vertical column may be added to aid in leveling the scale. The object to be weighed is placed on one plate, and calibrated masses are added to and subtracted from the other plate until level is reached. The mass of the object is equal to the mass of the calibrated masses regardless of where on the plates the items are placed. Since the vertical beams are always vertical, and the weighing platforms always horizontal, the potential energy lost by a weight as its platform goes down a certain distance will always be the same, so it makes no difference where the weight is placed. For maximum accuracy, Roberval balances require that their top fulcrum be placed on the line between the left and right pivot so that tipping will not result in the net transfer of weight to either the left or right side of the scale: a fulcrum placed below the ideal pivot point will tend to cause a net shift in the direction of any downward-moving vertical column (in a kind of positive feedback loop); likewise, a fulcrum placed above this point will tend to level out the arms of the balance rather than respond to small changes in weight (in a negative feedback loop). An off-center weight on the plate exerts a downward force and a torque on the vertical column supporting the plate. The downward force is carried by the bearing at the top beam in most balance scales, the lower beam just being supported horizontally at midpoint by the body of the scales by a simple peg-in-slot arrangement, so it effectively hangs beneath the top beam and stops the platforms from rotating. The torque on the column is taken by a pair of equal and opposite forces in the horizontal beams. If the offset weight sits toward the outside of the platform, further from the centre of the scales, the top beam will be in tension and the bottom beam will be in compression. These tensions and compressions are carried by horizontal reactions from the central supports; the other side of the scales is not affected at all, nor is the balance of the scales. Principles of operation Certain presumptions are made in a theoretical Roberval balance. 
In order for such a balance to appear level in its natural state and be able to balance theoretical masses, the following must be true: All six pivot points must move without producing friction (since Roberval balances often actually require twice this number, a total of 12 pivot points would need to be friction free) The lengths of the arms (left and right of the fulcrum) must be exactly equal unless The weights of the arms themselves are unequal, or The weights of the vertical columns and/or pans is unequal The vertical distance between each vertical set of pivot points must be exactly the same In order to be balanced front-to-back the balance must either have two sets of two arms located around a central fulcrum or must have two fulcra supporting a single set of arms The weight of the arms on each side of the fulcrum must be equal (unless see above) The arms must be rigid and inflexible Gravitational force or rotational G force must be acting uniformly on the balance If the weight of the pan above either vertical column is itself greater than zero and any weight placed on that pan is off-center then that pan's tendency to tilt will cause the balance to exist in a state of tension at the pivot points below that pan. This tension will manifest as an increase in static friction. The longer the arms generally, the more sensitive the balance, though longer arms usually entail greater arm weight, which tends to decrease sensitivity Heavier pans and vertical columns also tend to decrease sensitivity Sensitivity lost by increases in either arm weight or pan/column weight can be counteracted only through decreased static friction in the pivot points The upper pivot point of the central supporting column is prevented from moving left–right and front–back by the fulcrum itself; it is prevented from moving up–down by gravitational pull. The lower pivot point of this column must be held in place so that it cannot sway left–right and, to a lesser extent, front–back as the arms move, but it experiences no up–down movement forces — in this arrangement, the entire pivot process takes place on the upper central pivot point, which acts as the single fulcrum for the entire balance; it is possible to reverse this, so that the lower pivot point acts as the fulcrum and the upper is only held in place so that it cannot sway left–right or front/back. Roberval balances are frequently depicted with the "pan" as a plate or peg protruding from the center of each vertical column— this is so that the balance can have a center of gravity that is in the actual center of the parallelogram and so that adding weights to these plates does not change that center of gravity. This produces the somewhat odd result that a correctly balanced Roberval balance, unlike a beam balance, can be "balanced" in any arm position: so long as the masses of the objects on both sides are equal or the pans are empty, it will balance with the right arm up and the left arm down, as well as the left up and the right down, as well as any position in between, and all of these positions will be "correctly balanced". As a corollary, because no actual two masses can have exactly the same weight, a highly precise Roberval balance measuring two such imprecise masses should always tip either completely to the left or completely to the right— it does not measure degree of difference, it indicates the existence of difference. 
These effects must be distinguished from the feedback loops and the friction of the pivot points mentioned above, as those are undesirable effects caused by design weaknesses or flaws. The correct method of using a precise but real Roberval balance, then, is to place one of the masses (either the known or the unknown) on one plate/pan and then add only enough of the other mass to the other pan until the balance just barely tips completely in the direction of the second added mass. If the arms come to rest in a horizontal position, this only indicates friction in the pivot points somewhere. A well-made and precise Roberval balance with a centralized center of gravity never actually "balances". Accuracy The Roberval balance is arguably less accurate and more difficult to manufacture than a beam balance with suspended plates. The beam balance, however, has the significant disadvantage of requiring suspensory strings, chains, or rods. For over three hundred years the Roberval balance has instead been popular for applications requiring convenience and only moderate accuracy, notably in retail trade. Manufacturers Well-known manufacturers of Roberval balances include W & T Avery Ltd. and George Salter & Co. Ltd. in the United Kingdom and Trayvou in France. Henry Troemner, who designed scales for the United States Department of the Treasury, was the first American to use the design. Notes Bibliography J. T. Graham, Scales and Balances, Shire Publications, Aylesbury (1981) Bruno Kisch, Scales and Weights. A Historical Outline, Yale University Press, New Haven (1966) External links The Roberval Balance Physics Demonstration showing surprising paradox of simple Roberval Balance design Weighing instruments Classical mechanics
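The virtual-work argument above can be made concrete with a short symbolic check: because the linkage keeps the pans horizontal, the height of a mass depends only on the arm tilt and not on where the mass sits, so the placement offsets drop out of the potential energy; hanging the masses directly from a plain beam, by contrast, makes placement matter. A minimal SymPy sketch (variable names are ours, chosen for illustration only):

```python
import sympy as sp

# Symbols (names ours): arm length L, tilt angle theta, masses m1/m2,
# gravity g, and d1/d2 = offsets of each mass from the centre of its pan.
L, theta, m1, m2, g, d1, d2 = sp.symbols("L theta m1 m2 g d1 d2", positive=True)

# Roberval linkage: the pans stay horizontal, so every point of a pan is at the
# same height; the mass heights depend only on the arm tilt, never on d1 or d2.
U_roberval = m1 * g * (L * sp.sin(theta)) + m2 * g * (-L * sp.sin(theta))

# Plain beam for contrast: a mass hung directly from the beam at distance L + d
# from the pivot rises and falls by (L + d)*sin(theta), so placement enters.
U_beam = m1 * g * ((L + d1) * sp.sin(theta)) + m2 * g * (-(L + d2) * sp.sin(theta))

print(sp.simplify(sp.diff(U_roberval, theta)))  # proportional to (m1 - m2); d1, d2 absent
print(sp.simplify(sp.diff(U_beam, theta)))      # d1 and d2 appear: placement matters
```

The Roberval generalized force is proportional to (m1 - m2) for every tilt angle, which is exactly the "balanced in any arm position" behaviour described above.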
Roberval balance
[ "Physics", "Technology", "Engineering" ]
1,615
[ "Weighing instruments", "Mass", "Classical mechanics", "Measuring instruments", "Mechanics", "Matter" ]
2,931,020
https://en.wikipedia.org/wiki/Pendulum-and-hydrostat%20control
Pendulum-and-hydrostat control is a control mechanism developed originally for depth control of the Whitehead torpedo. It is an early example of what is now known as proportional and derivative control. The hydrostat is a mechanism that senses pressure; the torpedo's depth is proportional to pressure. However, with only a hydrostat controlling the depth fins in a negative feedback loop, the torpedo tends to oscillate around the desired depth rather than settling to the desired depth. The addition of a pendulum allows the torpedo to sense its own pitch. The pitch information is combined with the depth information to set the torpedo's depth control fins. The pitch information provides a damping term to the depth control response and suppresses the depth oscillations. Operation In control theory, the effect of the addition of the pendulum can be explained as turning the simple proportional controller into a proportional-derivative controller, since depth keeping is no longer controlled by the depth alone but also by the derivative (rate of change) of the depth, which is roughly proportional to the pitch angle of the machine. The relative gain of the proportional and derivative functions could be altered by adjusting the linkages. It was mainly used to control the depth of torpedoes until the end of the Second World War, and it reduced depth errors from ±40 feet (12 meters) to as little as ±6 inches (0.15 m). The pendulum and hydrostat control was invented by Robert Whitehead. It was an important advance in torpedo technology, and it was nicknamed "The Secret". References External links https://archive.today/20120530070555/http://www.btinternet.com/~philipr/torps.htm Control devices Mechanisms (engineering) Pendulums Torpedoes
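The damping effect of the pendulum term can be illustrated with a toy simulation: treat the fin angle as the control input, let pitch follow the fin and depth follow the pitch, and compare a depth-only (proportional) law with a depth-plus-pitch (proportional-derivative) law. This is a simplified kinematic sketch, not a model of any actual torpedo; the gains and dynamics are invented for illustration:

```python
import numpy as np

def run(kp, kd, depth_setpoint=10.0, dt=0.05, t_end=30.0):
    """Toy depth-keeping loop: fin angle drives pitch, pitch drives depth rate."""
    depth, pitch = 0.0, 0.0
    history = []
    for _ in np.arange(0.0, t_end, dt):
        error = depth_setpoint - depth
        # Hydrostat senses depth error; pendulum senses pitch (~ rate of change of depth).
        fin = kp * error - kd * pitch
        fin = np.clip(fin, -0.5, 0.5)        # fins have limited throw
        pitch += (fin - 0.2 * pitch) * dt    # fin drives pitch, with a little damping
        depth += pitch * 2.0 * dt            # forward speed converts pitch to dive rate
        history.append(depth)
    return np.array(history)

p_only = run(kp=1.0, kd=0.0)   # hydrostat alone: oscillates about the set depth
pd = run(kp=1.0, kd=4.0)       # hydrostat + pendulum: the oscillation is damped out
print("P  swing over last 10 s:", np.ptp(p_only[-200:]))
print("PD swing over last 10 s:", np.ptp(pd[-200:]))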
Pendulum-and-hydrostat control
[ "Engineering" ]
365
[ "Control devices", "Control engineering", "Mechanical engineering", "Mechanisms (engineering)" ]
2,931,691
https://en.wikipedia.org/wiki/Geomechanics
Geomechanics (from the Greek γεός, i.e. prefix geo- meaning "earth"; and "mechanics") is the study of the mechanical state of the Earth's crust and the processes occurring in it under the influence of natural physical factors. It involves the study of the mechanics of soil and rock. Background The two main disciplines of geomechanics are soil mechanics and rock mechanics. The former deals with soil behaviour from a small scale to a landslide scale. The latter deals with issues in geosciences related to rock mass characterization and rock mass mechanics, as applied to petroleum, mining and civil engineering problems such as borehole stability, tunnel design, rock breakage, slope stability, foundations, and rock drilling. Many aspects of geomechanics overlap with parts of geotechnical engineering, engineering geology, and geological engineering. Modern developments relate to seismology, continuum mechanics, discontinuum mechanics, transport phenomena, numerical methods, etc. Reservoir Geomechanics In the petroleum industry geomechanics is used to: predict pore pressure establish the integrity of the cap rock evaluate reservoir properties determine in-situ rock stress evaluate the wellbore stability calculate the optimal trajectory of the borehole predict and control sand occurrence in the well analyze the validity of drilling on depression characterize fractured reservoirs increase the efficiency of the development of fractured reservoirs evaluate hydraulic fracture stability study the reactivation of natural fractures and structural faults evaluate the effect of liquid and steam injection into the reservoir analyze surface subsidence determine the degree of reservoir compaction quantify production loss due to the reservoir rock deformation evaluate shear deformation and casing collapse To put into practice the geomechanics capabilities mentioned above, it is necessary to create a Geomechanical Model of the Earth (GEM) which consists of six key components that can be both calculated and estimated using field data: Vertical stress, σv (often called geostatic pressure or overburden stress) Maximum horizontal stress, σHmax Minimum horizontal stress, σHmin Stress orientation Pore pressure, Pp Elastic properties and rock strength: Young's modulus, Poisson's ratio, friction angle, UCS (unconfined compressive strength) and TSTR (tensile strength) Geotechnical engineers rely on various techniques to obtain reliable data for geomechanical models. These techniques include coring and core testing, seismic data and log analysis, well testing methods such as transient pressure analysis and hydraulic fracturing stress testing, and geophysical methods such as acoustic emission. See also Earthquake engineering Geotechnics Rock mechanics References Additional sources Mechanics Earth sciences Geotechnical engineering
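As an illustration of the first GEM component listed above, the vertical (overburden) stress at depth z is commonly estimated by integrating the bulk density of the overlying column, σv(z) = ∫0..z ρ(z') g dz'. A minimal sketch of that calculation from a sampled density profile (plain Python/NumPy; the depth and density values are invented for illustration, not field data):

```python
import numpy as np

g = 9.81                                                            # m/s^2
depth = np.array([0, 500, 1000, 1500, 2000], dtype=float)           # metres below surface
density = np.array([1900, 2100, 2300, 2400, 2500], dtype=float)     # bulk density, kg/m^3

# Overburden stress sigma_v(z) = integral of rho*g over depth (composite trapezoid rule).
sigma_v = np.concatenate(([0.0], np.cumsum(
    0.5 * (density[1:] + density[:-1]) * g * np.diff(depth))))

for z, s in zip(depth, sigma_v):
    print(f"z = {z:6.0f} m   sigma_v = {s / 1e6:6.1f} MPa")
```

In practice the density profile comes from density logs or core measurements, and the same integration yields the overburden gradient used alongside pore pressure and the horizontal stresses in the geomechanical model.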
Geomechanics
[ "Physics", "Engineering" ]
540
[ "Classical mechanics stubs", "Classical mechanics", "Geotechnical engineering", "Civil engineering", "Mechanics", "Civil engineering stubs", "Mechanical engineering" ]
2,931,866
https://en.wikipedia.org/wiki/Nordtvedt%20effect
In theoretical astrophysics, the Nordtvedt effect refers to the relative motion between the Earth and the Moon that would be observed if the gravitational self-energy of a body contributed differently to its gravitational mass than to its inertial mass. If observed, the Nordtvedt effect would violate the strong equivalence principle, which indicates that an object's movement in a gravitational field does not depend on its mass or composition. No evidence of the effect has been found. The effect is named after Kenneth L. Nordtvedt, who first demonstrated that some theories of gravity suggest that massive bodies should fall at different rates, depending upon their gravitational self-energy. Nordtvedt then observed that if gravity did in fact violate the strong equivalence principle, then the more-massive Earth should fall towards the Sun at a slightly different rate than the Moon, resulting in a polarization of the lunar orbit. To test for the existence (or absence) of the Nordtvedt effect, scientists have used the Lunar Laser Ranging experiment, which is capable of measuring the distance between the Earth and the Moon with near-millimetre accuracy. Thus far, the results have failed to find any evidence of the Nordtvedt effect, demonstrating that if it exists, the effect is exceedingly weak. Subsequent measurements and analysis to even higher precision have improved constraints on the effect. Measurements of Mercury's orbit by the MESSENGER spacecraft have further constrained the Nordtvedt effect to an even smaller scale. A wide range of scalar–tensor theories have been found to naturally lead to a tiny effect only, at the present epoch. This is due to a generic attractive mechanism that takes place during the cosmic evolution of the universe. Other screening mechanisms (chameleon, pressuron, Vainshtein, etc.) could also be at play. See also Galileo's Leaning Tower of Pisa experiment References Theoretical physics Astrophysics Effects of gravity
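The usual way this idea is parametrized (a standard textbook form, not spelled out in the article) is to let the ratio of gravitational to inertial mass depend on the body's gravitational self-energy through a single parameter η, which vanishes if the strong equivalence principle holds:

```latex
% Nordtvedt parametrization (standard form, stated here for illustration)
\frac{m_{\mathrm{grav}}}{m_{\mathrm{inert}}} \;=\; 1 + \eta\,\frac{E_{\mathrm{grav}}}{m c^{2}},
\qquad
E_{\mathrm{grav}} \;=\; -\frac{G}{2}\int\!\!\int
  \frac{\rho(\mathbf{x})\,\rho(\mathbf{x}')}{\lvert\mathbf{x}-\mathbf{x}'\rvert}\,d^{3}x\,d^{3}x'.
```

In the parametrized post-Newtonian framework, fully conservative theories give η = 4β − γ − 3, which vanishes in general relativity (β = γ = 1); lunar laser ranging constrains η to be consistent with zero at roughly the 10⁻⁴ level.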
Nordtvedt effect
[ "Physics", "Astronomy" ]
389
[ "Astronomical sub-disciplines", "Theoretical physics", "Astrophysics" ]
40,050,529
https://en.wikipedia.org/wiki/Resilience%20%28engineering%20and%20construction%29
In the fields of engineering and construction, resilience is the ability to absorb or avoid damage without suffering complete failure and is an objective of design, maintenance and restoration for buildings and infrastructure, as well as communities. A more comprehensive definition is that it is the ability to respond to, absorb, and adapt to, as well as recover from, a disruptive event. A resilient structure/system/community is expected to be able to resist an extreme event with minimal damage and disruption of functionality during the event; after the event, it should be able to rapidly recover its functionality to a level similar to or even better than the pre-event level. The concept of resilience originated in engineering and was then gradually applied to other fields. It is related to that of vulnerability. Both terms are specific to the event perturbation, meaning that a system/infrastructure/community may be more vulnerable or less resilient to one event than another. However, they are not the same. One obvious difference is that vulnerability focuses on the evaluation of system susceptibility in the pre-event phase; resilience emphasizes the dynamic features in the pre-event, during-event, and post-event phases. Resilience is a multi-faceted property, covering four dimensions: technical, organizational, social and economic. Therefore, a single metric may not be sufficient to describe and quantify resilience. In engineering, resilience is characterized by four Rs: robustness, redundancy, resourcefulness, and rapidity. Current research studies have developed various ways to quantify resilience from multiple aspects, such as functionality- and socioeconomic-related aspects. The built environment needs resilience to existing and emerging threats, such as severe wind storms or earthquakes, achieved by creating robustness and redundancy in building design. The implications of changing conditions for the effectiveness of different approaches to design and planning can then be addressed. Engineering resilience has inspired other fields and influenced the way they interpret resilience, e.g. supply chain resilience. Etymology According to the dictionary, resilience means "the ability to recover from difficulties or disturbance." The root of the term resilience is found in the Latin term 'resilio', which means to go back to a state or to spring back. In the 1640s the root term gave resilience its meaning in the field of the mechanics of materials as "the ability of a material to absorb energy when it is elastically deformed and to release that energy upon unloading". By 1824, the term had developed to encompass the meaning of ‘elasticity’. 19th century Thomas Tredgold was the first to introduce the concept of resilience in 1818 in England. The term was used to describe a property in the strength of timber, as beams were bent and deformed to support heavy loads. Tredgold found the timber durable, and it did not burn readily, despite being planted in bad soil conditions and exposed climates. Resilience was then refined by Mallett in 1856 in relation to the capacity of specific materials to withstand specific disturbances. These definitions can be used in engineering resilience due to the application of a single material that has a stable equilibrium regime rather than the complex adaptive stability of larger systems. 20th century In the 1970s, researchers studied resilience in relation to child psychology and the exposure to certain risks.
Resilience was used to describe people who have “the ability to recover from adversity.” One of the many researchers was Professor Sir Michael Rutter, who was concerned with a combination of risk experiences and their relative outcomes. In his paper Resilience and Stability of Ecological Systems (1973), C.S. Holling first explored the topic of resilience through its application to the field of ecology. Ecological resilience was defined as a "measure of the persistence of systems and of their ability to absorb change and disturbance and still maintain the same relationships between state variables." Holling found that such a framework can be applied to other forms of resilience. The application to ecosystems was later extended to human, cultural and social applications. The random events described by Holling are not only climatic; instability in natural systems can also occur through the impact of fires, changes in the forest community or the process of fishing. Stability, on the other hand, is the ability of a system to return to an equilibrium state after a temporary disturbance. Multiple-state systems rather than objects should be studied, as the world is a heterogeneous space with various biological, physical and chemical characteristics. Unlike material and engineering resilience, ecological and social resilience focus on the redundancy and persistence of multi-equilibrium states to maintain the existence of function. Engineering resilience Engineering resilience refers to the functionality of a system in relation to hazard mitigation. Within this framework, resilience is calculated based on the time it takes a system to return to a single state equilibrium. Researchers at the MCEER (Multi-Hazard Earthquake Engineering research center) have identified four properties of resilience: robustness, resourcefulness, redundancy and rapidity. Robustness: the ability of systems to withstand a certain level of stress without suffering loss of function. Resourcefulness: the ability to identify problems and resources when threats may disrupt the system. Redundancy: the ability to have various paths in a system by which forces can be transferred to enable continued function. Rapidity: the ability to meet priorities and goals in time to prevent losses and future disruptions. Social-ecological resilience Social-ecological resilience, also known as adaptive resilience, is a new concept that shifts the focus to combining the social, ecological and technical domains of resilience. The adaptive model focuses on the transformable quality of the stable state of a system. In adaptive buildings, both short term and long term resilience are addressed to ensure that the system can withstand disturbances with social and physical capacities. Buildings operate at multiple scales and conditions, therefore it is important to recognize that constant changes in architecture are expected. Laboy and Fannon recognize that the resilience model is shifting, and have applied the four MCEER properties of resilience to the planning, designing and operating phases of architecture. Rather than using four properties to describe resilience, Laboy and Fannon suggest a 6R model that adds Recovery for the operation phase of a building and Risk Avoidance for the planning phase of the building. In the planning phase of a building, site selection, building placement and site conditions are crucial for risk avoidance.
Early planning can help prepare and design the built environment for forces that we understand and perceive. In the operation phase of the building, a disturbance does not mark the end of resilience, but should prompt a recovery plan for future adaptations. Disturbances should be used as a learning opportunity to assess mistakes and outcomes, and reconfigure for future needs. Applications International Building Code The International Building Code provides minimum requirements for buildings using performance-based standards. The most recent International Building Code (IBC) was released in 2018 by the International Code Council (ICC), focusing on standards that protect public health, safety and welfare, without restricting use of certain building methods. The code addresses several categories, which are updated every three years to incorporate new technologies and changes. Building codes are fundamental to the resilience of communities and their buildings, as “Resilience in the built environment starts with strong, regularly adopted and properly administered building codes.” Benefits occur due to the adoption of codes, as the National Institute of Building Sciences (NIBS) found that the adoption of the International Building Code provides an $11 benefit for every $1 invested. The International Code Council is focused on ensuring that a community's buildings support the resilience of the community ahead of disasters. The process presented by the ICC includes understanding the risks, identifying strategies for the risks, and implementing those strategies. Risks vary based on communities, geographies and other factors. The American Institute of Architects created a list of shocks and stresses that are related to certain community characteristics. Shocks are natural forms of hazards (floods, earthquakes), while stresses are more chronic events that can develop over a longer period of time (affordability, drought). It is important to understand the application of resilient design to both shocks and stresses, as buildings can play a part in contributing to their resolution. Even though the IBC is a model code, it is adopted by various state and local governments to regulate specific building areas. Most of the approaches to minimizing risks are organized around building use and occupancy. In addition, the safety of a structure is determined by material usage and framing, and structural requirements can provide a high level of protection for occupants. Specific requirements and strategies are provided for each shock or stress, such as tsunamis, fires and earthquakes. U.S. Resiliency Council The U.S. Resiliency Council (USRC), a non-profit organization, created the USRC Rating System, which describes the expected impacts of a natural disaster on new and existing buildings. The rating considers the building prior to its use through its structure, mechanical-electrical systems and material usage. Currently, the program is in its pilot stage, focusing primarily on earthquake preparedness and resilience. For earthquake hazards, the rating relies heavily on the requirements set by the building codes for design. Buildings can obtain one of the two types of USRC rating systems: USRC Verified Rating System The Verified Rating System is used for marketing and publicity purposes using badges. The rating is easy to understand, credible and transparent, as it is awarded by professionals.
The USRC building rating system rates buildings with one to five stars based on the dimensions used in its system. The three dimensions that the USRC uses are Safety, Damage and Recovery. Safety describes the prevention of potential harm for people after an event. Damage describes the estimated repair required due to replacements and losses. Recovery is calculated based on the time it takes for the building to regain function after a shock. The following types of rating certification can be achieved: USRC Platinum: less than 5% of expected damage USRC Gold: less than 10% of expected damage USRC Silver: less than 20% of expected damage USRC Certified: less than 40% of expected damage An earthquake building rating can be obtained through hazard evaluation and seismic testing. In addition to the technical review provided by the USRC, a CRP seismic analysis applies for a USRC rating with the required documentation. The USRC is planning on creating similar standards for other natural hazards such as floods, storms and winds. USRC Transaction Rating System The Transaction Rating System provides a building with a report on risk exposure, possible investments and benefits. This rating remains confidential with the USRC and is not used to publicize or market the building. Disadvantages of the USRC rating system Due to the current focus on seismic interventions, the USRC does not take into consideration several parts of a building. The USRC building rating system does not take into consideration any changes to the design of the building that might occur after the rating is awarded. Therefore, changes that might impede the resilience of a building would not affect the rating that the building was awarded. In addition, changes in the uses of the building after certification, which might include the use of hazardous materials, would not affect the rating certification of the building. The damage rating does not include damage caused by pipe breakage, building upgrades and damage to furnishings. The recovery rating does not include fully restoring all building function and all damages but only a certain amount. The 100 Resilient Cities Program In 2013, the 100 Resilient Cities Program was initiated by the Rockefeller Foundation, with the goal of helping cities become more resilient to physical, social and economic shocks and stresses. The program helps facilitate the resilience plans in cities around the world through access to tools, funding and global network partners such as ARUP and the AIA. Of 1,000 cities that applied to join the program, only 100 cities were selected, with challenges ranging from aging populations and cyber attacks to severe storms and drug abuse. There are many cities that are members of the program, but in the article Building up resilience in cities worldwide, Spaans and Waterhout focus on the city of Rotterdam to compare the city's resilience before and after its participation in the program. The authors found that the program broadened the scope of and improved the resilience plan of Rotterdam by including access to water, data, clean air, cyber robustness, and safe water. The program addresses other social stresses that can weaken the resilience of cities, such as violence and unemployment. Therefore, cities are able to reflect on their current situation and plan to adapt to new shocks and stresses.
The findings of the article can support the understanding of resiliency at a larger urban scale, which requires an integrated approach with coordination across multiple government scales, time scales and fields. In addition to integrating resiliency into building codes and building certification programs, the 100 Resilient Cities program provides other support opportunities that can help increase awareness through non-profit organizations. After more than six years of growth and change, the existing 100 Resilient Cities organization concluded on July 31, 2019. RELi Rating System RELi is a set of design criteria used to develop resilience in multiple scales of the built environment such as buildings, neighborhoods and infrastructure. It was developed by the Institute for Market Transformation to Sustainability (MTS) to help designers plan for hazards. RELi is very similar to LEED but with a focus on resilience. RELi is now owned by the U.S. Green Building Council (USGBC) and available to projects seeking LEED certification. The first version of RELi was released in 2014; it is currently still in the pilot phase, with no points allocated for specific credits. RELi accreditation is not required, and the use of the credit information is voluntary. Therefore, the current point system is still to be determined and does not have a tangible value. RELi provides a credit catalog that is used as a reference guide for building design and expands on the RELi definition of resilience as follows: Resilient Design pursues Buildings + Communities that are shock resistant, healthy, adaptable and regenerative through a combination of diversity, foresight and the capacity for self-organization and learning. A Resilient Society can withstand shocks and rebuild itself when necessary. It requires humans to embrace their capacity to anticipate, plan and adapt for the future. RELi Credit Catalog The RELi Catalog considers multiple scales of intervention with requirements for a panoramic approach, risk adaptation & mitigation for acute events and a comprehensive adaptation & mitigation for the present and future. RELi's framework focuses heavily on social issues for community resilience, such as providing community spaces and organisations. RELi also combines specific hazard designs, such as flood preparedness, with general strategies for energy and water efficiency. The following categories are used to organize the RELi credit list: Panoramic approach to Planning, design, Maintenance and Operations Hazard Preparedness Hazard adaptation and mitigation Community cohesion, social and economic vitality Productivity, health and diversity Energy, water, food Materials and artifacts Applied creativity, innovation and exploration The RELi program complements and expands on other popular rating systems such as LEED, Envision, and the Living Building Challenge. The menu format of the catalog allows users to easily navigate the credits and recognize the goals achieved by RELi. References to other rating systems can help increase awareness of RELi and the credibility of its use. The reference for each credit is listed in the catalog for ease of access.
The first credit, IPpc98: Assessment and Planning for Resilience, includes a prerequisite for a hazard assessment of the site. It is crucial to take into account the site conditions and how they change with variations in the climate. Projects can either choose to do a climate-related risk plan or can complete planning forms presented by the Red Cross. The second credit, IPpc99: Assessment and Planning for Resilience, requires projects to prioritize three top hazards based on the assessments made in the first credit. Specific mitigation strategies for each hazard have to be identified and implemented. Reference to other resilience programs such as the USRC should be made to support the choice of hazards. The third credit, IPpc100: Passive Survivability and Functionality During Emergencies, focuses on maintaining livable and functional conditions during a disturbance. Projects can demonstrate the ability to provide emergency power for high priority functions, can maintain livable temperatures for a certain period of time, and provide access to water. For thermal resistance, reference to thermal modeling of the comfort tool's psychrometric chart should be made to support the thermal qualities of the building during a certain time. As for emergency power, backup power must last based on the critical loads and needs of the building use type. LEED credits overlap with RELi rating system credits, and the USGBC has been refining RELi to better synthesize with the LEED resilient design pilot credits. Design based on climate change It is important to assess current climate data and design in preparation for changes or threats to the environment. Resilience plans and passive design strategies can differ based on climate. Here are general climate-responsive design strategies for three different climatic conditions: Too wet Use of natural solutions: mangroves and other shoreline plants can act as barriers to flooding. Creating a dike system: in areas with extreme floods, dikes can be integrated into the urban landscape to protect buildings. Using permeable paving: porous pavement surfaces absorb runoff in parking lots, roads and sidewalks. Rain harvesting methods: collect and store rainwater for domestic or landscape purposes. Too dry Use of drought-tolerant plants: save water usage in landscaping methods Filtration of wastewater: recycling wastewater for landscaping or toilet usage. Use of courtyard layout: minimize the area affected by solar radiation and use water and plants for evaporative cooling. Too hot Use of vegetation: trees can help cool the environment by reducing the urban heat island effect through evapotranspiration. Use of passive solar-design strategies: operable windows and thermal mass can cool the building down naturally. Window shading strategies: control the amount of sunlight that enters the building to minimize heat gains during the day. Reduce or shade external adjacent thermal masses that will re-radiate into the building (e.g. pavers) Design based on hazards Hazard assessment Determining and assessing vulnerabilities to the built environment based on specific locations is crucial for creating a resilience plan. Disasters lead to a wide range of consequences such as damaged buildings, ecosystems and human losses. For example, earthquakes that took place in Wenchuan County in 2008 led to major landslides which relocated entire city districts such as Old Beichuan. Here are some natural hazards and potential strategies for resilience assessment.
Fire use of fire rated materials provide fire-resistant stairwells for evacuation universal escape methods to also help those with disabilities. Hurricanes There are multiple strategies for protecting structures against hurricanes, based on wind and rain loads. Openings should be protected from flying debris Structures should be elevated from possible water intrusion and flooding Building enclosures should be sealed with specific nailing patterns use of materials such as metal, tile or masonry to resist wind loads. Earthquakes Earthquakes can also result in the structural damage and collapse of buildings due to high stresses on building frames. Secure appliances such as heaters and furniture to prevent injury and fires expansion joints should be used in building structure to respond to seismic shaking. create flexible systems with base isolation to minimize impact provide earthquake preparedness kit with necessary resources during event Sustainability It is difficult to discuss the concepts of resilience and sustainability in comparison due to the various scholarly definitions that have been used in the field over the years. Many policies and academic publications on both topics either provide their own definitions of both concepts or lack a clear definition of the type of resilience they seek. Even though sustainability is a well established term, there are generic interpretations of the concept and its focus. Sanchez et al. proposed a new characterization of the term ‘sustainable resilience’ which expands the social-ecological resilience to include more sustained and long-term approaches. Sustainable resilience focuses not only on the outcomes, but also on the processes and policy structures in the implementation. Both concepts share essential assumptions and goals such as passive survivability and persistence of a system operation over time and in response to disturbances. There is also a shared focus on climate change mitigation, as they both appear in larger frameworks such as building codes and building certification programs. Holling and Walker argue that “a resilient social-ecological system is synonymous with a region that is ecologically, economically and socially sustainable.” Other scholars such as Perrings state that “a development strategy is not sustainable if it is not resilient.” Therefore, the two concepts are intertwined and cannot be successful individually as they are dependent on one another. For example, in RELi, in LEED and in other building certifications, providing access to safe water and an energy source is crucial before, during and after a disturbance. Some scholars argue that resilience and sustainability tactics target different goals. Paula Melton argues that resilience focuses on design for the unpredictable, while sustainability focuses on climate-responsive design. Some forms of resilience, such as adaptive resilience, focus on designs that can adapt and change based on a shock event; on the other hand, sustainable design focuses on systems that are efficient and optimized. Quantification The first influential quantitative resilience metric based on the functionality recovery curve was proposed by Bruneau et al., where resilience is quantified as the resilience loss $R_L = \int_{t_0}^{t_1}\left[100 - Q(t)\right]dt$, where $Q(t)$ is the functionality at time $t$; $t_0$ is the time when the event strikes; $t_1$ is the time when the functionality fully recovers. The resilience loss is a metric of only positive value.
It has the advantage of being easily generalized to different structures, infrastructures, and communities. This definition assumes that the functionality is 100% pre-event and will eventually be recovered to a full functionality of 100%. This may not be true in practice. A system may be partially functional when a hurricane strikes and may not be fully recovered due to an uneconomic cost-benefit ratio. The resilience index is a normalized metric between 0 and 1, computed from the functionality recovery curve as $R = \frac{1}{T_h}\int_{t_0}^{t_0+T_h} Q(t)\,dt$, where $Q(t)$ is the normalized functionality at time $t$; $t_0$ is the time when the event strikes; $T_h$ is the time horizon of interest. See also Chemically strengthened glass Durability Durable good Fireproofing Green infrastructure Maintainability Polymer degradation Radiation hardening Residual stress Rot-proof Rustproofing Sustainability Sustainable design Thermal conductivity and resistivity Toughness Urban resilience USGBC Waste minimisation Waterproofing Notes and references External links Mobilizing Building Adaptation and Resilience project Engineering concepts Construction Design Architecture
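Both metrics reduce to numerical integration once the functionality curve has been sampled. A minimal sketch (plain Python/NumPy; the recovery curve used here is invented purely for illustration):

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoid rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def resilience_metrics(t, q, t0, t1, horizon):
    """Resilience loss (Bruneau et al.) and normalized resilience index
    from a sampled functionality curve q(t), with q in percent (0-100)."""
    in_loss = (t >= t0) & (t <= t1)
    r_loss = trapz(100.0 - q[in_loss], t[in_loss])           # area above the recovery curve
    in_idx = (t >= t0) & (t <= t0 + horizon)
    r_index = trapz(q[in_idx] / 100.0, t[in_idx]) / horizon  # mean normalized functionality
    return r_loss, r_index

# Invented example: functionality drops to 40% at t = 10 and recovers linearly by t = 60.
t = np.linspace(0.0, 100.0, 1001)
q = np.where(t < 10.0, 100.0, np.minimum(100.0, 40.0 + 1.2 * (t - 10.0)))

loss, index = resilience_metrics(t, q, t0=10.0, t1=60.0, horizon=80.0)
print(f"resilience loss  = {loss:.1f} (percent x time)")
print(f"resilience index = {index:.3f}")
```

The index depends on the chosen time horizon, which is exactly the point made above: resilience is defined relative to the event and the recovery window of interest.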
Resilience (engineering and construction)
[ "Engineering" ]
4,768
[ "Construction", "Design", "nan", "Architecture" ]
40,054,681
https://en.wikipedia.org/wiki/Electronic%20skin
Electronic skin refers to flexible, stretchable and self-healing electronics that are able to mimic functionalities of human or animal skin. The broad class of materials often contain sensing abilities that are intended to reproduce the capabilities of human skin to respond to environmental factors such as changes in heat and pressure. Advances in electronic skin research focuses on designing materials that are stretchy, robust, and flexible. Research in the individual fields of flexible electronics and tactile sensing has progressed greatly; however, electronic skin design attempts to bring together advances in many areas of materials research without sacrificing individual benefits from each field. The successful combination of flexible and stretchable mechanical properties with sensors and the ability to self-heal would open the door to many possible applications including soft robotics, prosthetics, artificial intelligence and health monitoring. Recent advances in the field of electronic skin have focused on incorporating green materials ideals and environmental awareness into the design process. As one of the main challenges facing electronic skin development is the ability of the material to withstand mechanical strain and maintain sensing ability or electronic properties, recyclability and self-healing properties are especially critical in the future design of new electronic skins. Rehealable electronic skin Self-healing abilities of electronic skin are critical to potential applications of electronic skin in fields such as soft robotics. Proper design of self-healing electronic skin requires not only healing of the base substrate but also the reestablishment of any sensing functions such as tactile sensing or electrical conductivity. Ideally, the self-healing process of electronic skin does not rely upon outside stimulation such as increased temperature, pressure, or solvation. Self-healing, or rehealable, electronic skin is often achieved through a polymer-based material or a hybrid material. Polymer-based materials In 2018, Zou et al. published work on electronic skin that is able to reform covalent bonds when damaged. The group looked at a polyimine-based crosslinked network, synthesized as seen in Figure 1. The e-skin is considered rehealable because of "reversible bond exchange," meaning that the bonds holding the network together are able to break and reform under certain conditions such as solvation and heating. The rehealable and reusable aspect of such a thermoset material is unique because many thermoset materials irreversibly form crosslinked networks through covalent bonds. In the polymer network the bonds formed during the healing process are indistinguishable from the original polymer network. Dynamic non-covalent crosslinking has also been shown to form a polymer network that is rehealable. In 2016, Oh et al. looked specifically at semiconducting polymers for organic transistors. They found that incorporating 2,6-pyridine dicarboxamide (PDCA) into the polymer backbone could impart self-healing abilities based on the network of hydrogen bonds formed between groups. With incorporation of PDCA in the polymer backbone, the materials was able to withstand up to 100% strain without showing signs of microscale cracking. In this example, the hydrogen bonds are available for energy dissipation as the strain increases. Hybrid materials Polymer networks are able to facilitate dynamic healing processes through hydrogen bonds or dynamic covalent chemistry. 
However, the incorporation of inorganic particles can greatly expand the functionality of polymer-based materials for electronic skin applications. The incorporation of micro-structured nickel particles into a polymer network (Figure 2) has been shown to maintain self-healing properties based on the reformation of hydrogen bonding networks around the inorganic particles. The material is able to regain its conductivity within 15 seconds of breakage, and the mechanical properties are regained after 10 minutes at room temperature without added stimulus. This material relies on hydrogen bonds formed between urea groups when they align. The hydrogen atoms of urea functional groups are ideally situated to form a hydrogen-bonding network because they are near an electron-withdrawing carbonyl group. This polymer network with embedded nickel particles demonstrates the possibility of using polymers as supramolecular hosts to develop self-healing conductive composites. Flexible and porous graphene foams that are interconnected in a 3D manner have also been shown to have self-healing properties. Thin film with poly(N,N-dimethylacrylamide)-poly(vinyl alcohol) (PDMAA) and reduced graphene oxide have shown high electrical conductivity and self-healing properties. The healing abilities of the hybrid composite are suspected to be due to the hydrogen bonds between the PDMAA chains, and the healing process is able to restore initial length and recover conductive properties. Recyclable electronic skin Zou et al. presents an interesting advance in the field of electronic skin that can be used in robotics, prosthetics, and many other applications in the form of a fully recyclable electronic skin material. The e-skin developed by the group consists of a network of covalently bound polymers that are thermoset, meaning cured at a specific temperature. However, the material is also recyclable and reusable. Because the polymer network is thermoset, it is chemically and thermally stable. However, at room temperature, the polyimine material, with or without silver nanoparticles, can be dissolved on the timescale of a few hours. The recycling process allows devices, which are damaged beyond self-healing capabilities, to be dissolved and formed into new devices (Figure 3). This advance opens the door for lower cost production and greener approaches to e-skin development. Flexible and stretchy electronic skin The ability of electronic skin to withstand mechanical deformation including stretching and flexing without losing functionality is crucial for its applications as prosthetics, artificial intelligence, soft robotics, health monitoring, biocompatibility, and communication devices. Flexible electronics are often designed by depositing electronic materials on flexible polymer substrates, thereby relying on an organic substrate to impart favorable mechanical properties. Stretchable e-skin materials have been approached from two directions. Hybrid materials can rely on an organic network for stretchiness while embedding inorganic particles or sensors, which are not inherently stretchable. Other research has focused on developing stretchable materials that also have favorable electronic or sensing capabilities. Zou et al. studied the inclusion of linkers that are described as "serpentine" in their polyimine matrix. These linkers make the e-skin sensors able to flex with movement and distortion. 
The incorporation of alkyl spacers in polymer-based materials has also been shown to increase flexibility without decreasing charge transfer mobility. Oh et al. developed a stretchable and flexible material based on 3,6-di(thiophen-2-yl)-2,5-dihydropyrrolo[3,4-c]pyrrole-1,4-dione (DPP) and non-conjugated 2,6-pyridine dicarboxamide (PDCA) as a source of hydrogen bonds (Figure 4). Graphene has also been shown to be a suitable material for electronic skin applications as well due to its stiffness and tensile strength. Graphene is an appealing material because its synthesis to flexible substrates is scalable and cost-efficient. Mechanical properties of skin Skin is composed of collagen, keratin, and elastin fibers, which provide robust mechanical strength, low modulus, tear resistance, and softness. The skin can be considered as a bilayer of epidermis and dermis. The epidermal layer has a modulus of about 140–600 kPa and a thickness of 0.05–1.5 mm. Dermis has a modulus of 2–80 kPa and a thickness of 0.3–3 mm. This bilayer skin exhibits an elastic linear response for strains less than 15% and a non linear response at larger strains. To achieve conformability, it is preferable for devices to match the mechanical properties of the epidermis layer when designing skin-based stretchy electronics. Tuning mechanical properties Conventional high performance electronic devices are made of inorganic materials such as silicon, which is rigid and brittle in nature and exhibits poor biocompatibility due to mechanical mismatch between the skin and the device, making skin integrated electronics applications difficult. To solve this challenge, researchers employed the method of constructing flexible electronics in the form of ultrathin layers. The resistance to bending of a material object (Flexural rigidity) is related to the third power of the thickness, according to the Euler-Bernoulli equation for a beam. It implies that objects with less thickness can bend and stretch more easily. As a result, even though the material has a relatively high Young's modulus, devices manufactured on ultrathin substrates exhibit a decrease in bending stiffness and allow bending to a small radius of curvature without fracturing. Thin devices have been developed as a result of significant advancements in the field of nanotechnology, fabrication, and manufacturing. The aforementioned approach was used to create devices composed of 100–200 nm thick Si nano membranes deposited on thin flexible polymeric substrates. Furthermore, structural design considerations can be used to tune the mechanical stability of the devices. Engineering the original surface structure allows us to soften the stiff electronics. Buckling, island connection, and the Kirigami concept have all been employed successfully to make the entire system stretchy. Mechanical buckling can be used to create wavy structures on elastomeric thin substrates. This feature improves the device's stretchability. The buckling approach was used to create Si nanoribbons from single crystal Si on an elastomeric substrate. The study demonstrated the device could bear a maximum strain of 10% when compressed and stretched. In the case of island interconnect, the rigid material connects with flexible bridges made from different geometries, such as zig-zag, serpentine-shaped structures, etc., to reduce the effective stiffness, tune the stretchability of the system, and elastically deform under applied strains in specific directions. 
It has been demonstrated that serpentine-shaped structures have no significant effect on the electrical characteristics of epidermal electronics. It has also been shown that the entanglement of the interconnects, which opposes the movement of the device above the substrate, causes spiral interconnects to stretch and deform significantly more than serpentine structures. CMOS inverters constructed on a PDMS substrate employing 3D island–interconnect technologies demonstrated 140% strain at stretching. Kirigami is built around the concept of folding and cutting 2D membranes. This contributes to an increase in the tensile strength of the substrate, as well as its out-of-plane deformation and stretchability. These 2D structures can subsequently be turned into 3D structures with controllable topography, shape, and size via the buckling process, resulting in interesting properties and applications. Conductive electronic skin The development of conductive electronic skin is of interest for many electrical applications. Research into conductive electronic skin has taken two routes: conductive self-healing polymers, or embedding conductive inorganic materials in non-conductive polymer networks. In the self-healing conductive composite synthesized by Tee et al. (Figure 2), micro-structured nickel particles are incorporated into a polymer host. The nickel particles adhere to the network through favorable interactions between the native oxide layer on the surface of the particles and the hydrogen-bonding polymer. Nanoparticles have also been studied for their ability to impart conductivity to electronic skin materials. Zou et al. embedded silver nanoparticles (AgNPs) into a polymer matrix, making the e-skin conductive. The healing process for this material is noteworthy because it restores not only the mechanical properties of the polymer network but also its conductive properties when silver nanoparticles are embedded in it. Sensing ability of electronic skin Challenges facing electronic skin sensing include the fragility of sensors, their recovery time, repeatability, tolerance of mechanical strain, and long-term stability. Tactile sensors Applied pressure can be measured by monitoring changes in resistance or capacitance. Coplanar interdigitated electrodes embedded on single-layer graphene have been shown to provide pressure sensitivity for applied pressures as low as 0.11 kPa by measuring changes in capacitance. Piezoresistive sensors have also shown high levels of sensitivity. Ultrathin molybdenum disulfide sensing arrays integrated with graphene have demonstrated promising mechanical properties capable of pressure sensing. Modifications of organic field-effect transistors (OFETs) have shown promise in electronic skin applications. Microstructured polydimethylsiloxane thin films can elastically deform when pressure is applied; the deformation of the thin film allows for storage and release of energy. Visual representation of applied pressure has been one area of interest in the development of tactile sensors. The Bao group at Stanford University has designed an electrochromically active electronic skin that changes color with different amounts of applied pressure. Applied pressure can also be visualized by incorporating active-matrix organic light-emitting diode displays, which emit light when pressure is applied. 
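The following sketch illustrates the capacitive read-out principle mentioned above with a generic parallel-plate model, not the interdigitated graphene device described in the literature; the dielectric permittivity, geometry, and effective modulus are assumed example values.

```python
# Minimal sketch (assumed parallel-plate model, not the specific device in the
# text): applied pressure compresses a soft dielectric of effective modulus
# E_eff, reducing the gap d and raising the capacitance C = eps0*eps_r*A/d.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(eps_r, area_m2, gap_m):
    return EPS0 * eps_r * area_m2 / gap_m

def gap_under_pressure(gap0_m, pressure_pa, e_eff_pa):
    # small-strain approximation: strain = P / E_eff
    return gap0_m * (1.0 - pressure_pa / e_eff_pa)

# Assumed example parameters for a soft elastomer dielectric.
eps_r, area, gap0, e_eff = 3.0, 1e-6, 10e-6, 500e3  # -, m^2, m, Pa

c0 = capacitance(eps_r, area, gap0)
for p in (0.0, 110.0, 1000.0):  # Pa; 110 Pa = 0.11 kPa, the sensitivity quoted above
    c = capacitance(eps_r, area, gap_under_pressure(gap0, p, e_eff))
    print(f"P = {p:7.1f} Pa  ->  dC/C0 = {(c - c0) / c0 * 100:.3f} %")
```

The relative capacitance change at 0.11 kPa is only a few hundredths of a percent in this toy model, which is why real devices rely on microstructured dielectrics and sensitive read-out electronics.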
Prototype e-skins include a printed synaptic-transistor-based electronic skin that gives skin-like haptic sensations and touch/pain sensitivity to a robotic hand, and a repairable, hydrogel-based robot skin with a multilayer tactile sensor. Other sensing applications Humidity sensors have been incorporated into electronic skin designs using sulfurized tungsten films, whose conductivity changes with the level of humidity. Silicon nanoribbons have also been studied for their application as temperature, pressure, and humidity sensors. Scientists at the University of Glasgow have made inroads in developing an e-skin that senses pain in real time, with applications in prosthetics and more life-like humanoids. A system combining an electronic skin with a human–machine interface has also been demonstrated that enables remotely sensed tactile perception, as well as wearable or robotic sensing of many hazardous substances and pathogens. See also Artificial cell Artificial muscle Electronic nose Electronic tongue Robotic sensing Sensitive skin Soft robotics References Robotic sensing Smart materials Synthetic biology Skin
Electronic skin
[ "Materials_science", "Engineering", "Biology" ]
2,941
[ "Synthetic biology", "Biological engineering", "Materials science", "Bioinformatics", "Molecular genetics", "Smart materials" ]
35,887,507
https://en.wikipedia.org/wiki/Representer%20theorem
For computer science, in statistical learning theory, a representer theorem is any of several related results stating that a minimizer of a regularized empirical risk functional defined over a reproducing kernel Hilbert space can be represented as a finite linear combination of kernel products evaluated on the input points in the training set data. Formal statement The following Representer Theorem and its proof are due to Schölkopf, Herbrich, and Smola: Theorem: Consider a positive-definite real-valued kernel on a non-empty set with a corresponding reproducing kernel Hilbert space . Let there be given a training sample , a strictly increasing real-valued function , and an arbitrary error function , which together define the following regularized empirical risk functional on : Then, any minimizer of the empirical risk admits a representation of the form: where for all . Proof: Define a mapping (so that is itself a map ). Since is a reproducing kernel, then where is the inner product on . Given any , one can use orthogonal projection to decompose any into a sum of two functions, one lying in , and the other lying in the orthogonal complement: where for all . The above orthogonal decomposition and the reproducing property together show that applying to any training point produces which we observe is independent of . Consequently, the value of the error function in (*) is likewise independent of . For the second term (the regularization term), since is orthogonal to and is strictly monotonic, we have Therefore, setting does not affect the first term of (*), while it strictly decreases the second term. Consequently, any minimizer in (*) must have , i.e., it must be of the form which is the desired result. Generalizations The Theorem stated above is a particular example of a family of results that are collectively referred to as "representer theorems"; here we describe several such. The first statement of a representer theorem was due to Kimeldorf and Wahba for the special case in which for . Schölkopf, Herbrich, and Smola generalized this result by relaxing the assumption of the squared-loss cost and allowing the regularizer to be any strictly monotonically increasing function of the Hilbert space norm. It is possible to generalize further by augmenting the regularized empirical risk functional through the addition of unpenalized offset terms. For example, Schölkopf, Herbrich, and Smola also consider the minimization i.e., we consider functions of the form , where and is an unpenalized function lying in the span of a finite set of real-valued functions . Under the assumption that the matrix has rank , they show that the minimizer in admits a representation of the form where and the are all uniquely determined. The conditions under which a representer theorem exists were investigated by Argyriou, Micchelli, and Pontil, who proved the following: Theorem: Let be a nonempty set, a positive-definite real-valued kernel on with corresponding reproducing kernel Hilbert space , and let be a differentiable regularization function. Then given a training sample and an arbitrary error function , a minimizer of the regularized empirical risk admits a representation of the form where for all , if and only if there exists a nondecreasing function for which Effectively, this result provides a necessary and sufficient condition on a differentiable regularizer under which the corresponding regularized empirical risk minimization will have a representer theorem. 
In particular, this shows that a broad class of regularized risk minimizations (much broader than those originally considered by Kimeldorf and Wahba) have representer theorems. Applications Representer theorems are useful from a practical standpoint because they dramatically simplify the regularized empirical risk minimization problem . In most interesting applications, the search domain for the minimization will be an infinite-dimensional subspace of , and therefore the search (as written) does not admit implementation on finite-memory and finite-precision computers. In contrast, the representation of afforded by a representer theorem reduces the original (infinite-dimensional) minimization problem to a search for the optimal -dimensional vector of coefficients ; can then be obtained by applying any standard function minimization algorithm. Consequently, representer theorems provide the theoretical basis for the reduction of the general machine learning problem to algorithms that can actually be implemented on computers in practice. The following provides an example of how to solve for the minimizer whose existence is guaranteed by the representer theorem. This method works for any positive definite kernel , and allows us to transform a complicated (possibly infinite dimensional) optimization problem into a simple linear system that can be solved numerically. Assume that we are using a least squares error function and a regularization function for some . By the representer theorem, the minimizer has the form for some . Noting that we see that has the form where and . This can be factored out and simplified to Since is positive definite, there is indeed a single global minimum for this expression. Let and note that is convex. Then , the global minimum, can be solved by setting . Recalling that all positive definite matrices are invertible, we see that so the minimizer may be found via a linear solve. See also Mercer's theorem Kernel methods References Computational learning theory Theoretical computer science Hilbert spaces
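The linear solve described above can be made concrete with kernel ridge regression. The sketch below assumes a least-squares error term (1/n)·Σᵢ(f(xᵢ) − yᵢ)² with regularizer λ‖f‖²_H and a Gaussian (RBF) kernel; the exact scaling of λ in the normal equations depends on how the risk is normalized, and the data are synthetic.

```python
import numpy as np

# Sketch of the linear solve guaranteed by the representer theorem, assuming
# the regularized risk (1/n) * sum_i (f(x_i) - y_i)**2 + lam * ||f||_H^2 with
# a Gaussian (RBF) kernel. The minimizer is f*(x) = sum_j alpha_j k(x, x_j),
# with coefficients satisfying (K + n*lam*I) alpha = y.

def rbf_kernel(a, b, gamma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(x[:, 0]) + 0.1 * rng.standard_normal(40)

lam, n = 1e-3, len(x)
K = rbf_kernel(x, x)
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)  # the linear solve

x_test = np.linspace(-3, 3, 5).reshape(-1, 1)
f_test = rbf_kernel(x_test, x) @ alpha               # f*(x) = sum_j alpha_j k(x, x_j)
print(np.round(f_test, 3))
```

The infinite-dimensional search over the reproducing kernel Hilbert space thus collapses to solving one n-by-n linear system for the coefficient vector alpha.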
Representer theorem
[ "Physics", "Mathematics" ]
1,102
[ "Theoretical computer science", "Hilbert spaces", "Quantum mechanics", "Applied mathematics" ]
35,887,610
https://en.wikipedia.org/wiki/Kim%20reformer
The Kim reformer is a type of syngas plant invented by Hyun Yong Kim. It is a high temperature furnace (as shown in figure 1), filled with steam and/or carbon dioxide gas and maintaining a thermal equilibrium at a temperature just above 1200 °C, in which the reforming reaction is at its thermodynamic equilibrium and carbonaceous substance is reformed with the highest efficiency. In December 2000, Kim discovered that the reforming reaction (C + H2O ↔ CO + H2) proceeds at a temperature just above 1200 °C, but not below it. This work was published in International Journal[1] and registered in KR patent, US patent, CN patent, and JP patent. Overview The reformer reforms all carbon atoms of carbonaceous feedstock to produce just syngas, no other hydrocarbons. The high temperature furnace is packed with castables to minimize heat loss in such a way as to maintain the inner temperature of a reduction reactor filled with steam and carbon dioxide (CO2) gas at a temperature just above 1200 °C (aka Kim temperature, see figure 2), and it reforms all carbonaceous substances most efficiently to produce syngas. The produced syngas exits from the reduction reactor at a temperature of 1200 °C. The reduction chamber is heated by super-hot gases (steam and CO2) generated in the syngas burner with oxygen gas. The reduction chamber must be constructed to withstand, physically and chemically, the reforming reaction at 1200 °C. Advantages Both steam reforming and dry reforming are carried out in this reformer; therefore, it is possible to configure the H2/CO ratio by adjusting the H2O/CO2 ratio in the reduction chamber. The reforming reaction is a very specific elementary reaction; all carbon atoms on the left are reformed into carbon monoxide and all hydrogen atoms are reduced to hydrogen gas. The mixture of two product gases is called syngas. These reforming reactions are an endothermic reduction reaction. In contrast, the conventional gasification reaction is a combination of several reactions operating below 1200 °C and the product is a mixture of many gases. History of reforming reactions The process for producing water gas (C + H2O → CO + H2) has been known since the 19th century and it was later found that it is applicable to all carbonaceous substances. Reactions C + H2O ↔ CO + H2 and (-CH2) + H2O → CO + 2H2 are called steam reforming and reactions C + CO2 → 2CO and (-CH2) + CO2 → 2CO + H2, carbon dioxide or dry reforming. The oil industry has used the reforming reactions extensively for the cracking process and to generate hydrogen gas. References Chemical equipment Fuel gas Synthetic fuels
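The claim that the H2/CO ratio can be configured through the H2O/CO2 ratio follows from the stoichiometry of the two reforming reactions quoted above. The sketch below is an idealized, complete-conversion assumption of the present editor, not the inventor's process model: it ignores feedstock-bound hydrogen and equilibrium effects.

```python
# Idealized stoichiometric sketch (an assumption, not the inventor's model):
# if a fraction x of the feed carbon is reformed by steam (C + H2O -> CO + H2)
# and the rest by CO2 (C + CO2 -> 2 CO), the product H2/CO ratio follows
# directly from stoichiometry.

def h2_to_co_ratio(steam_fraction):
    """H2/CO in the syngas when `steam_fraction` of the carbon is steam-reformed."""
    x = steam_fraction
    h2 = x                  # 1 mol H2 per mol C steam-reformed
    co = x + 2 * (1 - x)    # 1 mol CO from the steam route, 2 mol CO from the dry route
    return h2 / co

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"steam fraction {x:.2f}  ->  H2/CO = {h2_to_co_ratio(x):.2f}")
```

Under these assumptions the ratio can be tuned continuously between 0 (pure dry reforming) and 1 (pure steam reforming of carbon).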
Kim reformer
[ "Chemistry", "Engineering" ]
560
[ "Chemical equipment", "nan" ]
35,888,611
https://en.wikipedia.org/wiki/Semibatch%20reactor
In chemical and biological engineering, semibatch (semiflow) reactors operate much like batch reactors in that they take place in a single stirred tank with similar equipment. However, they are modified to allow reactant addition and/or product removal in time. A normal batch reactor is filled with reactants in a single stirred tank at time t = 0, and the reaction then proceeds. A semibatch reactor, however, allows partial filling of reactants with the flexibility of adding more as time progresses. Stirring in both types is very efficient, which allows batch and semibatch reactors to assume a uniform composition and temperature throughout. Advantages The flexibility of adding more reactants over time through semibatch operation has several advantages over a batch reactor. These include: Improved selectivity of a reaction Sometimes a particular reactant can go through parallel paths that yield two different products, only one of which is desired. Consider the simple example below: A→U (desired product) A→W (undesired product) The mole balances, allowing for the variability of the reaction volume, are dNA/dt = rA·V + FA, dNU/dt = rU·V and dNW/dt = rW·V, where FA is the molar rate of addition of the reactant A. Note that the presence of such addition terms, which could be negative in the case of product removal (e.g. by fractional distillation), is what distinguishes the semibatch cases from the simpler batch cases. For standard batch reactors (no addition terms), with rate laws rU = kU·CA^a1 and rW = kW·CA^a2, the selectivity of the desired product is defined as S = rU/rW = (kU/kW)·CA^(a1−a2) for constant-volume (i.e. batch) reactions. If a1 < a2, the concentration of the reactant should be kept at a low level in order to maximize selectivity. This can be accomplished using a semibatch reactor. Better control of exothermic reactions Exothermic reactions release heat, and ones that are highly exothermic can cause safety concerns. Semibatch reactors allow for slow addition of reactants in order to control the heat released, and thus the temperature, in the reactor. Product removal through a purge stream In order to minimize the reversibility of a reaction one must minimize the concentration of the product. This can be done in a semibatch reactor by using a purge stream to remove products and increase the net reaction rate by favoring the forward reaction. Reactor choice It is important to understand that these advantages bear on the decision between using a batch, a semibatch or a continuous reactor in a given process. Both batch and semibatch reactors are more suitable for liquid-phase reactions and small-scale production, because they usually require lower capital costs than a continuous stirred-tank reactor (CSTR) operation, but incur greater costs per unit if production needs to be scaled up. These per-unit costs include labor, materials handling (filling, emptying, cleaning), protective measures, and nonproductive periods that result from changeovers when switching batches. Hence, the capital costs must be weighed against operating costs to determine the correct reactor design to be implemented. References Sources Hill, C. (1937). An Introduction to chemical engineering kinetics & reactor design. John Wiley & Sons, Inc. Heinzle, E. (n.d.). Semi-batch Reactor and Safety. Retrieved from http://www.uni-saarland.de/fak8/heinzle/de/teaching/Technische_Chemie_I/HE3_Semi-Batch Reactor_Text.pdf Wittrup, D. (2007). Reactor Size Comparisons For PFR and CSTR. 
Retrieved from http://ocw.mit.edu/courses/chemical-engineering/10-37-chemical-and-biological-reaction-engineering-spring-2007/lecture-notes/lec09_03072007_w.pdf Chemical reactors
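The selectivity argument above can be illustrated numerically. The sketch below uses assumed rate constants, a first-order desired reaction and a second-order undesired reaction, a constant-volume approximation, and simple explicit Euler integration; it is not taken from the sources cited in this article.

```python
# Numerical sketch (assumed rate constants and feed policy) of the selectivity
# argument: A -> U with rU = kU*CA (order 1) and A -> W with rW = kW*CA**2
# (order 2). Because the undesired reaction has the higher order in A, feeding
# A slowly (semibatch) keeps CA low and favors U.

def run(feed_time_s, n_a_total=10.0, v0=1.0, kU=0.1, kW=0.1, dt=0.01, t_end=400.0):
    nA = nU = nW = 0.0
    V = v0
    q = 0.0 if feed_time_s == 0 else n_a_total / feed_time_s  # mol/s of A fed
    if feed_time_s == 0:           # "batch": all of A charged at t = 0
        nA = n_a_total
    t = 0.0
    while t < t_end:
        cA = nA / V
        rU, rW = kU * cA, kW * cA**2
        feed = q if t < feed_time_s else 0.0
        nA += (-(rU + rW) * V + feed) * dt   # mole balance (constant V assumed)
        nU += rU * V * dt
        nW += rW * V * dt
        t += dt
    return nU, nW

for label, t_feed in (("batch (all A at t=0)", 0.0), ("semibatch (feed over 200 s)", 200.0)):
    nU, nW = run(t_feed)
    print(f"{label:30s} U = {nU:.2f} mol, W = {nW:.2f} mol, U/(U+W) = {nU/(nU+nW):.2f}")
```

With these assumed parameters the semibatch run converts a markedly larger fraction of A into the desired product U than the batch run, because the reactant concentration never builds up to the level that favors the higher-order side reaction.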
Semibatch reactor
[ "Chemistry", "Engineering" ]
784
[ "Chemical reactors", "Chemical reaction engineering", "Chemical equipment" ]
35,891,052
https://en.wikipedia.org/wiki/Photoacoustic%20effect
The photoacoustic effect or optoacoustic effect is the formation of sound waves following light absorption in a material sample. In order to obtain this effect the light intensity must vary, either periodically (modulated light) or as a single flash (pulsed light). The photoacoustic effect is quantified by measuring the formed sound (pressure changes) with appropriate detectors, such as microphones or piezoelectric sensors. The time variation of the electric output (current or voltage) from these detectors is the photoacoustic signal. These measurements are useful to determine certain properties of the studied sample. For example, in photoacoustic spectroscopy, the photoacoustic signal is used to obtain the actual absorption of light in either opaque or transparent objects. It is useful for substances in extremely low concentrations, because very strong pulses of light from a laser can be used to increase sensitivity and very narrow wavelengths can be used for specificity. Furthermore, photoacoustic measurements serve as a valuable research tool in the study of the heat evolved in photochemical reactions (see: photochemistry), particularly in the study of photosynthesis. Most generally, electromagnetic radiation of any kind can give rise to a photoacoustic effect. This includes the whole range of electromagnetic frequencies, from gamma radiation and X-rays to microwave and radio. Still, much of the reported research and applications, utilizing the photoacoustic effect, is concerned with the near ultraviolet/visible and infrared spectral regions. History The discovery of the photoacoustic effect dates back to 1880, when Alexander Graham Bell was experimenting with long-distance sound transmission. Through his invention, called "photophone", he transmitted vocal signals by reflecting sun-light from a moving mirror to a selenium solar cell receiver. As a byproduct of this investigation, he observed that sound waves were produced directly from a solid sample when exposed to beam of sunlight that was rapidly interrupted with a rotating slotted wheel. He noticed that the resulting acoustic signal was dependent on the type of the material and correctly reasoned that the effect was caused by the absorbed light energy, which subsequently heats the sample. Later Bell showed that materials exposed to the non-visible (ultra-violet and infra-red) portions of the solar spectrum can also produce sounds and invented a device, which he called "spectrophone", to apply this effect for spectral identification of materials. Bell himself and later John Tyndall and Wilhelm Röntgen extended these experiments, demonstrating the same effect in liquids and gases. However, the results were too crude, dependent on ear detection, and this technique was soon abandoned. The application of the photoacoustic effect had to wait until the development of sensitive sensors and intense light sources. In 1938 Mark Leonidovitch Veingerov revived the interest in the photoacoustic effect, being able to use it in order to measure very small carbon dioxide concentration in nitrogen gas (as low as 0.2% in volume). Since then research and applications grew faster and wider, acquiring several fold more detection sensitivity. While the heating effect of the absorbed radiation was considered to be the prime cause of the photoacoustic effect, it was shown in 1978 that gas evolution resulting from a photochemical reaction can also cause a photoacoustic effect. 
Independently, considering the apparent anomalous behaviour of the photoacoustic signal from a plant leaf, which could not be explained solely by the heating effect of the exciting light, led to the cognition that photosynthetic oxygen evolution is normally a major contributor to the photoacoustic signal in this case. Physical mechanisms Photothermal mechanism Although much of the literature on the subject is concerned with just one mechanism, there are actually several different mechanisms that produce the photoacoustic effect. The primary universal mechanism is photothermal, based on the heating effect of the light and the consequent expansion of the light-absorbing material. In detail, the photothermal mechanism consists of the following stages: conversion of the absorbed pulsed or modulated radiation into heat energy. temporal changes of the temperatures at the loci where radiation is absorbed – rising as radiation is absorbed and falling when radiation stops and the system cools. expansion and contraction following these temperature changes, which are "translated" to pressure changes. The pressure changes, which occur in the region where light was absorbed, propagate within the sample body and can be sensed by a sensor coupled directly to it. Commonly, for the case of a condensed phase sample (liquid, solid), pressure changes are rather measured in the surrounding gaseous phase (commonly air), formed there by the diffusion of the thermal pulsations. The main physical picture, in this case, envisions the original temperature pulsations as origins of propagating temperature waves ("thermal waves"), which travel in the condensed phase, ultimately reaching the surrounding gaseous phase. The resulting temperature pulsations in the gaseous phase are the prime cause of the pressure changes there. The amplitude of the traveling thermal wave decreases strongly (exponentially) along its propagation direction, but if its propagation distance in the condensed phase is not too long, its amplitude near the gaseous phase is sufficient to create detectable pressure changes. This property of the thermal wave confers unique features to the detection of light absorption by the photoacoustic method. The temperature and pressure changes involved are minute, compared to everyday scale – typical order of magnitude for the temperature changes, using ordinary light intensities, is about micro- to millidegrees and for the resulting pressure changes is about nano- to microbars. The photothermal mechanism manifests itself, besides the photoacoustic effect, also by other physical changes, notably emission of infra-red radiation and changes in the refraction index. Correspondingly, it may be detected by various other means, described by terms such as "photothermal radiometry", "thermal lens" and "thermal beam deflection" (popularly also known as "mirage" effect, see Photothermal spectroscopy). These methods parallel the photoacoustic detection. However, each method has its special range of application. Other While the photothermal mechanism is universal, there could exist additional other mechanisms, superimposed on the photothermal mechanism, which may contribute significantly to the photoacoustic signal. These mechanisms are generally related to photophysical processes and photochemical reactions following light absorption: (1) change in the material balance of the sample or the gaseous phase around the sample; (2) change in the molecular organization, which results in molecular volume changes. 
Most prominent examples for these two kinds of mechanisms are in photosynthesis. The first mechanism above is mostly conspicuous in a photosynthesizing plant leaf. There, the light induced oxygen evolution causes pressure changes in the air phase, resulting in a photoacoustic signal, which is comparable in magnitude to that caused by the photothermal mechanism. This mechanism was tentatively named "photobaric". The second mechanism shows up in photosynthetically active sub-cell complexes in suspension (e.g. photosynthetic reaction centers). There, the electric field which is formed in the reaction center, following the light induced electron transfer process, causes a micro electrostriction effect with a change in the molecular volume. This, in turn, induces a pressure wave which propagates in the macroscopic medium. Another case for this mechanism is Bacteriorhodopsin proton pump. Here the light induced change in the molecular volume is caused by conformational changes that occur in this protein following light absorption. Detection of the photoacoustic effect In applying the photoacoustic effect there exist various modes of measurement. Gaseous samples or condensed phase samples, where the pressure is measured in the surrounding gaseous phase, are usually probed with a microphone. The useful applicable time-scale in this case is in the millisecond to sub-second scale. Most often, In this case, the exciting light is continuously chopped or modulated at a certain frequency (mostly in the range between ca. 10–10000 Hz) and the modulated photoacoustic signal is analyzed with a lock-in amplifier for its amplitude and phase, or for the inphase and quadrature components. When the pressure is measured within the condensed phase of the probed specimen, one utilizes piezoelectric sensors inserted into or coupled to the specimen itself. In this case the time scale is between less than nanoseconds to many microseconds The photoacoustic signal, obtained from the various pressure sensors, depends on the physical properties of the system, the mechanism that creates the photoacoustic signal, the light-absorbing material, the dynamics of the excited state relaxation and the modulation frequency or the pulse profile of the radiation, as well as the sensor properties. This calls for appropriate procedures to (i) separate between the signals due to different mechanisms and (ii) to obtain the time dependence of the heat evolution (in the case of the photothermal mechanism) or the oxygen evolution (in the case of the photobaric mechanism in photosynthesis) or the time dependence of the volume changes, from the time dependence of the resulting photoacoustic signal. Applications Considering the photothermal mechanism alone, the photoacoustic signal is useful in measuring the light absorption spectrum, particularly for transparent samples where the light absorption is very small. In this case the ordinary method of absorption spectroscopy, based on difference of the intensities of a light beam before and after its passage through the sample, is not practical. In photoacoustic spectroscopy there is no such limitation. the signal is directly related to the light absorption and the light intensity. Dividing the signal spectrum by the light intensity spectrum can give a relative percent absorption spectrum, which can be calibrated to yield absolute values. This is very useful to detect very small concentrations of various materials. 
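The lock-in analysis of a modulated photoacoustic signal mentioned above can be sketched as follows. The signal here is synthetic and all parameters (modulation frequency, amplitude, phase, noise level) are assumed for illustration only.

```python
import numpy as np

# Sketch (synthetic data, assumed parameters) of the lock-in detection step:
# the photoacoustic signal modulated at f_mod is multiplied by reference
# sin/cos waves and averaged (a crude low-pass filter) to give the in-phase
# (X) and quadrature (Y) components, hence amplitude and phase.

fs, f_mod, duration = 50_000.0, 137.0, 2.0        # Hz, Hz, s (assumed values)
t = np.arange(0, duration, 1.0 / fs)

true_amp, true_phase = 2.5e-3, np.deg2rad(35.0)   # "unknown" signal parameters
rng = np.random.default_rng(1)
signal = true_amp * np.sin(2 * np.pi * f_mod * t + true_phase)
signal += 5e-3 * rng.standard_normal(t.size)      # detector noise

ref_sin = np.sin(2 * np.pi * f_mod * t)
ref_cos = np.cos(2 * np.pi * f_mod * t)
X = 2.0 * np.mean(signal * ref_sin)               # in-phase component
Y = 2.0 * np.mean(signal * ref_cos)               # quadrature component

print(f"amplitude = {np.hypot(X, Y):.4e} (true {true_amp:.4e})")
print(f"phase     = {np.degrees(np.arctan2(Y, X)):.1f} deg (true 35.0 deg)")
```

Averaging over many modulation cycles recovers the amplitude and phase of a signal that is much weaker than the per-sample noise, which is why modulated excitation with lock-in detection is the standard measurement mode.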
Photoacoustic spectroscopy is also useful for the opposite case of opaque samples, where the absorption is essentially complete. In an arrangement where a sensor is placed in a gaseous phase above the sample and the light impinges the sample from above, the photoacoustic signal results from an absorption zone close to the surface. A typical parameter which governs the signal in this case is the "thermal diffusion length", which depends on the material and the modulation frequency and ordinarily is in the order of several micrometers. The signal is related to the light absorbed in the small distance of the thermal diffusion length, allowing the determination of the absorption spectrum. This allows also to separately analyze a surface that is distinct from the bulk. By varying the modulation frequency and wavelength of the probing radiation one essentially varies the probed depth, which results in the possibility of depth profiling and photoacoustic imaging, which discloses inhomogeneities within the sample. This analysis includes also the possibility to determine the thermal properties from the photoacoustic signal. Recently, the photoacoustic approach has been utilized to quantitatively measure macromolecules, such as proteins. The photoacoustic immunoassay labels and detects target proteins using nanoparticles that can generate strong acoustic signals. The photoacoustics-based protein analysis has also been applied for point-of-care testings. Another application of the photoacoustic effect is its ability to estimate the chemical energies stored in various steps of a photochemical reaction. Following light absorption photophysical and photochemical conversions occur, which store part of the light energy as chemical energy. Energy storage leads to less heat evolution. The resulting smaller photoacoustic signal thus gives a quantitative estimate of the extent of the energy storage. For transient species this requires the measurement of the signal in the relevant time scale and the capability to extract from the temporal part of the signal the time-dependent heat evolution, by proper deconvolution. There are numerous examples for this application. A similar application is the study of the conversion of light energy to electrical energy in solar cells. A special example is the application of the photoacoustic effect in photosynthesis research. Photoacoustic effect in photosynthesis Photosynthesis is a very suitable platform to be investigated by the photoacoustic effect, providing many examples to its various uses. As noted above, the photoacoustic signal from wet photosynthesizing specimens (e.g. microalgae in suspension, sea weed) is principally photothermal. The photoacoustic signal from spongy structures (leaves, lichens) is a combination of photothermal and photobaric (gas evolution or uptake) contributions. The photoacoustic signal from preparations which carry out the primary electron transfer reactions (e.g. reaction centers) is a combination of photothermal and molecular volume changes contributions. In each case, respectively, photoacoustic measurements provided information on Energy storage (i.e. the fraction of light energy which is converted to chemical energy in the photosynthetic process; The extent and dynamics of the gas evolution and uptake from leaves or lichens. Most usually it is photosynthetic oxygen evolution which contributes to the photoacoustic signal; Carbon dioxide uptake is a slow process and does not show up in photoacoustic measurements. 
Under very specific conditions, however, the photoacoustic signal becomes transiently negative, presumably reflecting oxygen uptake. However, this needs more verification; Molecular volume changes, which occur during the primary steps of photosynthetic electron transfer. These measurements provided information related to the mechanism of photosynthesis, as well as give indications on the intactness and health of the specimen. Examples are: (a) the energetics of the primary electron transfer processes, obtained from the energy storage and molecular volume change measured under sub-microsecond flashes; (b) The characteristics of the 4-step oxidation cycle in photosystem II, obtained for leaves by monitoring photoacoustic pulsed signals and their oscillatory behavior under repetitive exciting light flashes; (c) the characteristics of photosystem I and photosystem II of photosynthesis (absorption spectrum, light distribution to the two photosystems) and their interactions. This is obtained by using continuously modulated light of a certain specific wavelength to excite the photoacoustic signal and measure changes in energy storage and oxygen evolution caused by background light at various chosen wavelengths. In general, photoacoustic measurements of energy storage require a reference sample for comparison. It is a sample with exactly the same light absorption (at the given excitation wavelength) but which completely degrades all the absorbed light into heat within the time resolution of the measurement. It is lucky that photosynthetic systems are self-calibrating, providing such a reference in one sample, as follows: One compares two signals: one, which is obtained with the probing modulated/pulsed light alone and the other when a steady non-modulated light (referred to as background light), which is strong enough to drive photosynthesis into saturation, is added. The added steady light does not produce any photoacoustic effect by itself, but changes the photoacoustic response due to the modulated/pulsed probing light. The resulting signal serves as a reference to all other measurements in absence of the background light. The photothermal part of the reference signal is maximal, since at photosynthetic saturation no energy is stored. At the same time the contribution of the other mechanisms tends to zero at saturation. Thus the reference signal is proportional to the total absorbed light energy. In order to separate and define the photobaric and photothermal contributions in spongy samples (leaves, lichens) one uses the following properties of the photoacoustic signal: (1) At low frequencies (below roughly 100 Hz) the photobaric part of the photoacoustic signal may be quite large and the total signal decreases under the background light. The photobaric signal is obtained in principle from the difference of signals (the total signal minus the reference signal, after a correction to account for the energy storage). (2) At sufficiently high frequencies, however, the photobaric signal is very much attenuated in comparison with the photothermal component and can be neglected. Also, no photobaric signal can be observed even at low frequencies in a leaf with its inner air space filled with water. This is true also in live algal thalli, suspensions of microalgae and photosynthetic bacteria. This is because the photobaric signal depends on oxygen diffusion from the photosynthetic membranes to the air phase, and is largely attenuated as the diffusion distance in the aqueous medium increases. 
In all the above instances when no photobaric signal is observed one may determine the energy storage by comparing the photoacoustic signal obtained with the probing light alone, to the reference signal. The parameters obtained from the above measurements are used in a variety of ways. Energy storage and the intensity of the photobaric signal are related to the efficiency of photosynthesis and can be used to monitor and follow the health of photosynthesizing organisms. They are also used to obtain mechanistic insight on the photosynthetic process: light of different wavelengths allows one to obtain the efficiency spectrum of photosynthesis, the light distribution between the two photosystems of photosynthesis and to identify different taxa of phytoplankton. The use of pulsed lasers gives thermodynamic and kinetic information on the primary electron transfer steps of photosynthesis. See also Microwave auditory effect References Acoustics Spectroscopy Medical diagnosis
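A minimal sketch of the self-calibrating energy-storage estimate described above is given below. The signal amplitudes are invented example values; the only assumption carried over from the text is that the saturating-background measurement serves as the zero-storage reference.

```python
# Sketch of the energy-storage estimate, with made-up signal amplitudes.
# a_ref is the photothermal amplitude measured with saturating background
# light (no energy stored, maximal heat); a_probe is the amplitude measured
# with the modulated probe light alone.

def energy_storage(a_probe_only, a_with_saturating_background):
    """Fraction of absorbed light energy stored photochemically."""
    return 1.0 - a_probe_only / a_with_saturating_background

a_ref = 1.00    # arbitrary units, reference under saturating background light
a_probe = 0.78  # probe light alone (assumed example value)

print(f"energy storage = {energy_storage(a_probe, a_ref) * 100:.0f}% of the absorbed light")
```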
Photoacoustic effect
[ "Physics", "Chemistry" ]
3,636
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Classical mechanics", "Acoustics", "Spectroscopy" ]
35,891,416
https://en.wikipedia.org/wiki/SpiNNaker
SpiNNaker (spiking neural network architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group (APT) at the Department of Computer Science, University of Manchester. It is composed of 57,600 processing nodes, each with 18 ARM9 processors (specifically ARM968) and 128 MB of mobile DDR SDRAM, totalling 1,036,800 cores and over 7 TB of RAM. The computing platform is based on spiking neural networks, useful in simulating the human brain (see Human Brain Project). The completed design is housed in 10 19-inch racks, with each rack holding over 100,000 cores. The cards holding the chips are held in 5 blade enclosures, and each core emulates 1,000 neurons. In total, the goal is to simulate the behaviour of aggregates of up to a billion neurons in real time. This machine requires about 100 kW from a 240 V supply and an air-conditioned environment. SpiNNaker is being used as one component of the neuromorphic computing platform for the Human Brain Project. On 14 October 2018 the HBP announced that the million core milestone had been achieved. On 24 September 2019 HBP announced that an 8 million euro grant, that will fund construction of the second generation machine, (called SpiNNcloud) has been given to TU Dresden. References Cybernetics Supercomputers Computational neuroscience Computational fields of study AI accelerators Computer architecture Department of Computer Science, University of Manchester Science and technology in Greater Manchester
SpiNNaker
[ "Technology", "Engineering" ]
321
[ "Supercomputers", "Computational fields of study", "Supercomputing", "Computer architecture", "Computer engineering", "Computer hardware stubs", "Computing and society", "Computing stubs", "Computers" ]
35,892,076
https://en.wikipedia.org/wiki/C19H23NO2
{{DISPLAYTITLE:C19H23NO2}} The molecular formula C19H23NO2 (molar mass: 297.39 g/mol, exact mass: 297.1729 u) may refer to: HDEP-28, or ethylnaphthidate Minamestane Trepipam
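The quoted average and monoisotopic masses can be checked from standard atomic-mass tables; the short sketch below does so for C19H23NO2 (the table values used are common reference figures, not data from this article).

```python
# Quick check of the quoted masses for C19H23NO2, using standard average
# atomic weights and monoisotopic masses (values taken from common tables).

AVG  = {"C": 12.011, "H": 1.008,    "N": 14.007,    "O": 15.999}
MONO = {"C": 12.0,   "H": 1.007825, "N": 14.003074, "O": 15.994915}

formula = {"C": 19, "H": 23, "N": 1, "O": 2}

molar_mass = sum(AVG[el] * n for el, n in formula.items())
exact_mass = sum(MONO[el] * n for el, n in formula.items())
print(f"molar mass ~ {molar_mass:.2f} g/mol")  # article quotes 297.39 g/mol
print(f"exact mass ~ {exact_mass:.4f} u")      # article quotes 297.1729 u
```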
C19H23NO2
[ "Chemistry" ]
70
[ "Isomerism", "Set index articles on molecular formulas" ]
35,892,078
https://en.wikipedia.org/wiki/Thermoanaerobacter%20italicus
Thermoanaerobacter italicus is a species of thermophilic, anaerobic, spore-forming bacteria. T. italicus was first isolated from hot springs in the north of Italy. The growth range for the organism is 45 to 78°C, with optimal growth conditions at 70°C and pH 7.0. The organism stains Gram-negative, although it has a Gram-positive cell structure. The species was named italicus in reference to the Italian hot springs in which it was first isolated. The organism was originally isolated because of its ability to digest pectin and pectate. References External links Type strain of Thermoanaerobacter italicus at BacDive - the Bacterial Diversity Metadatabase Thermoanaerobacterales Thermophiles Anaerobes Bacteria described in 1998
Thermoanaerobacter italicus
[ "Biology" ]
179
[ "Bacteria", "Anaerobes" ]
35,894,731
https://en.wikipedia.org/wiki/C8H10N6
{{DISPLAYTITLE:C8H10N6}} The molecular formula C8H10N6 (molar mass: 190.20 g/mol, exact mass: 190.0967 u) may refer to: Dihydralazine 4-Dimethylaminophenylpentazole Molecular formulas
C8H10N6
[ "Physics", "Chemistry" ]
72
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
35,895,516
https://en.wikipedia.org/wiki/Rule%20of%20mutual%20exclusion
The rule of mutual exclusion in molecular spectroscopy relates the observation of molecular vibrations to molecular symmetry. It states that no normal mode can be both infrared and Raman active in a molecule that possesses a center of symmetry. This is a powerful application of group theory to vibrational spectroscopy, and allows one to easily detect the presence of this symmetry element by comparison of the IR and Raman spectra generated by the same molecule. The rule arises because in a centrosymmetric point group, IR active modes, which must transform according to the same irreducible representation generated by one of the components of the dipole moment vector (x, y or z), must be of ungerade (u) symmetry, i.e. their character under inversion is -1, while Raman active modes, which transform according to the symmetry of the polarizability tensor (product of two coordinates), must be of gerade (g) symmetry since their character under inversion is +1. Thus, in the character table there is no irreducible representation that spans both IR and Raman active modes, and so there is no overlap between the two spectra. This does not mean that a vibrational mode which is not Raman active must be IR active: in fact, it is still possible that a mode of a particular symmetry is neither Raman nor IR active. Such spectroscopically "silent" or "inactive" modes exist in molecules such as ethylene (C2H4), benzene (C6H6) and the tetrachloroplatinate ion ([PtCl4]2−). References Raman spectroscopy Infrared spectroscopy Theoretical chemistry
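The rule can be checked directly against a character table. The sketch below does this for the centrosymmetric point group C2h, using the standard assignment of linear and quadratic basis functions to its irreducible representations (taken from a character table and treated here as the example's assumption).

```python
# Sketch of the rule for the centrosymmetric point group C2h, using the
# standard basis-function assignments from a character table.

ir_basis = {       # irreps spanned by the dipole components x, y, z (ungerade)
    "Au": ["z"],
    "Bu": ["x", "y"],
}
raman_basis = {    # irreps spanned by quadratic products (polarizability, gerade)
    "Ag": ["x2", "y2", "z2", "xy"],
    "Bg": ["xz", "yz"],
}

ir_active = set(ir_basis)
raman_active = set(raman_basis)

print("IR-active irreps:   ", sorted(ir_active))
print("Raman-active irreps:", sorted(raman_active))
print("overlap:", ir_active & raman_active or "none (mutual exclusion holds)")
```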
Rule of mutual exclusion
[ "Physics", "Chemistry", "Astronomy" ]
334
[ "Spectroscopy stubs", "Spectrum (physical sciences)", "Astronomy stubs", "Theoretical chemistry", "Infrared spectroscopy", "nan", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
41,497,810
https://en.wikipedia.org/wiki/Hook%20length%20formula
In combinatorial mathematics, the hook length formula is a formula for the number of standard Young tableaux whose shape is a given Young diagram. It has applications in diverse areas such as representation theory, probability, and algorithm analysis; for example, the problem of longest increasing subsequences. A related formula gives the number of semi-standard Young tableaux, which is a specialization of a Schur polynomial. Definitions and statement Let be a partition of . It is customary to interpret graphically as a Young diagram, namely a left-justified array of square cells with rows of lengths . A (standard) Young tableau of shape is a filling of the cells of the Young diagram with all the integers , with no repetition, such that each row and each column form increasing sequences. For the cell in position , in the th row and th column, the hook is the set of cells such that and or and . The hook length is the number of cells in . The hook length formula expresses the number of standard Young tableaux of shape , denoted by or , as where the product is over all cells of the Young diagram. Examples The figure on the right shows hook lengths for the cells in the Young diagram , corresponding to the partition 9 = 4 + 3 + 1 + 1. The hook length formula gives the number of standard Young tableaux as: A Catalan number counts Dyck paths with steps going up (U) interspersed with steps going down (D), such that at each step there are never more preceding D's than U's. These are in bijection with the Young tableaux of shape : a Dyck path corresponds to the tableau whose first row lists the positions of the U-steps, while the second row lists the positions of the D-steps. For example, UUDDUD correspond to the tableaux with rows 125 and 346. This shows that , so the hook formula specializes to the well-known product formula History There are other formulas for , but the hook length formula is particularly simple and elegant. A less convenient formula expressing in terms of a determinant was deduced independently by Frobenius and Young in 1900 and 1902 respectively using algebraic methods. MacMahon found an alternate proof for the Young–Frobenius formula in 1916 using difference methods. The hook length formula itself was discovered in 1953 by Frame, Robinson, and Thrall as an improvement to the Young–Frobenius formula. Sagan describes the discovery as follows. Despite the simplicity of the hook length formula, the Frame–Robinson–Thrall proof is not very insightful and does not provide any intuition for the role of the hooks. The search for a short, intuitive explanation befitting such a simple result gave rise to many alternate proofs. Hillman and Grassl gave the first proof that illuminates the role of hooks in 1976 by proving a special case of the Stanley hook-content formula, which is known to imply the hook length formula. Greene, Nijenhuis, and Wilf found a probabilistic proof using the hook walk in which the hook lengths appear naturally in 1979. Remmel adapted the original Frame–Robinson–Thrall proof into the first bijective proof for the hook length formula in 1982. A direct bijective proof was first discovered by Franzblau and Zeilberger in 1982. Zeilberger also converted the Greene–Nijenhuis–Wilf hook walk proof into a bijective proof in 1984. A simpler direct bijection was announced by Pak and Stoyanovskii in 1992, and its complete proof was presented by the pair and Novelli in 1997. Meanwhile, the hook length formula has been generalized in several ways. R. M. 
Thrall found the analogue to the hook length formula for shifted Young Tableaux in 1952. Sagan gave a shifted hook walk proof for the hook length formula for shifted Young tableaux in 1980. Sagan and Yeh proved the hook length formula for binary trees using the hook walk in 1989. Proctor gave a poset generalization (see below). Probabilistic proof Knuth's heuristic argument The hook length formula can be understood intuitively using the following heuristic, but incorrect, argument suggested by D. E. Knuth. Given that each element of a tableau is the smallest in its hook and filling the tableau shape at random, the probability that cell will contain the minimum element of the corresponding hook is the reciprocal of the hook length. Multiplying these probabilities over all and gives the formula. This argument is fallacious since the events are not independent. Knuth's argument is however correct for the enumeration of labellings on trees satisfying monotonicity properties analogous to those of a Young tableau. In this case, the 'hook' events in question are in fact independent events. Probabilistic proof using the hook walk This is a probabilistic proof found by C. Greene, A. Nijenhuis, and H. S. Wilf in 1979. Define We wish to show that . First, where the sum runs over all Young diagrams obtained from by deleting one corner cell. (The maximal entry of the Young tableau of shape occurs at one of its corner cells, so deleting it gives a Young tableaux of shape .) We define and , so it is enough to show the same recurrence which would imply by induction. The above sum can be viewed as a sum of probabilities by writing it as We therefore need to show that the numbers define a probability measure on the set of Young diagrams with . This is done in a constructive way by defining a random walk, called the hook walk, on the cells of the Young diagram , which eventually selects one of the corner cells of (which are in bijection with diagrams for which ). The hook walk is defined by the following rules. Pick a cell uniformly at random from cells. Start the random walk from there. Successor of current cell is chosen uniformly at random from the hook . Continue until you reach a corner cell . Proposition: For a given corner cell of , we have where . Given this, summing over all corner cells gives as claimed. Connection to representations of the symmetric group The hook length formula is of great importance in the representation theory of the symmetric group , where the number is known to be equal to the dimension of the complex irreducible representation associated to . Detailed discussion The complex irreducible representations of the symmetric group are indexed by partitions of (see Specht module) . Their characters are related to the theory of symmetric functions via the Hall inner product: where is the Schur function associated to and is the power-sum symmetric function of the partition associated to the cycle decomposition of . For example, if then . Since the identity permutation has the form in cycle notation, , the formula says The expansion of Schur functions in terms of monomial symmetric functions uses the Kostka numbers: Then the inner product with is , because . Note that is equal to , so that from considering the regular representation of , or combinatorially from the Robinson–Schensted–Knuth correspondence. The computation also shows that: This is the expansion of in terms of Schur functions using the coefficients given by the inner product, since . 
The above equality can be proven also checking the coefficients of each monomial at both sides and using the Robinson–Schensted–Knuth correspondence or, more conceptually, looking at the decomposition of by irreducible modules, and taking characters. See Schur–Weyl duality. Proof of hook formula using Frobenius formula Source: By the above considerations so that where is the Vandermonde determinant. For the partition , define for . For the following we need at least as many variables as rows in the partition, so from now on we work with variables . Each term is equal to (See Schur function.) Since the vector is different for each partition, this means that the coefficient of in , denoted , is equal to . This is known as the Frobenius Character Formula, which gives one of the earliest proofs. It remains only to simplify this coefficient. Multiplying and we conclude that our coefficient as which can be written as The latter sum is equal to the following determinant which column-reduces to a Vandermonde determinant, and we obtain the formula Note that is the hook length of the first box in each row of the Young diagram, and this expression is easily transformed into the desired form : one shows , where the latter product runs over the th row of the Young diagram. Connection to longest increasing subsequences The hook length formula also has important applications to the analysis of longest increasing subsequences in random permutations. If denotes a uniformly random permutation of order , denotes the maximal length of an increasing subsequence of , and denotes the expected (average) value of , Anatoly Vershik and Sergei Kerov and independently Benjamin F. Logan and Lawrence A. Shepp showed that when is large, is approximately equal to . This answers a question originally posed by Stanislaw Ulam. The proof is based on translating the question via the Robinson–Schensted correspondence to a problem about the limiting shape of a random Young tableau chosen according to Plancherel measure. Since the definition of Plancherel measure involves the quantity , the hook length formula can then be used to perform an asymptotic analysis of the limit shape and thereby also answer the original question. The ideas of Vershik–Kerov and Logan–Shepp were later refined by Jinho Baik, Percy Deift and Kurt Johansson, who were able to achieve a much more precise analysis of the limiting behavior of the maximal increasing subsequence length, proving an important result now known as the Baik–Deift–Johansson theorem. Their analysis again makes crucial use of the fact that has a number of good formulas, although instead of the hook length formula it made use of one of the determinantal expressions. Related formulas The formula for the number of Young tableaux of shape was originally derived from the Frobenius determinant formula in connection to representation theory: Hook lengths can also be used to give a product representation to the generating function for the number of reverse plane partitions of a given shape. If is a partition of some integer , a reverse plane partition of with shape is obtained by filling in the boxes in the Young diagram with non-negative integers such that the entries add to and are non-decreasing along each row and down each column. The hook lengths can be defined as with Young tableaux. If denotes the number of reverse plane partitions of with shape , then the generating function can be written as Stanley discovered another formula for the same generating function. 
In general, if is any poset with elements, the generating function for reverse -partitions is where is a polynomial such that is the number of linear extensions of . In the case of a partition , we are considering the poset in its cells given by the relation . So a linear extension is simply a standard Young tableau, i.e. Proof of hook formula using Stanley's formula Combining the two formulas for the generating functions we have Both sides converge inside the disk of radius one and the following expression makes sense for It would be violent to plug in 1, but the right hand side is a continuous function inside the unit disk and a polynomial is continuous everywhere so at least we can say Applying L'Hôpital's rule times yields the hook length formula Semi-standard tableaux hook length formula The Schur polynomial is the generating function of semistandard Young tableaux with shape and entries in . Specializing this to gives the number of semi-standard tableaux, which can be written in terms of hook lengths:The Young diagram corresponds to an irreducible representation of the special linear group , and the Schur polynomial is also the character of the diagonal matrix acting on this representation. The above specialization is thus the dimension of the irreducible representation, and the formula is an alternative to the more general Weyl dimension formula. We may refine this by taking the principal specialization of the Schur function in the variables : where for the conjugate partition . Skew shape formula There is a generalization of this formula for skew shapes, where the sum is taken over excited diagrams of shape and boxes distributed according to . A variation on the same theme is given by Okounkov and Olshanski of the form where is the so-called shifted Schur function . Generalization to d-complete posets Young diagrams can be considered as finite order ideals in the poset , and standard Young tableaux are their linear extensions. Robert Proctor has given a generalization of the hook length formula to count linear extensions of a larger class of posets generalizing both trees and skew diagrams. References External links The Surprising Mathematics of Longest Increasing Subsequences by Dan Romik. Contains discussions of the hook length formula and several of its variants, with applications to the mathematics of longest increasing subsequences. Combinatorics
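A short computational sketch of the hook length formula follows. It computes the hook lengths and the number of standard Young tableaux for the article's example partition 9 = 4 + 3 + 1 + 1 (for which the answer is 216), using the usual formula f^λ = n! / ∏ h(i,j); the implementation details are this editor's own.

```python
from math import factorial

# Sketch: hook lengths and the hook length formula f^lambda = n! / prod(hooks)
# for the example partition 9 = 4 + 3 + 1 + 1 (expected answer: 216).

def hook_lengths(shape):
    """Hook length of every cell of the Young diagram of `shape` (a partition)."""
    conjugate = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    return [
        [(shape[i] - j) + (conjugate[j] - i) - 1 for j in range(shape[i])]
        for i in range(len(shape))
    ]

def num_standard_young_tableaux(shape):
    n = sum(shape)
    prod = 1
    for row in hook_lengths(shape):
        for h in row:
            prod *= h
    return factorial(n) // prod

shape = (4, 3, 1, 1)
for row in hook_lengths(shape):
    print(row)
print("number of standard Young tableaux:", num_standard_young_tableaux(shape))
# The two-row case (n, n) reproduces the Catalan numbers, e.g. shape (3, 3) -> 5.
```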
Hook length formula
[ "Mathematics" ]
2,701
[ "Discrete mathematics", "Combinatorics" ]
41,497,935
https://en.wikipedia.org/wiki/Cobalt%28II%29%20phosphate
Cobalt phosphate is the inorganic compound with the formula Co3(PO4)2. It is a commercial inorganic pigment known as cobalt violet. Thin films of this material are water oxidation catalysts. Preparation and structure The tetrahydrate Co3(PO4)2•4H2O precipitates as a solid upon mixing aqueous solutions of cobalt(II) and phosphate salts. Upon heating, the tetrahydrate converts to the anhydrous material. According to X-ray crystallography, the anhydrous Co3(PO4)2 consists of discrete phosphate () anions that link centres. The cobalt ions occupy both octahedral (six-coordinate) and pentacoordinate sites in a 1:2 ratio. See also List of inorganic pigments References Phosphates Cobalt(II) compounds Inorganic pigments
Cobalt(II) phosphate
[ "Chemistry" ]
181
[ "Inorganic pigments", "Phosphates", "Inorganic compounds", "Salts" ]
41,500,960
https://en.wikipedia.org/wiki/Fluorescent%20in%20situ%20sequencing
Fluorescent in situ sequencing (FISSEQ) is a method of sequencing a cell's RNA while it remains in tissue or culture using next-generation sequencing. Introduction FISSEQ combines the spatial context of RNA-FISH and the global transcriptome profiling of RNA-seq. FISSEQ preserves the tissue allowing single molecule in situ RNA localization. The foundation of the method is a novel nucleic acid sequencing library construction method that stably cross-links cDNA amplicons within biological samples. Sequencing data is then generated through an intensive interleaved microscopy and biochemistry protocol and subsequent image processing and bioinformatics. FISSEQ is compatible with diverse sample types including cell culture, tissue sections, and whole mount embryos. FISSEQ is an example of an extremely dense form of in-situ nucleic acid readout: every letter along the RNA chain is read. Thus, barcodes for FISSEQ can be packed into a short string of DNA, as short as 15-20 nucleotides long for the mouse brain or 5 nucleotides for targeted cancer gene panels. Methods In FISSEQ, a series of biochemical processing steps, such as DNA ligations or single-base DNA polymerase extensions, are performed on a block of fixed tissue, interlaced with fluorescent imaging steps. The process is conceptually identical to the mechanism of fluorescent sequencing by synthesis in a commercial bulk DNA sequencing machine, except that it is performed in fixed tissue. Each DNA or RNA molecule in the sample is first “amplified” (i.e., copied) in-situ via rolling-circle amplification to create a localized “rolling circle colony” (rolony) consisting of identical copies of the parent molecule. A series of biochemical steps are then carried out. In the kth cycle, a fluorescent tag is introduced, the color of which corresponds to the identity of the kth base along the rolony’s parent DNA strand. The system is then “paused” in this state for imaging. The entire sample can be imaged in each cycle. The fluorescent tags are then cleaved and washed away, and the next cycle is initiated. Each rolony – corresponding to a single “parent” DNA or RNA molecule in the tissue – thus appears across a series of fluorescent images, as a localized “spot” with a sequence of colors corresponding to the nucleotide sequence of the parent molecule. The nucleotide sequence of each DNA or RNA molecule is thus read out in-situ via fluorescent microscopy. History The development of FISSEQ began over a decade ago. The underlying principles are similar to those that ushered in the sequencing revolution. In both bulk high-throughput sequencing and FISSEQ, short sequences are locally amplified and then imaged one nucleotide at a time. However, the requirement to sequence RNA in intact tissue—rather than isolated and purified DNA, as in conventional bulk sequencing—posed additional challenges. These limitations were overcome, and FISSEQ allows the joint, high-throughput readout of sequence and spatial information. See also Nucleoside triphosphate Polony sequencing References DNA Molecular genetics
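The per-cycle readout described above can be caricatured in a few lines of code. The sketch below assumes a simple one-to-one mapping between fluorescence channels and bases, which is an assumption of this example; real sequencing chemistries (for instance ligation-based ones) use more elaborate color encodings, and real pipelines include registration, normalization, and error correction.

```python
# Toy sketch of per-cycle base calling for one rolony (assumed one-to-one
# channel-to-base mapping; not the actual FISSEQ chemistry or pipeline).

CHANNEL_TO_BASE = {"ch1": "A", "ch2": "C", "ch3": "G", "ch4": "T"}

def call_bases(rolony_channels_per_cycle):
    """Turn a rolony's brightest channel in each imaging cycle into a base call."""
    return "".join(CHANNEL_TO_BASE[ch] for ch in rolony_channels_per_cycle)

# Hypothetical channel intensities for one rolony over 5 imaging cycles.
cycles = [
    {"ch1": 0.9, "ch2": 0.1, "ch3": 0.2, "ch4": 0.1},
    {"ch1": 0.1, "ch2": 0.2, "ch3": 0.8, "ch4": 0.1},
    {"ch1": 0.2, "ch2": 0.9, "ch3": 0.1, "ch4": 0.2},
    {"ch1": 0.1, "ch2": 0.1, "ch3": 0.1, "ch4": 0.9},
    {"ch1": 0.8, "ch2": 0.2, "ch3": 0.1, "ch4": 0.1},
]
brightest = [max(c, key=c.get) for c in cycles]
print(call_bases(brightest))  # -> "AGCTA", a 5-base in-situ barcode
```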
Fluorescent in situ sequencing
[ "Chemistry", "Biology" ]
647
[ "Molecular genetics", "Molecular biology" ]
41,502,288
https://en.wikipedia.org/wiki/Osmotic%20blistering
Osmotic blistering is a chemical phenomenon where two substances attempt to reach equilibrium through a semi-permeable membrane. The phenomenon appears as a bubble or "blister" in the coating. Overview Water will flow from one solution to another, trying to create equilibrium between both solutions. Usually, the two phases involved are the concrete and the coating applied on top of it. Concrete is very porous, so water beneath the concrete will force itself through the concrete, typically through vapor transmission. The water will then try to break through the semi-permeable membrane (either the surface of the concrete or the primer). Most epoxy, urethane, and other polymer coatings are not permeable, so the water stops at the polymer coating. However, the pressure from the water does not stop, forcing the water to collect directly between the concrete and the layer of epoxy/urethane. This collection of water creates the "osmotic blister" that coating specialists work hard to avoid. For steel substrates: The presence of soluble salts (particularly sulfates and chlorides) at the metal/paint interface is known to have a detrimental effect on the integrity of most paint systems, including fluorocoatings. The salts come from atmospheric pollution and contamination during blasting or other substrate preparation processes. These salts promote osmotic blistering of the coating and underfilm metallic corrosion. As a result, loss of adhesion, cathodic disbondment, and scribe creep can be observed. A coating behaves as a semi-permeable membrane, allowing moisture to pass through but not salts. When a paint coating is applied on a metallic surface contaminated with soluble salts, an osmotic blistering process takes place. Osmosis is the spontaneous net movement of solvent molecules (water) through a semipermeable membrane (the coating film) into a region of higher solute concentration (the salt-contaminated substrate). The process tends to equalize the solute concentrations on the two sides, but because salt cannot pass through the membrane (coating) it can never equalize, and water continues to permeate into the region. As the soluble substance dissolves under the paint layer, the pressure caused by the increase in volume can exert a greater force than the paint adhesion and cohesion forces, giving rise to the formation of a blister; this process is called osmotic blistering. The blisters are first filled with water and later with corrosion products from the corrosion of the metallic substrate. See also Osmosis References Concrete Membrane technology
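The driving force behind this water uptake can be estimated with the standard van 't Hoff relation for the osmotic pressure of a dilute solution; the relation itself is textbook, but the numerical example below is purely illustrative of a salt-contaminated interface:

```latex
% van 't Hoff relation for the osmotic pressure of a dilute solution
% Pi: osmotic pressure, i: van 't Hoff factor, c: molar solute concentration,
% R: gas constant, T: absolute temperature
\Pi = i\,c\,R\,T
% Illustrative numbers only: i = 2, c = 0.1 mol/L = 100 mol/m^3, T = 298 K
% gives Pi = 2 x 100 x 8.314 x 298 ~ 5 x 10^5 Pa (about 5 bar) behind the coating.
```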
Osmotic blistering
[ "Chemistry", "Engineering" ]
540
[ "Structural engineering", "Concrete", "Membrane technology", "Separation processes" ]
29,190,336
https://en.wikipedia.org/wiki/Fort%20des%20Ayvelles
The Fort des Ayvelles, also known as the Fort Dubois-Crancé, is a fortification near the French communes of Villers-Semeuse and Les Ayvelles in the Ardennes, just to the south of Charleville-Mézières. As part of the Séré de Rivières system of fortifications, the fort was planned as part of a new ring of forts replacing the older citadel of Mézières with dispersed fortifications. With advances in the range and destructive power of artillery, the city's defensive perimeter had to be pushed away from the city center to the limits of artillery range. The Fort des Ayvelles was the only such fortification to be completed of the ensemble, as resources were diverted elsewhere. At the time of its construction the fort controlled the Meuse and the railway line linking Reims, Montmédy, Givet and Hirson. The Fort des Ayvelles was reduced in status in 1899, its masonry construction rendered obsolete by the advent of high-explosive artillery shells. However, it was re-manned for the First World War before it was captured by the Germans on 29 August 1914. The fort was partly destroyed in 1918. During the Battle of France in 1940 the fort was bombarded. French resisters were executed at Ayvelles during both world wars. At present the fort is maintained by a preservation society, and may be visited. Description Built starting in 1876 under the direction of Captain Léon Boulenger, the fort was completed in 1878. The fort's four faces form a square perimeter, surrounded by a ditch wide and deep. The fort features particularly elaborate double caponiers to protect the outer wall and ditch on opposite corners, as well as counterscarps. The caponiers were provided with unique projecting watch-stations, or échauguettes. The fort and a subsidiary battery featured Mougin casemates, each armed with a single de Bange Model 1877 155 mm gun. The fort possessed 53 artillery pieces in 1899, manned by 880 men, and disposed in two-level casemates on a north-south line. The battery is about to the east, connected to the main fort by a covered causeway. The caponiers were damaged by both world wars and by the French military in explosives tests in 1960 in preparation for demolition of the urban fortifications of Charleville Mézières. The Mougin gun was removed at about this time, but the casemate remains. In addition to its own Mougin casemate, the pentagonal detached battery was armed with 10 artillery pieces, served by 150 men. The battery was provided with a wall and ditch, with caponiers and counterscarps for defense. The battery was built in 1878 and was never modernized. The battery's Mougin casemate was entirely demolished after World War II by the French Army. 1914 In 1914 the fort was manned by personnel of the French Fourth Army, under the overall command of General Fernand de Langle de Cary. The fort had been hastily garrisoned after the defeat of French forces in Belgium with two companies of the 45th Territorial Infantry Regiment and 300 territorial artillerymen, under the command of Commandant Georges Lévi Alvarès. These were reserve formations, largely composed of local residents. As French forces retreated and maneuvered in the face of the German attack, the fort was the only French force holding almost of the front between Rimogne and Flize. Under these circumstances, Georges Lévi Alvarès requested permission from the Fourth Army to evacuate the fort in the event of German attack. However, before receiving a reply, he decided to evacuate after sabotaging the fort's arms. The garrison evacuated on August 25. 
Arriving at Boulzicourt, the troops were ordered back to the fort. At the same time, the Germans were preparing a bombardment of the fort. When the garrison returned to the fort on the 26th, the Germans opened fire. The French column retreated. Reaching Launois, the troops were sequestered. Georges Lévi Alvarès, who had remained at the Fort des Ayvelles, committed suicide on the 27th. His body was found by the Germans and was buried nearby, with honors. German forces had bombarded the fort on the 26th and 27th, and waited until the 29th to enter the fort. They stripped the fort of its remaining metals for scrap. While they occupied the area Germans used the Fort des Ayvelles as a munitions depot and as a prison. The fort was the execution site for three French civilians, executed by the Germans between October 1915 and January 1916. The fort was reoccupied by France at the close of the war in November 1918. 1940 In 1940 the Fort des Ayvelles was manned by the second battalion of the French 148th Fortress Infantry Regiment under the command of Commandant Marie, which was in turn part of the weak 102nd Fortress Infantry Division. The 102nd DIF was the successor organization to the Defensive Sector of the Ardennes, which had administered a weak section of the Maginot Line fortifications. The sector was composed principally of scattered casemates and blockhouses, as the French command regarded the Ardennes sector as unsuitable for mechanized warfare. On 14 May 1940 the fort was bombarded by German forces, while the first and third battalions of the 148th RIF faced direct German attack. During the night of 15 May the fort was abandoned by French forces. The remaining troops of the 148th RIF nonetheless found themselves encircled and surrendered. Once again, the fort was the scene of civilian executions, with thirteen members of the French Resistance executed there. The most notable victims were les quatres cheminots d'Amagne ("the four railway workers of Amagne"), René Arnould, Georges Boillot, Robert Stadler and Lucien Maisonneuve, executed on 26 June 1944 for sabotage at the Amagne-Lucqy depot. Blockhouses Two blockhouses are near the fort, constructed in the 1930s as part of the Defensive Sector of the Ardennes: the Blockhaus du Fort des Ayvelles Sud, and the Blockhaus de Villers-Semeuse. Both were lightly armed. Present status The fort is maintained by the Association du Fort et de la Batterie des Ayvelles, and may be visited. References External links Association du Fort et de la Batterie des Ayvelles Fort des Ayvelles at fortiffsere.fr World War II internment camps in France World War I museums in France World War II museums in France Séré de Rivières system Defensive Sector of the Ardennes
Fort des Ayvelles
[ "Engineering" ]
1,350
[ "Séré de Rivières system", "Fortification lines" ]
34,740,042
https://en.wikipedia.org/wiki/Laminar%20flow%20reactor
A laminar flow reactor (LFR) is a type of chemical reactor that uses laminar flow to control reaction rate and/or reaction distribution. An LFR is generally a long tube of constant diameter that is kept at constant temperature. Reactants are injected at one end, and products are collected and monitored at the other. Laminar flow reactors are often used to study an isolated elementary reaction or a multi-step reaction mechanism. Overview Laminar flow reactors employ the characteristics of laminar flow to achieve various research purposes. For instance, LFRs can be used to study fluid dynamics in chemical reactions, or they can be utilized to generate special chemical structures such as carbon nanotubes. One feature of the LFR is that the residence time (the time interval during which the chemicals stay in the reactor) can be varied either by changing the distance between the reactant input point and the point at which the product/sample is taken, or by adjusting the velocity of the gas/fluid. Therefore, the benefit of a laminar flow reactor is that the different factors that may affect a reaction can be easily controlled and adjusted throughout an experiment. Means of analyzing reactants in LFR Means of analyzing the reaction include using a probe that enters the reactor or, more accurately, non-intrusive optical methods (e.g. using a spectrometer to identify and analyze the contents) to study reactions in the reactor. Moreover, taking the entire sample of the gas/fluid at the end of the reactor and collecting data may be useful as well. Using the methods mentioned above, various data such as concentration and flow velocity can be monitored and analyzed. Flow velocity in LFR Fluids or gases with controlled velocity pass through a laminar flow reactor in a laminar fashion; that is, streams of fluid or gas slide over each other like cards. When analyzing fluids with the same viscosity ("thickness" or "stickiness") but different velocity, flows are typically characterized into two types: laminar flow and turbulent flow. Compared to turbulent flow, laminar flow tends to have a lower velocity and is generally at a lower Reynolds number. Turbulent flow, on the other hand, is irregular and travels at a higher speed. Therefore, the flow velocity of a turbulent flow on one cross section is often assumed to be constant, or "flat". The "non-flat" flow velocity of laminar flow helps explain the mechanism of an LFR. For the fluid/gas moving in an LFR, the velocity near the center of the pipe is higher than that of the fluid near the wall of the pipe. Thus, the velocity distribution of the reactants tends to decrease from the center to the wall. Residence time distribution (RTD) The velocity near the center of the pipe is higher than that of the fluid near the wall of the pipe; thus, the velocity distribution of the reactants tends to be higher in the center and lower at the side. Consider fluid being pumped through an LFR at constant velocity from the inlet, with the concentration of the fluid monitored at the outlet. The graph of the residence time distribution looks like a decreasing curve with positive concavity and is modeled by the function E(t) = 0 if t is smaller than τ/2, and E(t) = τ²/(2t³) if t is greater than or equal to τ/2, where τ is the mean residence time. Notice that the graph has the value of zero initially; this is simply because it takes some time for the substance to travel through the reactor.
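A minimal numerical sketch of this residence time distribution, assuming the piecewise form reconstructed above and an arbitrary illustrative mean residence time of 10 s:

```python
import numpy as np

def lfr_rtd(t, tau):
    """Residence time distribution E(t) of an idealized laminar flow reactor.

    Assumes the segregated parabolic-flow model quoted above: E(t) = 0 for
    t < tau/2 and E(t) = tau**2 / (2*t**3) for t >= tau/2, where tau is the
    mean residence time. Illustrative sketch, not tied to a specific reactor.
    """
    t = np.asarray(t, dtype=float)
    return np.where(t < tau / 2, 0.0, tau**2 / (2 * t**3))

tau = 10.0                        # assumed mean residence time, seconds
t = np.linspace(0.05, 100.0, 2000)
E = lfr_rtd(t, tau)
dt = t[1] - t[0]
print(E.sum() * dt)               # crude normalization check; should be close to 1
```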
When the material starts to reach the outlet, the concentration increases drastically and then gradually decreases as time proceeds. Characteristics The laminar flows inside an LFR have the unique characteristic of flowing in parallel without disturbing one another. The velocity of the fluid or gas will naturally decrease as it gets closer to the wall and farther from the center. Therefore, the reactants have an increasing residence time in the LFR from the center to the side. A gradually increasing residence time gives researchers a clear layout of the reaction at different times. In addition, when studying reactions in an LFR, radial gradients in velocity, composition and temperature are significant. In other words, in other reactors where laminar flow is not significant, for instance in a plug flow reactor, the velocity is assumed to be the same over one cross section since the flow is mostly turbulent. In a laminar flow reactor, the velocity is significantly different at various points on the same cross section. Therefore, the velocity differences throughout the reactor need to be taken into consideration when working with an LFR. Research Various studies pertaining to the modeling of LFRs and the formation of substances within an LFR have been carried out over the past decades. For instance, the formation of single-walled carbon nanotubes was investigated in an LFR. As another example, conversion from methane to higher hydrocarbons has been studied in a laminar flow reactor. See also Chemical reactor Laminar flow Plug flow reactor model Turbulence Laminar vs. turbulent flow References External links https://www.youtube.com/watch?v=VOemsoVSPWI (Unavailable) Chemical reactors
Laminar flow reactor
[ "Chemistry", "Engineering" ]
1,046
[ "Chemical reactors", "Chemical reaction engineering", "Chemical equipment" ]
34,741,838
https://en.wikipedia.org/wiki/Perennial%20calendar
A perennial calendar is a calendar that applies to any year, keeping the same dates, weekdays and other features. Perennial calendar systems differ from most widely used calendars which are annual calendars. Annual calendars include features particular to the year represented, and expire at the year's end. A perennial calendar differs also from a perpetual calendar, which is a tool or reference to compute the weekdays of dates for any given year, or for representing a wide range of annual calendars. For example, most representations of the Gregorian calendar year include weekdays and are therefore annual calendars, because the weekdays of its dates vary from year to year. For this reason, proposals to perennialize the Gregorian calendar typically introduce one or another scheme for fixing its dates on the same weekdays every year. History and background The term perennial calendar appeared as early as 1824, in the title of Thomas Ignatius Maria Forster's Perennial calendar and companion to the almanack. In that work he compiled "the events of every day in the year, as connected with history, chronology, botany, natural history, astronomy, popular customs and antiquities, with useful rules of health, observations on the weather, explanations of the feasts and festivals of the church and other miscellaneous useful information". The data listed there for each date in the calendar apply in every year, and supplement data to be found in annual almanacs. Often printed in perennial-calendar format also are book blanks for diaries, ledgers and logs, for use in any year. Entries on the blank pages of these books are organized by calendar dates, without reference to weekdays or year numbers. Calendar reform A goal of modern calendar reform has been to achieve universal acceptance of a perennial calendar, with dates fixed always on the same weekdays, so the same calendar table serves year after year. Advantages claimed for a perennial over an annualized calendar like the Gregorian are simplicity and regularity. Scheduling is simplified for institutions and industries with extended production cycles. School terms and breaks, for example, can fall annually on the same dates. Month-based ordinal dating ("Fourth Thursday in November", "Last Monday in May") will be obsolete. Two methods favored for perennializing the calendar are the introduction of so-called "blank days" and of a periodic "leap week". Blank-day calendars Blank-day calendars remove a day or two (the latter for leap years) from the weekday cycle, resulting in a year length of 364 weekdays. Since this number is evenly divisible by 7, every year can begin on the same weekday. In the twelve-month plan of The World Calendar, for example, the Gregorian year-end date of December 31 is sequestered from the cycle of the week and celebrated as "Worldsday". December 30 falls on a Saturday, Worldsday follows next, and then January 1 begins every new year on a Sunday. The extra day in leap year is treated similarly. Blank-day calendars with thirteen months have also been developed. Among them are: The Georgian calendar, by Hirossa Ap-Iccim (=Rev. Hugh Jones) (1745); The Positivist calendar, by Auguste Comte (1849); and the International Fixed Calendar, by Moses B. Cotsworth (1902), and championed by George Eastman. Blank-day reform proposals face a religious objection, however. Sabbatarians, who are obliged to regard every seventh day as a day of rest and worship, must observe their holy day on a different weekday each year. 
Leap-week calendars Leap week calendar plans often restrict common years to 364 days, or 52 weeks, and expand leap years to 371 days, or 53 weeks. The added week may extend an existing month, or it may stand alone as an inserted seven-day month. The leap-week calendar may have been conceived originally by Rev. George M. Searle (1839-1918), around 1905. In 1930, James A. Colligan, S.J. proposed a thirteen-month reform, the Pax Calendar. By 1955, Cecil L. Woods proposed the twelve-month Jubilee Calendar, which inserts an extra week called "Jubilee" before January in specified years. The Hanke–Henry Permanent Calendar (2003) inserts an extra year-end month of seven days called "Xtra", and the Symmetry454 calendar (circa 2004) lengthens the month of December by one week in leap years. Easter in leap-week calendars The Christian celebration of Easter is historically calculated to occur on the first Sunday after the first ecclesiastical full moon falling on or after 21 March. In leap-week calendars, March 21 is less likely to match the astronomical spring equinox than in the Gregorian calendar. The Symmetry454 calendar proposes Sunday, April 7 as a permanently fixed date for Easter, based on the median date of the Sunday after the day of the astronomical lunar opposition that is on or after the day of the astronomical northward equinox, calculated for the meridian of Jerusalem. In 1963 the Second Ecumenical Council of the Vatican also issued a declaration on the question. Determining leap weeks In the Pax Calendar, the extra week is added in every year having its last number, or its last two numbers, divisible by 6, and in every year ending with the number 99, and every centennial year not divisible by 400. The Hanke-Henry Permanent Calendar's leap week occurs in every year that either begins or ends on a Thursday in the corresponding Gregorian calendar. The Symmetry454 calendar's leap week formula was chosen over others based on 10 criteria, including smoothest distribution of weeks, minimal "jitter", and predicted accuracy of 4-5 millennia. Objections Objections to leap weeks include the inconvenience of a periodic extra week for billing and payment cycles, and for dividing the leap year into halves and quarters. Another objection is that anniversaries, such as birthdays, are more likely on average to fall in a leap week than on a leap day. Other options Besides blank-day and leap-week reforms, only a few other options for achieving a perennial calendar have been suggested. The Long-Sabbath Calendar, by Rick McCarty (1996), extends to 36 hours the last Saturday of the year and the subsequent first Sunday of the new year. Seventy-two hours are thereby covered with two weekdays instead of the usual three, which shortens the year to 364 calendar days without interrupting the weekday cycle. Another option would trim every year to exactly 364 days, allowing the calendar months to drift relative to the seasons. January would move from mid-winter to mid-summer, in the northern hemisphere, after approximately 150 years. The calendar year can be reckoned to drift through all the seasons once every 294 calendar years, equal to 293 years of 365.2423208191 days. See also Annual calendar Decimal calendar Pax Calendar Perpetual calendar References Calendars 1820s neologisms
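The Pax rule quoted above can be turned into a short predicate. The sketch below is one reading of that prose rule (the year-ending-in-00 case is taken to be governed solely by the separate centennial clause, and the ambiguous "last number" phrase is folded into the two-digit test); under this reading the rule supplies exactly the 71 leap weeks that a 52-week calendar needs per 400 Gregorian years, which the assertion checks.

```python
def pax_has_leap_week(year: int) -> bool:
    """One reading of the Pax leap-week rule stated above (illustrative only)."""
    last_two = year % 100
    if last_two == 0:
        # centennial years: leap week only when not divisible by 400
        return year % 400 != 0
    # non-centennial years: last two digits divisible by 6, or ending in 99
    return last_two % 6 == 0 or last_two == 99

# Sanity check: a 400-year Gregorian cycle is 146,097 days = exactly 20,871 weeks,
# i.e. 71 weeks more than 400 x 52, so a leap-week calendar needs 71 leap weeks
# per 400 years to keep pace with the Gregorian year.
assert sum(pax_has_leap_week(y) for y in range(2001, 2401)) == 71
```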
Perennial calendar
[ "Physics" ]
1,408
[ "Spacetime", "Calendars", "Physical quantities", "Time" ]
34,743,992
https://en.wikipedia.org/wiki/Warfield%20group
In algebra, a Warfield group, studied by Robert B. Warfield Jr., is a summand of a simply presented abelian group. References Group theory
Warfield group
[ "Mathematics" ]
29
[ "Group theory", "Fields of abstract algebra" ]
34,744,456
https://en.wikipedia.org/wiki/Radial%20chromatography
Radial chromatography is a form of chromatography, a preparative technique for separating chemical mixtures. It can also be referred to as centrifugal thin-layer chromatography. It is a common technique for isolating compounds and can be compared to column chromatography as a similar process. A common device used for this technique is a Chromatotron. Here the solvent travels from the center of a circular plate coated with a silica layer towards the periphery. The entire system is kept covered in order to prevent evaporation of solvent while developing a chromatogram. A wick at the center of the system drips solvent into it, which provides the mobile phase and moves the sample radially, forming spots of the different compounds as concentric rings. Continuous annular chromatography uses a stationary phase which is filled into an annular gap. The eluent is continuously fed across the whole bed interface; the feed is also introduced continuously at the top of the stationary phase, but only at a certain point and not across the whole bed. The stationary phase is then rotated at a certain rotation speed. The rotation speed and the eluent and feed flow rates have to be defined precisely such that the collector vessels only collect the correct substance. The retention times are transformed into the respective retention angles. References External links What is Radial Flow Chromatography. Archived on 2023-05-15. Chromatography
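The conversion from retention time to retention angle mentioned just above is, in the simplest picture, a multiplication by the rotation rate; the expression below is an illustrative formalization rather than a formula taken from the cited sources:

```latex
% Retention angle of component i in continuous annular chromatography (illustrative):
% omega is the angular rotation speed of the annular bed and t_{R,i} is the
% retention time the component would have in an equivalent fixed bed.
\theta_i = \omega \, t_{R,i}
```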
Radial chromatography
[ "Chemistry" ]
301
[ "Chromatography", "Separation processes" ]
34,745,511
https://en.wikipedia.org/wiki/Bacteriohopanepolyol
Bacteriohopanepolyols (BHPs), bacteriohopanoids, or bacterial pentacyclic triterpenoids are commonly found in the lipid cell membranes of bacteria. BHPs are frequently used as biomarkers in sedimentary rocks and can provide paleoecological information about ancient bacterial communities. Function Several studies have suggested that bacteriohopanepolyols play a role in the structure of membranes due to their polycyclic structures and amphiphilic properties. BHPs have been hypothesized to be an evolutionary precursor to sterols, a class of biochemical compounds which is primarily present in eukaryotic cell membranes. The presence of BHPs in membranes has been found to improve the temperature tolerance, antimicrobial resistance, and pH tolerance of bacteria. Additionally, BHPs have been found to be an important constituent of 'lipid rafts', which are enriched in specific lipids and provide transport, protein synthesis, and signal transduction proteins. Prior to their discovery in bacteria, lipid rafts were considered a key piece of the evolution of more complex, eukaryotic cells. Sources Bacteriohopanepolyols have been found to be present in all types of sediment since their discovery in 1979, and are produced by a wide range of bacteria including alpha-, beta-, cyano-, and gammaproteobacteria. While early studies estimated that around half of all species of bacteria produce hopanoids, more recent studies estimate that around 4% of bacteria have the ability to produce hopanoids. Several studies have used culture-independent methods to survey bacterial genomes for genes which may imply the ability to produce BHPs. The squalene-hopene cyclase gene (sqhC) produces an enzyme that catalyzes the cyclization of squalene, a precursor of BHPs. Among thirty sequenced bacteria, this gene was found in the genomes of all hopanoid-containing bacteria and in very few of the bacteria which do not produce hopanoids, and therefore the presence of the sqhC gene was assumed to mean that the gene was expressed. Overall, fewer than ten percent of the marine and freshwater bacterioplankton were found to possess this gene. Analysis Bacteriohopanepolyols are commonly identified through chemical extraction of organic matter followed by analysis on a mass spectrometer. Extraction protocols are intended to purify natural samples to allow for analysis of a simpler mixture of compounds. The efficiency of extraction methods has been found to vary for different types of BHPs. BHPs were first analyzed using a gas chromatograph mass spectrometer (GC-MS); however, HPLC-MS methods have become more common in recent years because they allow BHPs to be analyzed without the Rohmer preparation procedure, which resulted in a loss of specificity. Despite its advantages, analysis of BHPs using HPLC-MS is complicated by a lack of sufficient standards and variations in the efficiency of acetylation among different BHPs. Biomarker utility The polycyclic hydrocarbon skeleton of BHPs makes them resistant to degradation and allows them to be preserved for long periods of time in the geologic record. However, the use of BHPs as a biomarker for a specific group of bacteria is limited by the current state of knowledge regarding which groups of bacteria produce specific bacteriohopanepolyols. Some BHPs, such as bacteriohopane-32,33,34,35-tetrol (BHT), may be produced by a diverse range of organisms, and the biological source of many BHPs is uncertain, complicating interpretation.
Initially, BHPs were thought to be present only in aerobic bacteria; however, they have since been found in anaerobic bacteria. BHPs have often been used as an indicator for cyanobacteria, and forty-nine out of fifty-eight cultured cyanobacteria have been found to produce BHPs. In particular, 2β-methylbiohopanoids have only been found to be produced in significant quantities by cyanobacteria. An isomer of bacteriohopanetetrol was found to be associated with anoxic and suboxic conditions in marine pelagic sediments. Bacteriohopanepolyol identification has been paired with stable carbon isotope analysis for greater specificity. In particular, the detection of 3-methylhopanoids (hopanoids with a methyl group at the C3 position) which are highly depleted in 13C is interpreted as a proxy for methanotrophy. See also Hopane References Terpenes and terpenoids
Bacteriohopanepolyol
[ "Chemistry" ]
983
[ "Organic compounds", "Biomolecules by chemical classification", "Terpenes and terpenoids", "Natural products" ]
34,745,933
https://en.wikipedia.org/wiki/Mesocrystal
A mesocrystal is a material structure composed of numerous small crystals of similar size and shape, which are arranged in a regular periodic pattern. It is a form of oriented aggregation, where the small crystals have parallel crystallographic alignment but are spatially separated. When the sizes of the individual components are at the nanoscale, mesocrystals represent a new class of nanostructured solids made from crystallographically oriented nanoparticles. The sole criterion for determining whether a material is a mesocrystal is the unique crystallographically hierarchical structure, not its formation mechanism. Discovery Helmut Cölfen discovered and named mesocrystals in 2005 during his studies on biominerals. He suggested that their growth was due to a non-classical, self-assembly based process. Structure and formation Mesocrystal is an abbreviation for mesoscopically structured crystal, where the individual subunits often form a perfect 3D order, as in a traditional crystal where the subunits are individual atoms. Formation methods Nanoparticle alignment by organic matrix Here a mesocrystal is formed by filling organic matrix compartments with crystalline matter, which is oriented by the organic matrix. This is the process of biomineralization and this is how mesocrystals are produced in nature. Ordering by physical fields In most cases mesocrystals form from nanoparticles in solution. These nanoparticles aggregate and arrange in crystallographic alignment, without any additives. The main causes of this ordering are tensorial polarization forces and dipole fields. Mineral bridges Formation via mineral bridges begins with the formation of nanocrystals. Growth is quenched at this stage by the adsorption of a polymer onto the nanoparticle surface. Mineral bridges can then nucleate at defect sites within the growing inhibition layer on the nanocrystal. Through this, a new nanocrystal grows on the mineral bridge, and the growth is again stopped by the polymer. This process is repeated until the crystal builds up. Space constraints and self-similar growth This formation route requires only a confined space in which the reaction takes place. As the nanoparticles grow into crystals, they have no choice but to align with each other in such a confined space. Applications Mesocrystals have unique structural features, and the physical and physicochemical properties that come from that structure have made them a subject of interest. Mesocrystals are expected to have a role in many different applications. These include heterogeneous photocatalysts, electrodes, optoelectronics, biomedical materials, and lightweight structural materials. The properties that make mesocrystals viable for future applications are those they share with nanoparticulate, mesoporous, and single-crystal materials. Because mesocrystals are made up of nanoparticles, the properties of the nanoparticles themselves are, in some cases, passed to the whole mesocrystal structure. This allows for the practical application of mesocrystals because they are "potentially more stable analogues of nanoparticulate materials." High porosity is generally a quality of mesocrystals; this is the property they share with mesoporous materials. Closed, internal pores are good for thermal and dielectric insulation, and the open pores aid in absorption and could be utilized for medical delivery.
Alternatively, a mesocrystal could have its pores filled, and then it would be similar to a single-crystal material and have some unusual electronic and optical properties. The diversity of the properties of mesocrystals could allow them to be effectively utilized in many applications. In nature The spines of sea urchins are composed of mesocrystals of calcite nano-crystals (92%) in a matrix of non-crystalline calcium carbonate (8%). This structure makes the spines hard but also shock-absorbing, a special property that makes them an effective defence against predators. Mesocrystals also appear in the shells of some eggs, coral, chitin, and the shells of mussels. References Crystals Phases of matter Materials science
Mesocrystal
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
872
[ "Applied and interdisciplinary physics", "Phases of matter", "Materials science", "Crystallography", "Crystals", "nan", "Matter" ]
34,749,877
https://en.wikipedia.org/wiki/Mathematical%20Magick
Mathematical Magick (complete title: Mathematical Magick, or, The wonders that may be performed by mechanical geometry.) is a treatise by the English clergyman, natural philosopher, polymath and author John Wilkins (1614 – 1672). It was first published in 1648 in London; another edition was printed in 1680, and further editions were published in 1691 and 1707. Abstract Wilkins dedicated his work to His Highness the Prince Elector Palatine (Charles I Louis), who was in London at the time. It is divided into two books, one headed Archimedes, because he was the chiefest in discovering of Mechanical powers, the other called Daedalus, because he was one of the first and most famous amongst the Ancients for his skill in making Automata. Wilkins sets out and explains the principles of mechanics in the first book and, in the second book, gives an outlook on future technical developments such as flying, which he anticipates as certain if only sufficient exercise, research and development were directed to these topics. The treatise is an example of his general intention to disseminate scientific knowledge and method, and of his attempts to persuade his readers to pursue further scientific studies. First book In the 20 chapters of the first book, traditional mechanical devices are discussed, such as the balance, the lever, the wheel or pulley and the block and tackle, the wedge, and the screw. The powers acting on them are compared to those acting in the human body. The book deals with the phrase attributed to Archimedes saying that if he did but know where to stand and fasten his instrument, he could move the world, and shows the effect of a series of gear transmissions, one linked to the other. It shows the importance of various speeds and the theoretical possibility of increasing speed beyond the speed of the earth at the equator. Finally, siege engines like catapults are compared with the cost and effect of then-modern guns. Second book Various devices In the 15 chapters of the second book, various devices are examined which move independently of human interference, such as clocks and watches, water mills and wind mills. Wilkins explains devices driven by the motion of air in a chimney or by pressurized air. A land yacht is proposed, driven by two sails on two masts, as well as a wagon powered by a vertical axis wind turbine. A number of independently moving small artificial figures representing men and animals are described. The possibility of improving the type of submarine designed and built by Cornelis Drebbel is considered. The tales about various flying devices are related and doubts as to their truth are dissipated. Wilkins explains that it should be possible for a man, too, to fly by himself if a frame were built in which the person could sit and if this frame were sufficiently pushed into the air. Art of flying In chapter VII, Wilkins discusses various methods by which a man could fly, namely by the help of spirits and good or evil angels (as related on various occasions in the Bible), by the help of fowls, by wings fastened immediately to the body, or by a flying chariot. The whole of this chapter (and of the following one) concerns the possibilities of flying. In a single preliminary phrase, he refers to previous reports of flight attempts. He writes that sufficient practise should enable a man to fly, most probably by "a flying chariot, which may be so contrived as to carry a man within it" and equipped with a sort of engine, or else big enough to carry several people, each successively working to fly it.
He uses the next chapter to dispel any doubts there may be as to the possibility of such a flying chariot, should a number of particular items be developed and tested. Perpetual motion and perpetual lamps In Chapters IX to XV, extensive discussions and deliberations are set out as to why a perpetual motion should be feasible, why the stories about lamps burning for hundreds of years were true, and how such lamps could be made and perpetual motions created. References 1648 books Applied mathematics Mechanical engineering
Mathematical Magick
[ "Physics", "Mathematics", "Engineering" ]
804
[ "Applied mathematics", "Applied and interdisciplinary physics", "Mechanical engineering" ]
34,750,362
https://en.wikipedia.org/wiki/ZEPLIN-III
The ZEPLIN-III dark matter experiment attempted to detect galactic WIMPs using a 12 kg liquid xenon target. It operated from 2006 to 2011 at the Boulby Underground Laboratory in Loftus, North Yorkshire. This was the last in a series of xenon-based experiments in the ZEPLIN programme pursued originally by the UK Dark Matter Collaboration (UKDMC). The ZEPLIN-III project was led by Imperial College London and also included the Rutherford Appleton Laboratory and the University of Edinburgh in the UK, as well as LIP-Coimbra in Portugal and ITEP-Moscow in Russia. It ruled out cross-sections for elastic scattering of WIMPs off nucleons above 3.9 × 10−8 pb (3.9 × 10−44 cm2) from the two science runs conducted at Boulby (83 days in 2008 and 319 days in 2010/11). Direct dark matter search experiments look for extremely rare and very weak collisions expected to occur between the cold dark matter particles that are believed to permeate our galaxy and the nuclei of atoms in the active medium of a radiation detector. These hypothetical elementary particles could be Weakly Interacting Massive Particles, or WIMPs, weighing as little as a few protons or as much as several heavy nuclei. Their nature is not yet known, but no sensible candidates remain within the Standard Model of particle physics to explain the dark matter problem. Detection technology Condensed noble gases, most notably liquid xenon and liquid argon, are excellent radiation detection media. They can produce two signatures for each particle interaction: a fast flash of light (scintillation) and the local release of charge (ionisation). In two-phase xenon – so called since it involves liquid and gas phases in equilibrium – the scintillation light produced by an interaction in the liquid is detected directly with photomultiplier tubes; the ionisation electrons released at the interaction site are drifted up to the liquid surface under an external electric field, and subsequently emitted into a thin layer of xenon vapour. Once in the gas, they generate a second, larger pulse of light (electroluminescence or proportional scintillation), which is detected by the same array of photomultipliers. These systems are also known as xenon 'emission detectors'. This configuration is that of a time projection chamber (TPC); it allows three-dimensional reconstruction of the interaction site, since the depth coordinate (z) can be measured very accurately from the time separation between the two light pulses. The horizontal coordinates can be reconstructed from the hit pattern in the photomultiplier array(s). Critically for WIMP searches, the ratio between the two response channels (scintillation and ionisation) allows the rejection of the predominant backgrounds for WIMP searches: gamma and beta radiation from trace radioactivity in detector materials and the immediate surroundings. WIMP candidate events produce lower ionisation/scintillation ratios than the more prevalent background interactions. The ZEPLIN programme pioneered the use of two-phase technology for WIMP searches. The technique itself, however, was first developed for radiation detection using argon in the early 1970s. Lebedenko, one of its pioneers at the Moscow Engineering Physics Institute, was involved in building ZEPLIN-III in the UK from 2001. Developed alongside it, but on a faster timescale, ZEPLIN-II was the first such WIMP detector to operate in the world (2005). This technology was also adopted very successfully by the XENON programme. 
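The depth reconstruction described above amounts to multiplying the measured time separation between the prompt (S1) and delayed (S2) light pulses by the electron drift velocity. The sketch below uses an illustrative drift velocity of about 2 mm/µs, a typical order of magnitude for liquid xenon rather than a ZEPLIN-III calibration value:

```python
def interaction_depth_mm(drift_time_us: float, drift_velocity_mm_per_us: float = 2.0) -> float:
    """Depth of an interaction below the liquid surface from the S1-S2 time gap.

    The default drift velocity (~2 mm/us) is an illustrative order of magnitude;
    the real figure depends on the applied drift field and detector calibration.
    """
    return drift_time_us * drift_velocity_mm_per_us

# Example: a 15 us separation between the two light pulses would place the
# interaction roughly 30 mm below the liquid surface under this assumption.
print(interaction_depth_mm(15.0))
```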
Two-phase argon has also been used for dark matter searches by the WARP collaboration and ArDM. LUX is developing similar systems that have set improved limits. History The ZEPLIN (ZonEd Proportional scintillation in LIquid Noble gases) series of experiments was a progressive programme pursued by the UK Dark Matter Collaboration using liquid xenon. It evolved alongside the DRIFT programme, which promoted the use of gas-filled TPCs to recover directional information on WIMP scattering. In the late 1980s the UKDMC had explored the potential of different materials and techniques, including cryogenic LiF, CaF2, silicon and germanium, from which a programme emerged at Boulby based on room-temperature NaI(Tl) scintillators. The subsequent move to a new target material, liquid xenon, was motivated by the realisation that noble liquid targets are inherently more scalable and could achieve lower energy thresholds and better background discrimination. In particular, external layers of the bulk target, affected more by external backgrounds, can be sacrificed during data analysis if the position of the interactions is known; this leaves an inner fiducial volume with potentially very low background rates. This self-shielding effect (alluded to by the 'zoned' term in the contrived ZEPLIN acronym) explains the faster gain in sensitivity of these targets compared to technologies based on a modular approach adopted with crystal detectors, where each module brings its own background. ZEPLIN-I, a 3 kg liquid xenon target, operated at Boulby from the late 1990s. It used pulse shape discrimination for background rejection, exploiting a small but helpful difference between the timing properties of the scintillation light caused by WIMPs and background interactions. This was followed by the two-phase systems ZEPLIN-II and ZEPLIN-III, which were designed and built in parallel at RAL/UCLA and Imperial College, respectively. ZEPLIN-II was the first two-phase system in the world deployed to search for dark matter; it consisted of a 30 kg liquid xenon target topped by a 3 mm layer of gas in a so-called three-electrode configuration: separate electric fields were applied to the bulk of the liquid (WIMP target) and to the gas region above it by using an extra electrode underneath the liquid surface (in addition to an anode grid, located above the gas, and a cathode, at the bottom of the chamber). In ZEPLIN-II an array of 7 photomultipliers viewed the chamber from above in the gas phase. ZEPLIN-III was proposed in the late 1990s, based partly on a similar concept developed at ITEP, and built by Prof. Tim Sumner and his team at Imperial College. It was deployed underground at Boulby in late 2006, where it operated until 2011. It was a two-electrode chamber, where electron emission into the gas was achieved by a strong (4 kV/cm) field in the liquid bulk rather than by an additional electrode. The photomultiplier array contained 31 photon detectors viewing the WIMP target from below, immersed in the cold liquid xenon. ZEPLIN–II and –III were purposely designed in different ways, so that the technologies employed in each sub-system could be appraised and selected for the final experiment proposed by the UKDMC: a tonne-scale xenon target (ZEPLIN-MAX) capable of probing most of the parameter space favored by theory at that point (1 × 10−10 pb), although this latter system was never built in the UK for lack of funding.
Results Although the ZEPLIN-III liquid xenon target was built on the same scale as that of its ZEPLIN predecessors, it achieved significant improvements in WIMP sensitivity due to the higher discrimination factor achieved and to a lower overall background. In 2011 it published exclusion limits on the spin-independent WIMP-nucleon elastic scattering cross-section above 3.9 × 10−8 pb for a 50 GeV WIMP mass. Although not as stringent as results from XENON100, this was achieved with a 10 times smaller fiducial mass and demonstrated the best background discrimination ever achieved in these detectors. The WIMP-neutron spin-dependent cross-section was excluded above 8.0 × 10−3 pb. It also ruled out an inelastic WIMP scattering model which attempted to reconcile a positive claim from DAMA with the absence of signal in other experiments. References External links ZEPLIN-III Project Boulby Underground Laboratory UK Dark Matter Collaboration Experiments for dark matter search Research institutes in North Yorkshire
ZEPLIN-III
[ "Physics" ]
1,672
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
34,754,337
https://en.wikipedia.org/wiki/Zyron
Zyron is a registered trademark for specialty gases marketed to the global electronics industry by DuPont. History Freon was used as the original brand name for electronic gases produced and marketed by DuPont. With the depletion of the ozone layer and the subsequent phase-out of chlorofluorocarbon (CFC) gas compounds, the company rebranded this product line to differentiate it from refrigerant gases that had been using the same Freon brand name. The name was developed in October 1991 by DuPont employees Paul Bechly and Dr. Nicholas Nazarenko, was first used in commerce on June 12, 1992, and became a registered trademark of DuPont in 1993. Naming System The Zyron product naming system was also developed by Bechly and Nazarenko in October 1991. The system is based upon the historical industry numbering system for fluorinated alkanes to identify the chemical compound, followed by an (N) suffix component that specifies product purity. The naming system was intentionally not trademarked to allow for industrial adoption. As an example, for Zyron 116 N5 the 116 represents the compound hexafluoroethane, and N5 represents "5 nines" or 99.999% purity. As another example, for Zyron 23 N3 the 23 represents the compound trifluoromethane, and N3 represents "3 nines" or 99.9% purity. Products Zyron 116 N5 (hexafluoroethane) Zyron 318 N4 (octafluorocyclobutane) Zyron 23 N5 (trifluoromethane) Zyron 23 N3 (trifluoromethane) Historical products included: Zyron 14 (tetrafluoromethane), Zyron 32 (difluoromethane), Zyron 125 (pentafluoroethane), and Zyron NF3 (nitrogen trifluoride). Applications The primary applications of these gases in the electronics industry are etching of silicon wafers and cleaning of chemical vapor deposition chamber tools. These are all plasma-based product applications where the product is essentially destroyed in use. Alternative products to the Zyron line for various applications in electronics include the chemical compounds HCl, BCl3, CF4, ClF3, CH2F2, GeH4, C4F6, NF3, C5F8, PH3, C3H6, SiH4, and WF6. In the news Zyron expansion announcement (1999) Zyron 2nd expansion announcement (2001) Zyron NF3 announcement (2002) Zyron team award announcement (2007) Semicon Taiwan announcement (2010) See also Semiconductor device fabrication Fluorocarbons Perfluorinated compound Greenhouse gas Global warming References External links DuPont Zyron website Semiconductor Industry Association United Nations Framework Convention on Climate Change Paul L Bechly papers (Accession 2723), Hagley Museum and Library Brand name materials Organofluorides Semiconductor device fabrication
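The "N" purity suffix described above reads naturally as a count of nines in the purity figure; the helper below formalizes that reading (an illustration of the convention as described in the text, not an official DuPont specification):

```python
def purity_from_n_suffix(n: int) -> float:
    """Purity in percent implied by an 'N<n>' suffix, read as '<n> nines'.

    Illustrative interpretation of the naming convention described above,
    not an official product specification.
    """
    return 100.0 - 10.0 ** (2 - n)

print(purity_from_n_suffix(5))  # 99.999  (e.g. Zyron 116 N5)
print(purity_from_n_suffix(3))  # 99.9    (e.g. Zyron 23 N3)
```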
Zyron
[ "Materials_science" ]
635
[ "Semiconductor device fabrication", "Microtechnology" ]
23,114,936
https://en.wikipedia.org/wiki/Induction%20plasma
Induction plasma, also called inductively coupled plasma, is a type of high temperature plasma generated by electromagnetic induction, usually coupled with argon gas. The magnetic field induces an electric current within the gas, which creates the plasma. The plasma can reach temperatures up to 10,000 Kelvin. Inductive plasma technology is used in fields such as powder spheroidization and nano-material synthesis. The technology is applied via an induction plasma torch, which consists of three basic elements: the induction coil, a confinement chamber, and a torch head, or gas distributor. The main benefit of this technology is the elimination of electrodes, which can deteriorate and introduce contamination. History The 1960s were the incipient period of thermal plasma technology, spurred by the needs of aerospace programs. Among the various methods of thermal plasma generation, induction plasma (or inductively coupled plasma) takes up an important role. Early attempts to maintain an inductively coupled plasma on a stream of gas date back to Babat in 1947 and Reed in 1961. Effort was concentrated on fundamental studies of the energy coupling mechanism and the characteristics of the flow, temperature and concentration fields in the plasma discharge. In the 1980s, there was increasing interest in high-performance materials and other scientific issues, and in induction plasma for industrial-scale applications such as waste treatment. Substantial research and development was devoted to bridging the gap between laboratory devices and industrial integration. After decades of effort, induction plasma technology has gained a firm foothold in modern advanced industry. Generation Induction heating is a mature technology with centuries of history. A conductive metallic piece placed inside a high-frequency coil will be "induced" and heated to a red-hot state. There is no difference in cardinal principle between induction heating and "inductively coupled plasma", except that the medium to be induced is, in the latter case, replaced by the flowing gas, and the temperature obtained is extremely high, as it reaches the "fourth state of matter"—plasma. An inductively coupled plasma (ICP) torch is essentially a copper coil of several turns, through which cooling water runs in order to dissipate the heat produced in operation. ICPs have two operation modes, called capacitive (E) mode with low plasma density and inductive (H) mode with high plasma density, and the E to H heating mode transition occurs with external inputs. The coil wraps a confinement tube, inside which the induction (H mode) plasma is generated. One end of the confinement tube is open; the plasma is actually maintained on a continuous gas flow. During induction plasma operation, the generator supplies an alternating current (ac) of radio frequency (r.f.) to the torch coil; this ac induces an alternating magnetic field inside the coil whose flux, after Ampère's law (for a solenoid coil), is Φ = μ0 n I πr², where Φ is the flux of the magnetic field, μ0 is the permeability constant, I is the coil current, n is the number of coil turns per unit length, and r is the mean radius of the coil turns. According to Faraday's law, a variation in magnetic field flux will induce a voltage, or electromotive force: E = −N(dΦ/dt), where N is the number of coil turns and the item in parentheses is the rate at which the flux is changing. The plasma is conductive (assuming a plasma already exists in the torch). This electromotive force, E, will in turn drive a current of density j in closed loops.
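A minimal numerical sketch of these two relations (solenoid flux and induced EMF) follows; the coil parameters are arbitrary illustrative values, not those of any particular torch:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, Wb/(A*m)

def solenoid_flux(n_per_m: float, current_a: float, radius_m: float) -> float:
    """Magnetic flux through one turn of a long solenoid: Phi = mu0 * n * I * pi * r^2."""
    return MU0 * n_per_m * current_a * math.pi * radius_m**2

def induced_emf(n_turns: int, flux_amplitude_wb: float, freq_hz: float) -> float:
    """Peak EMF for a sinusoidally varying flux: |E| = N * (2*pi*f) * Phi_peak."""
    return n_turns * 2 * math.pi * freq_hz * flux_amplitude_wb

# Illustrative coil: 5 turns over 10 cm (n = 50 turns/m), 200 A peak current,
# 3 cm mean radius, driven at 3 MHz (assumed example values only).
phi = solenoid_flux(n_per_m=50, current_a=200, radius_m=0.03)
print(phi)                       # peak flux per turn, in webers
print(induced_emf(5, phi, 3e6))  # peak induced EMF, in volts
```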
The situation is much like heating a metal rod in an induction coil: energy transferred to the plasma is dissipated via Joule heating, j²R, from Ohm's law, where R is the resistance of the plasma. Since the plasma has a relatively high electrical conductivity, it is difficult for the alternating magnetic field to penetrate it, especially at very high frequencies. This phenomenon is usually described as the "skin effect". The intuitive scenario is that the induced currents surrounding each magnetic field line counteract each other, so that a net induced current is concentrated only near the periphery of the plasma. This means the hottest part of the plasma is off-axis. Therefore, the induction plasma is something like an "annular shell". Observed along the axis of the plasma, it looks like a bright "bagel". In practice, the ignition of the plasma under low pressure conditions (<300 torr) is almost spontaneous once the r.f. power imposed on the coil achieves a certain threshold value (depending on the torch configuration, gas flow rate, etc.). The plasma gas (usually argon) will swiftly transition from glow discharge to arc breakdown and create a stable induction plasma. For atmospheric ambient pressure conditions, ignition is often accomplished with the aid of a Tesla coil, which produces high-frequency, high-voltage electric sparks that induce local arc breakdown inside the torch and stimulate a cascade of ionization of the plasma gas, ultimately resulting in a stable plasma. Induction plasma torch The induction plasma torch is the core of induction plasma technology. Despite the existence of hundreds of different designs, an induction plasma torch consists of essentially three components: Coil The induction coil consists of several spiral turns, depending on the r.f. power source characteristics. Coil parameters, including the coil diameter, number of coil turns, and radius of each turn, are specified in such a way as to create an electrical "tank circuit" with the proper electrical impedance. Coils are typically hollow along their cylindrical axis and filled with internal liquid cooling (e.g., de-ionized water) to mitigate the high operating temperatures of the coils that result from the high electrical currents required during operation. Confinement tube This tube serves to confine the plasma. A quartz tube is the common implementation. The tube is often cooled either by compressed air (<10 kW) or by cooling water. While the transparency of a quartz tube is demanded in many laboratory applications (such as spectroscopic diagnostics), its relatively poor mechanical and thermal properties pose a risk to other parts (e.g., o-ring seals) that may be damaged under the intense radiation of the high-temperature plasma. These constraints limit the use of quartz tubes to low power torches only (<30 kW). For industrial, high power plasma applications (30~250 kW), tubes made of ceramic materials are typically used. The ideal candidate material will possess good thermal conductivity and excellent thermal shock resistance. For the time being, silicon nitride (Si3N4) is the first choice. Torches of even greater power employ a metal wall cage for the plasma confinement tube, with engineering tradeoffs of lower power coupling efficiencies and increased risk of chemical interactions with the plasma gases. Gas distributor Often called a torch head, this part is responsible for the introduction of the different gas streams into the discharge zone. Generally, there are three gas lines passing to the torch head.
According to their distance from the center, these three gas streams are arbitrarily named Q1, Q2, and Q3. Q1 is the carrier gas, which is usually introduced into the plasma torch through an injector at the center of the torch head. As the name indicates, the function of Q1 is to convey the precursor (powder or liquid) into the plasma. Argon is the usual carrier gas; however, many other reactive gases (e.g., oxygen, NH3, CH4, etc.) are often included in the carrier gas, depending on the processing requirements. Q2 is the plasma-forming gas, commonly called the "Central Gas". In today's induction plasma torch designs, the central gas is almost always introduced into the torch chamber with a tangential swirl. The swirling gas stream is maintained by an internal tube that confines the swirl down to the level of the first turn of the induction coil. All these engineering measures aim to create the proper flow pattern necessary to ensure the stability of the gas discharge in the center of the coil region. Q3 is commonly referred to as the "Sheath Gas" and is introduced outside the internal tube mentioned above. The flow pattern of Q3 can be either vortex or straight. The function of the sheath gas is twofold: it helps to stabilize the plasma discharge and, most importantly, it protects the confinement tube as a cooling medium. Plasma gases and plasma performance The minimum power to sustain an induction plasma depends on pressure, frequency and gas composition. The lowest sustaining power is achieved with high r.f. frequency, low pressure, and a monatomic gas such as argon. Once a diatomic gas is introduced into the plasma, the sustaining power increases drastically, because extra dissociation energy is required to break the gaseous molecular bonds before further excitation to the plasma state is possible. The major reasons to use diatomic gases in plasma processing are (1) to get a plasma of high energy content and good thermal conductivity, and (2) to conform to the processing chemistry. In practice, the selection of plasma gases in induction plasma processing is first determined by the processing chemistry, i.e., whether the processing requires a reductive, oxidative, or other environment. A suitable second gas may then be selected and added to argon so as to obtain better heat transfer between the plasma and the materials to be treated. Ar–He, Ar–H2, Ar–N2, Ar–O2, air, and similar mixtures are very commonly used induction plasmas. Since the energy dissipation in the discharge takes place essentially in the outer annular shell of the plasma, the second gas is usually introduced through the sheath gas line rather than the central gas line. Industrial application of induction plasma technology Following the evolution of induction plasma technology in laboratories, its major advantages have been identified: Induction plasma allows the creation of a high-purity plasma, without the contamination from electrodes that occurs in DC plasma. The possibility of axial feeding of precursors, whether solid powders, suspensions, or liquids; this feature allows materials to be exposed to the high temperature of the plasma, up to 10000 °C. Due to the absence of electrodes, a wide selection of process gases is possible, i.e., the torch can work in reductive, oxidative, or even corrosive atmospheres.
With this capability, an induction plasma torch often works not only as a high-temperature, high-enthalpy heat source but also as a chemical reaction vessel. The residence time of the precursor in the plasma plume is relatively long (several milliseconds up to hundreds of milliseconds) compared with DC plasma. The plasma volume is relatively large. Thanks to these features, induction plasma technology has found niche applications in industrial-scale operation over the last decade. Successful industrial application of an induction plasma process depends largely on fundamental engineering support. One example is industrial plasma torch design, which allows high power levels (50 to 600 kW) and long-duration plasma processing (three shifts of 8 hours/day). Another example is powder feeders that convey large quantities of solid precursor (1 to 30 kg/h) at a reliable and constant feed rate. There are many examples of industrial applications of induction plasma technology, such as powder spheroidization, nanosized powder synthesis, induction plasma spraying, and waste treatment. Powder spheroidization The requirement for powder spheroidization (as well as densification) comes from very different industrial fields, from powder metallurgy to electronic packaging. Generally speaking, an industrial process turns to spherical powders to obtain at least one of the following benefits of the spheroidization process: Improved powder flowability. Increased powder packing density. Elimination of internal cavities and fractures in the powder. Changed surface morphology of the particles. Other unique motives, such as optical reflection, chemical purity, etc. Spheroidization is an in-flight melting process. A powder precursor of angular shape is introduced into the induction plasma and melted immediately at the high plasma temperatures. The melted powder particles assume a spherical shape under the action of the surface tension of the liquid state. The droplets are cooled drastically as they fly out of the plasma plume, because of the large temperature gradient existing in the plasma, and the solidified spheres are collected as the spheroidization product. As the process does not use any electrodes or crucibles, a very high purity can be maintained. The technology is available at both laboratory and industrial scales. A great variety of ceramics, metals, and metal alloys have been successfully spheroidized/densified using induction plasma spheroidization. Due to the high temperature of the plasma, even materials with very high melting temperatures can be spheroidized. The following are some typical materials that are spheroidized on a commercial scale. Oxide ceramics: SiO2, ZrO2, YSZ, Al2TiO5, glass Non-oxides: WC, WC–Co, CaF2, TiN Metals: Re, Ta, Mo, W Alloys: Cr–Fe–C, Re–Mo, Re–W, refractory high-entropy alloys Advantages of powder spheroidization compared to gas atomization are: High yield (spheroidized powders have the same particle size distribution as the precursor powder) Wide range of materials (almost all ceramics and metals) High purity (no pollution from electrodes or crucibles) Possibility of recycling used powders, owing to improved sphericity and, in some cases, reduced oxygen content High sphericity, low porosity, and absence of satellites Nano-materials synthesis It is the increased demand for nanopowders that has promoted the extensive research and development of various techniques for the synthesis of nanometric powders.
The challenges for an industrial technology are productivity, quality controllability, and affordability. Induction plasma technology implements in-flight evaporation of the precursor, even for raw materials with the highest boiling points; it operates under various atmospheres, permitting the synthesis of a great variety of nanopowders, and has thus become a reliable and efficient technology for nanopowder synthesis at both laboratory and industrial scales. Induction plasma used for nanopowder synthesis has many advantages over alternative techniques: high purity, high flexibility, ease of scale-up, and ease of operation and process control. In the nano-synthesis process, material is first heated to evaporation in the induction plasma, and the vapours are subsequently subjected to very rapid quenching in the quench/reaction zone. The quench gas can be an inert gas such as Ar or N2, or a reactive gas such as CH4 or NH3, depending on the type of nanopowder to be synthesized. The nanometric powders produced are usually collected by porous filters installed away from the plasma reactor section. Because of the high reactivity of metal powders, special attention should be given to powder passivation prior to removing the collected powder from the filtration section of the process. The induction plasma system has been successfully used in the synthesis of nanopowders. The typical size range of the nanoparticles produced is 20 to 100 nm, depending on the quench conditions employed. Productivity varies from a few hundred g/h to 3–4 kg/h, according to the physical properties of the material and the power level of the plasma. A typical induction plasma nano-synthesis system for industrial application is shown below, together with photos of some nano-products from the same equipment. Plasma wind tunnels During atmospheric entry, spacecraft are exposed to high heat fluxes and need to be protected with thermal protection system materials. During development, these materials need to be tested under similar conditions, which are reproduced by plasma wind tunnels, also called high-enthalpy ground-testing facilities. Induction plasma is used for these plasma wind tunnels because it can generate a high-enthalpy plasma free of contaminants. Gallery Summary Induction plasma technology mainly serves the high-added-value processes described above. Besides spheroidization and nanomaterial synthesis, high-risk waste treatment, refractory material deposition, noble material synthesis, and related processes may be the next industrial fields for induction plasma technology. See also Plasma propulsion engine Notes References Plasma types
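Returning to the skin effect described at the opening of this entry: the skin depth sets the thickness of the annular heating shell, and is often the first quantity estimated when analysing a torch. The short Python sketch below evaluates the standard formula δ = sqrt(2/(μ0σω)); the plasma conductivity and the coil frequencies used here are illustrative assumptions, not values taken from this article.

import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def skin_depth(conductivity_s_per_m: float, frequency_hz: float) -> float:
    """Electromagnetic skin depth in metres: delta = sqrt(2 / (mu0 * sigma * omega))."""
    omega = 2 * math.pi * frequency_hz
    return math.sqrt(2.0 / (MU_0 * conductivity_s_per_m * omega))

# Illustrative values only: thermal plasmas are often quoted in the
# 10^3 S/m range, and induction torches typically run at r.f. frequencies.
for f in (0.2e6, 3e6, 13.56e6):
    print(f"f = {f/1e6:5.2f} MHz -> skin depth ~ {skin_depth(1e3, f)*1e3:.1f} mm")

The output illustrates why higher frequencies confine the heating to an ever thinner off-axis shell, consistent with the "bagel" picture above.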
Induction plasma
[ "Physics" ]
3,352
[ "Plasma types", "Plasma physics" ]
23,118,227
https://en.wikipedia.org/wiki/Bifunctionality
In chemistry, bifunctionality or difunctionality is the presence of two functional groups in a molecule. A bifunctional species has the properties of each of its two types of functional groups, such as an alcohol (–OH), amide (–C(O)NH2), aldehyde (–CHO), nitrile (–C≡N) or carboxylic acid (–COOH). Many bifunctional species are used to produce complex materials; they participate in condensation polymerizations, such as those forming polyesters and polyamides. Polyfunctional species have more than two functional groups. Most biological compounds are polyfunctional. See also Functionality (chemistry) References Further reading Properties of Single Organic Molecules on Crystal Surfaces Functional groups Organic chemistry
Bifunctionality
[ "Chemistry" ]
147
[ "Functional groups", "nan" ]
30,703,104
https://en.wikipedia.org/wiki/Natural%20bond%20orbital
In quantum chemistry, a natural bond orbital or NBO is a calculated bonding orbital with maximum electron density. The NBOs are one of a sequence of natural localized orbital sets that include "natural atomic orbitals" (NAO), "natural hybrid orbitals" (NHO), "natural bonding orbitals" (NBO) and "natural (semi-)localized molecular orbitals" (NLMO). These natural localized sets are intermediate between basis atomic orbitals (AO) and molecular orbitals (MO): Atomic orbital → NAO → NHO → NBO → NLMO → Molecular orbital Natural (localized) orbitals are used in computational chemistry to calculate the distribution of electron density in atoms and in bonds between atoms. They have the "maximum-occupancy character" in localized 1-center and 2-center regions of the molecule. Natural bond orbitals (NBOs) include the highest possible percentage of the electron density, ideally close to 2.000, providing the most accurate possible "natural Lewis structure" of ψ. A high percentage of electron density (denoted %-ρL), often found to be >99% for common organic molecules, corresponds to an accurate natural Lewis structure. The concept of natural orbitals was first introduced by Per-Olov Löwdin in 1955 to describe the unique set of orthonormal 1-electron functions that are intrinsic to the N-electron wavefunction. Theory Each bonding NBO σAB (the donor) can be written in terms of two directed valence hybrids (NHOs) hA, hB on atoms A and B, with corresponding polarization coefficients cA, cB: σAB = cA hA + cB hB The bonds vary smoothly from the covalent (cA = cB) to the ionic (cA >> cB) limit. Each valence bonding NBO σ must be paired with a corresponding valence antibonding NBO σ* (the acceptor) to complete the span of the valence space; with the polarization coefficients interchanged, the antibond is orthogonal to the bond: σAB* = cB hA − cA hB The bonding NBOs are of the "Lewis orbital" type (occupation numbers near 2); antibonding NBOs are of the "non-Lewis orbital" type (occupation numbers near 0). In an idealized Lewis structure, full Lewis orbitals (two electrons) are complemented by formally empty non-Lewis orbitals. Weak occupancies of the valence antibonds signal irreducible departures from an idealized localized Lewis structure, i.e., true "delocalization effects". Lewis structures With a computer program that can calculate NBOs, optimal Lewis structures can be found. An optimal Lewis structure can be defined as the one with the maximum amount of electronic charge in Lewis orbitals (Lewis charge). A low amount of electronic charge in Lewis orbitals indicates strong effects of electron delocalization. In resonance structures, major and minor contributing structures may exist. For amides, for example, NBO calculations show that the structure with a carbonyl double bond is the dominant Lewis structure. However, in NBO calculations, "covalent-ionic resonance" is not needed, because bond-polarity effects are included in the resonance structures. This is similar to other modern valence bond theory methods. See also Molecular orbital theory Basis set (chemistry) References External links Homepage for the NBO computer program: http://nbo6.chem.wisc.edu/ IUPAC Gold Book definition: natural bond orbital (NBO) Free, open-source implementation for Atomic orbital → NAO transformation and Natural Population Analysis methods: JANPA package Quantum chemistry
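As a numerical illustration of the bond/antibond construction in the Theory section above, the following Python sketch builds σAB and σAB* from two orthonormal hybrids with assumed polarization coefficients and verifies normalization and orthogonality. The coefficient values are hypothetical, chosen only to represent a polar covalent bond.

import numpy as np

# Orthonormal hybrid basis (hA, hB) represented as unit vectors.
hA = np.array([1.0, 0.0])
hB = np.array([0.0, 1.0])

# Polarization coefficients (normalized: cA^2 + cB^2 = 1); illustrative values.
cA, cB = 0.8, 0.6

sigma      = cA * hA + cB * hB   # bonding NBO (donor)
sigma_star = cB * hA - cA * hB   # antibonding NBO (acceptor), orthogonal to sigma

print("normalization:", np.dot(sigma, sigma))        # -> 1.0
print("orthogonality:", np.dot(sigma, sigma_star))   # -> 0.0
print("bond ionicity cA^2 - cB^2:", cA**2 - cB**2)   # 0 = covalent limit, 1 = ionic limit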
Natural bond orbital
[ "Physics", "Chemistry" ]
751
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic", " and optical physics" ]
30,710,121
https://en.wikipedia.org/wiki/Laser%20diffraction%20analysis
Laser diffraction analysis, also known as laser diffraction spectroscopy, is a technology that utilizes the diffraction patterns of a laser beam passed through objects ranging from nanometers to millimeters in size to quickly measure the geometrical dimensions of particles. This particle size analysis process does not depend on volumetric flow rate, the amount of particles that passes through a surface over time. Fraunhofer vs. Mie Theory Laser diffraction analysis was originally based on the Fraunhofer diffraction theory, which states that the intensity of light scattered by a particle is directly proportional to the particle size. The scattering angle and particle size are inversely related: the angle increases as particle size decreases, and vice versa. The Mie scattering model, or Mie theory, has been used as an alternative to the Fraunhofer theory since the 1990s. Commercial laser diffraction analyzers leave the choice between Fraunhofer and Mie theory for data analysis to the user, hence the importance of understanding the strengths and limitations of both models. Fraunhofer theory only takes into account the diffraction phenomena occurring at the contour of the particle. Its main advantage is that it does not require any knowledge of the optical properties (complex refractive index) of the particle's material; hence it is typically applied to samples of unknown optical properties, or to mixtures of different materials. For samples of known optical properties, Fraunhofer theory should only be applied to particles of an expected diameter at least 10 times larger than the light source's wavelength, and/or to opaque particles. Mie theory is based on the scattering of electromagnetic waves by spherical particles. It therefore takes into account not only the diffraction at the particle's contour, but also the refraction, reflection and absorption phenomena within the particle and at its surface. This theory is thus better suited than Fraunhofer theory to particles that are not significantly larger than the wavelength of the light source, and to transparent particles. The model's main limitation is that it requires precise knowledge of the complex refractive index (including the absorption coefficient) of the particle's material. The lower theoretical detection limit of laser diffraction, using Mie theory, is generally thought to lie around 10 nm. Optical setup Laser diffraction analysis is typically accomplished via a red He-Ne laser or laser diode, a high-voltage power supply, and structural packaging. Alternatively, blue laser diodes or LEDs of shorter wavelength may be used. The light source affects the detection limits: lasers of shorter wavelength are better suited to the detection of submicron particles. The angular distribution of the scattered light is measured by passing the beam through a flow of dispersed particles and onto a sensor. A lens is placed between the object being analyzed and the detector's focal point, so that only the diffracted light is imaged onto the detector. The sizes the instrument can analyze depend on the lens's focal length, the distance from the lens to its point of focus: as the focal length increases, so does the size range the instrument can detect. Multiple light detectors, placed at fixed angles relative to the laser beam, are used to collect the diffracted light; more detector elements extend the sensitivity and size limits. A computer is then used to derive the particle sizes from the measured light-energy distribution, using the data collected on scattering angles and intensities.
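The model-selection rule of thumb described earlier (use Fraunhofer when the optical constants are unknown, or when particles are opaque or at least about 10 times larger than the wavelength; otherwise prefer Mie) can be encoded in a few lines. The helper below is a hypothetical sketch of that heuristic only, not part of any instrument's software.

def suggest_model(diameter_um, wavelength_nm, refractive_index_known, opaque=False):
    """Rule-of-thumb model choice; not a substitute for ISO 13320 guidance."""
    wavelength_um = wavelength_nm / 1000.0
    if not refractive_index_known:
        return "Fraunhofer (no optical constants required)"
    if diameter_um >= 10 * wavelength_um or opaque:
        return "Fraunhofer is acceptable"
    return "Mie (needs complex refractive index)"

print(suggest_model(50.0, 633, True))   # large particle, He-Ne laser -> Fraunhofer acceptable
print(suggest_model(0.5, 633, True))    # submicron -> Mie
print(suggest_model(2.0, 633, False))   # unknown optics -> Fraunhofer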
In practical terms, laser diffraction instruments can measure particles in liquid suspension, using a carrier solvent, or as dry powders, using compressed air or simply gravity to mobilize the particles. Sprays and aerosols generally require a specific setup. Results Volume-weighted particle size distribution Because the light energy recorded by the detector array is proportional to the volume of the particles, laser diffraction results are intrinsically volume-weighted. This means the particle size distribution represents the volume of particle material in the different size classes. This is in contrast to counting-based optical methods such as microscopy or dynamic image analysis, which report the number of particles in the different size classes. That the diffracted light is proportional to the particle's volume also implies that the results assume particle sphericity, i.e., that the reported particle size is an equivalent spherical diameter; particle shape therefore cannot be determined by the technique. The main graphical representation of laser diffraction results is the volume-weighted particle size distribution, represented either as a density distribution (which highlights the different modes) or as a cumulative undersize distribution. Numerical results The most widely used numerical laser diffraction results are: The median volume-weighted diameter, or D50. Derived from the cumulative curve, it represents the particle diameter separating the upper 50% of the data from the lower 50%. The D10 and D90 values, also derived from the cumulative curve. The mean volume-weighted diameter, also termed D[4,3] or De Brouckere mean diameter. The span, which gives a measure of the width of the particle size distribution and is calculated as span = (D90 − D10)/D50 (see the worked sketch at the end of this entry). Result quality and instrument validation Harmonized standards for the accuracy and precision of laser diffraction measurements have been defined both by ISO, in standard ISO 13320:2020, and by the United States Pharmacopeia, in chapter USP <429>. Uses Laser diffraction analysis has been used to measure particle sizes in situations such as: observing the distribution of soil texture and sediments such as clay and mud, with an emphasis on silt and on the sizes of larger clay samples. determining in situ measurements of particles in estuaries. Particles in estuaries are important because they allow natural or pollutant chemical species to move around with ease; the size, density, and stability of estuarine particles govern their transport. Laser diffraction analysis is used here to compare particle size distributions and to find cycles of change in estuaries caused by different particles. assessing soil and its stability when wet. The stability of soil aggregation (clumps held together by moist clay) and clay dispersion (clay separating in moist soil), the two different states of soil in the Cerrado savanna region, were compared with laser diffraction analysis to determine whether plowing had an effect on the two. Measurements were made before plowing and after plowing at different intervals of time. Clay dispersion turned out not to be affected by plowing, while soil aggregation was. measuring erythrocyte deformability under shear.
Due to a special phenomenon called tank treading, the membrane of the erythrocyte (red blood cell, RBC) rotates relative to the shear force and the cell's cytoplasm, causing RBCs to orient themselves. Oriented and stretched red blood cells have a diffraction pattern representing the apparent particle size in each direction, making it possible to measure erythrocyte deformability and the orientability of the cells. In an ektacytometer, erythrocyte deformability can be measured under changing osmotic stress or oxygen tension, and is used in the diagnosis and follow-up of congenital hemolytic anemias. Comparisons Since laser diffraction analysis is not the sole way of measuring particles, it has been compared to the sieve-pipette method, a traditional technique for grain size analysis. When compared, results showed that laser diffraction analysis made fast calculations that were easy to reproduce after a one-time analysis, did not need large sample sizes, and produced large amounts of data. Results can easily be manipulated because the data are in digital form. Both the sieve-pipette method and laser diffraction analysis can analyze minuscule objects, but laser diffraction analysis turned out to have better precision than its counterpart method of particle measurement. Criticism The validity of laser diffraction analysis has been questioned in the following areas: assumptions, including particles having random configurations and assumed volume values. In some dispersion units, particles have been shown to align with one another rather than flow turbulently, orienting themselves in an orderly direction. the algorithms used in laser diffraction analysis are not thoroughly validated. Different algorithms are sometimes used to make the collected data match assumptions made by users, in an attempt to avoid data that look incorrect. measurement inaccuracies due to sharp edges on objects. Laser diffraction analysis may detect imaginary particles at sharp edges, because of the large scattering angles produced there. when compared with optical imaging, another particle-sizing technique, correlation between the two was poor for non-spherical particles. This is because the underlying Fraunhofer and Mie theories only cover spherical particles. Non-spherical particles cause more diffuse scatter patterns and are more difficult to interpret. Some manufacturers have included algorithms in their software that can partly compensate for non-spherical particles. See also Diffraction tomography List of laser articles Particle size distribution References Spectroscopy
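As promised under Numerical results above, here is a worked sketch of how D10, D50, D90 and the span are read off a cumulative undersize curve by linear interpolation. The diameter and cumulative-volume values are hypothetical example data; real instruments perform this internally.

import numpy as np

def percentile_diameters(diam_um, cum_vol_percent, targets=(10, 50, 90)):
    """Interpolate Dxx values from a cumulative undersize (volume-weighted) curve."""
    return {f"D{t}": float(np.interp(t, cum_vol_percent, diam_um)) for t in targets}

# Hypothetical cumulative undersize data: diameter in micrometres, % volume below.
diam = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
cum  = np.array([2, 8, 25, 50, 78, 95, 100], dtype=float)

d = percentile_diameters(diam, cum)
span = (d["D90"] - d["D10"]) / d["D50"]
print(d, "span =", round(span, 2))   # span ~ 3.9 for this example curve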
Laser diffraction analysis
[ "Physics", "Chemistry" ]
1,897
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
30,712,001
https://en.wikipedia.org/wiki/Variant%20Call%20Format
The Variant Call Format or VCF is a standard text file format used in bioinformatics for storing gene sequence or DNA sequence variations. The format was developed in 2010 for the 1000 Genomes Project and has since been used by other large-scale genotyping and DNA sequencing projects. VCF is a common output format for variant calling programs due to its relative simplicity and scalability. Many tools have been developed for editing and manipulating VCF files, including VCFtools, which was released in conjunction with the VCF format in 2011, and BCFtools, which was included as part of SAMtools until being split into an independent package in 2014. The standard is currently in version 4.5, although the 1000 Genomes Project has developed its own specification for structural variations such as duplications, which are not easily accommodated in the existing schema. Additional file formats have been developed based on VCF, including genomic VCF (gVCF), an extended format that includes additional information about "blocks" that match the reference and their qualities. Example
##fileformat=VCFv4.3
##fileDate=20090805
##source=myImputationProgramV3.1
##reference=file:///seq/references/1000GenomesPilot-NCBI36.fasta
##contig=<ID=20,length=62435964,assembly=B36,md5=f126cdf8a6e0c7f379d618ff66beb2da,species="Homo sapiens",taxonomy=x>
##phasing=partial
##INFO=<ID=NS,Number=1,Type=Integer,Description="Number of Samples With Data">
##INFO=<ID=DP,Number=1,Type=Integer,Description="Total Depth">
##INFO=<ID=AF,Number=A,Type=Float,Description="Allele Frequency">
##INFO=<ID=AA,Number=1,Type=String,Description="Ancestral Allele">
##INFO=<ID=DB,Number=0,Type=Flag,Description="dbSNP membership, build 129">
##INFO=<ID=H2,Number=0,Type=Flag,Description="HapMap2 membership">
##FILTER=<ID=q10,Description="Quality below 10">
##FILTER=<ID=s50,Description="Less than 50% of samples have data">
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
##FORMAT=<ID=GQ,Number=1,Type=Integer,Description="Genotype Quality">
##FORMAT=<ID=DP,Number=1,Type=Integer,Description="Read Depth">
##FORMAT=<ID=HQ,Number=2,Type=Integer,Description="Haplotype Quality">
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT NA00001 NA00002 NA00003
20 14370 rs6054257 G A 29 PASS NS=3;DP=14;AF=0.5;DB;H2 GT:GQ:DP:HQ 0|0:48:1:51,51 1|0:48:8:51,51 1/1:43:5:.,.
20 17330 . T A 3 q10 NS=3;DP=11;AF=0.017 GT:GQ:DP:HQ 0|0:49:3:58,50 0|1:3:5:65,3 0/0:41:3
20 1110696 rs6040355 A G,T 67 PASS NS=2;DP=10;AF=0.333,0.667;AA=T;DB GT:GQ:DP:HQ 1|2:21:6:23,27 2|1:2:0:18,2 2/2:35:4
20 1230237 . T . 47 PASS NS=3;DP=13;AA=T GT:GQ:DP:HQ 0|0:54:7:56,60 0|0:48:4:51,51 0/0:61:2
20 1234567 microsat1 GTC G,GTCT 50 PASS NS=3;DP=9;AA=G GT:GQ:DP 0/1:35:4 0/2:17:2 1/1:40:3
The VCF header The header begins the file and provides metadata describing the body of the file. Header lines start with #; meta-information lines carrying special keywords start with ##. Recommended keywords include fileformat, fileDate and reference. The header contains keywords that optionally describe, semantically and syntactically, the fields used in the body of the file, notably INFO, FILTER, and FORMAT (see below). The columns of a VCF The body of a VCF file follows the header and is tab-separated into 8 mandatory columns and an unlimited number of optional columns that may be used to record other information about the sample(s). When additional columns are used, the first optional column (FORMAT) describes the format of the data in the columns that follow.
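A minimal, illustrative parser for the body of a VCF file such as the example above might look as follows in Python. This is a sketch of the column layout just described, not a replacement for maintained libraries such as pysam or cyvcf2.

import gzip

FIXED = ["CHROM", "POS", "ID", "REF", "ALT", "QUAL", "FILTER", "INFO"]

def parse_vcf(path):
    """Yield one dict per data line of a (possibly gzipped) VCF file."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as fh:
        samples = []
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("##"):
                continue                        # meta-information line
            if line.startswith("#"):
                samples = line.split("\t")[9:]  # optional per-sample columns
                continue
            fields = line.split("\t")
            rec = dict(zip(FIXED, fields[:8]))
            # INFO is a semicolon-separated list of key=value pairs and flags.
            rec["INFO"] = dict(
                kv.split("=", 1) if "=" in kv else (kv, True)
                for kv in rec["INFO"].split(";")
            )
            if len(fields) > 8:                 # FORMAT column keys the sample data
                keys = fields[8].split(":")
                rec["SAMPLES"] = {
                    name: dict(zip(keys, val.split(":")))
                    for name, val in zip(samples, fields[9:])
                }
            yield rec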
Common INFO fields Arbitrary keys are permitted, although a number of sub-fields are reserved (albeit optional). Any other INFO fields are defined in the .vcf header. Common FORMAT fields Any other FORMAT fields are defined in the .vcf header. See also The FASTA format, used to represent genome sequences. The FASTQ format, used to represent DNA sequencer reads along with quality scores. The SAM format, used to represent genome sequencer reads that have been aligned to genome sequences. The GVF format (Genome Variation Format), an extension based on the GFF3 format. Global Alliance for Genomics and Health (GA4GH), the group leading the management and expansion of the VCF format; the VCF specification is no longer maintained by the 1000 Genomes Project. Human genome Human genetic variation Single Nucleotide Polymorphism (SNP) References External links An explanation of the format in picture form VCFtools BCFtools Biological sequence format
Variant Call Format
[ "Biology" ]
1,363
[ "Bioinformatics", "Biological sequence format" ]
27,886,855
https://en.wikipedia.org/wiki/Aryabhatta%20Research%20Institute%20of%20Observational%20Sciences
Aryabhatta Research Institute of Observational Sciences (ARIES) is a research institute in Nainital, Uttarakhand, India, which specializes in astronomy, solar physics, astrophysics and atmospheric science. It is an autonomous body under the Department of Science and Technology, Government of India. The institute is situated at Manora Peak, near Nainital, the headquarters of Kumaon division. The astronomical observatory is open to the public on afternoons, and on occasional moonlit nights with prior permission. History The institute was started on 20 April 1954, under the supervision of Dr. A. N. Singh, as the Uttar Pradesh State Observatory (UPSO) in the premises of the Government Sanskrit College, presently known as Sampurnanand Sanskrit Vishwavidyalaya, Varanasi, Uttar Pradesh. With the formation of the State of Uttarakhand on 9 November 2000, and because of its geographical location within the boundaries of Uttarakhand, UPSO came under the administrative control of the Government of Uttarakhand and was re-christened the State Observatory (SO). The institute was renamed the Aryabhatta Research Institute of Observational Sciences (ARIES) when it came under the Department of Science & Technology (DST), Government of India, as an autonomous body on 22 March 2004. Site ARIES has land at Manora Peak, Nainital, on which its functional and residential buildings are located. Additional land has been acquired at Devasthal for new observational facilities. The site has about 200 clear nights a year, and the median ground-level seeing is about 1". Research facilities Astronomy and Astrophysics Research activities at ARIES cover topics related to the Sun, stars and galaxies. ARIES has made significant contributions particularly to the fields of star clusters and gamma-ray bursts (GRBs). The longitude of ARIES (79° East) places it in the middle of a 180-degree-wide longitude band between modern astronomical facilities in the Canary Islands (20° West) and Eastern Australia (157° East). Therefore, observations that are not possible in the Canary Islands or Australia due to daylight can be made at ARIES. Because of this geographical location, ARIES has contributed to many areas of astronomical research, particularly those involving time-critical phenomena (e.g., the first successful attempt in the country to observe the optical afterglow of GRBs was carried out from ARIES). Many eclipsing binaries, variable stars, star clusters, nearby galaxies, GRBs, and supernovae have been observed from ARIES. The other research fields of the institute include solar astronomy, stellar astronomy, star clusters, stellar variability and pulsation, photometric studies of nearby galaxies, quasars, and transient events such as supernovae and highly energetic gamma-ray bursts. A total solar eclipse lasting about 4 minutes was successfully observed from Manavgat, Antalya, Turkey, on 29 March 2006 by a team of scientists from ARIES. In the past, new ring systems around Saturn, Uranus, and Neptune were discovered from the observatory. Recently, a direct correlation between the intra-night optical variability and the degree of polarization of the radio jets in quasars was established based on observations from ARIES. For the first time, periodic oscillations were detected in optical intra-day variability data of blazars, which is useful for estimating the black-hole masses of blazars and also supports accretion-disk-based models of AGNs.
Atmospheric Sciences Nainital is located at high altitude in the central Himalayas, away from cities and other major pollution sources. This makes it suitable for carrying out observations of background conditions and for studying the regional environment, particularly interactions between natural and anthropogenic trace species and climate change. Additionally, the ARIES site can provide information on the long-range transport of pollutants. Studies of lower-atmospheric dynamics are also very important in this region, for which data over northern India are severely lacking. Infrastructure The Institute has in-house workshops to meet the requirements of electronic, mechanical, and optical maintenance of the instruments. ARIES has a modern computer center with Internet access, and a library with more than 10,000 volumes of research journals and a collection of books on astronomy, astrophysics and atmospheric science. Facilities 3.6m Devasthal Optical Telescope 1.3m Devasthal Optical Telescope 104 cm Sampurnanand Telescope Solar Telescope 4m International Liquid Mirror Telescope (ILMT) Baker-Nunn Schmidt Telescope (BNST) Stratosphere Troposphere Radar See also Astrotourism in India List of planetariums Devasthal Observatory List of astronomical observatories References Aryabhatta Research Institute of Observational Sciences External links ARIES, Official website ARIES at Department of Science and Technology website Research institutes in Uttarakhand Astrophysics research institutes Astronomical observatories in India Space programme of India Nainital Research institutes established in 1954 Organisations based in Uttarakhand 1954 establishments in Uttar Pradesh
Aryabhatta Research Institute of Observational Sciences
[ "Physics" ]
1,029
[ "Astrophysics research institutes", "Astrophysics" ]
5,300,610
https://en.wikipedia.org/wiki/Boolean%20delay%20equation
A Boolean Delay Equation (BDE) is an evolution rule for the state of dynamical variables whose values may be represented by a finite, discrete number of states, such as 0 and 1. As a novel type of semi-discrete dynamical system, Boolean delay equations (BDEs) are models with Boolean-valued variables that evolve in continuous time. Since, at the present time, most phenomena are too complex to be modeled by partial differential equations (as continuous infinite-dimensional systems), BDEs are intended as a (heuristic) first step on the challenging road to further understanding and modeling them. For instance, one can mention complex problems in fluid dynamics, climate dynamics, solid-earth geophysics, and many problems elsewhere in the natural sciences where much of the discourse is still conceptual. One example of a BDE is the ring oscillator equation, x(t) = NOT x(t − τ), which produces periodic oscillations (of period 2τ). More complex equations can display richer behavior, such as nonperiodic and chaotic (deterministic) behavior. References Further reading Dynamical systems Mathematical modeling
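As a hedged illustration, the ring oscillator BDE above can be simulated by discretizing continuous time and keeping a buffer of the delayed state; the step size and horizon below are arbitrary choices, not part of the BDE formalism itself.

def simulate_ring_oscillator(tau=1.0, dt=0.01, t_end=6.0, x0=False):
    """Discretized simulation of the BDE x(t) = NOT x(t - tau).
    The initial history on [0, tau) is held constant at x0."""
    n_delay = int(round(tau / dt))
    history = [x0] * n_delay              # x on [0, tau)
    t = tau
    while t <= t_end:
        history.append(not history[-n_delay])   # x(t) = NOT x(t - tau)
        t += dt
    return history

states = simulate_ring_oscillator()
# The state flips every tau units of time: a square wave of period 2*tau.
print([int(s) for s in states[::50]])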
Boolean delay equation
[ "Physics", "Mathematics" ]
218
[ "Applied mathematics", "Mathematical modeling", "Mechanics", "Dynamical systems" ]
5,301,306
https://en.wikipedia.org/wiki/Portable%20water%20purification
Portable water purification devices are self-contained, easily transported units used to purify water from untreated sources (such as rivers, lakes, and wells) for drinking purposes. Their main function is to eliminate pathogens, and often also suspended solids and some unpalatable or toxic compounds. These units provide an autonomous supply of drinking water to people without access to clean water supply services, including inhabitants of developing countries and disaster areas, military personnel, campers, hikers, workers in the wilderness, and survivalists. They are also called point-of-use water treatment systems and field water disinfection techniques. Techniques include heat (including boiling), filtration, activated charcoal adsorption, chemical disinfection (e.g. chlorination, iodine, ozonation, etc.), ultraviolet purification (including SODIS), distillation (including solar distillation), and flocculation. Often these are used in combination. Drinking water hazards Untreated water may contain potentially pathogenic agents, including protozoa, bacteria, viruses, and some larvae of higher-order parasites such as liver flukes and roundworms. Chemical pollutants such as pesticides, heavy metals and synthetic organics may be present. Other components may affect taste, odour and general aesthetic qualities, including turbidity from soil or clay; colour from humic acid or microscopic algae; odours from certain types of bacteria, particularly Actinomycetes, which produce geosmin; and saltiness from brackish or sea water. Common metallic contaminants such as copper and lead can be treated by increasing the pH using soda ash or lime, which precipitates such metals. Careful decanting of the clear water after settlement, or the use of filtration, provides acceptably low levels of metals. Water contaminated by aluminium or zinc cannot be treated in this way using a strong alkali, as higher pH re-dissolves the metal salts. Salt is difficult to remove except by reverse osmosis or distillation. Most portable treatment processes focus on mitigating human pathogens for safety and on removing particulate matter, tastes and odours. Significant pathogens commonly present in the developed world include Giardia, Cryptosporidium, Shigella, hepatitis A virus, Escherichia coli, and enteroviruses. In less developed countries there may be risks from cholera and dysentery organisms and a range of tropical enteroparasites. Giardia lamblia and Cryptosporidium spp., both of which cause diarrhea (see giardiasis and cryptosporidiosis), are common pathogens. In backcountry areas of the United States and Canada they are sometimes present in sufficient quantity that water treatment is justified for backpackers, although this has created some controversy (see wilderness acquired diarrhea). In Hawaii and other tropical areas, Leptospira spp. are another possible problem. Less commonly seen in developed countries are organisms such as Vibrio cholerae, which causes cholera, and various strains of Salmonella, which cause typhoid and para-typhoid diseases. Pathogenic viruses may also be found in water. The larvae of flukes are particularly dangerous in areas frequented by sheep, deer, or cattle. If such microscopic larvae are ingested, they can form potentially life-threatening cysts in the brain or liver. This risk extends to plants grown in or near water, including the commonly eaten watercress. In general, the more human activity upstream (i.e.,
the larger the stream or river), the greater the potential for contamination from sewage effluent, surface runoff, or industrial pollutants. Groundwater pollution may result from human activity (e.g. on-site sanitation systems or mining) or may be naturally occurring (e.g. arsenic in some regions of India and Bangladesh). Water collected as far upstream as possible, above all known or anticipated risks of pollution, poses the lowest risk of contamination and is best suited to portable treatment methods. Techniques Not all techniques will, by themselves, mitigate all hazards. Although flocculation followed by filtration has been suggested as best practice, this is rarely practicable without the ability to carefully control pH and settling conditions. Ill-advised use of alum as a flocculant can lead to unacceptable levels of aluminium in the water so treated. If water is to be stored, halogens offer extended protection. Heat (boiling) Heat kills disease-causing micro-organisms, with higher temperatures and/or longer durations required for some pathogens. Sterilization of water (killing all living contaminants) is not necessary to make water safe to drink; one only needs to render enteric (intestinal) pathogens harmless. Boiling does not remove most pollutants and does not leave any residual protection. The WHO states that bringing water to a rolling boil and then letting it cool naturally is sufficient to inactivate pathogenic bacteria, viruses and protozoa. The CDC recommends a rolling boil for 1 minute. At high elevations, though, the boiling point of water drops, and boiling should continue for 3 minutes. All bacterial pathogens are quickly killed at temperatures well short of boiling; therefore, although boiling is not necessary to make the water safe to drink, the time taken to heat the water to boiling is usually sufficient to reduce bacterial concentrations to safe levels. Encysted protozoan pathogens may require higher temperatures to remove any risk. Boiling is not always necessary, nor is it always enough. Pasteurization, in which enough pathogens are killed, typically occurs at 63 °C for 30 minutes or 72 °C for 15 seconds. Certain pathogens must be heated above boiling; for example, Clostridium botulinum (which causes botulism) and most endospores require higher temperatures, and prions higher still. Higher temperatures may be achieved with a pressure cooker. Heat combined with ultraviolet light (UV), as in the SODIS method, reduces the necessary temperature and duration. Filtration Portable pump filters are commercially available with ceramic filter elements that filter 5,000 to 50,000 litres per cartridge, removing pathogens down to the 0.2–0.3 micrometre (μm) range. Some also utilize activated charcoal filtering. Most filters of this kind remove most bacteria and protozoa, such as Cryptosporidium and Giardia lamblia, but not viruses, except for the very largest of 0.3 μm and greater diameter, so disinfection by chemicals or ultraviolet light is still required after filtration. It is worth noting that not all bacteria are removed by 0.2 μm pump filters; for example, strands of thread-like Leptospira spp. (which can cause leptospirosis) are thin enough to pass through a 0.2 μm filter. Effective chemical additives to address shortcomings in pump filters include chlorine, chlorine dioxide, iodine, and sodium hypochlorite (bleach).
Polymer and ceramic filters have been marketed that incorporated iodine post-treatment in their filter elements to kill viruses and the smaller bacteria that cannot be filtered out, but most have disappeared due to the unpleasant taste imparted to the water, as well as possible adverse health effects when iodine is ingested over protracted periods. While the filtration elements may do an excellent job of removing most bacterial and fungal contaminants from drinking water when new, the elements themselves can become colonization sites. In recent years some filters have been enhanced by bonding silver metal nanoparticles to the ceramic element and/or to the activated charcoal to suppress the growth of pathogens. Small, hand-pumped reverse osmosis filters were originally developed for the military in the late 1980s for use as survival equipment, for example to be included with inflatable rafts on aircraft. Civilian versions are available. Instead of using the static pressure of a water supply line to force the water through the filter, pressure is provided by a hand-operated pump. These devices can generate drinkable water from seawater. The Portable Aqua Unit for Lifesaving (PAUL for short) is a portable ultrafiltration-based membrane water filter for humanitarian aid. It allows the decentralized supply of clean water in emergency and disaster situations, for about 400 persons per unit per day. The filter is designed to function without chemicals, external energy, or trained personnel. Activated charcoal adsorption Granular activated carbon filtering utilizes a form of activated carbon with a high surface area that adsorbs many compounds, including many toxic ones. Passing water through activated carbon is commonly used in concert with hand-pumped filters to address organic contamination, taste, or objectionable odors. Activated carbon filters are not usually used as the primary purification technique of portable water purification devices, but rather as a secondary means to complement another purification technique. They are most commonly implemented for pre- or post-filtering, in a separate step from ceramic filtering, in either case prior to the addition of the chemical disinfectants used to control the bacteria or viruses that filters cannot remove. Activated charcoal can remove chlorine from treated water, stripping out any residual protection against pathogens, so it should not, in general, be used without careful thought after chemical disinfection treatments in portable water purification processing. Ceramic/carbon core filters with a 0.5 μm or smaller pore size are excellent for removing bacteria and cysts while also removing chemicals. Chemical disinfection with halogens Chemical disinfection with halogens, chiefly chlorine and iodine, results from oxidation of essential cellular structures and enzymes. The primary factors that determine the rate and proportion of microorganisms killed are the residual or available halogen concentration and the exposure time. Secondary factors are pathogen species, water temperature, pH, and organic contaminants. In field-water disinfection, use of concentrations of 1–16 mg/L for 10–60 min is generally effective. Of note, Cryptosporidium oocysts, likely Cyclospora species, and Ascaris eggs are extremely resistant to halogens, and field inactivation may not be practical with bleach or iodine.
Iodine Iodine used for water purification is commonly added to water as a solution, in crystallized form, or in tablets containing tetraglycine hydroperiodide that release 8 mg of iodine per tablet. The iodine kills many, but not all, of the most common pathogens present in natural fresh water sources. Carrying iodine for water purification is an imperfect but lightweight solution for those needing field purification of drinking water. Kits are available in camping stores that include an iodine pill and a second pill (vitamin C, or ascorbic acid) that removes the iodine taste from the water after it has been disinfected. The addition of vitamin C, as a pill or in flavored drink powders, precipitates much of the iodine out of solution, so it should not be added until the iodine has had sufficient time to work. This time is 30 minutes in relatively clear, warm water, but considerably longer if the water is turbid or cold. Once the iodine has precipitated out of solution, the drinking water has less available iodine in it. Tetraglycine hydroperiodide maintains its effectiveness indefinitely before the container is opened; although some manufacturers suggest not using the tablets more than three months after the container has first been opened, the shelf life is in fact very long provided that the container is resealed immediately after each use. Similarly to potassium iodide (KI), sufficient consumption of tetraglycine hydroperiodide tablets may protect the thyroid against uptake of radioactive iodine. A 1995 study found that daily consumption of water treated with 4 tablets containing tetraglycine hydroperiodide reduced the uptake of radioactive iodine in human subjects to a mean of 1.1 percent, from a baseline mean of 16 percent, after a week of treatment. At 90 days of daily treatment, uptake was further reduced to a mean of 0.5 percent. However, unlike KI, tetraglycine hydroperiodide is not recommended by the WHO for this purpose. Iodine should be allowed at least 30 minutes to kill Giardia. Iodine crystals A potentially lower-cost alternative to iodine-based water purification tablets is the use of iodine crystals, although there are serious risks of acute iodine toxicity if preparation and dilution are not measured with some accuracy. This method may not be adequate for killing Giardia cysts in cold water. An advantage of using iodine crystals is that only a small amount of iodine is dissolved from the crystals at each use, allowing this method to treat very large volumes of water. Unlike tetraglycine hydroperiodide tablets, iodine crystals have an unlimited shelf life as long as they are not exposed to air for long periods or are kept under water; iodine crystals sublime if exposed to air for long periods. The large quantity of water that can be purified with iodine crystals at low cost makes this technique especially cost-effective for point-of-use or emergency water purification intended for use longer than the shelf life of tetraglycine hydroperiodide. Halazone tablets Chlorine-based halazone tablets were formerly popular for portable water purification. Chlorine in water is more than three times as effective a disinfectant against Escherichia coli as iodine. Halazone tablets were thus commonly used during World War II by U.S. soldiers for portable water purification, even being included in accessory packs for C-rations until 1945.
Sodium dichloroisocyanurate (NaDCC) has largely displaced halazone in the few chlorine-based water purification tablets still available today. Bleach Common bleaches, including calcium hypochlorite (Ca[OCl]2) and sodium hypochlorite (NaOCl), are well-researched, low-cost oxidizers. Chlorine bleach tablets give a more stable platform for disinfecting water than liquid bleach, as the liquid version tends to degrade with age and gives unpredictable results unless assays are carried out, which may be impractical in the field. Still, liquid bleach may safely be used for short-term emergency water disinfection. The EPA recommends mixing two drops of 8.25% sodium hypochlorite solution (regular, unscented chlorine bleach) per quart/liter of water and leaving it to stand, covered, for 30 to 60 minutes. Two drops of 5% solution also suffice. Double the amount of bleach if the water is cloudy, colored, or very cold. Afterwards, the water should have a slight chlorine odor; if not, repeat the dosage and let it stand for another 15 minutes before use. After this treatment, the water may be left open to reduce the chlorine smell and taste. The Centers for Disease Control and Prevention (CDC) and Population Services International (PSI) promote a similar product (a 0.5%–1.5% sodium hypochlorite solution) as part of their Safe Water System (SWS) strategy. The product is sold in developing countries under local brand names specifically for the purpose of disinfecting drinking water. Neither chlorine (e.g., bleach) nor iodine alone is considered completely effective against Cryptosporidium, although they are partially effective against Giardia; chlorine is considered slightly better against the latter. A more complete field solution that includes chemical disinfectants is to first filter the water, using a 0.2 μm ceramic cartridge pump filter, and then treat it with iodine or chlorine, thereby filtering out Cryptosporidium, Giardia, and most bacteria, along with the larger viruses, while the chemical disinfectant addresses the smaller viruses and bacteria that the filter cannot remove. This combination is also potentially more effective in some cases than portable electronic disinfection based on UV treatment. Chlorine dioxide Chlorine dioxide can come from tablets or be created by mixing two chemicals together. It is more effective than iodine or chlorine against Giardia, and although it has only low to moderate effectiveness against Cryptosporidium, iodine and chlorine are ineffective against this protozoan. The cost of chlorine dioxide treatment is higher than that of iodine treatment. Mixed oxidant A simple brine (salt + water) solution in an electrolytic reaction produces a powerful mixed-oxidant disinfectant (mostly chlorine in the form of hypochlorous acid (HOCl), with some peroxide, ozone, and chlorine dioxide). Chlorine tablets Sodium dichloroisocyanurate, or troclosene sodium, commonly abbreviated NaDCC, is a form of chlorine used for disinfection. It is used by major non-governmental organizations such as UNICEF to treat water in emergencies. Sodium dichloroisocyanurate tablets are available in a range of concentrations to treat differing volumes of water, to give the World Health Organization's recommended 5 ppm of available chlorine. They are effervescent tablets, dissolving in a matter of minutes.
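As a rough arithmetic aid for the EPA bleach dosing described above, the sketch below scales the 2-drops-per-litre rule to other volumes and hypochlorite concentrations. The assumed drop volume (about 0.05 mL) and the linear scaling with concentration are assumptions made purely for illustration; this is not dosing guidance.

def bleach_drops(volume_l, hypochlorite_percent, cloudy=False):
    """Scale the 2-drops-of-8.25%-per-litre rule; doubled for cloudy/cold/coloured
    water, scaled linearly for other concentrations (both are assumptions)."""
    base_drops_per_l = 2 * (8.25 / hypochlorite_percent)
    drops = base_drops_per_l * volume_l * (2 if cloudy else 1)
    # Rough resulting free-chlorine dose, assuming ~0.05 mL per drop and
    # treating % as g per 100 mL:
    mg_per_l = drops * 0.05 * hypochlorite_percent * 10 / volume_l
    return round(drops), round(mg_per_l, 1)

print(bleach_drops(1, 8.25))              # -> (2, 8.2): ~8 mg/L from the EPA rule
print(bleach_drops(4, 5.0, cloudy=True))  # larger volume, weaker bleach, cloudy water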
Other chemical disinfection additives Silver ion tablets An alternative to iodine-based preparations in some usage scenarios are silver ion/chlorine dioxide-based tablets or droplets. These solutions may disinfect water more effectively than iodine-based techniques while leaving hardly any noticeable taste in the water. Silver ion/chlorine dioxide-based disinfecting agents will kill Cryptosporidium and Giardia if used correctly. The primary disadvantage of silver ion/chlorine dioxide-based techniques is the long purification time (generally 30 minutes to 4 hours, depending on the formulation used). Another concern is the possible deposition and accumulation of silver compounds in various body tissues, leading to a rare condition called argyria, which results in a permanent, disfiguring, bluish-gray pigmentation of the skin, eyes, and mucous membranes. Hydrogen peroxide One recent study found that wild Salmonella, which would otherwise reproduce quickly during subsequent dark storage of solar-disinfected water, could be controlled by the addition of just 10 parts per million of hydrogen peroxide. Ultraviolet purification Ultraviolet (UV) light induces the formation of covalent linkages in DNA and thereby prevents microbes from reproducing; without reproduction, the microbes become far less dangerous. Germicidal UV-C light, in the short wavelength range of 100–280 nm, acts on thymine, one of the four nucleotide bases in DNA. When a germicidal UV photon is absorbed by a thymine molecule that is adjacent to another thymine within the DNA strand, a covalent bond, or dimer, between the molecules is created. This thymine dimer prevents enzymes from "reading" the DNA and copying it, thus neutering the microbe. Prolonged exposure to such radiation can also cause single- and double-stranded breaks in DNA, oxidation of membrane lipids, and denaturation of proteins, all of which are toxic to cells. Still, there are limits to this technology. Water turbidity (the amount of suspended and colloidal solids contained in the water to be treated) must be low, such that the water is clear, for UV purification to work well; a pre-filter step might therefore be necessary. A concern with UV portable water purification is that some pathogens are hundreds of times less sensitive to UV light than others. Protozoan cysts were once believed to be among the least sensitive, but recent studies have proved otherwise, demonstrating that both Cryptosporidium and Giardia are deactivated by a UV dose of just 6 mJ/cm2. However, EPA regulations and other studies show that viruses are the limiting factor for UV treatment, requiring a 10–30 times greater UV dose than Giardia or Cryptosporidium. Studies have shown that UV doses at the levels provided by common portable UV units are effective at killing Giardia, with no evidence of repair and reactivation of the cysts. Water treated with UV still contains the microbes, only with their means of reproduction turned "off". If such UV-treated water containing neutered microbes is exposed to visible light (specifically, wavelengths over 330–500 nm) for any significant period of time, a process known as photoreactivation can take place, in which the damage to the bacteria's reproductive DNA may be repaired, potentially rendering them once more capable of reproducing and causing disease. UV-treated water must therefore not be exposed to visible light for any significant period of time after UV treatment, before consumption, to avoid ingesting reactivated and dangerous microbes.
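Since UV dose is simply irradiance multiplied by exposure time, the figures above (about 6 mJ/cm2 for protozoan cysts, and 10–30 times more for resistant viruses) translate directly into exposure times. The irradiance value in the sketch below is an assumed, device-specific number, not one taken from this article.

def exposure_time_s(target_dose_mj_cm2, irradiance_mw_cm2):
    """UV dose = irradiance x time, so time = dose / irradiance
    (mJ/cm^2 divided by mW/cm^2 gives seconds)."""
    return target_dose_mj_cm2 / irradiance_mw_cm2

IRRADIANCE = 5.0   # mW/cm^2 at the water surface -- an assumed value
for label, dose in [("protozoan cysts (~6 mJ/cm^2)", 6.0),
                    ("resistant viruses (~30x more)", 6.0 * 30)]:
    print(f"{label}: {exposure_time_s(dose, IRRADIANCE):.1f} s")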
Recent developments in semiconductor technology allow the production of UV-C light-emitting diodes (LEDs). UV-C LED systems address the disadvantages of mercury-based technology, namely power-cycling penalties, high power needs, fragility, warm-up time, and mercury content. Solar water disinfection In solar water disinfection (often shortened to "SODIS"), microbes are destroyed by temperature and UVA radiation provided by the sun. Water is placed in a transparent plastic PET bottle or plastic bag, oxygenated by shaking the partially filled, capped bottle before filling it all the way, and left in the sun for 6–24 hours atop a reflective surface. Solar distillation Solar distillation relies on sunlight to warm and evaporate the water to be purified, which then condenses and trickles into a container. In theory, a solar (condensation) still removes all pathogens, salts, metals, and most chemicals, but in field practice the lack of clean components, easy contact with dirt, improvised construction, and disturbances result in cleaner, yet still contaminated, water. Homemade water filters Water filters can be made on-site using local materials such as sand and charcoal (e.g. from firewood burned in a special way). These filters are sometimes used by soldiers and outdoor enthusiasts. Due to their low cost they can be made and used by anyone, but the reliability of such systems is highly variable. Such filters may do little, if anything, to mitigate germs and other harmful constituents, and can give a false sense of security that the water so produced is potable. Water processed through an improvised filter should undergo secondary processing, such as boiling, to render it safe for consumption. Prevention of water contamination Human water-borne diseases usually come from other humans, so human-derived materials (feces, medical waste, wash water, lawn chemicals, gasoline engines, garbage, etc.) should be kept far away from water sources. For example, human excreta should be buried well away from water sources to reduce contamination. In some wilderness areas it is recommended that all waste be packed up and carted out to a properly designated disposal point. See also Ceramic water filter Desalination Self-supply of water and sanitation Solar water disinfection Traveler's diarrhea Water quality Wilderness acquired diarrhea References External links Household Water Treatment Knowledge on CAWST website Water Camping Drinking water Hiking Waterborne diseases Wilderness Camping equipment Hiking equipment Emergency services Water treatment
Portable water purification
[ "Chemistry", "Engineering", "Environmental_science" ]
4,929
[ "Hydrology", "Water", "Water treatment", "Water pollution", "Environmental engineering", "Water technology" ]
5,302,186
https://en.wikipedia.org/wiki/Tin%28IV%29%20sulfide
Tin(IV) sulfide is a compound with the formula SnS2. The compound crystallizes in the cadmium iodide motif, with the Sn(IV) situated in "octahedral holes" defined by six sulfide centers. It occurs naturally as the rare mineral berndtite. It is useful as a semiconductor material, with a band gap of 2.2 eV. Reactions The compound precipitates as a brown solid upon the addition of hydrogen sulfide to solutions of tin(IV) species; this reaction is reversed at low pH. Crystalline SnS2 has a bronze color and is used in decorative coatings, where it is known as mosaic gold. The material also reacts with sulfide salts to give a series of thiostannates. A simplified equation for this depolymerization reaction is SnS2 + S2− → SnS32−. Applications Tin(IV) sulfide has various uses in electrochemistry. It can be used in anodes of lithium-ion batteries, where an intercalation process occurs to form Li2S. It can also be used similarly in electrodes of supercapacitors, an alternative means of energy storage. SnS2 has also been identified as a potential component of thermoelectric devices, which convert thermal energy to electrical energy; in one example, this property was enabled by forming a composite of SnS2 with multiwalled carbon nanotubes. SnS2 can also be used in wastewater treatment: a membrane formed from SnS2 and carbon nanofibers can potentially reduce certain impurities in water, such as hexavalent chromium. See also Mosaic Gold References External links Tin (IV) Sulfide Powder, Stanford Advanced Materials Tin Sulfide (SnS2), PubChem Tin(IV) compounds IV-VI semiconductors Disulfides
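The quoted 2.2 eV band gap corresponds to an optical absorption edge in the visible range. The one-line conversion below uses the standard relation λ (nm) ≈ 1239.84 / E (eV); the interpretation as "visible-light semiconductor" follows from the computed wavelength, not from any additional data in this article.

def cutoff_wavelength_nm(band_gap_ev: float) -> float:
    """lambda = h*c / E; with E in electronvolts this is ~1239.84 / E nanometres."""
    return 1239.84 / band_gap_ev

# A 2.2 eV gap gives an absorption edge near 564 nm (green-yellow light).
print(round(cutoff_wavelength_nm(2.2)))  # -> 564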
Tin(IV) sulfide
[ "Chemistry" ]
379
[ "Semiconductor materials", "IV-VI semiconductors" ]
5,302,310
https://en.wikipedia.org/wiki/Titanium%20hydride
Titanium hydride normally refers to the inorganic compound and related nonstoichiometric materials. It is commercially available as a stable grey/black powder, which is used as an additive in the production of Alnico sintered magnets, in the sintering of powdered metals, the production of metal foam, the production of powdered titanium metal and in pyrotechnics. Also known as titanium–hydrogen alloy, it is an alloy of titanium, hydrogen, and possibly other elements. When hydrogen is the main alloying element, its content in the titanium hydride is between 0.02% and 4.0% by weight. Alloying elements intentionally added to modify the characteristics of titanium hydride include gallium, iron, vanadium, and aluminium. Production In the commercial process for producing non-stoichiometric , titanium metal sponge is treated with hydrogen gas at atmospheric pressure at 300–500 °C. Absorption of hydrogen is exothermic and rapid, changing the color of the sponge to grey/black. The brittle product is ground to a powder, which has a composition around . In the laboratory, titanium hydride is produced by heating titanium powder under flowing hydrogen at 700 °C, the idealized equation being: Other methods of producing titanium hydride include electrochemical and ball milling methods. Reactions is unaffected by water and air. It is slowly attacked by strong acids and is degraded by hydrofluoric and hot sulfuric acids. It reacts rapidly with oxidizing agents, this reactivity leading to the use of titanium hydride in pyrotechnics. The material has been used to produce highly pure hydrogen, which is released upon heating the solid. Hydrogen release in TiH~2 starts just above 400 °C but may not be complete until the melting point of titanium metal. Titanium tritide (TiH) has been proposed for long-term storage of tritium gas. Structure As approaches stoichiometry, it adopts a distorted body-centered tetragonal structure, termed the ε-form, with an axial ratio of less than 1. This composition is very unstable with respect to partial thermal decomposition, unless maintained under a pure hydrogen atmosphere. Otherwise, the composition rapidly decomposes at room temperature until an approximate composition of is reached. This composition adopts the fluorite structure, is termed the δ-form, and only very slowly decomposes thermally at room temperature until an approximate composition of is reached, at which point inclusions of the hexagonal close packed α-form, which is the same form as pure titanium, begin to appear. The evolution of the dihydride from titanium metal and hydrogen has been examined in some detail. α-Titanium has a hexagonal close packed (hcp) structure at room temperature. Hydrogen initially occupies tetrahedral interstitial sites in the titanium. As the H/Ti ratio approaches 2, the material passes through the β-form to a face-centred cubic (fcc) δ-form, the H atoms eventually filling all the tetrahedral sites to give the limiting stoichiometry of . The various phases are described in the table below. If titanium hydride contains 4.0% hydrogen at less than around 40 °C then it transforms into a body-centred tetragonal (bct) structure called ε-titanium. When titanium hydrides with less than 1.3% hydrogen, known as hypoeutectoid titanium hydride, are cooled, the β-titanium phase of the mixture attempts to revert to the α-titanium phase, resulting in an excess of hydrogen.
One way for hydrogen to leave the β-titanium phase is for the titanium to partially transform into δ-titanium, leaving behind titanium that is low enough in hydrogen to take the form of α-titanium, resulting in an α-titanium matrix with δ-titanium inclusions. A metastable γ-titanium hydride phase has been reported. When α-titanium hydride with a hydrogen content of 0.02-0.06% is quenched rapidly, it forms into γ-titanium hydride, as the atoms "freeze" in place when the cell structure changes from hcp to fcc. γ-Titanium takes a body centred tetragonal (bct) structure. Moreover, there is no compositional change so the atoms generally retain their same neighbours. Hydrogen embrittlement in titanium and titanium alloys The absorption of hydrogen and the formation of titanium hydride are a source of damage to titanium and titanium alloys. This hydrogen embrittlement process is of particular concern when titanium and alloys are used as structural materials, as in nuclear reactors. Hydrogen embrittlement manifests as a reduction in ductility and eventually spalling of titanium surfaces. The effect of hydrogen is to a large extent determined by the composition, metallurgical history and handling of the Ti and Ti alloy. CP-titanium (commercially pure: ≤99.55% Ti content) is more susceptible to hydrogen attack than pure α-titanium. Embrittlement, observed as a reduction in ductility and caused by the formation of a solid solution of hydrogen, can occur in CP-titanium at concentrations as low as 30-40 ppm. Hydride formation has been linked to the presence of iron in the surface of a Ti alloy. Hydride particles are observed in specimens of Ti and Ti alloys that have been welded, and because of this welding is often carried out under an inert gas shield to reduce the possibility of hydride formation. Ti and Ti alloys form a surface oxide layer, composed of a mixture of Ti(II), Ti(III) and Ti(IV) oxides, which offers a degree of protection to hydrogen entering the bulk. The thickness of this can be increased by anodizing, a process which also results in a distinctive colouration of the material. Ti and Ti alloys are often used in hydrogen containing environments and in conditions where hydrogen is reduced electrolytically on the surface. Pickling, an acid bath treatment which is used to clean the surface can be a source of hydrogen. Uses Common applications include ceramics, pyrotechnics, sports equipment, as a laboratory reagent, as a blowing agent, and as a precursor to porous titanium. When heated as a mixture with other metals in powder metallurgy, titanium hydride releases hydrogen which serves to remove carbon and oxygen, producing a strong alloy. The density of titanium hydride varies based on the alloying constituents, but for pure titanium hydride it ranges between 3.76 and 4.51 g/cm3. Even in the narrow range of concentrations that make up titanium hydride, mixtures of hydrogen and titanium can form a number of different structures, with very different properties. Understanding such properties is essential to making quality titanium hydride. At room temperature, the most stable form of titanium is the hexagonal close-packed (HCP) structure α-titanium. It is a fairly hard metal that can dissolve only a small concentration of hydrogen, no more than 0.20 wt% at , and only 0.02% at . If titanium hydride contains more than 0.20% hydrogen at titanium hydride-making temperatures it transforms into a body-centred cubic (BCC) structure called β-titanium. 
It can dissolve considerably more hydrogen, more than 2.1% hydrogen at . If titanium hydride contains more than 2.1% at then it transforms into a face-centred cubic (FCC) structure called δ-titanium. It can dissolve even more hydrogen, as much as 4.0% hydrogen, which reflects the upper hydrogen content of titanium hydride. There are many types of heat treating processes available for titanium hydride. The most common are annealing and quenching. Annealing is the process of heating the titanium hydride to a sufficiently high temperature to soften it. This process occurs through three phases: recovery, recrystallization, and grain growth. The temperature required to anneal titanium hydride depends on the type of annealing. Annealing must be done under a hydrogen atmosphere to prevent outgassing. See also Machinability References External links Hydride Metal hydrides Reducing agents Pyrotechnic fuels
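The roughly 4.0% upper hydrogen content quoted above can be checked directly from standard atomic masses, assuming the limiting stoichiometry is TiH2; a minimal sketch of the arithmetic:

```python
# Check the quoted ~4.0 wt% upper hydrogen content from standard atomic masses,
# assuming the limiting stoichiometry TiH2.
M_TI = 47.867   # g/mol titanium
M_H = 1.008     # g/mol hydrogen

def hydrogen_weight_percent(h_per_ti: float) -> float:
    """Weight percent of hydrogen in TiH_x for a given H/Ti ratio x."""
    return 100.0 * h_per_ti * M_H / (M_TI + h_per_ti * M_H)

print(f"TiH2:   {hydrogen_weight_percent(2.0):.2f} wt% H")  # ~4.04 wt%
print(f"TiH1.5: {hydrogen_weight_percent(1.5):.2f} wt% H")  # ~3.06 wt%
```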
Titanium hydride
[ "Chemistry" ]
1,713
[ "Inorganic compounds", "Metal hydrides", "Redox", "Reducing agents" ]
5,302,681
https://en.wikipedia.org/wiki/Electronic%20anticoincidence
Electronic anticoincidence is a method (and its associated hardware) widely used to suppress unwanted, "background" events in high energy physics, experimental particle physics, gamma-ray spectroscopy, gamma-ray astronomy, experimental nuclear physics, and related fields. In the typical case, a desired high-energy interaction or event occurs and is detected by some kind of detector, creating a fast electronic pulse in the associated nuclear electronics. But the desired events are mixed up with a significant number of other events, produced by other particles or processes, which create indistinguishable events in the detector. Very often it is possible to arrange other physical photon or particle detectors to intercept the unwanted background events, producing essentially simultaneous pulses that can be used with fast electronics to reject the unwanted background. Gamma-ray astronomy Early experimenters in X-ray and gamma-ray astronomy found that their detectors, flown on balloons or sounding rockets, were corrupted by the large fluxes of high-energy photon and cosmic-ray charged-particle events. Gamma-rays, in particular, could be collimated by surrounding the detectors with heavy shielding materials made of lead or other such elements, but it was quickly discovered that the high fluxes of very penetrating high-energy radiation present in the near-space environment created showers of secondary particles that could not be stopped by reasonable shielding masses. To solve this problem, detectors operating above 10 or 100 keV were often surrounded by an active anticoincidence shield made of some other detector, which could be used to reject the unwanted background events. An early example of such a system, first proposed by Kenneth John Frost in 1962, is shown in the figure. It has an active CsI(Tl) scintillation shield around the X-ray/gamma-ray detector, also of CsI(Tl), with the two connected in electronic anticoincidence to reject unwanted charged particle events and to provide the required angular collimation. Plastic scintillators are often used to reject charged particles, while thicker CsI, bismuth germanate ("BGO"), or other active shielding materials are used to detect and veto gamma-ray events of non-cosmic origin. A typical configuration might have a NaI scintillator almost completely surrounded by a thick CsI anticoincidence shield, with a hole or holes to allow the desired gamma rays to enter from the cosmic source under study. A plastic scintillator may be used across the front which is reasonably transparent to gamma rays, but efficiently rejects the high fluxes of cosmic-ray protons present in space. Compton suppression In gamma-ray spectroscopy, Compton suppression is a technique that improves the signal by removing data that have been corrupted by the incident gamma ray getting Compton scattered out of the detector before depositing all of its energy. The goal is to minimize the background related to the Compton effect (Compton continuum) in the data. The high-purity solid state germanium (HPGe) detectors used in gamma-ray spectroscopy have a typical size of a few centimeters in diameter and a thickness ranging from a few centimeters to a few millimeters. For detectors of such a size, gamma rays may Compton scatter out of the detector's volume before they deposit their entire energy. In this case, the energy reading by the data acquisition system will come up short: the detector records an energy which is only a fraction of the energy of the incident gamma ray. 
In order to counteract this, the expensive and small high resolution detector is surrounded by larger and cheaper low resolution detectors, usually a scintillator (NaI and BGO are the most common). The suppression detector is shielded from the source by a thick collimator, and it is operated in anti-coincidence with the main detector: if they both detect a gamma ray, it must have scattered out of the main detector before depositing all of its energy, so the Ge reading is ignored. The cross section for interaction of gamma rays in the suppression detector is larger than that of the main detector, as is its size, so it is highly unlikely that a scattered gamma ray will escape both devices undetected. Nuclear and particle physics Modern experiments in nuclear and high-energy particle physics almost invariably use fast anticoincidence circuits to veto unwanted events. The desired events are typically accompanied by unwanted background processes that must be suppressed by enormous factors, ranging from thousands to many billions, to permit the desired signals to be detected and studied. Extreme examples of these kinds of experiments may be found at the Large Hadron Collider, where the enormous ATLAS and CMS detectors must reject huge numbers of background events at very high rates, to isolate the very rare events being sought. See also Nuclear electronics HEAO 1 HEAO 3 INTEGRAL Uhuru (satellite) Gamma-ray spectroscopy References External links Compton Suppression Nuclear physics
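As a rough illustration of the veto logic described above, the following sketch rejects main-detector events that coincide with a shield hit inside a short time window; the event lists and window width are hypothetical, not taken from any particular instrument.

```python
# Minimal sketch of an anticoincidence (veto) filter, assuming each detector
# produces a list of event timestamps in microseconds. Timestamps and the
# window width are illustrative, not from a real instrument.
from bisect import bisect_left

def anticoincidence_filter(main_events, veto_events, window_us=1.0):
    """Keep main-detector events with no veto-detector hit within +/- window_us."""
    veto_sorted = sorted(veto_events)
    accepted = []
    for t in main_events:
        i = bisect_left(veto_sorted, t)
        near_prev = i > 0 and t - veto_sorted[i - 1] <= window_us
        near_next = i < len(veto_sorted) and veto_sorted[i] - t <= window_us
        if not (near_prev or near_next):
            accepted.append(t)  # no simultaneous shield hit: keep the event
    return accepted

# Example: the event at t=20.0 is vetoed by the shield hit at t=20.4
print(anticoincidence_filter([10.0, 20.0, 35.5], [5.2, 20.4]))  # [10.0, 35.5]
```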
Electronic anticoincidence
[ "Physics" ]
984
[ "Nuclear physics" ]
5,302,890
https://en.wikipedia.org/wiki/Kolk%20%28vortex%29
A kolk is an underwater vortex, formed by water rushing rapidly past an underwater obstacle, that causes hydrodynamic scour. High-velocity gradients produce a high-shear rotating column of water, similar to a tornado. Kolks can pluck multiple-ton blocks of rock and transport them in suspension for kilometres. Kolks leave clear evidence in the form of kolk lakes, a kind of plucked-bedrock pit or rock-cut basin. Kolks also leave downstream deposits of gravel-supported blocks that show percussion but no rounding. Examples Kolks were first identified by the Dutch, who observed kolks hoisting several-ton blocks of riprap from dikes and transporting them away, suspended above the bottom. The Larrelt kolk near Emden appeared during the 1717 Christmas flood, which broke through a long section of the dyke. The newly formed body of water measured roughly 500 × 100 m and was 25 m deep. In spite of the repair to the dyke, another breach occurred in 1721, which produced more kolks between 15 and 18 m deep. In 1825, during the February flood near Emden, a kolk of 31 m depth was created. The soil was saturated from here for a further 5 km inland. Kolks are credited with creating the pothole-like features in the highly jointed basalts in the channeled scablands of the Columbia Basin region in Eastern Washington. Depressions were scoured out within the scablands that resemble virtually circular steep-sided potholes. Examples from the Missoula floods in this area include: The region below Dry Falls includes a number of lakes scoured out by kolks. Sprague Lake is a kolk-formed basin created by a flow estimated to be wide and deep. The Alberton Narrows on the Clark Fork River show evidence that kolks plucked boulders from the canyon and deposited them in a rock and gravel bar immediately downstream of the canyon. The south wall of Hellgate Canyon in Montana shows the rough-plucked surface characteristic of kolk-eroded rock. Both the walls of the Wallula Gap and the Columbia River Gorge also show the rough-plucked surfaces characteristic of kolk-eroded rock. Oswego Lake, in the middle of Lake Oswego, Oregon (a Portland suburb), was an abandoned channel of the Tualatin River that was scoured by a kolk. See also Hydrodynamic scour References External links Video of kolks and rock-cut basins Oceanography Geomorphology Hydrodynamics Vortices
Kolk (vortex)
[ "Physics", "Chemistry", "Mathematics", "Environmental_science" ]
525
[ "Hydrology", "Applied and interdisciplinary physics", "Vortices", "Oceanography", "Hydrodynamics", "Dynamical systems", "Fluid dynamics" ]
5,303,055
https://en.wikipedia.org/wiki/Value%20network%20analysis
Value network analysis (VNA) is a methodology for understanding, using, visualizing, and optimizing internal and external value networks and complex economic ecosystems. The methods include visualizing sets of relationships from a dynamic whole systems perspective. Robust network analysis approaches are used for understanding value conversion of financial and non-financial assets, such as intellectual capital, into other forms of value. The value conversion question is critical both in social exchange theory, which considers the cost/benefit returns of informal exchanges, and in more classical views of exchange value, where there is concern with conversion of value into financial value or price. Overview Value network analysis offers a taxonomy for non-financial business reporting, which is becoming increasingly important in SEC filings. In some approaches, taxonomies are supported by Extensible Business Reporting Language XBRL. Venture capitalists and investors are concerned with the capability of a firm to create value in the future. Financial statements are limited to current and past financial indicators and valuations of capital assets. In contrast, value network analysis is one approach to assessing current and future capability for value creation and to describe and analyze a business model. Advocates of VNA claim that strong value-creating relationships support successful business endeavors at the operational, tactical, and strategic levels. A value network perspective, in this context, would encompass both internal and external value networks — loose yet complex configurations of roles within industries, businesses, business units or functions and teams within organizations that engage in mutually beneficial relationships. Tools used in the past to analyze business value creation, such as the value chain and value added, are linear and mechanistic approaches based on a process perspective. These approaches are considered inadequate to address this new level of business complexity, where value-creating activities occur in complex, interdependent and dynamic relationships between multiple sets of actors. Other claims for value network analysis are that it is an essential skill for a successful enterprise dependent on knowledge exchanges and collaborative relationships, which are seen as critical in almost every industry; that this type of analysis helps individuals and work groups better manage their interactions and address operational issues, such as balancing workflows or improving communication; that the approach also scales up to the business level to help forge stronger value-creating linkages with strategic partners and improve stakeholder relationships; and that it also connects with other modeling tools such as Lean Manufacturing, Six Sigma, workflow tools, business process reengineering, business process management, social network analysis tools and system dynamics.
Nodes represent real people: typically individuals, groups of individuals such as a business unit, or aggregates of groups such as a type of business in an industry network. During analysis, when adopting a reflective, double-loop or generative learning mode, it is beneficial to regard nodes as role plays (shortened to roles). Practitioners have found that conversation between participants about role plays within a larger whole invariably results in transforming individual behaviour and gaining commitment to implementing needed change, as elaborated below. Along with the more traditional business transactions, the critical intangible exchanges are also mapped. Intangible exchanges are those mostly informal knowledge exchanges and benefits or support that build relationships and keep things running smoothly. These informal exchanges are actually the key to creating trust and opening pathways for innovation and new ideas. Traditional business practices ignore these important intangible exchanges, but they are made visible with a value network analysis. The visualizations and diagrams link to a variety of assessments, usually handled in Excel-type spreadsheets — to increase value outputs, to leverage knowledge and intangibles for improving financial and organizational performance, and to find new value opportunities. When the analysis is complete, people gain insights into what is actually happening now, where more value can be realized, and what is required to achieve maximum value benefit across the entire business activity that is the focus of the analysis. It is conventional to refer to the actual "things" that move from one participant to another as "deliverables," whether tangible or intangible. If users find it conceptually unusual or contextually difficult to refer to an intangible (informal) thing as a "deliverable," they may, instead, prefer to use the alternative descriptive term "contribution." Complementary analysis After the value network diagram has been prepared, it can be used to perform three complementary analyses: exchange analysis: investigation of the general pattern of the exchanges in the network, sufficient reciprocity, and the existence of weak or inefficient links; impact analysis: whether an involved party can create value from the received inputs; and value creation analysis: assessment of the value increases that an output triggers for the customer and how the company itself benefits from it. See also Value Network Value shop References Value proposition Enterprise modelling
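To make the exchange analysis step concrete, a small sketch is given below; the roles, deliverable labels, and the simple one-way-link rule are illustrative assumptions, not part of any standard VNA toolset.

```python
# Minimal sketch of an "exchange analysis" reciprocity check on a mapped
# value network. Roles and deliverables here are invented for illustration.
exchanges = [
    ("Customer", "Provider", "payment", "tangible"),
    ("Provider", "Customer", "service delivery", "tangible"),
    ("Provider", "Customer", "usage tips", "intangible"),
    ("Partner", "Provider", "market insight", "intangible"),
]

def one_way_links(edges):
    """Return role pairs where deliverables flow in one direction only."""
    pairs = {(src, dst) for src, dst, _, _ in edges}
    return sorted(p for p in pairs if (p[1], p[0]) not in pairs)

# The Partner -> Provider flow has no return flow: a candidate weak link.
print(one_way_links(exchanges))  # [('Partner', 'Provider')]
```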
Value network analysis
[ "Engineering" ]
1,012
[ "Systems engineering", "Enterprise modelling" ]
38,714,137
https://en.wikipedia.org/wiki/Analytical%20nebulizer
The general term nebulizer refers to an apparatus that converts liquids into a fine mist. Nozzles also convert liquids into a fine mist, but do so by pressure through small holes. Nebulizers generally use gas flows to deliver the mist. The most common forms of nebulizers are medical appliances such as asthma inhalers, and paint spray cans. Analytical nebulizers are a special category in that their purpose is to deliver a fine mist to spectrometric instruments for elemental analysis. They are necessary parts of inductively coupled plasma atomic emission spectroscopy (ICP-AES), inductively coupled plasma mass spectrometry (ICP-MS), and atomic absorption spectroscopy (AAS). Applications Analytical nebulizers are used in trace element analysis. This type of work plays an important role in areas of pharmaceutical and clinical study, biological, environmental and agricultural assessment and petroleum testing. They also have nuclear applications. Nebulizer designs Most analytical pneumatic nebulizers use the same essential principle (induction) to atomize the liquid: When gas at a higher pressure exits from a small hole (the orifice) into gas at a lower pressure, it forms a gas jet into the lower pressure zone, and pushes the lower pressure gas away from the orifice. This creates a current in the lower pressure gas zone, and draws some of the lower pressure gas into the higher pressure gas jet. At the orifice, the draw of the lower pressure gas creates considerable suction, the extent depending on the differential pressures, the size of the orifice, and the shape of the orifice and surrounding apparatus. In all pneumatic induction nebulizers, the suction near the orifice is utilized to draw the liquid into the gas jet. The liquid is broken into small droplets in the process. Present induction pneumatic nebulizer designs fit into 5 categories: 1. Concentric: Liquid flow surrounded by a Gas flow or Gas flow surrounded by a Liquid flow; 2. Cross Flow: Gas flow at right angles to the Liquid flow; 3. Entrained: Gas and Liquid mixed in the system and emitted as a combined flow; 4. Babington and V Groove: Liquid is spread over a surface to decrease the surface tension, and passed over a gas orifice; 5. Parallel Path: Liquid is delivered beside a gas orifice and induction pulls the liquid into the gas stream. Newer non-induction nebulizers include 3 more categories: 6. Enhanced Parallel Path: Liquid is delivered beside a gas orifice and drawn into the gas stream by surface tension along a spout; 7. Flow Blurring: liquid is injected by pressure into a gas stream; 8. Vibrating Mesh: liquid is pushed through tiny holes by a vibrating ultrasonic plate. Induction nebulizers Concentric nebulizers Concentric nebulizers have a central capillary with the liquid and an outer capillary with the gas. The gas draws the liquid into the gas stream through induction, and the liquid is broken into a fine mist as it moves into the gas stream. In theory, the gas and liquid may be switched with the gas in the center and the liquid in the outer capillary, but generally they work better with the gas outside and the liquid inside. The first Canadian concentric patent was Canadian Patent #2405 of April 18, 1873. It was designed to deliver a better spray of oil into a burner. The design is larger but essentially the same as modern analytical nebulizers. The first one developed for spectrometers was a glass design developed by Dr. Meinhard of California in 1973.
His design enabled early ICP users to have a consistent sample introduction nebulizer, but it plugged easily. Today many companies produce glass concentrics, and since 1997, Teflon concentrics have become available. Cross flow nebulizers Cross flow nebulizers have a gas capillary set at right angles to the liquid capillary. The gas is blown across the liquid capillary and this produces a low pressure that draws the liquid into the gas stream. Generally, the suction is similar to what is produced in a concentric nebulizer. The benefit of a cross flow is that the liquid capillary can have a larger inside diameter, allowing more particles to pass through without plugging the nebulizer. The disadvantage is that the mist is usually not as fine or as consistent. Entrained nebulizers There are no analytical nebulizers at present using this technique, but some oil burners do. It was mainly used in much older designs, as newer concentrics and cross flows are much better and easier to make. V-groove nebulizers V Groove nebulizers are similar to a cross flow in that the liquid is delivered in a capillary at right angles to the gas capillary, but the liquid is poured down a vertically orientated groove that flows past a gas orifice. The gas pulls the liquid into the gas flow and forms a fine mist. These allow for very large ID liquid capillaries, but have no suction and require a pump to feed the liquid to the device. They must be correctly orientated or they do not allow the liquid to flow past the gas stream. Their mist usually has larger droplets than that of concentrics or cross flows. Parallel path nebulizers This design was developed by John Burgener of Burgener Research Inc. Here, the gas stream and sample run through the nebulizer in parallel capillaries. At the tip of the nebulizer, the liquid is pulled into the gas stream and then dispersed into the chamber as a mist. Non-induction nebulizers Enhanced parallel path nebulizers This design was developed by John Burgener of Burgener Research Inc. Here, the gas stream and sample run through the nebulizer in parallel capillaries. At the tip of the nebulizer, the liquid is pulled into the gas stream by surface tension along a spout dipping into the gas stream. This allows the gas to impact the liquid and has the liquid interact in the center of the gas flow, where the gas flow speed is highest, producing a better transfer of energy from the gas to the liquid and a finer droplet size. The Burgener Mira Mist nebulizers are the main products using the Enhanced Parallel Path method. Flow blurring nebulizers This is a new type of nebulizer which does not use induction to mix the sample and gas. Instead, pneumatic atomization is employed here, which results in the micro-mixing of fluids using a reflux cell. This means that there is a turbulent mixing of the liquid and gas, which results in high sensitivity and efficiency. The OneNeb is the only example of this sort. Piezoelectric vibrating mesh Since 2011, this variation on ultrasonic nebulizers has been available. There is a vibrating membrane which has micro holes in it. The sample enters through the back and is pushed through the holes as the membrane vibrates. This makes a fine mist with a droplet size proportional to the hole size. This method requires no gas flow, and is used in conjunction with a chamber. If the droplets are less than 5 μm, then they are too small to stick to the chamber walls and the chamber remains dry while 90–100% of the sample makes it to the torch.
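For a rough sense of the suction produced by the induction principle described earlier, the sketch below applies the incompressible Bernoulli estimate ΔP ≈ ½ρv². Real nebulizer gas jets approach sonic velocity, so this is only an order-of-magnitude illustration, and the gas density and jet speeds are assumed values rather than figures from the article.

```python
# Rough, illustrative estimate of the suction near a nebulizer gas orifice,
# using the incompressible Bernoulli relation dP ~ 0.5 * rho * v^2.
# Assumed values; real jets are near-sonic, so treat this as order-of-magnitude.
RHO_ARGON = 1.78  # kg/m^3, roughly ambient conditions (assumed)

def suction_pa(jet_velocity_m_s: float, gas_density: float = RHO_ARGON) -> float:
    """Approximate static-pressure drop (Pa) at the orifice for a given jet speed."""
    return 0.5 * gas_density * jet_velocity_m_s ** 2

for v in (100.0, 200.0, 300.0):
    print(f"{v:>5.0f} m/s jet -> ~{suction_pa(v) / 1000:.1f} kPa of suction")
```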
Chronology of analytical nebulizer development The early history of medical nebulizers is described elsewhere. The development of analytical nebulizers since the introduction of the ICP / ICP-MS is seen below: 1970s Adjustable Cross flow (US patent #4,344,574) 1974 Meinhard Concentric 1978 V-groove (by Suddendorf and Boyer) (US Patent #4,206,160) 1980 Pillar and Post (by Garbarino and Taylor) 1983 GMK Nebulizer: Glass Babington V-groove 1983 Meinhard C-type nebulizer 1983 Precision glassblowing (similar to Meinhard A-type) 1983 Jarrell Ash (Thermo) Sapphire V-groove 1983 Meddings' MAK: glass fixed cross flow 1984 Meinhard K-type: recessed inner capillary 1984 Glass Expansion begins making ICP glassware 1985 Burgener-Legere – first commercial Teflon nebulizer – V-groove – no adjustable parts 1986 Direct injection micro nebulizer by Fassel, Rice & Lawrence (US patent #4,575,609) 1986 Hildebrand Grid nebulizer Late 1980s Perkin Elmer Gem Tip cross flow 1988 CETAC Ultrasonic Nebs 1980s Cyclonic chambers 1987 Glass Expansion's first neb – the VeeSpray (ceramic V-groove) 1989 Glass Expansion first concentric – the Conikal (machined instead of glass blown) 1989 Noordermeer Glass V Groove (US patent #4,880,164) 1992 Glass Expansion – non salting Sea Spray 1993 Modified Lichte Glass V-Groove 1993 Burgener BTF – first Parallel Path Neb (US patent #5,411,208) 1994–1995 Main Burgener Parallel Path Nebs – BTS 50, BTN & T2002 Mid 1990s Perkin Elmer GemCone: Miniature V-Groove With the introduction of the ICP-MS to the laboratory, the creation of micro nebulizers became a priority in order to deliver smaller amounts of sample at lower flow rates. 1993 The Meinhard HEN (high efficiency nebulizer) was produced, which handled very low flow rates but salted and plugged easily as a result (25 times less sample than a standard Meinhard) 1997 Cetac Microconcentric Nebulizer – first Teflon concentric 50, 100, 200 or 400 μL/min 1997 Meinhard Direct Injection HEN – (DIHEN) (US Patent #6,166,379) 1999 Elemental Scientific – PFA Concentric Nebs 20, 50, 100 or 400 μL/min 1999 Burgener Micro 1: Parallel Path 2000 Burgener Micro 3: Parallel Path 2001 Burgener Mira Mist: First Enhanced Parallel Path Nebulizer (US patent #6,634,572) 2004 Epond Typhoon: Glass Concentric 2005 Ingeniatrics OneNeb: Flow Blurring Technology 2010 Epond Lucida: Teflon Micro Concentric 2012 Burgener PFA 250: PFA Micro flow Enhanced Parallel Path Nebulizer 2010 – 2013 Meinhard and Glass Expansion: Significant improvements in attachments and designs of glass concentrics. References External links https://web.archive.org/web/20131002010344/http://www.icpnebulizers.com/selection.html Aerosols Respiratory therapy Drug delivery devices Medical equipment Dosage forms
Analytical nebulizer
[ "Chemistry", "Biology" ]
2,252
[ "Pharmacology", "Drug delivery devices", "Colloids", "Medical equipment", "Aerosols", "Medical technology" ]
38,719,870
https://en.wikipedia.org/wiki/Patient-Reported%20Outcomes%20Measurement%20Information%20System
The Patient-Reported Outcomes Measurement Information System (PROMIS) provides clinicians and researchers access to reliable, valid, and flexible measures of health status that assess physical, mental, and social well-being from the patient perspective. PROMIS measures are standardized, allowing for assessment of many patient-reported outcome domains—including pain, fatigue, emotional distress, physical functioning and social role participation—based on common metrics that allow for comparisons across domains, across chronic diseases, and with the general population. Further, PROMIS tools allow for computer adaptive testing, efficiently achieving precise measurement of health status domains with few items. There are PROMIS measures for both adults and children. PROMIS was established in 2004 with funding from the National Institutes of Health (NIH) as one of the initiatives of the NIH Roadmap for Medical Research. Background and history The NIH established the Roadmap for Medical Research in 2004 to identify major opportunities for medical research and the development of new scientific expertise and technology that would lead to tangible benefits for patients. One of the programs within the Roadmap, Re-engineering the Clinical Research Enterprise, called for developing rigorous and systematic infrastructure for clinical research and for translating scientific discoveries into practical applications or tools that can be used by healthcare providers. PROMIS is one initiative within this program. The PROMIS initiative develops and evaluates standard measures for key patient-reported health indicators and symptoms. Patient-reported measures such as pain, fatigue, emotional distress, and physical functioning complement clinical measures (e.g., x-rays and lab tests) by providing healthcare providers with information about what patients are able to do and how they feel. PROMIS has worked to unify the field of patient-reported outcome (PRO) measurement through the promotion of a common, systematic measurement system broadly applicable across clinical research. PROMIS measures are intended to assess the most common or salient dimensions of patient-relevant outcomes for the widest possible range of chronic disorders and diseases; thus they are "generic" measures rather than measures specific to a given disease or condition. Structured as a multi-institutional collaboration with NIH, PROMIS has advanced the consensus process within the field of PRO measurement through the involvement of the funded research collaborative in establishing a rigorous, systematic infrastructure for measure development and psychometric evaluation. PROMIS takes advantage of developments in technology, as well as advances in the sciences of psychometric, qualitative, cognitive, and health survey research, to create new models and methods for collecting PROs for use in clinical research and evaluation of medical care. PROMIS incorporates and translates cutting-edge science into practical, easy-to-use tools for clinicians. For example, PROMIS implements Computer Adaptive Test (CAT) software which tailors the PRO assessment to the individual patient by selecting the most informative set of questions based on responses to previous questions. CAT questionnaires allow an accurate measurement of health status using the fewest possible questions.
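To illustrate the item-selection idea behind CAT, the sketch below picks the next question by maximizing Fisher information at the current ability estimate under a two-parameter logistic IRT model; the item names, parameters, and the simple selection rule are illustrative assumptions, not PROMIS's actual algorithm or item bank.

```python
# Illustrative sketch of CAT item selection under a 2-parameter logistic IRT
# model: administer the unused item with maximum Fisher information at the
# current trait estimate (theta). Item parameters below are invented.
import math

ITEM_BANK = {                      # item: (discrimination a, difficulty b)
    "fatigue_q1": (1.8, -0.5),
    "fatigue_q2": (1.2, 0.0),
    "fatigue_q3": (2.0, 0.8),
}

def probability(theta: float, a: float, b: float) -> float:
    """P(endorse item) under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability theta."""
    p = probability(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta: float, administered: set) -> str:
    """Choose the most informative remaining item at the current estimate."""
    remaining = {k: v for k, v in ITEM_BANK.items() if k not in administered}
    return max(remaining, key=lambda k: information(theta, *remaining[k]))

print(next_item(theta=0.5, administered={"fatigue_q1"}))  # -> "fatigue_q3"
```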
Assessment and expansion In November 2012, the PROMIS network held its first international strategy meeting with organizational partners from 8 European countries, China and Canada to develop a strategic action plan for the international spread of PROMIS. In early 2013, PROMIS unveiled new materials to expand its outreach to researchers and clinicians: the PROMIS e-newsletter and two instructional video series about PROMIS and Item Response Theory. In 2016, an updated PROMIS website at www.HealthMeasures.net was created to provide more information about measure selection, data collection tools, score calculation, score interpretation, and item response theory, and to support an online forum for posting questions to the PROMIS user community. Affiliates The PROMIS initiative is fulfilled by a network of primary research sites and coordinating centers that collaborate to develop the items and tools to measure PROs, and to evaluate the reliability and validity of these measures. Between 2004 and 2009, PROMIS consisted of a Statistical Coordinating Center, located at Evanston Northwestern Healthcare, and six research sites located at Duke University, University of North Carolina at Chapel Hill, University of Pittsburgh, Stanford University, Stony Brook University, and University of Washington. In 2010, NIH renewed funding for PROMIS and expanded the program to six additional research sites: Children's Hospital of Philadelphia; Boston University / University of Michigan, Ann Arbor; University of California, Los Angeles; Georgetown University; Children's Hospital Medical Center, Cincinnati; and University of Maryland, Baltimore. PROMIS also added a Network Center, operated by the American Institutes for Research, Washington, DC, as well as a Statistical Center and a Technology Center, both operated by Northwestern University. These centers provided logistical and technical support to PROMIS. In September 2014, the NIH extended its support to PROMIS through funding the National Person Centered Assessment Resource (PCAR/HealthMeasures). Three other measurement systems, Quality of Life in Neurologic Disorders (Neuro-QoL), Adult Sickle Cell Quality of Life Measurement system (ASCQ-Me), and the NIH Toolbox for the Assessment of Neurological and Behavioral Function (NIH Toolbox), are also supported through HealthMeasures. HealthMeasures aims to facilitate the dissemination, implementation, and self-sustainability of these four measurement systems. The HealthMeasures grant was awarded to Northwestern University with additional sites at American Institutes for Research, University of California, Los Angeles, University of California, San Diego, University of North Carolina, Chapel Hill, and University of Pittsburgh. Mission PROMIS uses measurement science to create a state-of-the-science assessment system for self-reported health. Create and promulgate a set of qualitative and quantitative methodological standards for development and validation of PROMIS instruments. Launch a sustainable entity that is able to continue and grow the research, development, and dissemination activities for the network. Identify and prioritize a set of research and development opportunities for PROMIS that include, but are not limited to, clinical applications. Disseminate information on PROMIS to forge strategic alliances with key individuals and organizations that will help PROMIS fulfill its vision and enhance its adoption in research, clinical practice, and policy.
Measures PROMIS has self-reported health measures in the domains of physical health, mental health and social health for adult self-reported and pediatric-self and proxy-reported health. Under each main domain (physical health, mental health, social health) are sub-domains associated with symptoms, function, affect, behavior, cognition, relationships or function. The sub-domains developed as of November 2016 are listed below. Domains that are “PROMIS Profile Domains” are included in either PROMIS Adult Profile Instruments (PROMIS-29, PROMIS-43, PROMIS-57) and Pediatric or Parent Proxy Profile Instruments (PROMIS Pediatric/Parent Proxy 25, PROMIS Pediatric/Parent Proxy 37, PROMIS Pediatric/Parent Proxy 49). There are also Sexual Function and Satisfaction Profiles for adults. Adult self-reported health domains Global Health (Mental, Physical) Physical Health Profile Domains: Physical Function Pain Intensity Pain Interference Fatigue Sleep Disturbance Additional Domains: Dyspnea (Activity Motivation, Activity Requirements, Airborne Exposure, Assistive Devices, Characteristics, Emotional Response, Functional Limitations, Task Avoidance, Severity) Gastrointestinal Symptoms (Belly Pain, Bowel Incontinence, Constipation, Diarrhea, Disrupted Swallowing, Gas and Bloating, Gastroesophageal Reflux, Nausea and Vomiting) Pain Behavior Pain Quality (Neuropathic Pain, Nociceptive Pain) Sexual Function (Erectile Function, Global Satisfaction, Interest in Sexual Activity, Lubrication, Vaginal Discomfort, Anal Discomfort, Interfering Factors, Orgasm, Therapeutic Aids, Sexual Activities, Oral Discomfort, Oral Dryness, Bother Regarding Sexual Function, Vulvar Discomfort - Clitoral, - Labial) Sleep-related (daytime) Impairment Upper Extremity Function Mental Health Profile Domains: Depression Anxiety Additional Domains: Alcohol Use, Consequences (Positive, Negative), & Expectancies (Positive, Negative) Anger Cognitive Function Psychosocial Illness Impact (Positive, Negative) Self-Efficacy (General, Manage Daily Activities, Manage Emotions, Manage Medications/Treatment, Manage Social Interactions, Manage Symptoms) Smoking (Coping Expectancies, Emotional/Sensory Expectancies, Health Expectancies, Psychosocial Expectancies, Nicotine Dependence, Social Motivations) Social Health Profile Domains: Ability to Participate in Social Roles and Activities Additional Domains: Companionship Emotional Support Informational Support Instrumental Support Satisfaction with Participation in Discretionary Social Activities (v1.0) Satisfaction with Participation in Social Roles (v1.0) Satisfaction with Social Roles and Activities (v2.0) Social Isolation Pediatric self- and proxy-reported health domains Global Health Physical Health Profile Domains: Mobility Upper Extremity Function Pain Interference Pain Intensity Fatigue Additional Domains: Asthma Impact Pain Behavior Physical Activity Physical Stress Experiences Strength Impact Mental Health Profile Domains: Depressive Symptoms Anxiety Additional Domains: Anger Cognitive Function Life Satisfaction Meaning and Purpose Positive Affect Psychological Stress Experiences Social Health Profile Domains: Peer Relationships See also PhenX Toolkit References External links PROMIS Website More detail on the PROMIS methodology Health software Health informatics Medical assessment and evaluation instruments
Patient-Reported Outcomes Measurement Information System
[ "Biology" ]
1,863
[ "Health informatics", "Medical technology" ]
24,596,774
https://en.wikipedia.org/wiki/Imbert-Fick%20law
Armand Imbert (1850-1922) and Adolf Fick (1829-1901) both demonstrated, independently of each other, that in ocular tonometry the tension of the wall can be neutralized when the application of the tonometer produces a flat surface instead of a convex one, and the reading of the tonometer (P) then equals the intraocular pressure (T), whence all forces cancel each other. This principle was used by Hans Goldmann (1899–1991), who referred to it as the Imbert-Fick "law", thus giving his newly marketed tonometer (with the help of the Haag-Streit Company) a quasi-scientific basis; it is mentioned in the ophthalmic and optometric literature, but not in any books of physics. According to Goldmann, "The law states that the pressure in a sphere filled with liquid and surrounded by an infinitely thin membrane is measured by the counterpressure which just flattens the membrane." "The law presupposes that the membrane is without thickness and without rigidity...practically without any extensibility." A sphere formed from an inelastic membrane and filled with incompressible liquid cannot be indented or applanated even when the pressure inside is zero, because a sphere contains the maximum volume with the minimum surface area. Any deformation necessarily increases surface area, which is impossible if the membrane is inelastic. The physical basis of tonometry is Newton's third law of motion: "If you press an eyeball with an object, the object is also pressed by the eyeball." The law is this: intraocular pressure = contact force/area of contact. The law assumes that the cornea is infinitely thin, perfectly elastic, and perfectly flexible. None of these assumptions are accurate. The cornea is a membrane that has thickness and offers resistance when pressed. Therefore, in Goldmann tonometry, readings are normally taken when an area of 3.06 mm diameter has been flattened. At this point the opposing forces of corneal rigidity and the tear film are approximately equal in a normal cornea and cancel each other out, allowing the pressure in the eye to be inferred from the force applied. See also Eye examination Optometry Notes Pressure Ophthalmology
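As a worked example of the pressure = force/area relation above, using the 3.06 mm applanation diameter quoted in the article (the force values themselves are assumed, illustrative numbers):

```python
# Worked example of IOP = contact force / area of contact for applanation over
# a 3.06 mm diameter circle. The applied forces are assumed illustrative values.
import math

PA_PER_MMHG = 133.322        # pascals per millimetre of mercury
GF_TO_NEWTON = 9.80665e-3    # newtons per gram-force

def iop_mmhg(force_gram_force: float, diameter_mm: float = 3.06) -> float:
    """Intraocular pressure (mmHg) from applied force and applanated diameter."""
    area_m2 = math.pi * (diameter_mm * 1e-3 / 2.0) ** 2
    pressure_pa = force_gram_force * GF_TO_NEWTON / area_m2
    return pressure_pa / PA_PER_MMHG

# With a 3.06 mm flattened circle, 1 gram-force corresponds to ~10 mmHg,
# so the force reading converts to pressure by a simple factor of ten.
print(f"{iop_mmhg(1.0):.1f} mmHg per gram-force")   # ~10.0
print(f"{iop_mmhg(1.6):.1f} mmHg at 1.6 gf")        # ~16.0
```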
Imbert-Fick law
[ "Physics" ]
474
[ "Scalar physical quantities", "Mechanical quantities", "Physical quantities", "Pressure", "Wikipedia categories named after physical quantities" ]
24,597,364
https://en.wikipedia.org/wiki/Thermomechanical%20processing
Thermomechanical processing is a metallurgical process that combines mechanical or plastic deformation processes, such as compression, forging, or rolling, with thermal processes, such as heat treatment, water quenching, and heating and cooling at various rates, into a single process. Application in rebar steel The quenching process produces a high strength bar from inexpensive low carbon steel. The process quenches the surface layer of the bar, which pressurizes and deforms the crystal structure of intermediate layers, and simultaneously begins to temper the quenched layers using the heat from the bar's core. Steel billets of 130 mm² ("pencil ingots") are heated to approximately 1200 °C to 1250 °C in a reheat furnace. Then, they are progressively rolled to reduce the billets to the final size and shape of reinforcing bar. After the last rolling stand, the billet moves through a quench box. The quenching converts the billet's surface layer to martensite, and causes it to shrink. The shrinkage pressurizes the core, helping to form the correct crystal structures. The core remains hot, and austenitic. A microprocessor controls the water flow to the quench box, to manage the temperature difference through the cross-section of the bars. The correct temperature difference assures that all processes occur, and bars have the necessary mechanical properties. The bar leaves the quench box with a temperature gradient through its cross section. As the bar cools, heat flows from the bar's centre to its surface so that the bar's heat and pressure correctly temper an intermediate ring of martensite and bainite. Finally, the slow cooling after quenching automatically tempers the austenitic core to ferrite and pearlite on the cooling bed. These bars therefore exhibit a variation in microstructure in their cross section, having strong, tough, tempered martensite in the surface layer of the bar, an intermediate layer of martensite and bainite, and a refined, tough and ductile ferrite and pearlite core. When the cut ends of TMT bars are etched in Nital (a mixture of nitric acid and methanol), three distinct rings appear: 1. A tempered outer ring of martensite, 2. A semi-tempered middle ring of martensite and bainite, and 3. A mild circular core of bainite, ferrite and pearlite. This is the desired microstructure for quality construction rebar. In contrast, lower grades of rebar are twisted when cold, work hardening them to increase their strength. However, after thermomechanical treatment (TMT), bars do not need more work hardening. As there is no twisting during TMT, no torsional stress occurs, and so torsional stress cannot form surface defects in TMT bars. Therefore, TMT bars resist corrosion better than cold twisted and deformed (CTD) bars. After thermomechanical processing, TMT bars are available in grades such as Fe 415, Fe 500, Fe 550, and Fe 600. These are much stronger than conventional CTD bars and give up to 20% more strength to a concrete structure for the same quantity of steel. References Steelmaking Metallurgical processes
Thermomechanical processing
[ "Chemistry", "Materials_science" ]
683
[ "Metallurgical processes", "Steelmaking", "Metallurgy" ]
24,602,018
https://en.wikipedia.org/wiki/Universal%20differential%20equation
A universal differential equation (UDE) is a non-trivial differential algebraic equation with the property that its solutions can approximate any continuous function on any interval of the real line to any desired level of accuracy. Precisely, a (possibly implicit) differential equation is a UDE if for any continuous real-valued function f and for any positive continuous function ε there exists a smooth solution y of the equation with |y(x) − f(x)| < ε(x) for all x. The existence of a UDE was initially regarded as an analogue of the universal Turing machine for analog computers, because of a result of Shannon that identifies the outputs of the general purpose analog computer with the solutions of algebraic differential equations. However, in contrast to universal Turing machines, UDEs do not dictate the evolution of a system, but rather set out certain conditions that any evolution must fulfill. Examples Rubel found the first known UDE in 1981; it is given by an implicit fourth-order differential equation. Duffin obtained a family of UDEs whose solutions are of class for n > 3. Briggs proposed another family of UDEs whose construction is based on Jacobi elliptic functions, where n > 3. Bournez and Pouly proved the existence of a fixed polynomial vector field p such that for any f and ε there exists some initial condition of the differential equation y' = p(y) that yields a unique and analytic solution satisfying |y(x) − f(x)| < ε(x) for all x in R. See also Zeta function universality Hölder's theorem References External links Wolfram Mathworld page on UDEs Differential equations Approximation theory
Universal differential equation
[ "Mathematics" ]
326
[ "Mathematical analysis", "Mathematical analysis stubs", "Approximation theory", "Mathematical objects", "Differential equations", "Equations", "Mathematical relations", "Approximations" ]
24,602,829
https://en.wikipedia.org/wiki/Glass%20mullion%20system
Glass mullion system or glass fin system is a glazing system in which sheets of tempered glass are suspended from special clamps, stabilized by perpendicular stiffeners of tempered glass, and joined by a structural silicone sealant or by metal patch plates. Notable examples I. M. Pei's National Airlines Sundrome at Terminal 6 of JFK Airport was noted for pioneering the use of glass mullions. The airline terminal has since been closed and demolished, after it and the adjacent TWA Flight Center were replaced by a new Terminal 5. Other buildings employing this system include the Rose Center for Earth and Space, Harvard Medical School in Boston, Massachusetts, NASDAQ Marketsite in New York and the Brooklyn Museum of Art. References External links Apex Tempered Glass Glass architecture
Glass mullion system
[ "Materials_science", "Engineering" ]
159
[ "Glass architecture", "Glass engineering and science", "Architecture stubs", "Architecture" ]
33,229,822
https://en.wikipedia.org/wiki/IsomiR
isomiRs (from iso- + miR) are miRNA sequences that have variations with respect to the reference sequence. The term was coined by Morin et al. in 2008. It has been found that isomiR expression profiles can also exhibit race, population, and sex dependencies. There are four main variation types: 5' trimming—the 5' dicing site is upstream or downstream from the reference miRNA sequence 3' trimming—the 3' dicing site is upstream or downstream from the reference miRNA sequence 3' nucleotide addition—nucleotides added to the 3' end of the reference miRNA nucleotide substitution—nucleotide changes relative to the miRNA precursor. It is thought that this may be a process similar to post-transcriptional modification. Discovery miRBase is considered to be the gold-standard miRNA database—it stores miRNA sequences detected by thousands of experiments. In this database each miRNA is associated with a miRNA precursor and with one or two mature miRNAs (-5p and -3p). In the past it had always been said that the same miRNA precursor generates the same miRNA sequences. However, the advent of deep sequencing has now allowed researchers to detect a huge variability in miRNA biogenesis, meaning that from the same miRNA precursor many different sequences can be generated that potentially have different targets, or even lead to opposite changes in mRNA expression. Biogenesis The advent of sequencing has permitted scientists to elucidate a huge landscape of new miRNAs, to increase our knowledge of the biogenesis involved and to discover putative post-transcriptional editing processes in miRNAs ignored until now. These processes mostly generate variations of the current miRNAs that are annotated in miRBase at the 3' and 5' termini and, at lower frequencies, nucleotide substitutions along the miRNA length. The variations are mainly generated by a shift of Drosha and Dicer in the cleavage site, but also by nucleotide additions at the 3'-end, resulting in new sequences different from the annotated miRNA. These were named "isomiRs" by Morin et al., 2008. IsomiRs have been well established across different metazoan species and were deeply described for the first time in human stem cells and human brain samples. Moreover, it has been proven that isomiRs are not caused by RNA degradation during sample preparation for next generation sequencing. Some studies have tried to explain the miRNA diversity by structural bases of precursors but without clear results. The functionality of adenylation or uridylation at the 3' end (3' addition isomiRs) has been related to alterations in miRNA-3'-UTR stability. Furthermore, differential expression of isomiRs has been detected during development in D. melanogaster and Hippoglossus hippoglossus L., suggesting a biological function. Trimming variants: these are possible due to slight variations by Drosha and/or Dicer Nucleotide addition: Wyman et al. have described the process of nucleotide transferases adding individual nucleotides to miRNA sequences Nucleotide substitution: there is a huge range of possible changes in such an event, some of which can be explained by known adenosine deaminase activity, such as A-to-G or C-to-U changes, in a similar way to what happens in post-transcriptional RNA editing events involving mRNA. References External links miRBase MicroRNA RNA Gene expression
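A rough sketch of how a sequencing read might be sorted into the variation types listed above is given below; the precursor sequence, coordinates, and the simplified one-variant-at-a-time logic are illustrative assumptions, not a published isomiR-calling pipeline.

```python
# Simplified sketch of isomiR classification: compare an observed read to the
# reference mature miRNA, both located on the same precursor. Sequences and
# coordinates are invented; real pipelines handle mixed and more complex cases.
def classify_isomir(precursor, ref_start, ref_seq, read_start, read_seq):
    """Return a list of variation types for a read relative to the reference."""
    variants = []
    if read_start != ref_start:
        variants.append("5' trimming")                    # shifted 5' end
    # Portion of the precursor that templates the read at its aligned position
    templated = precursor[read_start:read_start + len(read_seq)]
    end_offset = (read_start + len(read_seq)) - (ref_start + len(ref_seq))
    if read_seq == templated:
        if end_offset != 0:
            variants.append("3' trimming")                # shifted 3' dicing site
    else:
        # Count, from the 3' end, bases that do not match the precursor template
        untemplated = 0
        while (untemplated < len(read_seq)
               and read_seq[-(untemplated + 1)] != templated[-(untemplated + 1)]):
            untemplated += 1
        if untemplated and untemplated == max(end_offset, 0):
            variants.append("3' nucleotide addition")     # non-templated 3' bases
        else:
            variants.append("nucleotide substitution")    # internal mismatch
    return variants or ["canonical"]

precursor = "GGCUAAAGUGCUUAUAGUGCAGGUAGUUUUGGCAUGACUCUACUGU"
ref_start, ref_seq = 3, "UAAAGUGCUUAUAGUGCAGGUAG"
# A non-templated A appended to the 3' end is reported as a 3' addition.
print(classify_isomir(precursor, ref_start, ref_seq, 3, ref_seq + "A"))
```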
IsomiR
[ "Chemistry", "Biology" ]
720
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
33,231,853
https://en.wikipedia.org/wiki/Multipolar%20spindles
Multipolar spindles are spindle formations characteristic of cancer cells. Spindle formation is mostly driven by the aster that the centrosome forms around itself. In a mitotic cell, wherever two asters convene, a spindle forms. Mitosis consists of two independent processes: the intra-chromosomal changes and the extra-chromosomal changes (spindle formation), both of which are closely coordinated with each other. In cancer cells, it has been observed that spindle formation occurs earlier relative to the chromosomal changes. Because the prophase stage is brief, metaphase begins earlier than in normal cells. Chromosomes unable to reach the metaphase plate are stranded behind. These chromosomes still have asters attached to them and, when met with other asters, form multiple spindles. Characteristics Cells with multipolar spindles are characterized by more than two centrosomes, usually four, and sometimes have a second metaphase plate. The multiple centrosomes segregate to opposite ends of the cell and the spindles attach to the chromosomes haphazardly. When anaphase occurs in these cells, the chromosomes are separated abnormally, and this results in aneuploidy of both daughter cells. This can lead to loss of cell viability and chromosomal instability. Presence in cancer cells The presence of multipolar spindles in cancer cells is one of many differences from normal cells which can be seen under a microscope. Cancer is defined by uncontrolled cell growth, and malignant cells can undergo cell division with multipolar spindles because they can group multiple centrosomes into two spindles. These multipolar spindles are often assembled early in mitosis and rarely seen towards the later stages. Research has shown possible causes of formation of multipolar spindles. One possible cause of multipolar spindle formation involves the regulation of the protein kinase family known as Aurora kinases. Aurora kinase has two forms, which are designated Aurora kinase A and Aurora kinase B. These proteins play a key role in mitosis and are regulated by phosphorylation and degradation. Deregulation of these proteins can lead to multiple centrosome formation and aneuploidy. In some human cancers, the expression and kinase activity of Aurora kinases are up-regulated, and these kinases have been investigated as possible targets for anti-cancer drugs. References Cell cycle Oncology
Multipolar spindles
[ "Biology" ]
493
[ "Cell cycle", "Cellular processes" ]
31,691,219
https://en.wikipedia.org/wiki/Conjugate%20beam%20method
The conjugate-beam method is an engineering method to derive the slope and displacement of a beam. A conjugate beam is defined as an imaginary beam with the same dimensions (length) as the original beam, but whose load at any point is equal to the bending moment at that point divided by EI. The conjugate-beam method was developed by Heinrich Müller-Breslau in 1865. Essentially, it requires the same amount of computation as the moment-area theorems to determine a beam's slope or deflection; however, this method relies only on the principles of statics, so its application will be more familiar. The basis for the method comes from the similarity of Eq. 1 and Eq. 2 (the load–shear–moment relations, dV/dx = w and d²M/dx² = w) to Eq. 3 and Eq. 4 (the moment–slope–deflection relations, dθ/dx = M/EI and d²v/dx² = M/EI). Integrated, the equations give V = ∫w dx and M = ∫(∫w dx) dx, and correspondingly θ = ∫(M/EI) dx and v = ∫(∫(M/EI) dx) dx. Here the shear V compares with the slope θ, the moment M compares with the displacement v, and the external load w compares with the M/EI diagram. Below is a shear, moment, and deflection diagram. An M/EI diagram is a moment diagram divided by the beam's Young's modulus and moment of inertia. To make use of this comparison we will now consider a beam having the same length as the real beam, but referred to here as the "conjugate beam." The conjugate beam is "loaded" with the M/EI diagram derived from the load on the real beam. From the above comparisons, we can state two theorems related to the conjugate beam: Theorem 1: The slope at a point in the real beam is numerically equal to the shear at the corresponding point in the conjugate beam. Theorem 2: The displacement of a point in the real beam is numerically equal to the moment at the corresponding point in the conjugate beam. Conjugate-beam supports When drawing the conjugate beam it is important that the shear and moment developed at the supports of the conjugate beam account for the corresponding slope and displacement of the real beam at its supports, a consequence of Theorems 1 and 2. For example, as shown below, a pin or roller support at the end of the real beam provides zero displacement but a nonzero slope. Consequently, from Theorems 1 and 2, the conjugate beam must be supported by a pin or a roller, since this support has zero moment but has a shear or end reaction. When the real beam is fixed-supported, both the slope and displacement are zero. Here the conjugate beam has a free end, since at this end there is zero shear and zero moment. Corresponding real and conjugate supports are shown below. Note that, as a rule, neglecting axial forces, statically determinate real beams have statically determinate conjugate beams, while statically indeterminate real beams have unstable conjugate beams. Although this occurs, the M/EI loading will provide the necessary "equilibrium" to hold the conjugate beam stable. Procedure for analysis The following procedure provides a method that may be used to determine the slope and displacement at a point on the elastic curve of a beam using the conjugate-beam method. Conjugate beam Draw the conjugate beam for the real beam. This beam has the same length as the real beam and has corresponding supports as listed above. In general, if the real support allows a slope, the conjugate support must develop a shear; and if the real support allows a displacement, the conjugate support must develop a moment. The conjugate beam is loaded with the real beam's M/EI diagram. 
This loading is assumed to be distributed over the conjugate beam and is directed upward when M/EI is positive and downward when M/EI is negative. In other words, the loading always acts away from the beam. Equilibrium Using the equations of statics, determine the reactions at the conjugate beam's supports. Section the conjugate beam at the point where the slope θ and displacement Δ of the real beam are to be determined. At the section, show the unknown shear V' and moment M', which are equal to θ and Δ, respectively, for the real beam. In particular, if these values are positive, the slope is counterclockwise and the displacement is upward. See also Cantilever method References Beam theory Mechanics Structural analysis
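To make Theorems 1 and 2 concrete, the following is a minimal worked sketch (not taken from the article): a cantilever of length L carrying a point load P at its free end, with constant EI. The real fixed end becomes a free end in the conjugate beam and vice versa, and the conjugate beam carries a triangular M/EI load of peak value PL/EI at the end corresponding to the real fixed support. Computing the shear and moment of the conjugate beam at the end corresponding to the real free tip gives

\[
\theta_{\mathrm{tip}} = V'_{\mathrm{conjugate}} = \tfrac{1}{2}\,\frac{PL}{EI}\,L = \frac{PL^{2}}{2EI},
\qquad
\Delta_{\mathrm{tip}} = M'_{\mathrm{conjugate}} = \frac{PL^{2}}{2EI}\cdot\frac{2L}{3} = \frac{PL^{3}}{3EI},
\]

which reproduces the standard cantilever tip slope and deflection, illustrating Theorems 1 and 2.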
Conjugate beam method
[ "Physics", "Engineering" ]
919
[ "Structural engineering", "Structural analysis", "Mechanics", "Mechanical engineering", "Aerospace engineering" ]
31,691,263
https://en.wikipedia.org/wiki/Euler%27s%20pump%20and%20turbine%20equation
The Euler pump and turbine equations are the most fundamental equations in the field of turbomachinery. These equations govern the power, efficiencies and other factors that contribute to the design of turbomachines. With the help of these equations the head developed by a pump and the head utilised by a turbine can be easily determined. As the name suggests these equations were formulated by Leonhard Euler in the eighteenth century. These equations can be derived from the moment of momentum equation when applied for a pump or a turbine. Conservation of angular momentum A consequence of Newton's second law of mechanics is the conservation of the angular momentum (or the “moment of momentum”) which is fundamental to all turbomachines. Accordingly, the change of the angular momentum is equal to the sum of the external moments. The variation of angular momentum at inlet and outlet, an external torque and friction moments due to shear stresses act on an impeller or a diffuser. Since no pressure forces are created on cylindrical surfaces in the circumferential direction, it is possible to write: (1.13) Velocity triangles The color triangles formed by velocity vectors u,c and w are called velocity triangles and are helpful in explaining how pumps work. and are the absolute velocities of the fluid at the inlet and outlet respectively. and are the relative velocities of the fluid with respect to the blade at the inlet and outlet respectively. and are the velocities of the blade at the inlet and outlet respectively. is angular velocity. Figures 'a' and 'b' show impellers with backward and forward-curved vanes respectively. Euler's pump equation Based on Eq.(1.13), Euler developed the equation for the pressure head created by an impeller: (1) (2) Yth : theoretical specific supply; Ht : theoretical head pressure; g: gravitational acceleration For the case of a Pelton turbine the static component of the head is zero, hence the equation reduces to: Usage Euler’s pump and turbine equations can be used to predict the effect that changing the impeller geometry has on the head. Qualitative estimations can be made from the impeller geometry about the performance of the turbine/pump. This equation can be written as rothalpy invariance: where is constant across the rotor blade. See also Euler equations (fluid dynamics) List of topics named after Leonhard Euler Rothalpy References Turbines Pumps Gas compressors Ventilation fans Fluid dynamics Leonhard Euler
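The moment-of-momentum relation labelled (1.13) and the pump equations labelled (1) and (2) are not reproduced in the text above. As a sketch of their standard textbook forms (the subscript convention, 1 for inlet and 2 for outlet with c_u the tangential component of the absolute velocity, is assumed rather than taken from the article):

\[
M = \rho\, Q \left( r_{2} c_{u2} - r_{1} c_{u1} \right),
\]
\[
Y_{th} = u_{2} c_{u2} - u_{1} c_{u1},
\qquad
H_{t} = \frac{Y_{th}}{g} = \frac{u_{2} c_{u2} - u_{1} c_{u1}}{g}.
\]

Here M is the torque acting on the impeller, ρ the fluid density and Q the volumetric flow rate; multiplying the first relation by the angular velocity ω and using u = ωr gives the specific supply Y_th in the second.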
Euler's pump and turbine equation
[ "Physics", "Chemistry", "Engineering" ]
522
[ "Pumps", "Turbomachinery", "Gas compressors", "Chemical engineering", "Turbines", "Physical systems", "Hydraulics", "Piping", "Fluid dynamics" ]
31,692,171
https://en.wikipedia.org/wiki/False%20sunset
A false sunset can refer to one of two related atmospheric optical phenomena, in which either (1) the Sun appears to be setting into or to have set below the horizon while it is actually still some height above the horizon, or (2) the Sun has already set below the horizon, but still appears to be on or above the horizon (thus representing the reverse of a false sunrise). Depending on circumstances, these phenomena can give the impression of an actual sunset. There are several atmospheric conditions which may cause the effect, most commonly a type of halo, caused by the reflection and refraction of sunlight by small ice crystals in the atmosphere, often in the form of cirrostratus clouds. Depending on which variety of "false sunset" is meant, the halo has to appear either above the Sun (which itself is hidden below the horizon) or below it (in which case the real Sun is obstructed from view, e.g. by clouds or other objects), making the upper and lower tangent arc, upper and lower sun pillars and the subsun the most likely candidates. Similarly to a false sunrise, other atmospheric circumstances may be responsible for the effect as well, such as simple reflection of the sunlight off the bottom of the clouds, or a type of mirage like the Novaya Zemlya effect. See also False sunrise Halo (optical phenomenon) Lower tangent arc Mirage Novaya Zemlya effect Subsun Sun pillar Upper tangent arc References Atmospheric optical phenomena
False sunset
[ "Physics" ]
299
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
31,693,647
https://en.wikipedia.org/wiki/Photooxygenation
A photooxygenation is a light-induced oxidation reaction in which molecular oxygen is incorporated into the product(s). Initial research interest in photooxygenation reactions arose from Oscar Raab's observations in 1900 that the combination of light, oxygen and photosensitizers is highly toxic to cells. Early studies of photooxygenation focused on oxidative damage to DNA and amino acids, but recent research has led to the application of photooxygenation in organic synthesis and photodynamic therapy. Photooxygenation reactions are initiated by a photosensitizer, which is a molecule that enters an excited state when exposed to light of a specific wavelength (e.g. dyes and pigments). The excited sensitizer then reacts with either a substrate or ground state molecular oxygen, starting a cascade of energy transfers that ultimately result in an oxygenated molecule. Consequently, photooxygenation reactions are categorized by the type and order of these intermediates (as type I, type II, or type III reactions). Background Terminology Photooxygenation reactions are easily confused with a number of processes baring similar names (i.e. photosensitized oxidation). Clear distinctions can be made based on three attributes: oxidation, the involvement of light, and the incorporation of molecular oxygen into the products: Sensitizers Sensitizers (denoted "Sens") are compounds, such as fluorescein dyes, methylene blue, and polycyclic aromatic hydrocarbons, which are able to absorb electromagnetic radiation (usually in the visible range of the spectrum) and eventually transfer that energy to molecular oxygen or the substrate of photooxygenation process. Many sensitizers, both naturally occurring and synthetic, rely on extensive aromatic systems to absorb light in the visible spectrum. When sensitizers are excited by light, they reach a singlet state, 1Sens*. This singlet is then converted into a triplet state (which is more stable), 3Sens*, via intersystem crossing. The 3Sens* is what reacts with either the substrate or 3O2 in the three types of photooxygenation reactions. Sens ->[hv] {^1Sens^\ast} -> {^3Sens^\ast} States of molecular oxygen In classical Lewis structures, molecular oxygen, O2, is depicted as having a double bond between the two oxygen atoms. However, the molecular orbitals of O2 are actually more complex than Lewis structures seem to suggest. The highest occupied molecular orbital (HOMO) of O2 is a pair of degenerate antibonding π orbitals, π2px* and π2py*, which are both singly occupied by spin unpaired electrons. These electrons are the cause of O2 being a triplet diradical in the ground state (indicated as 3O2). While many stable molecules’ HOMOs consist of bonding molecular orbitals and therefore require a moderate energy jump from bonding to antibonding to reach their first excited state, the antibonding nature of molecular oxygen’s HOMO allows for a lower energy gap between its ground state and first excited state. This makes excitation of O2 a less energetically restrictive process. In the first excited state of O2, a 22 kcal/mol energy increase from the ground state, both electrons in the antibonding orbitals occupy a degenerate π* orbital, and oxygen is now in a singlet state (indicated as 1O2). 1O2 is very reactive with a lifetime between 10-100 μs. Types of photooxygenation The three types of photooxygenation reactions are distinguished by the mechanisms that they proceed through, as they are capable of yielding different or similar products depending on environmental conditions. 
Type I and II reactions proceed through neutral intermediates, while type III reactions proceed through charged species. The absence or presence of 1O2 is what distinguishes type I and type II reactions, respectively. Type I In type I reactions, the photoactivated 3Sens* interacts with the substrate to yield a radical substrate, usually through the homolytic bond breaking of a hydrogen bond on the substrate. This substrate radical then interacts with 3O2 (ground state) to yield a substrate-O2 radical. Such a radical is generally quenched by abstracting a hydrogen from another substrate molecule or from the solvent. This process allows for chain propagation of the reaction. Example: Oxygen trapping of diradical intermediates Type I photooxygenation reactions are frequently used in the process of forming and trapping diradical species. Mirbach et al. reported on one such reaction in which an azo compound is lysed via photolysis to form the diradical hydrocarbon and then trapped in a stepwise fashion by molecular oxygen: Type II In type II reactions, the 3Sens* transfers its energy directly with 3O2 via a radiation-less transition to create 1O2. 1O2 then adds to the substrate in a variety of ways including: cycloadditions (most commonly [4+2]), addition to double bonds to yield 1,2-dioxetanes, and ene reactions with olefins (the Schenck ene reaction). Example: precursor to prostaglandin synthesis The [4+2] cycloaddition of singlet oxygen to cyclopentadiene to create cis-2-cyclopentene-1,4-diol is a common step involved in the synthesis of prostaglandins. The initial addition singlet oxygen, through the concerted [4+2] cycloaddition, forms an unstable endoperoxide. Subsequent reduction of the peroxide bound produces the two alcohol groups. Type III In type III reactions, there is an electron transfer that occurs between the 3Sens* and the substrate resulting in an anionic Sens and a cationic substrate. Another electron transfer then occurs where the anionic Sens transfers an electron to 3O2 to form the superoxide anion, O2−. This transfer returns the Sens to its ground state. The superoxide anion and cationic substrate then interact to form the oxygenated product. Example: indolizine photooxygenation Photooxygenation of indolizines (heterocyclic aromatic derivates of indole) has been investigated in both mechanistic and synthetic contexts. Rather than proceeding through a Type I or Type II photooxygenation mechanism, some investigators have chosen to use 9,10-dicyanoanthracene (DCA) as a photosensitizer, leading to the reaction of an indolizine derivative with the superoxide anion radical. Note that the reaction proceeds through an indolizine radical cation intermediate that has not been isolated (and thus is not depicted): Applications Organic synthesis All 3 types of photooxygenation have been applied in the context of organic synthesis. In particular, type II photooxygenations have proven to be the most widely used (due to the low amount of energy required to generate singlet oxygen) and have been described as "one of the most powerful methods for the photochemical oxyfunctionalization of organic compounds." These reactions can proceed in all common solvents and with a broad range of sensitizers. Many of the applications of type II photooxygenations in organic synthesis come from Waldemar Adam's investigations into the ene-reaction of singlet oxygen with acyclic alkenes. 
Through the cis effect and the presence of appropriate steering groups the reaction can even provide high regioselectivity and diastereoselectivity, two valuable forms of stereochemical control. Photodynamic therapy Photodynamic therapy (PDT) uses photooxygenation to destroy cancerous tissue. A photosensitizer is injected into the tumor, and the tissue is then exposed to specific wavelengths of light to excite the Sens. The excited Sens generally follows a type I or type II photooxygenation mechanism, resulting in oxidative damage to the cells. Extensive oxidative damage will kill the tumor cells. In addition, oxidative damage to nearby blood vessels causes local agglomeration and cuts off the nutrient supply to the tumor, thus starving it. An important consideration when selecting the Sens to be used in PDT is the specific wavelength of light the Sens will absorb to reach an excited state. Since the maximum penetration of tissue is achieved around wavelengths of 800 nm, selecting a Sens that absorbs around this range is advantageous, as it allows PDT to be effective on tumors beneath the outermost layer of the dermis. The window around 800 nm is most effective at penetrating tissue because at shorter wavelengths the light starts to be scattered by the macromolecules of cells, while at longer wavelengths water molecules begin to absorb the light and convert it into heat. References Reaction mechanisms Organic reactions Photochemistry
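As a compact summary of the pathways described in the sections above (a sketch using the article's species labels), the sensitizer photoexcitation and the Type II route can be written as

\[
\mathrm{Sens} \xrightarrow{h\nu} {}^{1}\mathrm{Sens}^{*} \xrightarrow{\mathrm{ISC}} {}^{3}\mathrm{Sens}^{*},
\qquad
{}^{3}\mathrm{Sens}^{*} + {}^{3}\mathrm{O}_{2} \longrightarrow \mathrm{Sens} + {}^{1}\mathrm{O}_{2},
\qquad
{}^{1}\mathrm{O}_{2} + \mathrm{substrate} \longrightarrow \text{oxygenated product}.
\]

In the Type I route the triplet sensitizer instead reacts first with the substrate (typically by hydrogen abstraction) to give a substrate radical that is then trapped by ground-state triplet oxygen, while in the Type III route electron transfer produces a substrate radical cation and, via the reduced sensitizer, the superoxide anion O2−.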
Photooxygenation
[ "Chemistry" ]
1,900
[ "Reaction mechanisms", "Organic reactions", "nan", "Physical organic chemistry", "Chemical kinetics" ]
31,694,592
https://en.wikipedia.org/wiki/Wozencraft%20ensemble
In coding theory, the Wozencraft ensemble is a set of linear codes in which most of codes satisfy the Gilbert-Varshamov bound. It is named after John Wozencraft, who proved its existence. The ensemble is described by , who attributes it to Wozencraft. used the Wozencraft ensemble as the inner codes in his construction of strongly explicit asymptotically good code. Existence theorem Theorem: Let For a large enough , there exists an ensemble of inner codes of rate , where , such that for at least values of has relative distance . Here relative distance is the ratio of minimum distance to block length. And is the q-ary entropy function defined as follows: In fact, to show the existence of this set of linear codes, we will specify this ensemble explicitly as follows: for , define the inner code Here we can notice that and . We can do the multiplication since is isomorphic to . This ensemble is due to Wozencraft and is called the Wozencraft ensemble. For all , we have the following facts: For any So is a linear code for every . Now we know that Wozencraft ensemble contains linear codes with rate . In the following proof, we will show that there are at least those linear codes having the relative distance , i.e. they meet the Gilbert-Varshamov bound. Proof To prove that there are at least number of linear codes in the Wozencraft ensemble having relative distance , we will prove that there are at most number of linear codes having relative distance i.e., having distance Notice that in a linear code, the distance is equal to the minimum weight of all codewords of that code. This fact is the property of linear code. So if one non-zero codeword has weight , then that code has distance Let be the set of linear codes having distance Then there are linear codes having some codeword that has weight Lemma. Two linear codes and with distinct and non-zero, do not share any non-zero codeword. Proof. Suppose there exist distinct non-zero elements such that the linear codes and contain the same non-zero codeword Now since for some and similarly for some Moreover since is non-zero we have Therefore , then and This implies , which is a contradiction. Any linear code having distance has some codeword of weight Now the Lemma implies that we have at least different such that (one such codeword for each linear code). Here denotes the weight of codeword , which is the number of non-zero positions of . Denote Then: So , therefore the set of linear codes having the relative distance has at least elements. See also Hamming bound Justesen code Linear code References . . External links Lecture 28: Justesen Code. Coding theory's course. Prof. Atri Rudra. Lecture 9: Bounds on the Volume of a Hamming Ball. Coding theory's course. Prof. Atri Rudra. Coding Theory's Notes: Gilbert-Varshamov Bound. Venkatesan Guruswami Error detection and correction
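Several of the displayed formulas in this entry did not survive formatting. As a sketch of the standard presentation (notation assumed rather than reproduced from the article), the q-ary entropy function is

\[
H_q(x) = x \log_q(q-1) - x \log_q x - (1-x)\log_q(1-x),
\]

and, identifying \(\mathbb{F}_q^{k}\) with the extension field \(\mathbb{F}_{q^{k}}\), the Wozencraft ensemble in its most common form consists of the rate-1/2 inner codes

\[
C_{\alpha} = \{\, (x,\; \alpha x) : x \in \mathbb{F}_{q^{k}} \,\}, \qquad \alpha \in \mathbb{F}_{q^{k}} \setminus \{0\},
\]

each a linear code of block length 2k over \(\mathbb{F}_q\). The existence theorem then states that, for sufficiently large k, all but a small fraction of the values of α give a code C_α with relative distance at least \(H_q^{-1}(1/2 - \varepsilon)\).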
Wozencraft ensemble
[ "Engineering" ]
624
[ "Error detection and correction", "Reliability engineering" ]
31,694,741
https://en.wikipedia.org/wiki/Gilbert%E2%80%93Varshamov%20bound%20for%20linear%20codes
The Gilbert–Varshamov bound for linear codes is related to the general Gilbert–Varshamov bound, which gives a lower bound on the maximal number of elements in an error-correcting code of a given block length and minimum Hamming weight over a field . This may be translated into a statement about the maximum rate of a code with given length and minimum distance. The Gilbert–Varshamov bound for linear codes asserts the existence of q-ary linear codes for any relative minimum distance less than the given bound that simultaneously have high rate. The existence proof uses the probabilistic method, and thus is not constructive. The Gilbert–Varshamov bound is the best known in terms of relative distance for codes over alphabets of size less than 49. For larger alphabets, algebraic geometry codes sometimes achieve an asymptotically better rate vs. distance tradeoff than is given by the Gilbert–Varshamov bound. Gilbert–Varshamov bound theorem Theorem: Let . For every and there exists a -ary linear code with rate and relative distance Here is the q-ary entropy function defined as follows: The above result was proved by Edgar Gilbert for general codes using the greedy method. Rom Varshamov refined the result to show the existence of a linear code. The proof uses the probabilistic method. High-level proof: To show the existence of the linear code that satisfies those constraints, the probabilistic method is used to construct the random linear code. Specifically, the linear code is chosen by picking a generator matrix whose entries are randomly chosen elements of . The minimum Hamming distance of a linear code is equal to the minimum weight of a nonzero codeword, so in order to prove that the code generated by has minimum distance , it suffices to show that for any . We will prove that the probability that there exists a nonzero codeword of weight less than is exponentially small in . Then by the probabilistic method, there exists a linear code satisfying the theorem. Formal proof: By using the probabilistic method, to show that there exists a linear code that has a Hamming distance greater than , we will show that the probability that the random linear code having the distance less than is exponentially small in . The linear code is defined by its generator matrix, which we choose to be a random generator matrix; that is, a matrix of elements which are chosen independently and uniformly over the field . Recall that in a linear code, the distance equals the minimum weight of a nonzero codeword. Let be the weight of the codeword . So The last equality follows from the definition: if a codeword belongs to a linear code generated by , then for some vector . By Boole's inequality, we have: Now for a given message we want to compute Let be a Hamming distance of two messages and . Then for any message , we have: . Therefore: Due to the randomness of , is a uniformly random vector from . So Let be the volume of a Hamming ball with the radius . Then: By choosing , the above inequality becomes Finally , which is exponentially small in n, that is what we want before. Then by the probabilistic method, there exists a linear code with relative distance and rate at least , which completes the proof. Comments The Varshamov construction above is not explicit; that is, it does not specify the deterministic method to construct the linear code that satisfies the Gilbert–Varshamov bound. 
A naive approach is to search over all generator matrices of size over the field to check if the linear code associated to achieves the predicted Hamming distance. This exhaustive search requires exponential runtime in the worst case. There also exists a Las Vegas construction that takes a random linear code and checks if this code has good Hamming distance, but this construction also has an exponential runtime. For sufficiently large non-prime q and for certain ranges of the variable δ, the Gilbert–Varshamov bound is surpassed by the Tsfasman–Vladut–Zink bound. See also Gilbert–Varshamov bound due to Gilbert construction for the general code Hamming bound Probabilistic method References Lecture 11: Gilbert–Varshamov Bound. Coding Theory Course. Professor Atri Rudra Lecture 9: Bounds on the Volume of Hamming Ball. Coding Theory Course. Professor Atri Rudra Coding Theory Notes: Gilbert–Varshamov Bound. Venkatesan Guruswami Coding theory
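The quantified statement of the theorem lost its formulas in the text above. As a sketch of the standard form (notation assumed), with the q-ary entropy function

\[
H_q(\delta) = \delta \log_q(q-1) - \delta \log_q \delta - (1-\delta)\log_q(1-\delta),
\]

the bound asserts that for every \(0 \le \delta < 1 - 1/q\) and \(0 < \varepsilon \le 1 - H_q(\delta)\) there exists, for sufficiently large n, a q-ary linear code with rate

\[
R \ge 1 - H_q(\delta) - \varepsilon
\]

and relative distance at least δ. The key estimate used at the end of the probabilistic proof is the Hamming-ball volume bound

\[
\mathrm{Vol}_q(n, \delta n) \le q^{H_q(\delta)\, n}, \qquad 0 \le \delta \le 1 - 1/q.
\]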
Gilbert–Varshamov bound for linear codes
[ "Mathematics" ]
926
[ "Discrete mathematics", "Coding theory" ]
37,273,576
https://en.wikipedia.org/wiki/Well%20cementing
Well cementing is the process of introducing cement to the annular space between the well-bore and casing or to the annular space between two successive casing strings. Personnel who conduct this job are called "Cementers". Cementing Principle To support the vertical and radial loads applied to the casing Isolate porous formations from the producing zone formations Exclude unwanted sub-surface fluids from the producing interval Protect casing from corrosion Resist chemical deterioration of cement Confine abnormal pore pressure To increase the possibility to hit the target Cement is introduced into the well by means of a cementing head. It helps in pumping cement between the running of the top and bottom plugs. The most important function of cementing is to achieve zonal isolation. Another purpose of cementing is to achieve a good cement-to-pipe bond. Too low an effective confining pressure may cause the cement to become ductile. For cement, one thing to note is that there is no correlation between the shear and compressive strength. Another fact to note is that cement strength ranges between 1000 and 1800 psi, and for reservoir pressures > 1000 psi; this means that the pipe cement bond will fail first. This would lead to the development of micro-annuli along the pipe. Cement Classes A. 0–6000 ft used when special properties are not required. B. 0–6000 ft used when conditions require moderate to high sulfate resistance C. 0–6000 ft used when conditions require high early strength D. 6000–10000 ft used under moderately high temperatures and pressures E. 10000–14000 ft used under conditions of high temperatures and pressures F. 10000–16000 ft used under conditions of extremely high temperatures and pressures G. 0–8000 ft can be used with accelerators and retarders to cover a wide range of well depths and temperatures. H. 0–8000 ft can be used with accelerators and retarders to cover a wide range of well depths and temperatures. J. 12000–16000 ft can be used under conditions of extremely high temperatures and pressures or can be mixed with accelerators and retarders to cover a range of well depth and temperatures. API cement grades A, B and C correspond to ASTM type I, II and III. Cement parameters Particle size distribution. Distribution of silicate and aluminate phases. Reactivity of hydrating phases. Gypsum/hemihydrates ratio and total sulphate content. Free alkali content. Chemical nature quantity and specific surface are of initial hydration products. Given the multitude of cement parameters the best and most thorough and practical method of designing a cement blend is through laboratory testing. The tests should be conducted on a sample that represents the cement to be used on the job site. Additives and mechanism of action There are 8 general categories of additives. Accelerators reduce setting time and increases the rate of compressive strength build up. Retarders extend the setting time. Extenders lower the density Weighting Agents increase density. Dispersants reduce viscosity. Fluid loss control agents. Lost circulation control agents. Specialty agents. Accelerators Can be added to shorten the setting time or to accelerate the hardening process. Calcium chloride, under the right conditions, tends to improve the compressive strength and significantly reduces the thickening and setting time. Used in concentrations of up to 4.0%. The mechanism of this process is debated, but there are four major theories put forward. 
It affects the hydration phase by one of the following theories: Chlorine () ions enhance the formation of ettingite (crystalline) Tenoutasse 1978. Increases the hydration of Aluminate phase/gypsum system. Traettenber & Gratten Bellow 1975. Accelerates the hydration n of C3S. Stein 1961 Changes the C-S-H structure. Controls the diffusion of water and ionic species. C-S-H gel has a higher area and will react faster. Diffusion of the chloride ions; Cl− ions diffuse into the C-S-H gel faster, this process producing the precipitation of portlandite sooner. The smaller size of the Cl− ions causes a greater tendency to diffuse into the C-S-H membrane. Eventually the C-S-H membrane bursts and the hydration process is accelerated. Changes the aqueous phase composition. Calcium chloride also produces a high amount of heat during hydration. This heat could accelerate the hydration process. This heat causes the casing to expand and contract as it dissipates. The differing rates of expansion and contraction could result in the casing pulling away from the cement and lead to the formation of micro-annuli. It also has the ability to affect the cement rheology, the compressive strength development, produce shrinkage by 10–15%, increases the permeability with time, and lowers the sulphate resistance. Retarders They work by one of 4 main theories; Adsorption theory: the retarder is adsorbed & inhibits water content. Precipitation theory: reacts with aqueous phase to form an impermeable and insoluble layer around the cement grains. Nucleation theory: retarder poisons the hydration product and prevents future growth. Complexation theory: Ca+ ions are chelated by the retarder. A nucleus can't be properly formed. Lignosulphonates: Wood pulp derived polymers. Effective in all Portland cements and added in concentrations of 0.1% to 1.5% BWOC. It absorbs into the C-S-H gel and causes a change of morphology to a more impermeable structure. Hydroxycarboxylic Acids – They have hydroxyl carboxyl groups in their molecular structure. Below 93 °C they can cause over-retardation. They are efficient to a temperature of 150 °C. One acid used in citric acid with an effective concentration of 0.1% to 0.3% BWOC. Saccharide Compounds: sugars are excellent retarders of Portland cement. Such compounds are not commonly used due to the degree of retardation being very sensitive to variation of concentration. It also depends on the compound's susceptibility to alkaline hydrolysis. Cellulose Derivatives: Polysaccharides derived from wood or vegetable matter, and are stable to the alkali conditions of the cement slurry. Organophosphates: Alkylene phosphonic acids. Inorganic Compounds: Acids and accompanying salts Sodium chloride, used in concentrations of up to 5.0% and with bottom hole temperatures less than 160 deg F. It will improve compressive strength and reduce thickening and setting time. Oxides of zinc and lead. Extenders Reduce slurry density – reduces hydrostatic pressure during cementing. Increases slurry yield – reduces the amount of cement required to produce a given volume. Water extenders – Allow/facilitate the addition of water to help extend the cement blend/slurry. 
Low-density aggregates – Materials with densities less than Portland cement (3.15 g/cm3) Hollow Glass Microspheres - engineered high strength (unicellular) low density (average true densities as low as 0.3 g/cc for the medium strength versions), non porous hollow glass spheres, usually below 40 μm in average particle size, enable hydraulic cement slurries as low as 8 PpG (960 Kg/m^3) Gaseous extenders – Nitrogen or air can be used to prepare foam. Clays – Hydrous aluminum silicates. Most common is bentonite (85% mineral clay smectite). Can be used to obtain a cement of density 11.5 to 15.0 ppg, with concentrations up to 20%. Used with an API ratio of 5.3% water to 1.0% bentonite. Bentonite – this is added in conjunction with additional water, used for specific weight control but makes poor cement. Pozzolan – finely ground pumice of fly ash. Pozzolan costs very little, but does not achieve much weight reduction of the slurry. Diatomaceous earth – also requires additional water to be added. Properties are similar to those of bentonite. Silica – α quartz and condensed silica fume. α quartz is used to prevent strength retrogression in thermal wells. Silica fume (micro fume) is highly reactive, and is regarded as the most effective pozzolanic material available. The high surface area increases the water demand to get pumpable slurry. Such a mixture can produce a cement slurry as low as 11.0 ppg. Normal concentration = 15% BWOC but can be as high as 28% BWOC. Can sometimes be used to prevent annular fluid migration. Expanded Perlite—Used to reduce the weight as water is added with its addition. Without bentonite the perlite separates and floats to the upper part of the slurry. Can be used to achieve a slurry weight as low as 12.0ppg. Bentonite in concentrations of 2–4% is also added to prevent segregation of particles and slurry. Gilsonite – Used to obtain slurry weights as low as 12.0ppg. In high concentrations, mixing is a problem. Powdered coal – Can be used to obtain a slurry with a density as low as 11.9ppg, 12.5–25 lbs per sack are usually added. Particulate materials Uses latex additives to achieve fluid loss. Emulsion polymers are supplied as suspensions of polymer particles. They contain about 50% solids. Such particles can physically plug the pores in the filter cake. Water-soluble polymers They increase the viscosity of the aqueous phase and decrease the filter cake permeability. Cellulose derivatives Organic proteins (polypeptides). Not used above temperatures of 93 °C. Non-ionic synthetic polymers Can lower fluid loss rates from 500 ml/30 min to 20 ml/30 min. There are also anionic synthetic polymers and cationic polymers. Bridging agents The addition of materials that can physically bridge fractured or weak zones. E.g. gilsonite and cellophane flakes added in quantities of 0.125–0.500 lbs/sack. Thixotropic Cement These are cement slurries that upon entering the formation begin to gel and eventually become self-supporting. References Marca, C. (1990). Remedial Cementing. Sugar Land, Texas: Schlumberger Educational Services. Nelson, E. B. (1990). Well Cementing. (E. B. Nelson, Ed.) Sugar Land Texas 77478: Schlumberger Educational Services. Rae, P. (1990). Cement Job Design. Sugar Land, Texas: Schulumberger Educational Services. External links How Does Cementing Work? Cementing Operations: An Integrated Approach Oil wells
Well cementing
[ "Chemistry" ]
2,275
[ "Petroleum technology", "Oil wells" ]
37,276,553
https://en.wikipedia.org/wiki/ACAMPs
Apoptotic-cell associated molecular patterns (ACAMPs) are molecular markers present on cells which are going through apoptosis, i.e. programmed cell death (similarly, Pathogen-associated molecular patterns (PAMPs) are markers of invading pathogens and Damage-associated molecular patterns (DAMPs) are markers of damaged tissue). The term was used for the first time by C. D. Gregory in 2000. Recognition of these patterns by the pattern recognition receptors (PRRs) of phagocytes then leads to phagocytosis of the apoptotic cell. These patterns include eat-me signals on the apoptotic cells, loss of don’t-eat-me signals on viable cells and come-get-me signals (also find-me signals)) secreted by the apoptotic cells in order to attract phagocytes (mostly macrophages and immature dendritic cells). Thanks to these markers, apoptotic cells, unlike necrotic cells, do not trigger the unwanted immune response. Eat-me signals Eat-me signals mark the apoptotic cells for phagocytes which can subsequently engulf them and actively prevent the inflammation. Various molecular markers can serve as eat-me signals, particularly a change in composition of the cell membrane, modifications of molecules on the cell surface, changed charge on the plasma membrane, or indirectly the extracellular bridging molecules. Cell membrane composition Deposition of different phospholipids in the phospholipid bilayer of the cell membrane is strictly asymmetric. On a viable cell, phosphatidylserine is only present in the inner layer of the cell membrane – this is maintained by aminophospholipid translocase. During apoptosis, the phospholipid scrambling activity occurs and the aminophospholipid translocase activity is reduced. Consequently, the phosphatidylserine content in the outer leaflet of the membrane is quickly increased. It is then recognized by one or more receptors of the phagocytes. The phosphatidylserine molecules can also be oxidized and contribute to the induction of engulfment. Surface molecules Some molecules naturally present on cells can also work as eat-me signals after certain modifications. The externalized phospholipids can be oxidized and recognized by scavenger receptors of the phagocytes. Similarly, adhesion molecule ICAM3, normally recognized by macrophage integrins, is after alteration bound by macrophage CD14. Additionally, some intracellular molecules are displayed on the cell surface after induction of the apoptotic program to ease the recognition. As an example, annexin I is externalized in the same locations as phosphatidylserine and helps with clustering phagocytic phosphatidylserine receptors around the apoptotic cell. Another externalized molecule marking apoptotic cells is calreticulin. Generally, the ability of apoptotic cells to change their charge with polyanionic structures marks them as a target for phagocytosis. Extracellular bridging molecules Extracellular bridging molecules are serum proteins which facilitate connection between apoptotic cell and phagocyte. They can also be seen as secreted forms of pattern recognition receptors (PRRs).7 These include collectins, components of complement pathways (e.g. C1q, C3b) and other molecules found in extracellular space. Collectins (e.g. mannose-binding lectin and surfactant protein A) bind the altered surface sugars on apoptotic cell and enable easier uptake by phagocytes which recognize their complex with calreticulin. 
Besides complement particles C1q and C3b which help to opsonize the apoptotic cells, also thrombospondin, pentraxins (C-reactive protein and serum amyloid P), β2GP1, MFG-E8 and GAS-6 are also capable of creating a bridge between macrophage and apoptotic cell. Don't-eat-me signals Don’t-eat-me signals (also SAMPs = self-associated molecular patterns) are present on all host viable cells and actively protect the cells from engulfment. They achieve this by facilitating a detachment of phagocytes from the cell (CD31-CD31 interaction) or even sending repulsive signals towards the phagocyte (CD47-SIRPα interaction). Another molecule, CD300a binds the externalized phospholipids and prevents the phagocytosis. During apoptosis, these signals must be removed or changed in order not to block the ingestion by phagocyte. Another marker of non-apoptotic cells is specific surface molecules glycosylation. The sugar chains are usually terminated with sialic acid which then binds various molecules and receptors and efficiently prevents the cell from phagocytosis. Non-apoptotic cells also express complement inhibitors, preventing the assembly of C3 convertase or the lytic pore. Among soluble inhibitors there are factor H, C1 inhibitor, C4b-binding protein, factor I, S protein or clusterin, the membrane-bound inhibitors are CR1, membrane cofactor protein (MCF), decay accelerating factor (DAF) or protectin (CD59). Come-get-me signals Phagocytes are attracted to the site with apoptotic cells by so-called come-get-me or find-me signals. During apoptosis, caspase 3 activates the Ca2+-independent phospholipase A2, leading to release of lysophosphatidylcholine which acts as such attractant. Other find-me signals include fractalkine, sphingosine-1-phosphate, ATP and UTP nucleotides, or endothelial monocyte-activating polypeptide II (EMAP II). Phagocyte receptors involved in ACAPMs recognition Diversity of ACAMPs requires many receptor families for their recognition. These include scavenger receptors (e.g. CD36, CD68, LOX-1 recognizing oxidized LDL), integrins (e.g. αvβ3recognizing MFG-E8 or thrombospondin), lectins (binding the altered sugars), the receptor tyrosine kinase MER (recognizing GAS-6), LRP1 (interacts with calreticulin which is a known C1q receptor), or complement receptors (CR3 and CR4). There is a variety of receptors which recognize the externalized phosphatidylserine. Among others, brain-specific angiogenesis inhibitor 1 (BAI1), T-cell immunoglobulin and mucin-domain-containing molecule 4 (TIM-4) and TIM-1, stabilin-2, receptor for advanced glycation end products (RAGE), The phosphatidylserine receptor (PSR), previously thought to mediate the engulfment of apoptotic cells, was shown to only indirectly contribute to the process. Certain molecules which are numbered among ACAMPs are recognized by pattern recognition receptors (PRR) because they share structural characteristics with PAMPs. As an example, CD14 normally binds lipopolysaccharide (LPS) on the surface of gram-negative bacteria but can also recognize LPS-like structures on apoptotic cells. C1q and collectins are other PRRs which could potentially recognize both PAMPs and ACAMPs structures. It is necessary to additionally use the unique recognition pathways for distinguishing the two cases (for example, the Toll-like receptors signalling directs the proinflammatory response triggered by PAMPs). References Apoptosis
ACAMPs
[ "Chemistry" ]
1,664
[ "Apoptosis", "Signal transduction" ]
37,278,175
https://en.wikipedia.org/wiki/Phenylpropanoic%20acid
Phenylpropanoic acid or hydrocinnamic acid is a carboxylic acid with the formula C9H10O2 belonging to the class of phenylpropanoids. It is a white, crystalline solid with a sweet, floral scent at room temperature. Phenylpropanoic acid has a wide variety of uses including cosmetics, food additives, and pharmaceuticals. Preparation and reactions Phenylpropanoic acid can be prepared from cinnamic acid by hydrogenation. Originally it was prepared by reduction with sodium amalgam in water and by electrolysis. A characteristic reaction of phenylpropanoic acid is its cyclization to 1-indanone. When the side chain is homologated by the Arndt–Eistert reaction, subsequent cyclization affords 2-tetralone derivatives. Uses Phenylpropanoic acid is widely used in flavorings, food additives, spices, fragrances, and medicines, where it acts as a fixative agent or a preservative. Food industry Phenylpropanoic acid is used in the food industry to preserve and maintain the original aroma quality of frozen foods. It can also be used to add or restore original color to food. Shelved foods are protected by the addition of phenylpropanoic acid, which prevents deterioration of the food by microorganisms and also acts as an antioxidant to prolong the shelf life of foods. The compound is also used as a sweetener and can be found in table-top sweeteners. It can also act as an emulsifier, keeping oil and water mixtures from separating. Phenylpropanoic acid is added to food for a wide variety of technological purposes, including manufacturing, processing, preparation, treatment, packaging, transportation, and storage, as well as in food additives. It also provides flavorings for ice cream, baked goods, and confectionery. Cosmetics This compound is used frequently in cosmetic products such as perfumes, bath gels, detergent powders, liquid detergents, fabric softeners, and soaps as it gives off a floral scent. The acid is commonly used as a flavoring for toothpastes and mouthwashes, in addition to providing floral scents and possible fruity, minty, spearmint, strawberry, lychee, and herbal flavorings. References Phenyl alkanoic acids Phenylpropanoids Sweet-smelling chemicals
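As a minimal sketch of the hydrogenation route mentioned in the Preparation and reactions section above (standard chemistry; the catalyst and conditions are not specified in the article):

\[
\mathrm{C_6H_5{-}CH{=}CH{-}COOH} \;+\; \mathrm{H_2} \;\xrightarrow{\text{catalyst}}\; \mathrm{C_6H_5{-}CH_2{-}CH_2{-}COOH},
\]

i.e. cinnamic acid is reduced at the C=C double bond to give phenylpropanoic (hydrocinnamic) acid.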
Phenylpropanoic acid
[ "Chemistry" ]
515
[ "Biomolecules by chemical classification", "Phenylpropanoids" ]
37,279,342
https://en.wikipedia.org/wiki/Zariski%27s%20lemma
In algebra, Zariski's lemma, proved by , states that, if a field is finitely generated as an associative algebra over another field , then is a finite field extension of (that is, it is also finitely generated as a vector space). An important application of the lemma is a proof of the weak form of Hilbert's Nullstellensatz: if I is a proper ideal of (k an algebraically closed field), then I has a zero; i.e., there is a point x in such that for all f in I. (Proof: replacing I by a maximal ideal , we can assume is maximal. Let and be the natural surjection. By the lemma is a finite extension. Since k is algebraically closed that extension must be k. Then for any , ; that is to say, is a zero of .) The lemma may also be understood from the following perspective. In general, a ring R is a Jacobson ring if and only if every finitely generated R-algebra that is a field is finite over R. Thus, the lemma follows from the fact that a field is a Jacobson ring. Proofs Two direct proofs are given in Atiyah–MacDonald; the one is due to Zariski and the other uses the Artin–Tate lemma. For Zariski's original proof, see the original paper. Another direct proof in the language of Jacobson rings is given below. The lemma is also a consequence of the Noether normalization lemma. Indeed, by the normalization lemma, K is a finite module over the polynomial ring where are elements of K that are algebraically independent over k. But since K has Krull dimension zero and since an integral ring extension (e.g., a finite ring extension) preserves Krull dimensions, the polynomial ring must have dimension zero; i.e., . The following characterization of a Jacobson ring contains Zariski's lemma as a special case. Recall that a ring is a Jacobson ring if every prime ideal is an intersection of maximal ideals. (When A is a field, A is a Jacobson ring and the theorem below is precisely Zariski's lemma.) Proof: 2. 1.: Let be a prime ideal of A and set . We need to show the Jacobson radical of B is zero. For that end, let f be a nonzero element of B. Let be a maximal ideal of the localization . Then is a field that is a finitely generated A-algebra and so is finite over A by assumption; thus it is finite over and so is finite over the subring where . By integrality, is a maximal ideal not containing f. 1. 2.: Since a factor ring of a Jacobson ring is Jacobson, we can assume B contains A as a subring. Then the assertion is a consequence of the next algebraic fact: (*) Let be integral domains such that B is finitely generated as A-algebra. Then there exists a nonzero a in A such that every ring homomorphism , K an algebraically closed field, with extends to . Indeed, choose a maximal ideal of A not containing a. Writing K for some algebraic closure of , the canonical map extends to . Since B is a field, is injective and so B is algebraic (thus finite algebraic) over . We now prove (*). If B contains an element that is transcendental over A, then it contains a polynomial ring over A to which φ extends (without a requirement on a) and so we can assume B is algebraic over A (by Zorn's lemma, say). Let be the generators of B as A-algebra. Then each satisfies the relation where n depends on i and . Set . Then is integral over . Now given , we first extend it to by setting . Next, let . By integrality, for some maximal ideal of . Then extends to . Restrict the last map to B to finish the proof. Notes Sources Lemmas in algebra Theorems about algebras
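Several displayed formulas in this entry were lost in formatting. As a sketch in standard notation (assumed rather than reproduced from the article), the lemma and the weak Nullstellensatz it yields can be stated as:

\[
\text{If } K = k[a_1,\dots,a_n] \text{ is a field containing } k, \text{ then } [K : k] < \infty,
\]

and, for an algebraically closed field k, every maximal ideal of the polynomial ring has the form

\[
\mathfrak{m} = (x_1 - c_1,\, \dots,\, x_n - c_n) \subset k[x_1,\dots,x_n], \qquad c_1,\dots,c_n \in k,
\]

so that every proper ideal of \(k[x_1,\dots,x_n]\) has a common zero \((c_1,\dots,c_n)\).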
Zariski's lemma
[ "Mathematics" ]
859
[ "Theorems in algebra", "Lemmas in algebra", "Lemmas" ]
37,280,120
https://en.wikipedia.org/wiki/Calix%20Limited
Calix Limited is an Australian technology company whose core technology is a "kiln" built in Bacchus Marsh that produces "mineral honeycomb". Calix's technology includes work on CO2 capture to address global sustainability challenges across several industries including wastewater treatment, aquaculture, advanced energy storage and cement / lime production. Products Calix's products include ACTI-Mag, BOOSTER-Mag and AQUA-Cal+. ACTI-Mag - Wastewater management AQUA-Cal+ - Sustainable aquaculture farming, lake & pond remediation BOOSTER-Mag - Crop protection History The origins of Calix began in 2005, with work on flash calcining. In 2007 their first pilot calciner facility was built to test what was possible with a fully established CFC 850 calciner plant at Bacchus Marsh in 2011. In 2012 Calix acquired mining tenements at Myrtle Springs, South Australia. Between 2013 and 2017 Calix established a plant in South East Asia and won several awards and funding for a variety of sustainable technologies including Advanced Battery research, and Agricultural applications. Calix listed publicly in the Australian Securities Exchange (ASX) in 2018 and in coordination with the LEILAC (Low Emissions Intensity Lime & Cement) project in Europe, began construction of a pilot plant in Lixhe, Belgium. In 2019 Calix acquired the US-based company Inland Empire Resources. Project LEILAC Calix is a key partner in, and provides the core technology towards, Project LEILAC (Low Emissions Intensity Lime & Cement), as part of Horizon 2020, a programme established by the European Commission. Financial information Calix Limited is a listed Australian public company, having listed on the Australian Securities Exchange (ASX) in 2018. Prior to its listing, Calix was backed by investors including Och-Ziff Capital Management Group and Washington H Soul Pattinson. Since its foundation in 2005, Calix Limited has to date committed more than $40 million to commercialising its technologies and processes. References Building a Better World With Green Cement, By Michael Rosenwald. Smithsonian, December 2011 External links LEILAC Project Climate change mitigation Energy storage Biotechnology Companies based in Sydney Companies of Australia Chemical companies of Australia Technology companies established in 2005 Australian companies established in 2005
Calix Limited
[ "Biology" ]
460
[ "Biotechnology", "nan" ]
37,280,933
https://en.wikipedia.org/wiki/Stellar%20chemistry
Stellar chemistry is the study of chemical composition of astronomical objects; stars in particular, hence the name stellar chemistry. The significance of stellar chemical composition is an open ended question at this point. Some research asserts that a greater abundance of certain elements (such as carbon, sodium, silicon, and magnesium) in the stellar mass are necessary for a star's inner solar system to be habitable over long periods of time. The hypothesis being that the "abundance of these elements make the star cooler and cause it to evolve more slowly, thereby giving planets in its habitable zone more time to develop life as we know it." Stellar abundance of oxygen also appears to be critical to the length of time newly developed planets exist in a habitable zone around their host star. Researchers postulate that if our own sun had a lower abundance of oxygen, the Earth would have ceased to "live" in a habitable zone a billion years ago, long before complex organisms had the opportunity to evolve. Other research Other research is being or has been done in numerous areas relating to the chemical nature of stars. The formation of stars is of particular interest. Research published in 2009 presents spectroscopic observations of so-called "young stellar objects" viewed in the Large Magellanic Cloud with the Spitzer Space Telescope. This research suggests that water, or, more specifically, ice, plays a large role in the formation of these eventual stars Others are researching much more tangible ideas relating to stars and chemistry. Research published in 2010 studied the effects of a strong stellar flare on the atmospheric chemistry of an Earth-like planet orbiting an M dwarf star, specifically, the M dwarf AD Leonis. This research simulated the effects an observed flare produced by AD Leonis on April 12, 1985 would have on a hypothetical Earth-like planet. After simulating the effects of both UV radiation and protons on the hypothetical planet's atmosphere, the researchers concluded that "flares may not present a direct hazard for life on the surface of an orbiting habitable planet. Given that AD Leo[nis] is one of the most magnetically active M dwarfs known, this conclusion should apply to planets around other M dwarfs with lower levels of chromospheric activity." See also Abundance of the chemical elements Astrochemistry Cosmochemistry Metallicity Molecules in stars References Astrochemistry Molecules
Stellar chemistry
[ "Physics", "Chemistry", "Astronomy" ]
474
[ "Astronomical sub-disciplines", "Molecular physics", "Molecules", "Astrochemistry", "Physical objects", "nan", "Atoms", "Matter" ]
2,933,361
https://en.wikipedia.org/wiki/Quantum%20calculus
Quantum calculus, sometimes called calculus without limits, is equivalent to traditional infinitesimal calculus without the notion of limits. The two types of calculus in quantum calculus are q-calculus and h-calculus. The goal of both types is to find "analogs" of mathematical objects, where, after taking a certain limit, the original object is returned. In q-calculus, the limit as q tends to 1 is taken of the q-analog. Likewise, in h-calculus, the limit as h tends to 0 is taken of the h-analog. The parameters and can be related by the formula . Differentiation The q-differential and h-differential are defined as: and , respectively. The q-derivative and h-derivative are then defined as and respectively. By taking the limit as of the q-derivative or as of the h-derivative, one can obtain the derivative: Integration q-integral A function F(x) is a q-antiderivative of f(x) if DqF(x) = f(x). The q-antiderivative (or q-integral) is denoted by and an expression for F(x) can be found from:, which is called the Jackson integral of f(x). For , the series converges to a function F(x) on an interval (0,A] if |f(x)xα| is bounded on the interval for some . The q-integral is a Riemann–Stieltjes integral with respect to a step function having infinitely many points of increase at the points qj..The jump at the point qj is qj. Calling this step function gq(t) gives dgq(t) = dqt. h-integral A function F(x) is an h-antiderivative of f(x) if DhF(x) = f(x). The h-integral is denoted by . If a and b differ by an integer multiple of h then the definite integral is given by a Riemann sum of f(x) on the interval , partitioned into sub-intervals of equal width h. The motivation of h-integral comes from the Riemann sum of f(x). Following the idea of the motivation of classical integrals, some of the properties of classical integrals hold in h-integral. This notion has broad applications in numerical analysis, and especially finite difference calculus. Example In infinitesimal calculus, the derivative of the function is (for some positive integer ). The corresponding expressions in q-calculus and h-calculus are: where is the q-bracket and respectively. The expression is then the q-analog and is the h-analog of the power rule for positive integral powers. The q-Taylor expansion allows for the definition of q-analogs of all of the usual functions, such as the sine function, whose q-derivative is the q-analog of cosine. History The h-calculus is the calculus of finite differences, which was studied by George Boole and others, and has proven useful in combinatorics and fluid mechanics. In a sense, q-calculus dates back to Leonhard Euler and Carl Gustav Jacobi, but has only recently begun to find usefulness in quantum mechanics, given its intimate connection with commutativity relations and Lie algebras, specifically quantum groups. See also Noncommutative geometry Quantum differential calculus Time scale calculus q-analog Basic hypergeometric series Quantum dilogarithm References Further reading George Gasper, Mizan Rahman, Basic Hypergeometric Series, 2nd ed, Cambridge University Press (2004), ,
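The displayed definitions in this entry were lost in formatting. As a sketch of the standard forms (notation assumed), the q- and h-differentials, the corresponding derivatives, the Jackson integral, and the power-rule analogs are:

\[
d_q f(x) = f(qx) - f(x), \qquad d_h f(x) = f(x+h) - f(x),
\]
\[
D_q f(x) = \frac{f(qx) - f(x)}{(q-1)\,x}, \qquad D_h f(x) = \frac{f(x+h) - f(x)}{h},
\]
\[
\int_0^a f(x)\, d_q x = (1-q)\, a \sum_{j=0}^{\infty} q^{\,j} f(q^{\,j} a),
\]
\[
D_q x^n = [n]_q\, x^{n-1} \quad\text{with}\quad [n]_q = \frac{q^n - 1}{q - 1}, \qquad D_h x^n = \frac{(x+h)^n - x^n}{h}.
\]

In the limits q → 1 and h → 0 these reduce to the ordinary derivative, integral, and power rule.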
Quantum calculus
[ "Physics", "Mathematics" ]
749
[ "Theoretical physics", "Differential calculus", "Quantum mechanics", "Calculus" ]
2,934,261
https://en.wikipedia.org/wiki/Gaussian%20polar%20coordinates
In the theory of Lorentzian manifolds, spherically symmetric spacetimes admit a family of nested round spheres. In each of these spheres, every point can be carried to any other by an appropriate rotation about the centre of symmetry. There are several different types of coordinate chart that are adapted to this family of nested spheres, each introducing a different kind of distortion. The best known alternative is the Schwarzschild chart, which correctly represents distances within each sphere, but (in general) distorts radial distances and angles. Another popular choice is the isotropic chart, which correctly represents angles (but in general distorts both radial and transverse distances). A third choice is the Gaussian polar chart, which correctly represents radial distances, but distorts transverse distances and angles. There are other possible charts; the article on spherically symmetric spacetime describes a coordinate system with intuitively appealing features for studying infalling matter. In all cases, the nested geometric spheres are represented by coordinate spheres, so we can say that their roundness is correctly represented. Definition In a Gaussian polar chart (on a static spherically symmetric spacetime), the metric (aka line element) takes the form sketched below. Depending on context, it may be appropriate to regard the two metric functions as undetermined functions of the radial coordinate. Alternatively, we can plug in specific functions (possibly depending on some parameters) to obtain a Gaussian polar coordinate chart on a specific Lorentzian spacetime. Applications Gaussian charts are often less convenient than Schwarzschild or isotropic charts. However, they have found occasional application in the theory of static spherically symmetric perfect fluids. See also Static spacetime Static spherically symmetric perfect fluids Schwarzschild coordinates Isotropic coordinates Frame fields in general relativity for more about frame fields and coframe fields. Coordinate charts in general relativity Lorentzian manifolds
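A sketch of the line element referred to in the Definition section above, written in a commonly used notation (the names f and g for the two metric functions are an assumption, not taken from the original text):

\[
ds^2 = -f(r)^2 \, dt^2 + dr^2 + g(r)^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right).
\]

The coefficient of dr² is unity, which is what makes radial distances correctly represented in this chart, while g(r) is in general not equal to r, which is why transverse distances and angles are generally distorted.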
Gaussian polar coordinates
[ "Physics", "Mathematics" ]
387
[ "Coordinate systems", "Relativity stubs", "Coordinate charts in general relativity", "Theory of relativity" ]
3,936,576
https://en.wikipedia.org/wiki/Canonical%20quantum%20gravity
In physics, canonical quantum gravity is an attempt to quantize the canonical formulation of general relativity (or canonical gravity). It is a Hamiltonian formulation of Einstein's general theory of relativity. The basic theory was outlined by Bryce DeWitt in a seminal 1967 paper, and based on earlier work by Peter G. Bergmann using the so-called canonical quantization techniques for constrained Hamiltonian systems invented by Paul Dirac. Dirac's approach allows the quantization of systems that include gauge symmetries using Hamiltonian techniques in a fixed gauge choice. Newer approaches based in part on the work of DeWitt and Dirac include the Hartle–Hawking state, Regge calculus, the Wheeler–DeWitt equation and loop quantum gravity. Canonical quantization In the Hamiltonian formulation of ordinary classical mechanics the Poisson bracket is an important concept. A "canonical coordinate system" consists of canonical position and momentum variables that satisfy canonical Poisson-bracket relations, where the Poisson bracket is given by for arbitrary phase space functions and . With the use of Poisson brackets, the Hamilton's equations can be rewritten as, These equations describe a "flow" or orbit in phase space generated by the Hamiltonian . Given any phase space function , we have In canonical quantization the phase space variables are promoted to quantum operators on a Hilbert space and the Poisson bracket between phase space variables is replaced by the canonical commutation relation: In the so-called position representation this commutation relation is realized by the choice: and The dynamics are described by Schrödinger equation: where is the operator formed from the Hamiltonian with the replacement and . Canonical quantization with constraints Canonical classical general relativity is an example of a fully constrained theory. In constrained theories there are different kinds of phase space: the unrestricted (also called kinematic) phase space on which constraint functions are defined and the reduced phase space on which the constraints have already been solved. For canonical quantization in general terms, phase space is replaced by an appropriate Hilbert space and phase space variables are to be promoted to quantum operators. In Dirac's approach to quantization the unrestricted phase space is replaced by the so-called kinematic Hilbert space and the constraint functions replaced by constraint operators implemented on the kinematic Hilbert space; solutions are then searched for. These quantum constraint equations are the central equations of canonical quantum general relativity, at least in the Dirac approach which is the approach usually taken. In theories with constraints there is also the reduced phase space quantization where the constraints are solved at the classical level and the phase space variables of the reduced phase space are then promoted to quantum operators, however this approach was thought to be impossible in General relativity as it seemed to be equivalent to finding a general solution to the classical field equations. However, with the fairly recent development of a systematic approximation scheme for calculating observables of General relativity (for the first time) by Bianca Dittrich, based on ideas introduced by Carlo Rovelli, a viable scheme for a reduced phase space quantization of Gravity has been developed by Thomas Thiemann. 
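To make the canonical structure described above concrete, here is a short SymPy sketch (the harmonic-oscillator Hamiltonian is our own illustrative choice, not taken from the article) that evaluates Poisson brackets for one canonical pair, recovers Hamilton's equations as the flow generated by H, and checks the position-representation commutator [x, p]ψ = iħψ.

# Sketch: Poisson brackets and canonical quantization for one degree of freedom.
# The harmonic-oscillator Hamiltonian below is an assumed illustration.
import sympy as sp

q, p, m, w = sp.symbols('q p m omega', real=True)

def poisson_bracket(A, B):
    """{A, B} = dA/dq * dB/dp - dA/dp * dB/dq for the single pair (q, p)."""
    return sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)

H = p**2 / (2 * m) + m * w**2 * q**2 / 2

print(poisson_bracket(q, p))      # canonical relation {q, p} = 1
print(poisson_bracket(q, H))      # dq/dt = {q, H} = p/m
print(poisson_bracket(p, H))      # dp/dt = {p, H} = -m*omega**2*q

# Canonical quantization: {., .} -> [., .] / (i*hbar). In the position
# representation p -> -i*hbar*d/dx, and [x, p] acting on a test function gives i*hbar:
x, hbar = sp.symbols('x hbar')
psi = sp.Function('psi')(x)
commutator = x * (-sp.I * hbar * sp.diff(psi, x)) - (-sp.I * hbar * sp.diff(x * psi, x))
print(sp.simplify(commutator))    # -> I*hbar*psi(x)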
However it is not fully equivalent to the Dirac quantization as the `clock-variables' must be taken to be classical in the reduced phase space quantization, as opposed to the case in the Dirac quantization. A common misunderstanding is that coordinate transformations are the gauge symmetries of general relativity, when actually the true gauge symmetries are diffeomorphisms as defined by a mathematician (see the Hole argument) – which are much more radical. The first class constraints of general relativity are the spatial diffeomorphism constraint and the Hamiltonian constraint (also known as the Wheeler–De Witt equation) and imprint the spatial and temporal diffeomorphism invariance of the theory respectively. Imposing these constraints classically are basically admissibility conditions on the initial data, also they generate the 'evolution' equations (really gauge transformations) via the Poisson bracket. Importantly the Poisson bracket algebra between the constraints fully determines the classical theory – this is something that must in some way be reproduced in the semi-classical limit of canonical quantum gravity for it to be a viable theory of quantum gravity. In Dirac's approach it turns out that the first class quantum constraints imposed on a wavefunction also generate gauge transformations. Thus the two step process in the classical theory of solving the constraints (equivalent to solving the admissibility conditions for the initial data) and looking for the gauge orbits (solving the `evolution' equations) is replaced by a one step process in the quantum theory, namely looking for solutions of the quantum equations . This is because it obviously solves the constraint at the quantum level and it simultaneously looks for states that are gauge invariant because is the quantum generator of gauge transformations. At the classical level, solving the admissibility conditions and evolution equations are equivalent to solving all of Einstein's field equations, this underlines the central role of the quantum constraint equations in Dirac's approach to canonical quantum gravity. Canonical quantization, diffeomorphism invariance and manifest finiteness A diffeomorphism can be thought of as simultaneously 'dragging' the metric (gravitational field) and matter fields over the bare manifold while staying in the same coordinate system, and so are more radical than invariance under a mere coordinate transformation. This symmetry arises from the subtle requirement that the laws of general relativity cannot depend on any a-priori given space-time geometry. This diffeomorphism invariance has an important implication: canonical quantum gravity will be manifestly finite as the ability to `drag' the metric function over the bare manifold means that small and large `distances' between abstractly defined coordinate points are gauge-equivalent! A more rigorous argument has been provided by Lee Smolin: “A background independent operator must always be finite. This is because the regulator scale and the background metric are always introduced together in the regularization procedure. This is necessary, because the scale that the regularization parameter refers to must be described in terms of a background metric or coordinate chart introduced in the construction of the regulated operator. Because of this the dependence of the regulated operator on the cutoff, or regulator parameter, is related to its dependence on the background metric. 
When one takes the limit of the regulator parameter going to zero one isolates the non-vanishing terms. If these have any dependence on the regulator parameter (which would be the case if the term is blowing up) then it must also have dependence on the background metric. Conversely, if the terms that are nonvanishing in the limit the regulator is removed have no dependence on the background metric, it must be finite.” In fact, as mentioned below, Thomas Thiemann has explicitly demonstrated that loop quantum gravity (a well developed version of canonical quantum gravity) is manifestly finite even in the presence of all forms of matter! So there is no need for renormalization and the elimination of infinities. However, in other work, Thomas Thiemann admitted the need for renormalization as a way to fix quantization ambiguities. In perturbative quantum gravity (from which the non-renormalization arguments originate), as with any perturbative scheme, one makes the reasonable assumption that the space time at large scales should be well approximated by flat space; one scatters gravitons on this approximately flat background and one finds that their scattering amplitude has divergences which cannot be absorbed into the redefinition of the Newton constant. Canonical quantum gravity theorists do not accept this argument; however they have not so far provided an alternative calculation of the graviton scattering amplitude which could be used to understand what happens with the terms found non-renormalizable in the perturbative treatment. A long-held expectation is that in a theory of quantum geometry such as canonical quantum gravity, geometric quantities such as area and volume become quantum observables and take non-zero discrete values, providing a natural regulator which eliminates infinities from the theory including those coming from matter contributions. This 'quantization' of geometric observables is in fact realized in loop quantum gravity (LQG). Canonical quantization in metric variables The quantization is based on decomposing the metric tensor as follows, where the summation over repeated indices is implied, the index 0 denotes time , Greek indices run over all values 0, . . . ,3 and Latin indices run over spatial values 1, . . ., 3. The function is called the lapse function and the functions are called the shift functions. The spatial indices are raised and lowered using the spatial metric and its inverse : and , , where is the Kronecker delta. Under this decomposition the Einstein–Hilbert Lagrangian becomes, up to total derivatives, where is the spatial scalar curvature computed with respect to the Riemannian metric and is the extrinsic curvature, where denotes Lie-differentiation, is the unit normal to surfaces of constant and denotes covariant differentiation with respect to the metric . Note that . DeWitt writes that the Lagrangian "has the classic form 'kinetic energy minus potential energy,' with the extrinsic curvature playing the role of kinetic energy and the negative of the intrinsic curvature that of potential energy." While this form of the Lagrangian is manifestly invariant under redefinition of the spatial coordinates, it makes general covariance opaque. Since the lapse function and shift functions may be eliminated by a gauge transformation, they do not represent physical degrees of freedom. This is indicated in moving to the Hamiltonian formalism by the fact that their conjugate momenta, respectively and , vanish identically (on shell and off shell). 
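The 3+1 split described above is mechanical enough to write down symbolically. The following SymPy sketch (with an assumed flat spatial metric, purely for illustration) reassembles the spacetime metric from a lapse N, shift N^i and spatial metric, and checks the standard consequence g^00 = -1/N^2, one way of seeing that the lapse and shift enter as Lagrange multipliers rather than dynamical fields.

# Sketch of the ADM (3+1) decomposition used in the metric-variable quantization:
# rebuild g_{mu nu} from a lapse N, shift N^i and spatial metric gamma_ij.
# The flat spatial metric is an assumed illustration, not a physical choice.
import sympy as sp

N = sp.Symbol('N')                          # lapse function
N_vec = sp.Matrix(sp.symbols('N1 N2 N3'))   # shift vector N^i
gamma = sp.eye(3)                           # spatial metric gamma_ij (flat here)

N_low = gamma * N_vec                       # N_i = gamma_ij N^j

g = sp.zeros(4, 4)
g[0, 0] = -N**2 + (N_vec.T * gamma * N_vec)[0, 0]   # g_00 = -N^2 + N_i N^i
for i in range(3):
    g[0, i + 1] = g[i + 1, 0] = N_low[i]            # g_0i = N_i
    for j in range(3):
        g[i + 1, j + 1] = gamma[i, j]               # g_ij = gamma_ij

print(g)
print(sp.simplify(g.inv()[0, 0]))          # -> -1/N**2, independent of the shift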
These are called primary constraints by Dirac. A popular choice of gauge, called synchronous gauge, is and , although they can, in principle, be chosen to be any function of the coordinates. In this case, the Hamiltonian takes the form where and is the momentum conjugate to . Einstein's equations may be recovered by taking Poisson brackets with the Hamiltonian. Additional on-shell constraints, called secondary constraints by Dirac, arise from the consistency of the Poisson bracket algebra. These are and . This is the theory which is being quantized in approaches to canonical quantum gravity. It can be shown that six Einstein equations describing time evolution (really a gauge transformation) can be obtained by calculating the Poisson brackets of the three-metric and its conjugate momentum with a linear combination of the spatial diffeomorphism and Hamiltonian constraint. The vanishing of the constraints, giving the physical phase space, are the four other Einstein equations. That is, we have: Spatial diffeomorphisms constraints of which there are an infinite number – one for value of , can be smeared by the so-called shift functions to give an equivalent set of smeared spatial diffeomorphism constraints, These generate spatial diffeomorphisms along orbits defined by the shift function . Hamiltonian constraints of which there are an infinite number, can be smeared by the so-called lapse functions to give an equivalent set of smeared Hamiltonian constraints, as mentioned above, the Poisson bracket structure between the (smeared) constraints is important because they fully determine the classical theory, and must be reproduced in the semi-classical limit of any theory of quantum gravity. The Wheeler–DeWitt equation The Wheeler–DeWitt equation (sometimes called the Hamiltonian constraint, sometimes the Einstein–Schrödinger equation) is rather central as it encodes the dynamics at the quantum level. It is analogous to Schrödinger's equation, except as the time coordinate, , is unphysical, a physical wavefunction can't depend on and hence Schrödinger's equation reduces to a constraint: Using metric variables lead to seemingly unsurmountable mathematical difficulties when trying to promote the classical expression to a well-defined quantum operator, and as such decades went by without making progress via this approach. This problem was circumvented and the formulation of a well-defined Wheeler–De-Witt equation was first accomplished with the introduction of Ashtekar–Barbero variables and the loop representation, this well defined operator formulated by Thomas Thiemann. Before this development the Wheeler–De-Witt equation had only been formulated in symmetry-reduced models, such as quantum cosmology. Canonical quantization in Ashtekar–Barbero variables and LQG Many of the technical problems in canonical quantum gravity revolve around the constraints. Canonical general relativity was originally formulated in terms of metric variables, but there seemed to be insurmountable mathematical difficulties in promoting the constraints to quantum operators because of their highly non-linear dependence on the canonical variables. The equations were much simplified with the introduction of Ashtekars new variables. Ashtekar variables describe canonical general relativity in terms of a new pair canonical variables closer to that of gauge theories. In doing so it introduced an additional constraint, on top of the spatial diffeomorphism and Hamiltonian constraint, the Gauss gauge constraint. 
The loop representation is a quantum hamiltonian representation of gauge theories in terms of loops. The aim of the loop representation, in the context of Yang–Mills theories is to avoid the redundancy introduced by Gauss gauge symmetries allowing to work directly in the space of Gauss gauge invariant states. The use of this representation arose naturally from the Ashtekar–Barbero representation as it provides an exact non-perturbative description and also because the spatial diffeomorphism constraint is easily dealt with within this representation. Within the loop representation Thiemann has provided a well defined canonical theory in the presence of all forms of matter and explicitly demonstrated it to be manifestly finite! So there is no need for renormalization. However, as LQG approach is well suited to describe physics at the Planck scale, there are difficulties in making contact with familiar low energy physics and establishing it has the correct semi-classical limit. The problem of time All canonical theories of general relativity have to deal with the problem of time. In quantum gravity, the problem of time is a conceptual conflict between general relativity and quantum mechanics. In canonical general relativity, time is just another coordinate as a result of general covariance. In quantum field theories, especially in the Hamiltonian formulation, the formulation is split between three dimensions of space, and one dimension of time. Roughly speaking, the problem of time is that there is none in general relativity. This is because in general relativity the Hamiltonian is a constraint that must vanish. However, in any canonical theory, the Hamiltonian generates time translations. Therefore, we arrive at the conclusion that "nothing moves" ("there is no time") in general relativity. Since "there is no time", the usual interpretation of quantum mechanics measurements at given moments of time breaks down. This problem of time is the broad banner for all interpretational problems of the formalism. A canonical formalism of James York's conformal decomposition of geometrodynamics, leading to the "York time" of general relativity, has been developed by Charles Wang. This work has later been further developed by him and his collaborators to an approach of identifying and quantizing time amenable to a large class of scale-invariant dilaton gravity-matter theories. The problem of quantum cosmology The problem of quantum cosmology is that the physical states that solve the constraints of canonical quantum gravity represent quantum states of the entire universe and as such exclude an outside observer, however an outside observer is a crucial element in most interpretations of quantum mechanics. See also ADM formalism Ashtekar variables Canonical quantization Canonical coordinates Diffeomorphism Hole argument Regge Calculus Loop quantum gravity is one of this family of theories. Loop quantum cosmology (LQC) is a finite, symmetry reduced model of loop quantum gravity. Problem of time Notes References Sources Mathematical methods in general relativity Quantum gravity Physics beyond the Standard Model Physical cosmology
Canonical quantum gravity
[ "Physics", "Astronomy" ]
3,319
[ "Astronomical sub-disciplines", "Theoretical physics", "Unsolved problems in physics", "Astrophysics", "Quantum gravity", "Particle physics", "Physics beyond the Standard Model", "Physical cosmology" ]
3,939,149
https://en.wikipedia.org/wiki/Gate%20%28hydraulic%20engineering%29
In hydraulic engineering, a gate is a rotating or sliding structure, supported by hinges or by a rotating horizontal or vertical axis, that can be located at the end of a large pipe or canal in order to control the flow of water or another fluid from one side to the other. It is usually placed at the mouth of irrigation channels to avoid water loss, or at the end of drainage channels to prevent water from entering. Gate Valve In certain applications, such as manufacturing and mining, the related device used with a gate is the gate valve. Fluids run through the valve to lubricate the moving parts of a machine, transmit power, close off openings to moving parts, and help carry away heat. These fluids flow through the gate valve with little resistance, and the pressure drop across the valve is small. Within the gate valve, a gate-like disk moves up and down perpendicular to the path of flow and seats against two seat faces to shut off flow. The velocity of the fluid against a partly opened disk may cause vibration and chattering, which ultimately damages the seating surfaces; this is a common way in which gate valves fail. See also Sluice Valve References Hydraulic engineering
Gate (hydraulic engineering)
[ "Physics", "Engineering", "Environmental_science" ]
255
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Civil engineering stubs", "Hydraulic engineering" ]
3,942,111
https://en.wikipedia.org/wiki/Uncompetitive%20inhibition
Uncompetitive inhibition (which Laidler and Bunting preferred to call anti-competitive inhibition, but this term has not been widely adopted) is a type of inhibition in which the apparent values of the Michaelis–Menten parameters and are decreased in the same proportion. It can be recognized by two observations: first, it cannot be reversed by increasing the substrate concentration , and second, linear plots show effects on and , seen, for example, in the Lineweaver–Burk plot as parallel rather than intersecting lines. It is sometimes explained by supposing that the inhibitor can bind to the enzyme-substrate complex but not to the free enzyme. This type of mechanism is rather rare, and in practice uncompetitive inhibition is mainly encountered as a limiting case of inhibition in two-substrate reactions in which one substrate concentration is varied and the other is held constant at a saturating level. Mathematical definition In uncompetitive inhibition at an inhibitor concentration of the Michaelis–Menten equation takes the following form: in which is the rate at concentrations of substrate and of inhibitor, for limiting rate , Michaelis constant and uncompetitive inhibition constant . This has exactly the form of the Michaelis–Menten equation, as may be seen by writing it in terms of apparent kinetic constants: in which It is important to note that and decrease in the same proportions as a result of the inhibition. This is apparent when viewing a Lineweaver-Burk plot of uncompetitive enzyme inhibition: the ratio between V and Km remains the same with or without an inhibitor present. This may be seen in any of the common ways of plotting Michaelis–Menten data, such as the Lineweaver–Burk plot, for which for uncompetitive inhibition produces a line parallel to the original enzyme-substrate plot, but with a higher intercept on the ordinate: Implications and uses in biological systems The unique traits of uncompetitive inhibition lead to a variety of implications for the inhibition's effects within biological and biochemical systems. Uncompetitive inhibition is present within biological systems in a number of ways. In fact, it often becomes clear that the traits of inhibition specific to uncompetitive inhibitors, such as their tendency to act at their best at high concentrations of substrate, are essential to some important bodily functions operating properly. Involvement in cancer mechanisms Uncompetitive mechanisms are involved with certain types of cancer. Some human alkaline phosphatases have been found to be over-expressed in certain types of cancers, and those phosphatases often operate via uncompetitive inhibition. It has also been found that a number of the genes that code for human alkaline phosphatases are inhibited uncompetitively by amino acids such as leucine and phenylalanine. Studies of the involved amino acid residues have been undertaken in attempts to regulate alkaline phosphatase activity and learn more about said activity's relevance to cancer. Additionally, uncompetitive inhibition works alongside transformation-related protein 53 to help repress the activity of cancer cells and prevent tumorigenesis in certain forms of the illness, as it inhibits glucose-6-phosphate dehydrogenase, an enzyme of the pentose phosphate pathway). One of the side roles this enzyme is responsible for is helping to regulate is the control of reactive oxygen levels, as reactive oxygen species must be kept at appropriate levels to allow cells to survive. 
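A short numerical sketch (standard uncompetitive rate law, with made-up parameter values) makes the two defining observations above explicit: the apparent V and Michaelis constant fall by the same factor (1 + i/K_i), so their ratio is unchanged, and the Lineweaver–Burk slope K_m/V is independent of the inhibitor, giving parallel lines.

# Sketch of the uncompetitive rate law v = V*s / (Km + s*(1 + i/Ki)),
# with assumed parameter values chosen only for illustration.
import numpy as np

V, Km, Ki = 100.0, 2.0, 5.0       # limiting rate, Michaelis constant, Kiu

def rate(s, i):
    return V * s / (Km + s * (1.0 + i / Ki))

# Apparent parameters: both decrease by the same factor (1 + i/Ki)
for i in (0.0, 5.0, 10.0):
    factor = 1.0 + i / Ki
    V_app, Km_app = V / factor, Km / factor
    print(f"i={i:4.1f}  V_app={V_app:6.2f}  Km_app={Km_app:5.2f}  "
          f"V_app/Km_app={V_app / Km_app:.3f}")        # ratio stays constant

# Lineweaver-Burk slope d(1/v)/d(1/s) = Km/V is independent of i,
# so the double-reciprocal plots are parallel lines:
s = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
for i in (0.0, 10.0):
    slope = np.polyfit(1.0 / s, 1.0 / rate(s, i), 1)[0]
    print(f"i={i:4.1f}  Lineweaver-Burk slope = {slope:.4f}")   # ~ Km/V = 0.02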
When concentration of glucose 6-phosphate, the substrate of the enzyme, is high, uncompetitive inhibition of the enzyme becomes far more effective. This extreme sensitivity to substrate concentration within the cancer mechanism implicates uncompetitive inhibition rather than mixed inhibition, which displays similar traits but is often less sensitive to substrate concentration due to some inhibitor binding to free enzymes regardless of the substrate's presence. As such, the extreme strength of uncompetitive inhibitors at high substrate concentrations and the overall sensitivity to substrate amount indicates that only uncompetitive inhibition can make this type of process possible. Importance in cell and organelle membranes Although uncompetitive inhibition is present in various diseases within biological systems, it does not necessarily only relate to pathologies. It can be involved in typical bodily functions. For example, active sites capable of uncompetitive inhibition appear to be present in membranes, as removing lipids from cell membranes and making active sites more accessible through conformational changes has been shown to invoke elements resembling the effects of uncompetitive inhibition (i.e. both and decrease). In mitochondrial membrane lipids specifically, removing lipids decreases the α-helix content in mitochondria and leads to changes in ATPase resembling uncompetitive inhibition. This presence of uncompetitive enzymes in membranes has also been supported in a number of other studies. For example, in studies of the protein ADP ribosylation factor, which is involved in regulating membrane activity, it was found that brefeldin A, a lactone antiviral trapped one of the protein's intermediates via uncompetitive inhibition. This made it clear that this type of inhibition exists within various types of cells and organelles as opposed to just in pathological cells. In fact, brefeldin A was found to relate to the activity of the Golgi apparatus and its role in regulating movement across the cell membrane. Presence in the cerebellar granule layer Uncompetitive inhibition can play roles in various other parts of the body as well. It is part of the mechanism by which N-methyl-D-aspartate glutamate receptors are inhibited in the brain, for example. Specifically, this type of inhibition impacts the granule cells that make up a layer of the cerebellum. These cells have the receptors mentioned, and their activity typically increases as ethanol is consumed. This often leads to withdrawal symptoms if ethanol is removed. Various uncompetitive blockers act as antagonists at the receptors and modify the process, with one example being the inhibitor memantine. In fact, in similar cases (involving over-expression of N-methyl-D-aspartate glutamate receptors, though not necessarily via ethanol), uncompetitive inhibition helps in nullifying the over-expression due to its particular properties. Since uncompetitive inhibitors block high concentrations of substrates very efficiently, their traits alongside the innate characteristics of the receptors themselves lead to very effective blocking of N-methyl-D-aspartate glutamate channels when they are excessively open due to massive amounts of agonists. Examples for uncompetitive inhibition Investigations of phenol-based HSD17B13 inhibitors indicate an uncompetitive mode of inhibition against NAD+. References Enzyme kinetics Enzyme inhibitors
Uncompetitive inhibition
[ "Chemistry" ]
1,401
[ "Chemical kinetics", "Enzyme kinetics" ]
3,942,941
https://en.wikipedia.org/wiki/Method%20of%20averaging
In mathematics, more specifically in dynamical systems, the method of averaging (also called averaging theory) exploits systems containing time-scales separation: a fast oscillation versus a slow drift. It suggests that we perform an averaging over a given amount of time in order to iron out the fast oscillations and observe the qualitative behavior from the resulting dynamics. The approximated solution holds under finite time inversely proportional to the parameter denoting the slow time scale. It turns out to be a customary problem where there exists the trade off between how good is the approximated solution balanced by how much time it holds to be close to the original solution. More precisely, the system has the following form of a phase space variable The fast oscillation is given by versus a slow drift of . The averaging method yields an autonomous dynamical system which approximates the solution curves of inside a connected and compact region of the phase space and over time of . Under the validity of this averaging technique, the asymptotic behavior of the original system is captured by the dynamical equation for . In this way, qualitative methods for autonomous dynamical systems may be employed to analyze the equilibria and more complex structures, such as slow manifold and invariant manifolds, as well as their stability in the phase space of the averaged system. In addition, in a physical application it might be reasonable or natural to replace a mathematical model, which is given in the form of the differential equation for , with the corresponding averaged system , in order to use the averaged system to make a prediction and then test the prediction against the results of a physical experiment. The averaging method has a long history, which is deeply rooted in perturbation problems that arose in celestial mechanics (see, for example in ). First example Consider a perturbed logistic growth and the averaged equation The purpose of the method of averaging is to tell us the qualitative behavior of the vector field when we average it over a period of time. It guarantees that the solution approximates for times Exceptionally: in this example the approximation is even better, it is valid for all times. We present it in a section below. Definitions We assume the vector field to be of differentiability class with (or even we will only say smooth), which we will denote . We expand this time-dependent vector field in a Taylor series (in powers of ) with remainder . We introduce the following notation: where is the -th derivative with . As we are concerned with averaging problems, in general is zero, so it turns out that we will be interested in vector fields given by Besides, we define the following initial value problem to be in the standard form: Theorem: averaging in the periodic case Consider for every connected and bounded and every there exist and such that the original system (a non-autonomous dynamical system) given by has solution , where is periodic with period and both with bounded on bounded sets. Then there exists a constant such that the solution of the averaged system (autonomous dynamical system) is is for and . Remarks There are two approximations in this what is called first approximation estimate: reduction to the average of the vector field and negligence of terms. Uniformity with respect to the initial condition : if we vary this affects the estimation of and . The proof and discussion of this can be found in J. Murdock's book. 
Reduction of regularity: there is a more general form of this theorem which requires only to be Lipschitz and continuous. It is a more recent proof and can be seen in Sanders et al.. The theorem statement presented here is due to the proof framework proposed by Krylov-Bogoliubov which is based on an introduction of a near-identity transformation. The advantage of this method is the extension to more general settings such as infinite-dimensional systems - partial differential equation or delay differential equations. J. Hale presents generalizations to almost periodic vector-fields. Strategy of the proof Krylov-Bogoliubov realized that the slow dynamics of the system determines the leading order of the asymptotic solution. In order to proof it, they proposed a near-identity transformation, which turned out to be a change of coordinates with its own time-scale transforming the original system to the averaged one. Sketch of the proof Determination of a near-identity transformation: the smooth mapping where is assumed to be regular enough and periodic. The proposed change of coordinates is given by . Choose an appropriate solving the homological equation of the averaging theory: . Change of coordinates carries the original system to Estimation of error due to truncation and comparison to the original variable. Non-autonomous class of systems: more examples Along the history of the averaging technique, there is class of system extensively studied which give us meaningful examples we will discuss below. The class of system is given by: where is smooth. This system is similar to a linear system with a small nonlinear perturbation given by : differing from the standard form. Hence there is a necessity to perform a transformation to make it in the standard form explicitly. We are able to change coordinates using variation of constants method. We look at the unperturbed system, i.e. , given by which has the fundamental solution corresponding to a rotation. Then the time-dependent change of coordinates is where is the coordinates respective to the standard form. If we take the time derivative in both sides and invert the fundamental matrix we obtain Remarks The same can be done to time-dependent linear parts. Although the fundamental solution may be non-trivial to write down explicitly, the procedure is similar. See Sanders et al. for further details. If the eigenvalues of are not all purely imaginary this is called hyperbolicity condition. For this occasion, the perturbation equation may present some serious problems even whether is bounded, since the solution grows exponentially fast. However, qualitatively, we may be able to know the asymptotic solution, such as Hartman-Grobman results and more. Occasionally, polar coordinates may yield standard forms that are simpler to analyze. Consider , which determines the initial condition and the system If we may apply averaging so long as a neighborhood of the origin is excluded (since the polar coordinates fail): where the averaged system is Example: Misleading averaging results The method contains some assumptions and restrictions. These limitations play important role when we average the original equation which is not into the standard form, and we can discuss counterexample of it. The following example in order to discourage this hurried averaging: where we put following the previous notation. This systems corresponds to a damped harmonic oscillator where the damping term oscillates between and . 
Averaging the friction term over one cycle of yields the equation: The solution is which the convergence rate to the origin is . The averaged system obtained from the standard form yields: which in the rectangular coordinate shows explicitly that indeed the rate of convergence to the origin is differing from the previous crude averaged system: Example: Van der Pol Equation Van der Pol was concerned with obtaining approximate solutions for equations of the type where following the previous notation. This system is often called the Van der Pol oscillator. Applying periodic averaging to this nonlinear oscillator provides qualitative knowledge of the phase space without solving the system explicitly. The averaged system is and we can analyze the fixed points and their stability. There is an unstable fixed point at the origin and a stable limit cycle represented by . The existence of such stable limit-cycle can be stated as a theorem. Theorem (Existence of a periodic orbit): If is a hyperbolic fixed point of Then there exists such that for all , has a unique hyperbolic periodic orbit of the same stability type as . The proof can be found at Guckenheimer and Holmes, Sanders et al. and for the angle case in Chicone. Example: Restricting the time interval The average theorem assumes existence of a connected and bounded region which affects the time interval of the result validity. The following example points it out. Consider the where . The averaged system consists of which under this initial condition indicates that the original solution behaves like where it holds on a bounded region over . Damped Pendulum Consider a damped pendulum whose point of suspension is vibrated vertically by a small amplitude, high frequency signal (this is usually known as dithering). The equation of motion for such a pendulum is given by where describes the motion of the suspension point, describes the damping of the pendulum, and is the angle made by the pendulum with the vertical. The phase space form of this equation is given by where we have introduced the variable and written the system as an autonomous, first-order system in -space. Suppose that the angular frequency of the vertical vibrations, , is much greater than the natural frequency of the pendulum, . Suppose also that the amplitude of the vertical vibrations, , is much less than the length of the pendulum. The pendulum's trajectory in phase space will trace out a spiral around a curve , moving along at the slow rate but moving around it at the fast rate . The radius of the spiral around will be small and proportional to . The average behaviour of the trajectory, over a timescale much larger than , will be to follow the curve . Extension error estimates Average technique for initial value problems has been treated up to now with an validity error estimates of order . However, there are circumstances where the estimates can be extended for further times, even the case for all times. Below we deal with a system containing an asymptotically stable fixed point. Such situation recapitulates what is illustrated in Figure 1. Theorem (Eckhaus /Sanchez-Palencia ) Consider the initial value problem Suppose exists and contains an asymptotically stable fixed point in the linear approximation. Moreover, is continuously differentiable with respect to in and has a domain of attraction . For any compact and for all with in the general case and in the periodic case. References Dynamical systems
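The Van der Pol discussion above can be checked numerically: for small ε the averaged amplitude equation predicts a stable limit cycle of radius 2. A minimal sketch (SciPy, with an assumed small ε and integration time chosen only for illustration):

# Sketch: integrate the Van der Pol oscillator x'' - eps*(1 - x^2)*x' + x = 0
# for small eps and compare the long-time amplitude with the averaged
# prediction of a stable limit cycle at radius 2.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05

def vdp(t, state):
    x, v = state
    return [v, eps * (1 - x**2) * v - x]

sol = solve_ivp(vdp, (0.0, 400.0), [0.5, 0.0], max_step=0.05)

x, v = sol.y
amplitude = np.sqrt(x**2 + v**2)          # slowly varying radius in the (x, x') plane
print(amplitude[-200:].mean())            # approaches ~2.0, as averaging predicts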
Method of averaging
[ "Physics", "Mathematics" ]
2,029
[ "Mechanics", "Dynamical systems" ]
3,943,236
https://en.wikipedia.org/wiki/Synaptic%20pharmacology
Synaptic pharmacology is the study of drugs that act on the synapses. It deals with the composition, uses, and effects of drugs that may enhance (receptor) or diminish (blocker) activity at the synapse, which is the junction across which a nerve impulse passes from an axon terminal to a neuron, muscle cell, or gland cell. A partial list of pharmacological agents that act at synapses follows. References Neuropharmacology
Synaptic pharmacology
[ "Chemistry" ]
105
[ "Pharmacology", "Pharmacology stubs", "Neuropharmacology", "Medicinal chemistry stubs" ]
21,620,243
https://en.wikipedia.org/wiki/Affinity%20electrophoresis
Affinity electrophoresis is a general name for many analytical methods used in biochemistry and biotechnology. Both qualitative and quantitative information may be obtained through affinity electrophoresis. Cross electrophoresis, the first affinity electrophoresis method, was created by Nakamura et al. Enzyme-substrate complexes have been detected using cross electrophoresis. The methods include the so-called electrophoretic mobility shift assay, charge shift electrophoresis and affinity capillary electrophoresis. The methods are based on changes in the electrophoretic pattern of molecules (mainly macromolecules) through biospecific interaction or complex formation. The interaction or binding of a molecule, charged or uncharged, will normally change the electrophoretic properties of a molecule. Membrane proteins may be identified by a shift in mobility induced by a charged detergent. Nucleic acids or nucleic acid fragments may be characterized by their affinity to other molecules. The methods have been used for estimation of binding constants, as for instance in lectin affinity electrophoresis or characterization of molecules with specific features like glycan content or ligand binding. For enzymes and other ligand-binding proteins, one-dimensional electrophoresis similar to counter electrophoresis or to "rocket immunoelectrophoresis", affinity electrophoresis may be used as an alternative quantification of the protein. Some of the methods are similar to affinity chromatography by use of immobilized ligands. Types and methods Currently, there is ongoing research in developing new ways of utilizing the knowledge already associated with affinity electrophoresis to improve its functionality and speed, as well as attempts to improve already established methods and tailor them towards performing specific tasks. Agarose gel electrophoresis A type of electrophoretic mobility shift assay (AMSA), agarose gel electrophoresis is used to separate protein-bound amino acid complexes from free amino acids. Using a low voltage (~10 V/cm) to minimize the risk for heat damage, electricity is run across an agarose gel. When dissolved in a hot buffered solution (50 to 55 degrees Celsius), it produces a viscous solution, but when cooled, it solidifies as a gel. Serum proteins, hemoglobin, nucleic acids, polymerase chain reaction products, etc. are all separated using this method. Agarose's fixed sulfate groups can cause enhanced electroendosmosis, which lowers band resolution. Utilizing ultrapure agarose gel with little sulfate content can stop this. Rapid agarose gel electrophoresis This technique utilizes a high voltage () with a 0.5× Tris-borate buffer run across an agarose gel. This method differs from the traditional agarose gel electrophoresis by utilizing a higher voltage to facilitate a shorter run time as well as yield a higher band resolution. Other factors included in developing the technique of rapid agarose gel electrophoresis are gel thickness, and the percentage of agarose within the gel. Boronate affinity electrophoresis Boronate affinity electrophoresis utilizes boronic acid infused acrylamide gels to purify NAD-RNA. This purification allows for researchers to easily measure the kinetic activity of NAD-RNA decapping enzymes. 
Affinity capillary electrophoresis Affinity capillary electrophoresis (ACE) refers to a number of techniques which rely on specific and nonspecific binding interactions to facilitate separation and detection through a formulary approach in accordance with the theory of electromigration. Using the intermolecular interactions between molecules occurring in free solution or mobilized onto a solid support, ACE allows for the separation and quantitation of analyte concentrations and binding and dissociation constants between molecules. As affinity probes in CAE, fluorophore-labeled compounds with affinities for the target molecules are employed. With ACE, scientists hope to develop strong binding drug candidates, understand and measure enzymatic activity, and characterize the charges on proteins. Affinity capillary electrophoresis can be divided into three distinct techniques: non-equilibrium electrophoresis of equilibrated sample mixtures, dynamic equilibrium ACE, and affinity-based ACE. Nonequilibrium electrophoresis of equilibrated sample mixtures is generally used in the separation and study of binding interactions of large proteins and involves combining both the analyte and its receptor molecule in a premixed sample. These receptor molecules often take the form of affinity probes consisting of fluorophore-labeled molecules that will bind to target molecules that are mixed with the sample being tested. This mixture, and its subsequent complexes, are then separated through capillary electrophoresis. Because the original mixture of analyte and receptor molecule were bound together in an equilibrium, the slow dissociation of these two bound molecules during the electrophoretic experiment will result in their separation and a subsequent shift in equilibrium towards further dissociation. The characteristic smear pattern produced by the slow release of the analyte from the complex during the experiment can be used to calculate the dissociation constant of the complex. Dynamic equilibrium ACE involves the combination of the analyte found in the sample and its receptor molecule found in the buffered solution in the capillary tube so that binding and separation only occur in the instrument. It is assumed for dynamic equilibrium affinity capillary electrophoresis that ligand-receptor binding occurs rapidly when the analyte and buffer are mixed. Binding constants are generally derived from this technique based upon the peak migration shift of the receptor which is dependent upon the concentration of the analyte in the sample. Affinity-based capillary electrophoresis, also known as capillary electroaffinity chromatography (CEC), involves the binding of analyte in sample to an immobilized receptor molecule on the capillary wall, microbeads, or microchannels. CEC offers the highest separation efficacy of all three ACE techniques as non-matrixed sample components are washed away and the ligand then be released and analyzed.   Affinity capillary electrophoresis takes the advantages of capillary electrophoresis and applies them to the study of protein interactions. ACE is advantageous because it has a high separation efficiency, has a shorter analysis time, can be run at physiological pH, and involves low consumption of ligand/molecules. In addition, the composition of the protein of interest does not have to be known in order to run ACE studies. The main disadvantage, though, is that it does not give much stoichiometric information about the reaction being studied. 
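In dynamic equilibrium ACE, the binding constant is typically extracted from how the observed migration of the receptor shifts with ligand concentration. The following hedged sketch fits a simple 1:1 binding isotherm to synthetic mobility-shift data; the model form, parameter values and helper names are illustrative assumptions, not a prescription from the article.

# Sketch: estimating a dissociation constant K_d from mobility-shift data,
# assuming fast 1:1 binding so the observed mobility is the population-weighted average
#   mu_obs = mu_free + (mu_complex - mu_free) * [L] / (K_d + [L]).
# All numbers below are synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

mu_free, mu_complex, Kd_true = 2.0e-4, 1.2e-4, 50.0   # mobilities in cm^2/(V*s); K_d in uM

def mu_obs(L, Kd):
    return mu_free + (mu_complex - mu_free) * L / (Kd + L)

L = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0, 400.0])   # ligand concentrations (uM)
rng = np.random.default_rng(0)
measured = mu_obs(L, Kd_true) * (1 + 0.01 * rng.standard_normal(L.size))  # 1% noise

Kd_fit, _ = curve_fit(mu_obs, L, measured, p0=[10.0])
print(f"fitted K_d = {Kd_fit[0]:.1f} uM (true value {Kd_true})")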
Affinity-trap polyacrylamide gel electrophoresis Affinity-trap polyacrylamide gel electrophoresis (PAGE) has become one of the most popular methods of protein separation. This is not only due to its separation qualities, but also because it can be used in conjunction with a variety of other analytic methods, such as mass spectrometry, and western blotting. In addition to helping isolate and purify proteins from biological samples, AT-PAGE is anticipated to be helpful in analyses of variations in the expression of particular proteins as well as in investigations of posttranslational modifications of proteins. This method utilizes a two-step approach. First, a protein sample is run through a polyacrylamide gel using electrophoresis. Then, the sample is transferred to a different polyacrylamide gel (the affinity-trap gel) where affinity probes are immobilized. The proteins that do not have affinity for the affinity probes pass through the affinity-trap gel, and proteins with affinity for the probes will be "trapped" by the immobile affinity probes. These trapped proteins are then visualized and identified using mass spectrometry after in-gel digestion. Phosphate affinity electrophoresis Phosphate affinity electrophoresis utilizes an affinity probe which consists of a molecule that binds specifically to divalent phosphate ions in neutral aqueous solution, known as a "Phos-Tag". This methods also utilizes a separation gel made of an acrylamide-pendent Phos-Tag monomer that is copolymerized. Phosphorylated proteins migrate slowly in the gel compared to non-phosphorylated proteins. This technique gives the researcher the ability to observe the differences in the phosphorylation states of any given protein. This technique allows for the detection of distinct bands even in protein molecules that have the same amount of phosphorylated amino acid residues but are phosphorylated at different amino acid locations. See also Immunoelectrophoresis References External links Comprehensive texts edited by Niels H. Axelsen in Scandinavian Journal of Immunology, 1975 Volume 4 Supplement Electrophoresis Molecular biology Protein methods Laboratory techniques Protein–protein interaction assays
Affinity electrophoresis
[ "Chemistry", "Biology" ]
1,859
[ "Biochemistry methods", "Protein–protein interaction assays", "Instrumental analysis", "Protein methods", "Protein biochemistry", "Biochemical separation processes", "Molecular biology techniques", "nan", "Molecular biology", "Biochemistry", "Electrophoresis" ]
21,622,707
https://en.wikipedia.org/wiki/Pneurop
PNEUROP is the European Association of manufacturers of compressors, vacuum pumps, pneumatic tools and allied equipment, represented by their national associations. History The compressed air trade associations of France, Germany and the UK formed a trading organization in 1958. In 1960, more countries joined and it became PNEUROP. PNEUROP celebrated its formal 50th anniversary on 6 November 2008 by organizing a conference in Brussels with its programme and presentations by the different committees. Function PNEUROP speaks on behalf of its members in European and international forums concerning the harmonisation of technical, normative and legislative developments in the field of compressors, vacuum pumps, pneumatic tools and allied equipment. The association is cooperating with and contributing to several ISO committees including TC 112 (Vacuum Technology) and TC 118 (Compressor). PNEUROP is committed to develop energy efficiency and environmental protection among its member organizations and members, as explained in a recent official statement. Pneurop supports the overall sustainable energy policy objectives of the European Union and its member companies actively promote energy efficiency as part of their daily business activity. Pneurop is registered under the European Union 'Transparency Register' - ID number: 67236492080-88 "Air Everywhere" The PNEUROP General Assembly officially approved the launch of a new project entitled "Pneurop Green Challenge 2020". Through this project, manufacturers, represented by their national associations, wish to draw attention to the importance of their equipment which is vital in reaching Europe's 2020 goals in terms of energy savings, development of renewable energy and a reduction in emissions. Compressors and vacuum pumps are used not only in projects for storage of CO2 but also in the manufacture of photovoltaic cells and in the transport and storage of hydrogen. These examples are part of the vast list of products and applications which professionals would like to make better known to the general public. This determination to develop and promote these technologies of the future is the consequence of the industry's long-held commitment to reduce the environmental impact of the different production phases, to develop an ecologically sound approach to product design, and to encourage recycling of products. Sustainability projects and low carbon initiatives need compressed air, gas and vacuum equipment. This is formalized under a signature and a logo "Air Everywhere". Ecodesign Pneurop is an active stakeholder of the Ecodesign Preparatory Study on Electric motor systems/compressors (Ener Lot 31) Structure PNEUROP's office is located in BluePoint in Brussels. The work is achieved through product network : Compressors Tools Vacuum technology Pressure equipment Air treatment Process compressors PNEUROP is an international brand/trade name that was registered in 2007 with its logo. Member organizations Orgalime FMMI Agoria Technology Industries of Finland Profluid VDMA - Compressors, Compressed Air and Vacuum Technology Teknikforetagen Swissmem Association of Machine Manufacturers MÝB British Compressed Air Society Limited - BCAS ISO International Organization for Standardization ISO References External links Pneurop website Vacuum pumps Pneumatics Trade associations based in Belgium Pan-European trade and professional organizations Organizations established in 1960
Pneurop
[ "Physics", "Engineering" ]
649
[ "Vacuum pumps", "Vacuum systems", "Vacuum", "Matter" ]
35,901,925
https://en.wikipedia.org/wiki/Reeves%20AN/MSQ-77%20Bomb%20Directing%20Central
The Reeves AN/MSQ-77 Bomb Directing Central, Radar (nickname "Miscue 77") was a United States Air Force automatic tracking radar/computer system for command guidance of aircraft. It was often used during Vietnam War bomb runs at nighttime and during bad weather. Developed from the Reeves AN/MSQ-35, the AN/MSQ-77 reversed the process of Radar Bomb Scoring by continually estimating the bomb impact point before bomb release with a vacuum tube ballistic computer. Unlike "Course Directing Central" systems which guided aircraft to a predetermined release point, the AN/MSQ-77 algorithm continuously predicted bomb impact points during the radar track while the AN/MSQ-77's control commands adjusted the aircraft course. A close air support regulation prohibited AN/MSQ-77 Combat Skyspot bombing within of friendly forces unless authorized by a Forward Air Controller, and "on several occasions" strikes were as close as . Post-war the MSQ-77 was used on US and other training ranges for Radar Bomb Scoring (RBS). The AN/MSQ-77 was also periodically used for post-Vietnam commanding of bombers during simulated ground directed bombing to maintain aircrew and radar crew GDB proficiency (RBS could be used to score the simulated GDB mission). Most AN/MSQ-77s were replaced by solid-state equipment near the end of the Cold War. History Ground radar systems for automated guidance of aircraft to a predetermined point (e.g., for bomb release using a bombsight or avionics radar) included the July 1951 AN/MPQ-14 Radar Course Directing Central. By 1954, the MARC (Matador Airborne Radio Control) used the AN/MSQ-1A for missile guidance to the terminal dive point, and SAGE GCI provided computer-controlled guidance of aircraft to continuously computed interception points (1958 AN/FSQ-7 Bomarc missile guidance and the later Ground to Air Data Link Subsystem for fighters). Despite the availability of solid-state military guidance computers in 1961, planning for a USAF vacuum-tube trajectory computer/radar system began in early 1965. In October 1965, F-100s tested the AN/MSQ-77 at Matagorda Island General Bombing and Gunnery Range on the Texas Gulf Coast. In March 1966, AN/MSQ-77 operations using the "reverse MSQ method" began and continued through August 1973 for guiding B-52s and tactical fighters and bombers ("chiefly flown by F-100's"). By March 1967, 15,000 Skyspot sorties had been flown, and raids controlled by AN/MSQ-77s included those of Operation Menu from Bien Hoa Air Base, Operation Niagara, and Operation Arc Light. Additional AN/MSQ-77 missions included those with MC-130 Commando Vault aircraft to clear landing zones and at least 1 helicopter evacuation of wounded on August 13, 1966. Commando Club To allow command guidance bombing of Hanoi and Haiphong targets out of range of the initial Skyspot AN/MSQ-77 sites, the "1st CEVG began "Combat Keel" tests using F-4s guided by an MSQ-77 on the USS Thomas J. Gary" in the Gulf of Tonkin during late 1967 after the March 1967 "Combat Target" task force recommended a closer site. By 1 November 1967, the USAF Heavy Green operation had prepared a Laos mountaintop site and installed an MSQ-77 variant in rugged shelters without trailer frames, wheels, etc. for helicopter transport. Although the central's range was limited by the UHF radio reliability for A/C commands during the bomb run, "Commando Club" used a relay aircraft to retransmit communications between Lima Site 85 and the bomber. 
LS-85's operations ended with the 1968 Battle of Lima Site 85 defeat by sappers after North Vietnam had correlated bombings were occurring during LS-85 transmissions (the site's central and other buildings were destroyed by a later U.S. air raid.) Additional casualties of AN/MSQ-77 personnel included 1 killed in an enemy rocket attack and 6 Skyspot personnel killed in a 1966 ambush on a survey mission. Following -77 modifications in 1968, subsequent changes included a solid-state digital printer for RBS ("Digital Data System") and implementation of a USAF suggestion for RBS to use a late-1970s programmable calculator to supersede the Bomb Trajectory Group, eliminating alignment procedures for its amplifiers. In 1989, remains of an F-4C Weapon System Officer shot down during a November 10, 1967, AN/MSQ-77 bomb run were recovered in Southeast Asia. Developed from the AN/MSQ-77 and also used in Vietnam was the monopulse India-band Reeves AN/TSQ-96 Bomb Directing Central with a solid state Univac 1219B ballistic computer (Mark 152 fire control computer), and the AN/MSQ-77/96 systems for GDB were replaced by the US Dynamics AN/TPQ-43 Radar Bomb Scoring Set ("Seek Score"). There were 5 MSQ-77s at Nellis Air Force Base in 1994, and the "MSQ-77 or equivalent" was still listed in 2005 as support equipment for airdrops from Ground Radar Aerial Delivery System (GRADS) aircraft. The AN/MSQ-77 antenna at the "Combat Skyspot Memorial" on Andersen Air Force Base was destroyed by a typhoon . Locations Initial AN/MSQ-77 sites were the production plant Reeves-Ely had built in 1958 at Roosevelt Field on East Gate Blvd in Garden City, New York; and the Matagorda Island test site also used for "Busy Skyspot" training of Vietnam crews (moved to Bergstrom AFB in 1970). Deployment sites were the Vietnam War operating locations, the wartime site at the Nellis Range, and post-war CONUS RBS and overseas sites (e.g., Korea). The last AN/MSQ-77 locations (e.g., at museums after retirements) included the Ellsworth Air Force Base Museum (near the Antelope Butte, Belle Fourche, Conner, & Horman RBS sites) and: Serial Number 1: ... Serial Number ? Detachment 7, !CEVG Ashland, Maine Serial number 7: Detachment 12, 1CEVG Hawthorne, Nevada from June 1982 until site decommissioning in August 1986 and relocation to Detachment 17, 1CEVG Havre, Montana. Upon decommissioning of Havre, MT Serial Number 7 went to Detachment 18, Forsyth, Montana for SAC Bomb-Comp 1987 and then to Detachment 20, 1CEVG Conrad, Montana in early 1988. Was later modified by replacing the Cassegrain antenna with a Fresnel lens antenna, as was used on the MSQ-46. After the antenna (and associated equipment) modification, S/N 7 was used by the HQ 1CEVG "Bug" team to validate new site locations. Serial number 10: Eighth Air Force Museum (Barksdale AFB) after use at Hawthorne Bomb Plot (), Guam, and Vietnam War OL 24 Served as "Secondary" radar at Mobile Duty Location (MDL) 35 Climax, KS from Jan - Jun 1982 Serial Number 11: ... Serial Number 12: TBD after use at Nakhon Phanom RTAFB (NKP) Vietnam War OL-27 (BROMO), and initial testing at Bryan Field, Texas (1st AN/TSQ-81: variant of AN/MSQ-77) Serial Number 13: destroyed Lima Site 85 variant previously used at Vietnam War OL-23 and tested at Bryan Field Serial Number 21: Used as "Primary" radar at Mobile Duty Location (MDL) 35 Climax, KS from Jan - Jun 1982. MSQ-77 S/N 10 served as "Secondary" for the ORI/BUY NONE missions at MDL 35 during that timeframe. 
Equipment and functions In addition to the communication and maintenance van, other AN/MSQ-77 trailers were the radar van with roof-mounted Cassegrain antenna, "control and plotting van, two diesel generator vans, [and] an administrative and supply van", which were emplaced as a military installation at the surveyed site. The primary modification for the AN/MSQ-77 was the control equipment for aircraft guidance (ballistic computer, guidance/release circuitry, and UHF command equipment). The central also had an added beacon tracking capability used when the aircraft had a receiver/transmitter (e.g., Motorola SST-181 X Band Beacon Transponder) to increase the range, so the radar site could be located farther from the hostile region of bombing targets. Beacon track upgrades included radar circuitry to switch the heterodyne receiver to demodulate the transponder frequency, compensation for the transponder delay, and modification of the central's plotting board circuitry to allow display at increased ranges. The plots were of tracks calculated by the computer's Aircraft Coordinates and Plotting Group, which converted radar spherical data to plotting board cartesian coordinates (non-inertial east, north, up coordinate system) using sine/cosine voltages and radar-estimated range, respectively from the Antenna Group (azimuth/elevation resolvers) and from the Track Range Computer. Additional A/C Coordinates amplifiers computed the velocity components (not plotted) which, along with the track position components, were provided as initial bomb conditions to the ballistic computer (Bomb Trajectory Group). Bomb Trajectory Group The Bomb Trajectory Group (BTG) was the AN/MSQ-77's analog ballistic computer, using 3-dimensional double integration to continually predict the bomb impact point from an aircraft track during a bomb run. The Cartesian aircraft data were propagated by the BTG mathematical modeling, which included aerodynamics for different bombs, Earth "curvature and Coriolis corrections", and vacuum tube integrating amplifiers. The integration was based on the varying aircraft position and velocity prior to the bomb release, so, as with the use of the Norden bombsight analog computer in World War II, a nearly steady bomb run was required for the AN/MSQ-77 to provide sufficient bombing accuracy. As in the 1950s Nike missile guidance system(s), electro-mechanical servos controlled sine/cosine resolvers in a feedback loop for computing the simulated bomb's horizontal velocity and, along with the drop rate, the simulated bomb's airspeed and dive angle ("Pitch Servo"). Likewise, a "Z servo" allowed the Air Resistance Circuits to adjust for altitude-varying air density, and the drag aerodynamics were vectorized by a servo operating potentiometers to pick off 3 bomb-specific deceleration voltages based on each cartesian velocity voltage. Ground directed bombing The AN/MSQ-77 radar track began after the aircraft (A/C) arrived near the Initial Point (IP) on a heading toward the target. When the computer's groundspeed and elevation rate servos had stabilized to the A/C cartesian velocity from the differentiating amplifiers, an operator placed the central into "computer track" to provide rate-aided tracking signals to the radar.
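To make the BTG's computation concrete, the following is a minimal digital sketch of the 3-dimensional double integration it performed with analog integrating amplifiers. The point-mass model, drag constant, and function names here are illustrative assumptions for exposition, not the central's actual aerodynamic models or calibration data:

```python
import math

def predict_impact(ac_pos, ac_vel, target_alt, drag_k=2e-4, g=9.81, dt=0.01):
    """Double-integrate a point-mass bomb trajectory in east/north/up
    coordinates from the aircraft state until the simulated bomb descends
    to the target altitude; returns the predicted (east, north) impact."""
    x, y, z = ac_pos     # simulated bomb release point (m)
    vx, vy, vz = ac_vel  # aircraft velocity at release (m/s)
    while z > target_alt:
        speed = math.sqrt(vx * vx + vy * vy + vz * vz)
        # Drag opposes each velocity component, roughly as the BTG's servo
        # picked off three bomb-specific deceleration voltages.
        ax = -drag_k * speed * vx
        ay = -drag_k * speed * vy
        az = -drag_k * speed * vz - g
        vx += ax * dt; vy += ay * dt; vz += az * dt
        x += vx * dt; y += vy * dt; z += vz * dt
    return (x, y)

# Level flight due east at 140 m/s, 6,000 m above a sea-level target:
print(predict_impact((0.0, 0.0, 6000.0), (140.0, 0.0, 0.0), 0.0))
```

Holding the integrated displacement deltas once the simulated bomb reaches the target height, and re-summing them with later aircraft positions, gives the moving path of simulated bomb impact points described next.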
With the computer track and the central having target position, A/C heading, & bomb type information; and with the Bomb Trajectory Group's servos tracking the bomb-in-aircraft course and pitch, the operator then activated the BTG integrators for the computer simulation to begin integrating a bomb trajectory from the A/C coordinates at that integration start point. Acceleration voltages from the BTG dynamic models were double-integrated by the 6 computer amplifiers which generated 3 voltages for the simulated bomb displacement (altitude, north, & east deltas) which were summed to the A/C position (simulated bomb release point, BRP). Use of the continually-changing current A/C position as the simulated BRP ensured a more accurate Earth Curvature Correction (ECC) was generated for the simulated bomb's horizontal range from the radar. When the simulated bomb's altitude (simulated BRP altitude - integrated altitude delta + altitude ECC) equalled the target height, the integration automatically stopped, and the integrated displacements were held as constant altitude, north, and east delta voltages. Subsequent summing of more current simulated bomb release points (A/C bomb run positions after the integration ended) with the integrator deltas generated a path of simulated bomb impact (SBI) points that moved relative to the A/C position throughout the remainder of the bomb run. The latest SBI was the AN/MSQ-77's best estimate of the impact position if bomb release was from the current A/C position: The AN/MSQ-77 control algorithm continually commanded the A/C so the BTG simulated bomb impact point, which was plotted separately from the A/C track, would move toward the target. While the A/C was being guided, an AN/MSQ-77 bomb release algorithm used a model for the future path of simulated bomb impact points to predict the nearest impact to the target (a No-Go condition aborted before effecting an outlying bomb release). Instead of releasing from the A/C position corresponding to the nearest predicted impact point, the AN/MSQ-77 began the bomb release sequence just prior, which accounted for the delay in generating the radio command, in transmitting the command, and in the A/C effecting the mechanical release. The delay time was based on calibration testing of the AN/MSQ-77 with A/C bomb release circuitry (e.g., mean bomb release time for salvo drops from B-52s). Accuracy Although the 1967 Commando Club missions against North Vietnam by the 7th Air Force were temporarily suspended due to successful enemy defenses on November 18, the AN/MSQ-77 variant at LS-85 had effected a direct hit (zero miss distance) as well as a miss—its Commando Club CEP through November 16 for "14 runs was 867 feet". The suspension period for modifying attack tactics was used to reduce GDB errors of LS-85, since other Skyspot sites had been more accurate. AN/MSQ-77 errors included the typical automatic tracking radar errors such as the antenna lag due to the conical scan tracking, Track Range Computer error, any inaccuracy of the A/C transponder delay value used by the central, and the range offset of the A/C transponder antenna from the actual position of the bomb release point(s) on the A/C (particularly negligible when the radar was tracking from the side of the A/C). The AN/MSQ-77 compensation for antenna lag during rate-aided computer track used a telescopic CCTV system with operator's joystick to aim the antenna axis toward the A/C (e.g., bomb bay section of the fuselage). 
Additional AN/MSQ-77 errors were in the bomb trajectory algorithm (e.g., different simulation rates for each of 6 integrating amplifiers) and in the bomb release algorithm. See also List of radars List of military electronics of the United States References Computer systems of the United States Air Force Radars of the United States Air Force 1965 establishments in the United States 1965 in military history Computer-related introductions in 1965 1990 in military history Analog computers Aviation ground support equipment Ballistics Ground radars Military equipment of the Vietnam War Military equipment introduced in the 1960s Military electronics of the United States Aerial warfare ground equipment
Reeves AN/MSQ-77 Bomb Directing Central
[ "Physics" ]
3,238
[ "Applied and interdisciplinary physics", "Ballistics" ]
2,113,111
https://en.wikipedia.org/wiki/PerkinElmer
PerkinElmer, Inc., previously styled Perkin-Elmer, is an American global corporation that was founded in 1937 and originally focused on precision optics. Over the years it went into and out of several different businesses via acquisitions and divestitures; these included defense products, semiconductors, computer systems, and others. By the 21st century, PerkinElmer was focused on the business areas of diagnostics, life science research, food, environmental and industrial testing. Its capabilities include detection, imaging, informatics, and service. It produced analytical instruments, genetic testing and diagnostic tools, medical imaging components, software, instruments, and consumables for multiple end markets. PerkinElmer was part of the S&P 500 Index and operated in 190 countries. Over its history, PerkinElmer has been split in two twice. In 1999, PerkinElmer merged with EG&G, with the ongoing Analytical Instruments Division of Perkin-Elmer keeping that name, while the life sciences division of the company became the separate PE Corporation. In 2022, a split of PerkinElmer resulted in one part, comprising the applied, food and enterprise services businesses, being sold to the private equity firm New Mountain Capital for $2.45 billion and thus no longer being public, but keeping the PerkinElmer name. The other part, comprising the life sciences and diagnostics businesses, remained public but required a new name, which in 2023 was announced as Revvity, Inc. History Founding Richard Perkin was attending the Pratt Institute in Brooklyn to study chemical engineering, but left after a year to try his hand on Wall Street. Still interested in the sciences, he gave public lectures on various topics. Charles Elmer ran a firm that supplied court reporters and was nearing retirement when he attended one of Perkin's lectures on astronomy, held at the Brooklyn Institute of Arts and Sciences. The two struck up a friendship over their shared interest in astronomy, and eventually came up with the idea of starting a firm to produce precision optics. Perkin raised US$15,000 from his relatives, while Elmer added US$5,000, and the firm was initially set up as a partnership on 19 April 1937. Initially, they worked from a small office in Manhattan, but soon opened a production facility in Jersey City. They incorporated the growing firm on 13 December 1939. A further move to Glenbrook in Connecticut in 1941 was quickly followed by another move to Norwalk, Connecticut, where the company remained until 2000. The outbreak of World War II led to significant expansion as the company produced optics for range finders, bombsights, and reconnaissance systems. This work led to the U.S. Navy awarding them the first "E" for Excellence award in 1942. Perkin-Elmer retained a strong presence in the military field through the 1960s. At the same time it was significantly involved with OAO-3, a 36-inch ultraviolet space telescope, and with Skylab, and its major contribution to the Apollo program was the sensor that saved the astronauts during the Apollo 13 failure. The company was a primary supplier of the optical systems used in many reconnaissance platforms, first in aircraft and high-altitude balloons, and then in reconnaissance satellites. A significant advance was 1955's Transverse Panoramic Camera, which took images on wide frames that provided single-frame coverage from horizon to horizon from an aircraft flying at 40,000 ft altitude. 
Such systems remained a major part of the company's income, capped by the installation of laser retroreflectors on the Moon as part of the Apollo 11 mission. Elmer died at age 83 in 1954, and the company began trading shares over the counter. The company was listed on the New York Stock Exchange on 13 December 1960. Perkin remained as president and CEO until June 1961, when Robert Lewis, previously of Argus Camera and Sylvania Electric Products, took over these roles. Perkin remained the chairman of the board until his death in 1969. Semiconductor manufacturing In 1967, the U.S. Air Force asked Perkin-Elmer to produce an all-optical "masking" system for semiconductor fabrication. Previous systems used a pattern, the "mask", which was pressed onto the surface of the silicon wafer as part of the photolithography process. Small bits of dirt or photoresist would stick to the mask and ruin the patterning for subsequent chips, and it was not uncommon for the vast majority of the chips from a given wafer to malfunction. The Air Force, which by the late 1960s was highly reliant on integrated circuits, desired a more reliable system. Perkin-Elmer responded with the Microprojector, which was essentially a large photocopier system. The mask was placed in a holder and never touched the surface of the chip. Instead, the image was projected onto the surface. Making this work required a complex 16-element lens system that focussed a narrow range of wavelengths of light onto the mask. The remainder of the light from the 1,000 watt mercury-vapor lamp was filtered out. Harold Hemstreet was convinced that the concept could be simplified, and Abe Offner began the development of a system using mirrors instead of lenses, which did not suffer from the multispectral focussing problems of lenses. The result of this research was the Projection Scanning Aligner, or Micralign, which made chip making an assembly-line task and improved the number of working chips from perhaps 10% to 70% overnight. Chip prices plummeted as a result, with examples like the MOS 6502 selling for about US$20 while the previous generation of designs like the Motorola 6800 sold for around US$250. The Micralign was so successful that Perkin-Elmer became the largest single vendor in the chip space within three years. In spite of this success, the company was largely a has-been by the 1980s due to its late response to the introduction of the stepping aligner, which allowed a single small mask to be stepped across the wafer, rather than requiring a single large mask covering the entire wafer. The company never regained its lead, and sold the division to The Silicon Valley Group. Lab equipment In the early 1990s, the company partnered with Cetus Corporation (and later Hoffmann-La Roche) to pioneer the polymerase chain reaction (PCR) equipment industry. An analytical-instruments business was also operated from 1954 to 2001 in Germany, by the Bodenseewerk Perkin-Elmer GmbH located in Überlingen at Lake Constance, and in England (Perkin Elmer Ltd) at Beaconsfield in Buckinghamshire. Computer Systems Division Perkin-Elmer was involved in computer manufacture for a time. The Perkin-Elmer Computer Systems Division was formed through the purchase of Interdata, Inc., an independent computer manufacturer, in 1973–1974 for some US$63 million. This merger made Perkin-Elmer's annual sales rise to over US$200 million. This was also known as Perkin-Elmer's Data Systems Group. The 32-bit computers were very similar to an IBM System/370, but ran the OS/32MT operating system. 
The Computer Systems Division had a large presence in Monmouth County, New Jersey, with some 1,700 staff making it one of the county's largest private employers. Its plant in Oceanport had 800 employees alone. By the early-to-mid-1980s the computing group had sales of $259 million; while profitable, it tended to have reduced visibility within the computing industry due to being owned by a diversified parent. The Wollongong Group provided the commercial version of the Unix port to the Interdata 7/32 hardware, known as Edition 7 Unix. The port was originally done by the University of Wollongong in New South Wales, Australia, and was the first UNIX port to hardware other than the Digital Equipment Corporation PDP family. By 1982, the Wollongong Group Edition 7 Unix and Programmer's Workbench (PWB) were available on models such as the Perkin-Elmer 3210 and 3240 minicomputers. In 1985, the computing division of Perkin-Elmer was spun off as Concurrent Computer Corporation, with the goal of giving it and its parallel processing product a clearer identification within the computer industry. At first, the new company was a wholly owned subsidiary of Perkin-Elmer, but with the intention of putting a minority ownership in the company up for a public stock sale. This was done in February 1986, with Perkin-Elmer retaining an 82 percent stake in Concurrent. In 1988, there was a merger between Concurrent Computer Corporation and MASSCOMP; as part of the deal, Perkin-Elmer's share in Concurrent was bought out. At that point, Perkin-Elmer said they had completed their multi-year process of exiting the computer market, allowing them to focus on their primary business segments. 1999 Modern PerkinElmer traces its history back to a merger between divisions of what had been two S&P 500 companies, EG&G Inc. (formerly ) of Wellesley, Massachusetts and Perkin-Elmer (formerly ) of Norwalk, Connecticut. On May 28, 1999, the non-government side of EG&G Inc. purchased the Analytical Instruments Division of Perkin-Elmer, its traditional business segment, for US$425 million, also assuming the Perkin-Elmer name and forming the new PerkinElmer company, with new officers and a new board of directors. At the time, EG&G made products for diverse industries including automotive, medical, aerospace and photography. The old Perkin-Elmer board of directors and officers remained at that reorganized company under its new name, PE Corporation. It had been the Life Sciences division of Perkin-Elmer, and its two component tracking stock business groups, Celera Genomics () and PE Biosystems (formerly ), were centrally involved in the highest profile biotechnology events of the decade: the intense race against the Human Genome Project consortium, which then resulted in the genomics segment of the technology bubble. Perkin-Elmer purchased the Boston operations of NEN Life Sciences in 2001. Recently In 1992, the company merged with Applied Biosystems. In 1997 they merged with PerSeptive Biosystems. On July 14, 1999, the new analytical instruments maker PerkinElmer cut 350 jobs, or 12%, in its cost reduction reorganization. In 2006, PerkinElmer sold off the Fluid Sciences division for approximately US$400 million; the aim of the selloff was to increase the strategic focus on its higher-growth health sciences and photonic markets. Following on from the selloff, a number of small businesses were acquired, including Spectral Genomics, Improvision, Evotec-Technologies, Euroscreen, ViaCell, and Avalon Instruments. 
The brand "Evotec-Technologies" remains the property of Evotec, the former owner company. PerkinElmer had a license to use the brand until the end of year 2007. PerkinElmer has continued to expand its interest in medicine with the acquisitions of clinical laboratories, In July 2006, it acquired NTD Labs located on Long Island, New York. The laboratory specializes in prenatal screening during the first trimester of pregnancy. In 2007, it purchased ViaCell, Inc. for US$300 million, which included its offices in Boston and cord blood storage facility in Kentucky near Cincinnati. The company was renamed ViaCord. In 2001 Perkin Elmer acquired Packard Bioscience Inc from its majority shareholder, Dick McKernen. This acquisition also came with Agincourt Technologies Inc and consolidated Perkin Elmer's position in laboratory robotics, in particular, liquid handling robots which were to prove essential for the high-throughput sequencing needed for the Human Genome Project. In March 2008, PerkinElmer purchased Pediatrix Screening (formerly Neo Gen Screening), a laboratory located in Bridgeville, Pennsylvania specializing in screening newborns for various inborn errors of metabolism such as phenylketonuria, hypothyroidism, and sickle-cell disease. It renamed the laboratory PerkinElmer Genetics, Inc. In May 2011, PerkinElmer announced the signature of an agreement to acquire CambridgeSoft, and the successful acquisition of ArtusLabs. In September 2011, PerkinElmer bought Caliper Life Sciences for US$600 million. In December 2014 PerkinElmer acquired Perten Instruments for US$266 million to expand in food testing. In January 2016, PerkinElmer acquired Swedish firm Vanadis Diagnostics. In February 2016 PerkinElmer acquired Delta Instruments. In January 2017, the company announced it would acquire the Indian in vitro diagnostic company, Tulip Diagnostics. In May 2017, the company acquired Euroimmun Medical Laboratory Diagnostics for approximately US$1.3 billion. In 2018, the company acquired Australian biotech company, RHS Ltd., Chinese manufacturer of analytical instruments, Shanghai Spectrum Instruments Co. Ltd., and France-based company Cisbio Bioassays, which specializes in diagnostics and drug discovery solutions. In November 2020, PerkinElmer announced it would acquire Horizon Discovery Group for around US$383 million. In March 2021, PerkinElmer announced that the company has completed its acquisition of Oxford Immunotec Global PLC (Oxford Immunotec). In May of the same year, the business announced it would purchase Nexcelom Bioscience for $260 million and Immunodiagnostic Systems Holdings PLC for $155 million. In June the company announced it would acquire SIRION Biotech, a specialist in viral vector gene delivery methods. In July the business announced it would acquire BioLegend for $5.25 billion. Acquisition history PerkinElmer (Est. 1935, modern company formed from EG&G Inc. purchase of Perkin-Elmer, Analytical Instruments Division) Applied Biosystems (Merged 1992) PerSeptive Biosystems. (Acq. 1997) Spectral Genomics Improvision Evotec-Technologies Euroscreen ViaCell Avalon Instruments Packard Bioscience Inc (Acq. 2003) NTD Labs (Acq. 2006) ViaCell, Inc. (Acq. 2007) Pediatrix Screening (Acq. 2008) CambridgeSoft (Acq. 2011) ArtusLabs (Acq. 2011) Caliper Life Sciences (Acq. 2011) Zymark (Acq. 2003) NovaScreen Biosciences Corporation (Acq. 2005) Xenogen Corporation (Acq. 2006) Xenogen Biosciences Cambridge Research & Instrumentation Inc. (Acq. 2010) Xenogen Corporation (Acq. 
Perten Instruments (Acq. 2014) Vanadis Diagnostics (Acq. 2016) Delta Instruments (Acq. 2016) Tulip Diagnostics (Acq. 2017) Euroimmun Medical Laboratory Diagnostics (Acq. 2017) RHS Ltd (Acq. 2018) Shanghai Spectrum Instruments Co. Ltd (Acq. 2018) Cisbio Bioassays (Acq. 2018) Horizon Discovery Group (Acq. 2020) Oxford Immunotec Global PLC (Acq. 2021) Nexcelom Bioscience (Acq. 2021) Immunodiagnostic Systems Holdings PLC (Acq. 2021) SIRION Biotech (Acq. 2021) BioLegend (Acq. 2021) BioLegend Japan KK BioLegend UK Ltd BioLegend GmbH Programs Hubble optics project Perkin-Elmer's Danbury Optical System unit was commissioned to build the optical components of the Hubble Space Telescope. The construction of the main mirror began in 1979 and was completed in 1981. The polishing process ran over budget and behind schedule, producing significant friction with NASA. Due to a miscalibrated null corrector, the primary mirror was also found to have a significant spherical aberration after reaching orbit on STS-31. Perkin-Elmer's own calculations and measurements revealed the primary mirror's surface discrepancies, but the company chose to withhold that data from NASA. A NASA investigation heavily criticized Perkin-Elmer for management failings, disregarding written quality guidelines, and ignoring test data that revealed the miscalibration. Corrective optics were installed on the telescope during the first Hubble service and repair mission, STS-61. The correction package, the Corrective Optics Space Telescope Axial Replacement (COSTAR), placed corrective optics in the light paths of the axial instruments and replaced an existing instrument; the aberration of the primary mirror itself remained uncorrected. The company agreed to pay US$15 million, essentially forgoing its fees for polishing the mirror, to avoid a threatened liability lawsuit under the False Claims Act by the Federal government. Hughes Aircraft, which acquired the Danbury Optical System unit one month after the launch of the telescope, paid US$10 million. The Justice Department asserted that the companies should have known about the flawed testing. The trade group Aerospace Industries Association protested when concerns were raised in the aerospace industry that companies might be held liable for failed equipment. KH-9 Hexagon Perkin-Elmer built the optical systems for the KH-9 Hexagon series of spy satellites at a facility in Danbury, Connecticut. References External links PerkinElmer Announces New Business Alignment Focused on Improving Human and Environmental Health SEC filings for PerkinElmer, Inc. Photographs from the Perkin-Elmer-Applera Collection Science History Institute Digital Collections (Extensive collection of print photographs and slides depicting the staff, facilities, and instrumentation of the Perkin-Elmer Corporation, predominantly dating from the 1960s and 1970s) Companies formerly listed on the New York Stock Exchange Design companies established in 1931 Technology companies of the United States Life science companies based in Massachusetts Instrument-making corporations Companies based in Waltham, Massachusetts Technology companies established in 1931 Defunct computer companies of the United States Defunct computer hardware companies Optics manufacturing companies of the United States
PerkinElmer
[ "Biology" ]
3,679
[ "Life science companies based in Massachusetts", "Life sciences industry" ]
2,113,121
https://en.wikipedia.org/wiki/Steady%20state%20%28chemistry%29
In chemistry, a steady state is a situation in which all state variables are constant in spite of ongoing processes that strive to change them. For an entire system to be at steady state, i.e. for all state variables of a system to be constant, there must be a flow through the system (compare mass balance). A simple example of such a system is the case of a bathtub with the tap running but with the drain unplugged: after a certain time, the water flows in and out at the same rate, so the water level (the state variable Volume) stabilizes and the system is in a steady state. The steady state concept is different from chemical equilibrium. Although both may create a situation where a concentration does not change, in a system at chemical equilibrium, the net reaction rate is zero (products transform into reactants at the same rate as reactants transform into products), while no such limitation exists in the steady state concept. Indeed, there does not have to be a reaction at all for a steady state to develop. The term steady state is also used to describe a situation where some, but not all, of the state variables of a system are constant. For such a steady state to develop, the system does not have to be a flow system. Therefore, such a steady state can develop in a closed system where a series of chemical reactions take place. Literature in chemical kinetics usually refers to this case, calling it steady state approximation. In simple systems the steady state is approached by state variables gradually decreasing or increasing until they reach their steady state value. In more complex systems state variables might fluctuate around the theoretical steady state either forever (a limit cycle) or gradually coming closer and closer. It theoretically takes an infinite time to reach steady state, just as it takes an infinite time to reach chemical equilibrium. Both concepts are, however, frequently used approximations because of the substantial mathematical simplifications these concepts offer. Whether or not these concepts can be used depends on the error the underlying assumptions introduce. So, even though a steady state, from a theoretical point of view, requires constant drivers (e.g. constant inflow rate and constant concentrations in the inflow), the error introduced by assuming steady state for a system with non-constant drivers may be negligible if the steady state is approached fast enough (relatively speaking). Steady state approximation in chemical kinetics The steady state approximation, occasionally called the stationary-state approximation or Bodenstein's quasi-steady state approximation, involves setting the rate of change of a reaction intermediate in a reaction mechanism equal to zero so that the kinetic equations can be simplified by setting the rate of formation of the intermediate equal to the rate of its destruction. In practice it is sufficient that the rates of formation and destruction are approximately equal, which means that the net rate of variation of the concentration of the intermediate is small compared to the formation and destruction, and the concentration of the intermediate varies only slowly, similar to the reactants and products (see the equations and the green traces in the figures below). Its use facilitates the resolution of the differential equations that arise from rate equations, which lack an analytical solution for most mechanisms beyond the simplest ones. The steady state approximation is applied, for example, in Michaelis-Menten kinetics. 
As an example, the steady state approximation will be applied to two consecutive, irreversible, homogeneous first order reactions in a closed system. (For heterogeneous reactions, see reactions on surfaces.) This model corresponds, for example, to a series of nuclear decays A → B → C. If the rate constants for the reactions A → B and B → C are k1 and k2 respectively, combining the rate equations with a mass balance for the system yields three coupled differential equations:
Reaction rates
For species A: d[A]/dt = −k1[A]
For species B: d[B]/dt = k1[A] − k2[B]. Here the first (positive) term represents the formation of B by the first step A → B, whose rate depends on the initial reactant A. The second (negative) term represents the consumption of B by the second step B → C, whose rate depends on B as the reactant in that step.
For species C: d[C]/dt = k2[B]
Analytical solutions
The analytical solutions for these equations (supposing that initial concentrations of every substance except for A are zero) are:
[A] = [A]0 e^(−k1 t)
[B] = [A]0 (k1/(k2 − k1)) (e^(−k1 t) − e^(−k2 t))   (for k1 ≠ k2)
[C] = [A]0 (1 + (k1 e^(−k2 t) − k2 e^(−k1 t))/(k2 − k1))
Steady state
If the steady state approximation is applied, then the derivative of the concentration of the intermediate is set to zero: d[B]/dt = k1[A] − k2[B] ≈ 0. This reduces the second differential equation to an algebraic equation which is much easier to solve. Therefore k1[A] − k2[B] = 0, so that [B] ≈ (k1/k2)[A], and [C] follows from the mass balance [C] = [A]0 − [A] − [B]. Since [B] = (k1/k2)[A] = (k1/k2)[A]0 e^(−k1 t), the concentration of the reaction intermediate B changes with the same time constant as [A] and is not in a steady state in that strict sense.
Validity
The analytical and approximated solutions should now be compared in order to decide when it is valid to use the steady state approximation. The analytical solution transforms into the approximate one when k2 ≫ k1, because then e^(−k2 t) ≪ e^(−k1 t) and k2 − k1 ≈ k2. Therefore, it is valid to apply the steady state approximation only if the second reaction is much faster than the first (a common criterion is k2/k1 > 10), because that means that the intermediate forms slowly and reacts readily, so its concentration stays low. The graphs show concentrations of A (red), B (green) and C (blue) in two cases, calculated from the analytical solution. When the first reaction is faster it is not valid to assume that the variation of [B] is very small, because [B] is neither low nor close to constant: first A transforms into B rapidly and B accumulates because it disappears slowly. As the concentration of A decreases its rate of transformation decreases; at the same time the rate of reaction of B into C increases as more B is formed, so a maximum is reached at t = ln(k1/k2)/(k1 − k2). From then on the concentration of B decreases. When the second reaction is faster, after a short induction period during which the steady state approximation does not apply, the concentration of B remains low (and more or less constant in an absolute sense) because its rates of formation and disappearance are almost equal and the steady state approximation can be used. The equilibrium approximation can sometimes be used in chemical kinetics to yield similar results to the steady state approximation. It consists in assuming that the intermediate arrives rapidly at chemical equilibrium with the reactants. For example, Michaelis-Menten kinetics can be derived assuming equilibrium instead of steady state. Normally the requirements for applying the steady state approximation are laxer: the concentration of the intermediate is only required to be low and more or less constant (as seen, this has to do only with the rates at which it appears and disappears), but it is not required to be at equilibrium. 
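A quick numerical check of this comparison is straightforward; the rate constants below are illustrative choices with k2/k1 = 100, and the script simply evaluates the analytical solution against the steady state estimate [B] ≈ (k1/k2)[A]:

```python
import math

def concentrations(t, a0=1.0, k1=0.1, k2=10.0):
    """Analytical solution of A -> B -> C (first order, [B]0 = [C]0 = 0)."""
    a = a0 * math.exp(-k1 * t)
    b = a0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    return a, b, a0 - a - b

for t in (0.5, 5.0, 20.0):
    a, b, _ = concentrations(t)
    b_ss = (0.1 / 10.0) * a  # steady state approximation for [B]
    print(f"t = {t:5.1f}   [B] exact = {b:.6f}   [B] steady state = {b_ss:.6f}")
```

After an induction period of a few multiples of 1/k2, the two values agree to within about 1 percent (the residual offset is the factor k1/(k2 − k1) versus k1/k2), which is exactly the regime in which the approximation is useful.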
Example
The reaction H2 + Br2 → 2HBr has the following mechanism:
Br2 → 2Br (k1, initiation)
Br + H2 → HBr + H (k2, propagation)
H + Br2 → HBr + Br (k3, propagation)
H + HBr → H2 + Br (k4, inhibition)
2Br → Br2 (k5, termination)
The rates of each species are:
d[HBr]/dt = k2[Br][H2] + k3[H][Br2] − k4[H][HBr]
d[H]/dt = k2[Br][H2] − k3[H][Br2] − k4[H][HBr]
d[Br]/dt = 2k1[Br2] − k2[Br][H2] + k3[H][Br2] + k4[H][HBr] − 2k5[Br]²
These equations cannot be solved analytically, because each one contains concentrations that change with time. For example, the first equation contains the concentrations of [Br], [H2] and [H], which depend on time, as can be seen in their respective equations. To solve the rate equations the steady state approximation can be used. The reactants of this reaction are H2 and Br2, the intermediates are H and Br, and the product is HBr. For solving the equations, the rates of the intermediates are set to 0 in the steady state approximation: d[H]/dt = 0 and d[Br]/dt = 0. From the reaction rate of H, k2[Br][H2] = k3[H][Br2] + k4[H][HBr], so the reaction rate of Br can be simplified: d[Br]/dt = 2k1[Br2] − 2k5[Br]² = 0, giving [Br] = (k1[Br2]/k5)^(1/2). The reaction rate of HBr can also be simplified, changing k2[Br][H2] to k3[H][Br2] + k4[H][HBr], since both values are equal, giving d[HBr]/dt = 2k3[H][Br2]. The concentration of H from equation 1 can be isolated: [H] = k2[Br][H2]/(k3[Br2] + k4[HBr]) = k2(k1/k5)^(1/2)[H2][Br2]^(1/2)/(k3[Br2] + k4[HBr]). The concentration of this intermediate is small and changes with time like the concentrations of reactants and product. It is inserted into the last differential equation to give d[HBr]/dt = 2k2k3(k1/k5)^(1/2)[H2][Br2]^(3/2)/(k3[Br2] + k4[HBr]). Simplifying the equation leads to d[HBr]/dt = 2k2(k1/k5)^(1/2)[H2][Br2]^(1/2)/(1 + (k4/k3)[HBr]/[Br2]). The experimentally observed rate is d[HBr]/dt = k[H2][Br2]^(1/2)/(1 + k′[HBr]/[Br2]). The experimental rate law is the same as the rate obtained with the steady state approximation, if k is 2k2(k1/k5)^(1/2) and k′ is k4/k3. See also Lindemann mechanism Reaction progress kinetic analysis Steady state (biochemistry) Notes and references External links https://chem.libretexts.org/Core/Physical_and_Theoretical_Chemistry/Kinetics/Reaction_Mechanisms/Steady-State_Approximation Chemical kinetics Physical chemistry
Steady state (chemistry)
[ "Physics", "Chemistry" ]
1,612
[ "Chemical reaction engineering", "Applied and interdisciplinary physics", "nan", "Chemical kinetics", "Physical chemistry" ]
2,114,155
https://en.wikipedia.org/wiki/Duoprism
In geometry of 4 dimensions or higher, a double prism or duoprism is a polytope resulting from the Cartesian product of two polytopes, each of two dimensions or higher. The Cartesian product of an n-polytope and an m-polytope is an (n+m)-polytope, where n and m are dimensions of 2 (polygon) or higher. The lowest-dimensional duoprisms exist in 4-dimensional space as 4-polytopes, being the Cartesian product of two polygons in 2-dimensional Euclidean space. More precisely, a duoprism is the set of points {(x, y, z, w) : (x, y) ∈ P1, (z, w) ∈ P2}, where P1 and P2 are the sets of the points contained in the respective polygons. Such a duoprism is convex if both bases are convex, and is bounded by prismatic cells. Nomenclature Four-dimensional duoprisms are considered to be prismatic 4-polytopes. A duoprism constructed from two regular polygons of the same edge length is a uniform duoprism. A duoprism made of n-polygons and m-polygons is named by prefixing 'duoprism' with the names of the base polygons, for example: a triangular-pentagonal duoprism is the Cartesian product of a triangle and a pentagon. An alternative, more concise way of specifying a particular duoprism is by prefixing with numbers denoting the base polygons, for example: 3,5-duoprism for the triangular-pentagonal duoprism. Other alternative names: q-gonal-p-gonal prism q-gonal-p-gonal double prism q-gonal-p-gonal hyperprism The term duoprism was coined by George Olshevsky, shortened from double prism. John Horton Conway proposed a similar name proprism for product prism, a Cartesian product of two or more polytopes of dimension at least two. The duoprisms are proprisms formed from exactly two polytopes. Example 16-16 duoprism Geometry of 4-dimensional duoprisms A 4-dimensional uniform duoprism is created by the product of a regular n-sided polygon and a regular m-sided polygon with the same edge length. It is bounded by n m-gonal prisms and m n-gonal prisms. For example, the Cartesian product of a triangle and a hexagon is a duoprism bounded by 6 triangular prisms and 3 hexagonal prisms. When m and n are identical, the resulting duoprism is bounded by 2n identical n-gonal prisms. For example, the Cartesian product of two triangles is a duoprism bounded by 6 triangular prisms. When m and n are identically 4, the resulting duoprism is bounded by 8 square prisms (cubes), and is identical to the tesseract. The m-gonal prisms are attached to each other via their m-gonal faces, and form a closed loop. Similarly, the n-gonal prisms are attached to each other via their n-gonal faces, and form a second loop perpendicular to the first. These two loops are attached to each other via their square faces, and are mutually perpendicular. As m and n approach infinity, the corresponding duoprisms approach the duocylinder. As such, duoprisms are useful as non-quadric approximations of the duocylinder. Nets Perspective projections A cell-centered perspective projection makes a duoprism look like a torus, with two sets of orthogonal cells, p-gonal and q-gonal prisms. The p-q duoprisms are identical to the q-p duoprisms, but look different in these projections because they are projected in the center of different cells. Orthogonal projections Vertex-centered orthogonal projections of p-p duoprisms project into [2n] symmetry for odd degrees, and [n] for even degrees. There are n vertices projected into the center. For 4,4, it represents the A3 Coxeter plane of the tesseract. The 5,5 projection is identical to the 3D rhombic triacontahedron. 
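Because a duoprism is simply a Cartesian product of point sets, its vertex coordinates are easy to generate. The short sketch below (function names are ours) scales the two polygons to a common edge length using the fact that a regular n-gon with edge e has circumradius e/(2 sin(π/n)):

```python
from itertools import product
import math

def polygon(n, r=1.0):
    """Vertices of a regular n-gon with circumradius r."""
    return [(r * math.cos(2 * math.pi * k / n),
             r * math.sin(2 * math.pi * k / n)) for k in range(n)]

def uniform_duoprism(p, q, edge=1.0):
    """Vertices of the p,q-duoprism: the Cartesian product of a regular
    p-gon and a regular q-gon sharing the same edge length."""
    rp = edge / (2 * math.sin(math.pi / p))
    rq = edge / (2 * math.sin(math.pi / q))
    return [(x, y, z, w)
            for (x, y), (z, w) in product(polygon(p, rp), polygon(q, rq))]

verts = uniform_duoprism(3, 5)  # triangular-pentagonal (3,5-)duoprism
print(len(verts))               # 3 * 5 = 15 vertices
```

Setting p = q = 4 reproduces the 16 vertices of a tesseract (in a rotated orientation), matching the special case noted above.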
Related polytopes The regular skew polyhedron {4,4|n} exists in 4-space as the n² square faces of an n-n duoprism, using all 2n² edges and n² vertices. The 2n n-gonal faces can be seen as removed. (Skew polyhedra can be obtained in the same way from an n-m duoprism, but these are not regular.) Duoantiprism Like the antiprisms as alternated prisms, there is a set of 4-dimensional duoantiprisms: 4-polytopes that can be created by an alternation operation applied to a duoprism. The alternated vertices create nonregular tetrahedral cells, except for the special case, the 4-4 duoprism (tesseract), which creates the uniform (and regular) 16-cell. The 16-cell is the only convex uniform duoantiprism. The duoprisms t0,1,2,3{p,2,q} can be alternated into ht0,1,2,3{p,2,q}, the "duoantiprisms", which cannot be made uniform in general. The only convex uniform solution is the trivial case of p=q=2, which is a lower symmetry construction of the tesseract t0,1,2,3{2,2,2}, with its alternation as the 16-cell, s{2}s{2}. The only nonconvex uniform solution is p=5, q=5/3, ht0,1,2,3{5,2,5/3}, constructed from 10 pentagonal antiprisms, 10 pentagrammic crossed-antiprisms, and 50 tetrahedra, known as the great duoantiprism (gudap). Ditetragoltriates Also related are the ditetragoltriates or octagoltriates, formed by taking the octagon (considered to be a ditetragon or a truncated square) to a p-gon. The octagon of a p-gon can be clearly defined if one assumes that the octagon is the convex hull of two perpendicular rectangles; then the p-gonal ditetragoltriate is the convex hull of two p-p duoprisms (where the p-gons are similar but not congruent, having different sizes) in perpendicular orientations. The resulting polychoron is isogonal and has 2p p-gonal prisms and p² rectangular trapezoprisms (a cube with D2d symmetry) but cannot be made uniform. The vertex figure is a triangular bipyramid. Double antiprismoids Like the duoantiprisms as alternated duoprisms, there is a set of p-gonal double antiprismoids created by alternating the 2p-gonal ditetragoltriates, creating p-gonal antiprisms and tetrahedra while reinterpreting the non-corealmic triangular bipyramidal spaces as two tetrahedra. The resulting figure is generally not uniform except for two cases: the grand antiprism and its conjugate, the pentagrammic double antiprismoid (with p = 5 and 5/3 respectively), represented as the alternation of a decagonal or decagrammic ditetragoltriate. The vertex figure is a variant of the sphenocorona. k_22 polytopes The 3-3 duoprism, -122, is first in a dimensional series of uniform polytopes, expressed by Coxeter as the k22 series. The 3-3 duoprism is the vertex figure for the second, the birectified 5-simplex. The fourth figure is a Euclidean honeycomb, 222, and the final is a paracompact hyperbolic honeycomb, 322, with Coxeter group [32,2,3]. Each progressive uniform polytope is constructed from the previous as its vertex figure. See also Polytope and 4-polytope Convex regular 4-polytope Duocylinder Tesseract Notes References Regular Polytopes, H. S. M. Coxeter, Dover Publications, Inc., 1973, New York, p. 124. Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999, (Chapter 5: Regular Skew Polyhedra in three and four dimensions and their topological analogues) Coxeter, H. S. M. Regular Skew Polyhedra in Three and Four Dimensions. Proc. London Math. Soc. 43, 33-62, 1937. John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26) N.W. 
Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966 Uniform 4-polytopes
Duoprism
[ "Physics" ]
1,939
[ "Uniform 4-polytopes", "Uniform polytopes", "Symmetry" ]
2,115,745
https://en.wikipedia.org/wiki/Cadmium%20selenide
Cadmium selenide is an inorganic compound with the formula CdSe. It is a black to red-black solid that is classified as a II-VI semiconductor of the n-type. It is a pigment, but applications are declining because of environmental concerns. Structure Three crystalline forms of CdSe are known, which follow the structures of wurtzite (hexagonal), sphalerite (cubic) and rock salt (cubic). The sphalerite CdSe structure is unstable and converts to the wurtzite form upon moderate heating. The transition starts at about 130 °C, and at 700 °C it completes within a day. The rock-salt structure is only observed under high pressure. Production The production of cadmium selenide has been carried out in two different ways. The preparation of bulk crystalline CdSe is done by the High-Pressure Vertical Bridgman method or High-Pressure Vertical Zone Melting. Cadmium selenide may also be produced in the form of nanoparticles (see CdSe nanoparticles below). Several methods for the production of CdSe nanoparticles have been developed: arrested precipitation in solution, synthesis in structured media, high temperature pyrolysis, sonochemical, and radiolytic methods are just a few. Production of cadmium selenide by arrested precipitation in solution is performed by introducing alkylcadmium and trioctylphosphine selenide (TOPSe) precursors into a heated solvent under controlled conditions. Me2Cd + TOPSe → CdSe + (byproducts) CdSe nanoparticles can be modified by production of two-phase materials with ZnS coatings. The surfaces can be further modified, e.g. with mercaptoacetic acid, to confer solubility. Synthesis in structured environments refers to the production of cadmium selenide in liquid crystal or surfactant solutions. The addition of surfactants to solutions often results in a phase change in the solution, leading to liquid crystallinity. A liquid crystal is similar to a solid crystal in that the solution has long-range translational order. Examples of this ordering are layered alternating sheets of solution and surfactant, micelles, or even a hexagonal arrangement of rods. High temperature pyrolysis synthesis is usually carried out using an aerosol containing a mixture of volatile cadmium and selenium precursors. The precursor aerosol is then carried through a furnace with a carrier gas, such as hydrogen, nitrogen, or argon. In the furnace the precursors react to form CdSe as well as several by-products. CdSe nanoparticles CdSe-derived nanoparticles with sizes below 10 nm exhibit a property known as quantum confinement. Quantum confinement results when the electrons in a material are confined to a very small volume. Quantum confinement is size dependent, meaning the properties of CdSe nanoparticles are tunable based on their size. One type of CdSe nanoparticle is a CdSe quantum dot, in which the confinement splits the continuous energy bands into discrete states. This discretization of energy states results in electronic transitions that vary with quantum dot size. Larger quantum dots have more closely spaced electronic states than smaller quantum dots, which means that the energy required to excite an electron from the HOMO to the LUMO is lower than for the same electronic transition in a smaller quantum dot. This quantum confinement effect can be observed as a red shift in the absorbance spectra of nanocrystals with larger diameters. Quantum confinement effects in quantum dots can also result in fluorescence intermittency, called "blinking." CdSe quantum dots have been implemented in a wide range of applications including solar cells, light emitting diodes, and biofluorescent tagging. 
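The size dependence of the optical gap can be estimated with the Brus effective-mass model, E(r) ≈ Eg + (ħ²π²/2r²)(1/me + 1/mh) − 1.8e²/(4πε0 εr r). The sketch below uses commonly quoted approximate CdSe parameters (bulk gap ≈ 1.74 eV, me ≈ 0.13 m0, mh ≈ 0.45 m0, εr ≈ 10.6); both these values and the model itself are rough assumptions, and the formula overestimates the blue shift for very small dots:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M0 = 9.1093837015e-31    # electron rest mass, kg
Q = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Approximate CdSe literature values (illustrative assumptions)
EG_BULK, ME, MH, EPS_R = 1.74, 0.13, 0.45, 10.6

def brus_gap(radius_nm):
    """Size-dependent band gap (eV) from the Brus effective-mass model."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2 * M0)) * (1/ME + 1/MH) / Q
    coulomb = 1.8 * Q / (4 * math.pi * EPS0 * EPS_R * r)  # one factor of e cancels, so this is in eV
    return EG_BULK + confinement - coulomb

for r_nm in (1.5, 2.0, 3.0, 4.0):
    gap = brus_gap(r_nm)
    print(f"r = {r_nm} nm: gap ≈ {gap:.2f} eV, emission ≈ {1239.84/gap:.0f} nm")
```

The qualitative trend matches the text: smaller dots have larger gaps, shifting absorption and emission toward the blue.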
CdSe-based materials also have potential uses in biomedical imaging. Human tissue is relatively transparent to near infra-red light. By injecting appropriately prepared CdSe nanoparticles into injured tissue, it may be possible to image the tissue in those injured areas. CdSe quantum dots are usually composed of a CdSe core and a ligand shell. Ligands play important roles in the stability and solubility of the nanoparticles. During synthesis, ligands stabilize growth to prevent aggregation and precipitation of the nanocrystals. These capping ligands also affect the quantum dot's electronic and optical properties by passivating surface electronic states. An application that depends on the nature of the surface ligands is the synthesis of CdSe thin films. The density of the ligands on the surface and the length of the ligand chain affect the separation between nanocrystal cores, which in turn influences stacking and conductivity. Understanding the surface structure of CdSe quantum dots, in order to investigate the structure's unique properties and to allow further functionalization for greater synthetic variety, requires a rigorous description of the ligand exchange chemistry on the quantum dot surface. A prevailing belief is that trioctylphosphine oxide (TOPO) or trioctylphosphine (TOP), neutral ligands derived from common precursors used in the synthesis of CdSe dots, cap the surface of CdSe quantum dots. However, results from recent studies challenge this model. Using NMR, quantum dots have been shown to be nonstoichiometric, meaning that the cadmium-to-selenium ratio is not one to one. CdSe dots have excess cadmium cations on the surface that can form bonds with anionic species such as carboxylate chains. The CdSe quantum dot would be charge unbalanced if TOPO or TOP were indeed the only type of ligand bound to the dot. The CdSe ligand shell may contain both X-type ligands, which form covalent bonds with the metal, and L-type ligands, which form dative bonds. It has been shown that these ligands can undergo exchange with other ligands. Examples of X-type ligands that have been studied in the context of CdSe nanocrystal surface chemistry are sulfides and thiocyanates. Examples of L-type ligands that have been studied are amines and phosphines. A ligand exchange reaction in which tributylphosphine ligands were displaced by primary alkylamine ligands on chloride-terminated CdSe dots has been reported. Stoichiometry changes were monitored using proton and phosphorus NMR. Photoluminescence properties were also observed to change with ligand moiety. The amine-bound dots had significantly higher photoluminescent quantum yields than the phosphine-bound dots. Applications CdSe material is transparent to infra-red (IR) light and has seen limited use in photoresistors and in windows for instruments utilizing IR light. The material is also highly luminescent. CdSe is a component of the pigment cadmium orange. CdSe can also serve as the n-type semiconductor layer in photovoltaic cells, and has been used in thin-film transistors (TFTs). Natural occurrence CdSe occurs in nature as the very rare mineral cadmoselite. Safety information Cadmium is a toxic heavy metal and appropriate precautions should be taken when handling it and its compounds. Selenides are toxic in large amounts. Cadmium selenide is a known human carcinogen; medical attention should be sought if it is swallowed, if dust is inhaled, or upon contact with skin or eyes. 
References External links National Pollutant Inventory – Cadmium and compounds Nanotechnology Structures – Quantum Confinement Cadmium compounds Selenides II-VI semiconductors Optical materials Zincblende crystal structure Wurtzite structure type
Cadmium selenide
[ "Physics", "Chemistry" ]
1,549
[ "Inorganic compounds", "Semiconductor materials", "Materials", "II-VI semiconductors", "Optical materials", "Matter" ]
2,116,008
https://en.wikipedia.org/wiki/Microprofessor%20III
Microprofessor III (MPF III), introduced in 1983, was the third branded computer product from Multitech (later renamed Acer) and also (arguably) one of the first Apple IIe clones. Unlike the two earlier computers, its design was influenced by the IBM personal computer. Because of some additional functions in the ROM and different graphics routines, the MPF III was not totally compatible with the original Apple IIe computer. One key feature of the MPF III in some models was its Chinese BASIC, a version of Chinese-localized BASIC based on Applesoft BASIC. The MPF III included an optional Z80 CP/M emulator card, which permitted the computer to switch to the Z80 processor and run programs developed under the CP/M operating system. In 1984 the machine had a retail price of $699 in Australia. The different models in the MPF-III brand were the Multitech MPF-III/311 in NTSC countries (mainly the United States and Canada) and the MPF-III/312 in PAL countries (mainly Australia, Sweden, Spain, Finland, Italy and Singapore). It was also sold in Latin America as the Latindata MPF-III. Technical specifications CPU: MOS Technology 6502, 1 MHz Memory (RAM): 64KB dynamic RAM and 2KB static RAM ROM: 24KB, including MBASIC (MPF-III BASIC), monitor, sound, display and printer programs and drivers Operating system: DOS 3.3 or ProDOS Input/Output: NTSC composite video jack (MPF-III/311), TV RF modulator port, cassette port, printer port, joystick 9-pin D-type port, earphone, and external speaker jacks Expandability: internal slots (6), optional Z80 CP/M emulator card, one external Apple II type card slot Screen display: Text modes: 40×24, 80×24 (with 80-column card) Graphics modes: 40×48 (16 colors), 280×192 (6 colors) Sound: one channel Storage: 2 optional 5.25-inch 140 KB diskette drives Keyboard: 90-key keyboard with numeric keypad See also Microprofessor I (an unrelated Z80 programming education device) Microprofessor II References Acer Inc. computers Home computers Apple II clones Computer-related introductions in 1983
Microprofessor III
[ "Technology" ]
494
[ "Computing stubs", "Computer hardware stubs" ]
2,116,830
https://en.wikipedia.org/wiki/Trajectory%20optimization
Trajectory optimization is the process of designing a trajectory that minimizes (or maximizes) some measure of performance while satisfying a set of constraints. Generally speaking, trajectory optimization is a technique for computing an open-loop solution to an optimal control problem. It is often used for systems where computing the full closed-loop solution is not required, impractical or impossible. If a trajectory optimization problem can be solved at a rate given by the inverse of the Lipschitz constant, then it can be used iteratively to generate a closed-loop solution in the sense of Caratheodory. If only the first step of the trajectory is executed for an infinite-horizon problem, then this is known as Model Predictive Control (MPC). Although the idea of trajectory optimization has been around for hundreds of years (calculus of variations, brachystochrone problem), it only became practical for real-world problems with the advent of the computer. Many of the original applications of trajectory optimization were in the aerospace industry, computing rocket and missile launch trajectories. More recently, trajectory optimization has also been used in a wide variety of industrial process and robotics applications. History Trajectory optimization first showed up in 1697, with the introduction of the Brachystochrone problem: find the shape of a wire such that a bead sliding along it will move between two points in the minimum time. The interesting thing about this problem is that it is optimizing over a curve (the shape of the wire), rather than a single number. The most famous of the solutions was computed using calculus of variations. In the 1950s, the digital computer started to make trajectory optimization practical for solving real-world problems. The first optimal control approaches grew out of the calculus of variations, based on the research of Gilbert Ames Bliss and Bryson in America, and Pontryagin in Russia. Pontryagin's maximum principle is of particular note. These early researchers created the foundation of what we now call indirect methods for trajectory optimization. Much of the early work in trajectory optimization was focused on computing rocket thrust profiles, both in a vacuum and in the atmosphere. This early research discovered many basic principles that are still used today. Another successful application was the climb to altitude trajectories for the early jet aircraft. Because of the high drag associated with the transonic drag region and the low thrust of early jet aircraft, trajectory optimization was the key to maximizing climb to altitude performance. Optimal control based trajectories were responsible for some of the world records. In these situations, the pilot followed a Mach versus altitude schedule based on optimal control solutions. One of the important early problems in trajectory optimization was that of the singular arc, where Pontryagin's maximum principle fails to yield a complete solution. An example of a problem with singular control is the optimization of the thrust of a missile flying at a constant altitude and which is launched at low speed. Here the problem is one of a bang-bang control at maximum possible thrust until the singular arc is reached. Then the solution to the singular control provides a lower variable thrust until burnout. At that point bang-bang control provides that the control or thrust go to its minimum value of zero. 
This solution is the foundation of the boost-sustain rocket motor profile widely used today to maximize missile performance. Applications There are a wide variety of applications for trajectory optimization, primarily in robotics: industry, manipulation, walking, path-planning, and aerospace. It can also be used for modeling and estimation. Robotic manipulators Depending on the configuration, open-chain robotic manipulators require a degree of trajectory optimization. For instance, a robotic arm with 7 joints and 7 links (7-DOF) is a redundant system where one cartesian position of an end-effector can correspond to an infinite number of joint angle positions; this redundancy can be used to optimize a trajectory, for example to avoid any obstacles in the workspace or to minimize the torque in the joints. Quadrotor helicopters Trajectory optimization is often used to compute trajectories for quadrotor helicopters. These applications typically used highly specialized algorithms. One interesting application shown by the U.Penn GRASP Lab is computing a trajectory that allows a quadrotor to fly through a hoop as it is thrown. Another, this time by the ETH Zurich Flying Machine Arena, involves two quadrotors tossing a pole back and forth between them, with it balanced like an inverted pendulum. The problem of computing minimum-energy trajectories for a quadcopter has also been studied recently. Manufacturing Trajectory optimization is used in manufacturing, particularly for controlling chemical processes or computing the desired path for robotic manipulators. Walking robots There are a variety of different applications for trajectory optimization within the field of walking robotics. For example, one paper used trajectory optimization of bipedal gaits on a simple model to show that walking is energetically favorable for moving at a low speed and running is energetically favorable for moving at a high speed. Like in many other applications, trajectory optimization can be used to compute a nominal trajectory, around which a stabilizing controller is built. Trajectory optimization can be applied to detailed motion planning for complex humanoid robots, such as Atlas. Finally, trajectory optimization can be used for path-planning of robots with complicated dynamics constraints, using reduced complexity models. Aerospace For tactical missiles, the flight profiles are determined by the thrust and lift histories. These histories can be controlled by a number of means including such techniques as using an angle of attack command history or an altitude/downrange schedule that the missile must follow. Each combination of missile design factors, desired missile performance, and system constraints results in a new set of optimal control parameters. Terminology Decision variables The set of unknowns to be found using optimization. Trajectory optimization problem A special type of optimization problem where the decision variables are functions, rather than real numbers. Parameter optimization Any optimization problem where the decision variables are real numbers. Nonlinear program A class of constrained parameter optimization where either the objective function or constraints are nonlinear. Indirect method An indirect method for solving a trajectory optimization problem proceeds in three steps: 1) Analytically construct the necessary and sufficient conditions for optimality, 2) Discretize these conditions, constructing a constrained parameter optimization problem, 3) Solve that optimization problem. 
Direct method A direct method for solving a trajectory optimization problem consists of two steps: 1) Discretize the trajectory optimization problem directly, converting it into a constrained parameter optimization problem, 2) Solve that optimization problem. Transcription The process by which a trajectory optimization problem is converted into a parameter optimization problem. This is sometimes referred to as discretization. Transcription methods generally fall into two categories: shooting methods and collocation methods. Shooting method A transcription method that is based on simulation, typically using explicit Runge–Kutta schemes. Collocation method (Simultaneous Method) A transcription method that is based on function approximation, typically using implicit Runge–Kutta schemes. Pseudospectral method (Global Collocation) A transcription method that represents the entire trajectory as a single high-order orthogonal polynomial. Mesh (Grid) After transcription, the formerly continuous trajectory is now represented by a discrete set of points, known as mesh points or grid points. Mesh refinement The process by which the discretization mesh is improved by solving a sequence of trajectory optimization problems. Mesh refinement is performed either by sub-dividing a trajectory segment or by increasing the order of the polynomial representing that segment. Multi-phase trajectory optimization problem Trajectory optimization over a system with hybrid dynamics can be achieved by posing it as a multi-phase trajectory optimization problem. This is done by composing a sequence of standard trajectory optimization problems that are connected using constraints. Trajectory optimization techniques The techniques for solving any optimization problem can be divided into two categories: indirect and direct. An indirect method works by analytically constructing the necessary and sufficient conditions for optimality, which are then solved numerically. A direct method attempts a direct numerical solution by constructing a sequence of continually improving approximations to the optimal solution. The optimal control problem is an infinite-dimensional optimization problem, since the decision variables are functions, rather than real numbers. All solution techniques perform transcription, a process by which the trajectory optimization problem (optimizing over functions) is converted into a constrained parameter optimization problem (optimizing over real numbers). Generally, this constrained parameter optimization problem is a non-linear program, although in special cases it can be reduced to a quadratic program or linear program. Single shooting Single shooting is the simplest type of trajectory optimization technique. The basic idea is similar to how you would aim a cannon: pick a set of parameters for the trajectory, simulate the entire thing, and then check to see if you hit the target. The entire trajectory is represented as a single segment, with a single constraint, known as a defect constraint, requiring that the final state of the simulation matches the desired final state of the system. Single shooting is effective for problems that are either simple or have an extremely good initialization. Both the indirect and direct formulations tend to have difficulties otherwise. Multiple shooting Multiple shooting is a simple extension to single shooting that renders it far more effective. 
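As a rough, self-contained illustration of the single-shooting idea just described (the multiple-shooting refinement is taken up next), the sketch below poses a toy double-integrator problem: the decision variables are a piecewise-constant control sequence, the whole trajectory comes from one forward simulation, and a defect constraint forces the simulated final state onto the target. The dynamics, horizon, cost, and solver settings are illustrative assumptions, not taken from any particular reference.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy problem: a double integrator x'' = u must move from rest at
# position 0 to rest at position 1 in T seconds with minimum control effort.
N, T = 20, 2.0          # number of control intervals, horizon
h = T / N               # step length

def simulate(u):
    """Forward-simulate the whole trajectory from the fixed initial state (single shooting)."""
    pos, vel = 0.0, 0.0
    for uk in u:                     # simple explicit Euler integration of x'' = u
        pos += h * vel
        vel += h * uk
    return np.array([pos, vel])      # final state of the rollout

def effort(u):
    """Objective: integral of u^2, approximated by a Riemann sum."""
    return h * np.sum(np.square(u))

def defect(u):
    """Defect constraint: simulated final state must match the target state [1, 0]."""
    return simulate(u) - np.array([1.0, 0.0])

u0 = np.zeros(N)                     # initial guess: no control at all
sol = minimize(effort, u0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defect}])
print("converged:", sol.success)
print("control sequence:", np.round(sol.x, 3))
```

Because the entire horizon rides on a single rollout, a poor initial guess makes the defect a highly nonlinear function of the controls, which is exactly the difficulty single shooting is known for.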
Rather than representing the entire trajectory as a single simulation (segment), multiple shooting breaks the trajectory into many shorter segments, and a defect constraint is added between each. The result is a large sparse non-linear program, which tends to be easier to solve than the small dense programs produced by single shooting. Direct collocation Direct collocation methods work by approximating the state and control trajectories using polynomial splines. These methods are sometimes referred to as direct transcription. Trapezoidal collocation is a commonly used low-order direct collocation method. The dynamics, path objective, and control are all represented using linear splines, and the dynamics are satisfied using trapezoidal quadrature. Hermite–Simpson collocation is a common medium-order direct collocation method. The state is represented by a cubic Hermite spline, and the dynamics are satisfied using Simpson quadrature. Orthogonal collocation Orthogonal collocation is technically a subset of direct collocation, but the implementation details are so different that it can reasonably be considered its own set of methods. Orthogonal collocation differs from direct collocation in that it typically uses high-order splines, and each segment of the trajectory might be represented by a spline of a different order. The name comes from the use of orthogonal polynomials in the state and control splines. Pseudospectral discretization In pseudospectral discretization the entire trajectory is represented by a collection of basis functions in the time domain (independent variable). The basis functions need not be polynomials. Pseudospectral discretization is also known as spectral collocation. When used to solve a trajectory optimization problem whose solution is smooth, a pseudospectral method will achieve spectral (exponential) convergence. If the trajectory is not smooth, the convergence is still very fast, faster than Runge–Kutta methods. Temporal Finite Elements In 1990 Dewey H. Hodges and Robert R. Bless proposed a weak Hamiltonian finite element method for optimal control problems. The idea was to derive a weak variational form of the first-order necessary conditions for optimality, discretize the time domain into finite intervals, and use a simple zero-order polynomial representation of the states, controls, and adjoints over each interval. Differential dynamic programming Differential dynamic programming is somewhat different from the other techniques described here. In particular, it does not cleanly separate the transcription and the optimization. Instead, it performs a sequence of iterative forward and backward passes along the trajectory. Each forward pass satisfies the system dynamics, and each backward pass satisfies the optimality conditions for control. Eventually, this iteration converges to a trajectory that is both feasible and optimal. Diffusion-based trajectory optimization In contrast to the aforementioned classical methods, generative machine learning methods may be used to generate a desirable trajectory. In particular, diffusion models learn to iteratively reverse a destructive forward process, in which noise is added to data until the data becomes pure noise, by estimating the noise to remove at every time step. Thus, given easily sampled random noise as input, the reverse diffusion process recovers a plausible corresponding noise-free data point. 
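Stepping back briefly from the diffusion-based approach (its details continue below) to the trapezoidal direct collocation described above, the following sketch transcribes the same kind of toy double-integrator problem: the states and controls at every grid point are decision variables, and the dynamics enter as per-interval trapezoidal defect constraints. All problem data and solver choices are assumptions made only for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy problem: double integrator x'' = u, rest-to-rest from 0 to 1 in T seconds.
N, T = 20, 2.0
h = T / N

def unpack(z):
    """Decision vector holds positions, velocities, and controls at the N+1 grid points."""
    pos, vel, u = np.split(z, 3)
    return pos, vel, u

def objective(z):
    _, _, u = unpack(z)
    # Trapezoid-rule approximation of the integral of u^2.
    return h * np.sum(0.5 * (u[:-1]**2 + u[1:]**2))

def defects(z):
    """Trapezoidal collocation: x_{k+1} - x_k = (h/2)(f_k + f_{k+1}) on every interval."""
    pos, vel, u = unpack(z)
    d_pos = pos[1:] - pos[:-1] - 0.5 * h * (vel[:-1] + vel[1:])
    d_vel = vel[1:] - vel[:-1] - 0.5 * h * (u[:-1] + u[1:])
    return np.concatenate([d_pos, d_vel])

def boundary(z):
    pos, vel, _ = unpack(z)
    return np.array([pos[0], vel[0], pos[-1] - 1.0, vel[-1]])

z0 = np.concatenate([np.linspace(0, 1, N + 1),   # straight-line position guess
                     np.full(N + 1, 1.0 / T),    # constant-velocity guess
                     np.zeros(N + 1)])           # zero-control guess
cons = [{"type": "eq", "fun": defects}, {"type": "eq", "fun": boundary}]
sol = minimize(objective, z0, method="SLSQP", constraints=cons)
print("converged:", sol.success, " max defect:", np.abs(defects(sol.x)).max())
```

The resulting nonlinear program is larger than the single-shooting one but sparse, which is one reason collocation transcriptions are a common default in practice.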
Recent methods have parameterized trajectories as matrices of state-action pairs at consecutive time steps and trained a diffusion model to generate such a matrix. To address the issue of controllability of the generated samples, the Diffuser method proposes two techniques to steer the generated sample, thereby reducing the optimization problem to a sampling problem. First, guided diffusion can be used to incorporate a cost (or reward) function into the generation process. For this purpose the gradient of the cost function modifies the mean of the estimated noise at every time step. Second, for motion planning problems in which the start and the end states of the trajectory are known, and the trajectory needs to comply with constraints to find a viable path, an inpainting approach can be used. Similar to the first technique, a prior modifies the distribution of trajectories, which in this case assigns high probability to trajectories satisfying the constraints (e.g. arriving at a state at time step ), and zero probability to all other trajectories. As a result, sampling from this distribution will produce trajectories that satisfy the constraints. Comparison of techniques There are many techniques to choose from when solving a trajectory optimization problem. There is no best method, but some methods might do a better job on specific problems. This section provides a rough understanding of the trade-offs between methods. Indirect vs. direct methods When solving a trajectory optimization problem with an indirect method, you must explicitly construct the adjoint equations and their gradients. This is often difficult to do, but it gives an excellent accuracy metric for the solution. Direct methods are much easier to set up and solve, but do not have a built-in accuracy metric. As a result, direct methods are more widely used, especially in non-critical applications. Indirect methods still have a place in specialized applications, particularly aerospace, where accuracy is critical. One place where indirect methods have particular difficulty is on problems with path inequality constraints. These problems tend to have solutions for which the constraint is partially active. When constructing the adjoint equations for an indirect method, the user must explicitly write down when the constraint is active in the solution, which is difficult to know a priori. One solution is to use a direct method to compute an initial guess, which is then used to construct a multi-phase problem where the constraint is prescribed. The resulting problem can then be solved accurately using an indirect method. Shooting vs. collocation Single shooting methods are best used for problems where the control is very simple (or there is an extremely good initial guess). For example, a satellite mission planning problem where the only control is the magnitude and direction of an initial impulse from the engines. Multiple shooting tends to be good for problems with relatively simple control, but complicated dynamics. Although path constraints can be used, they make the resulting nonlinear program relatively difficult to solve. Direct collocation methods are good for problems where the accuracy of the control and the state are similar. These methods tend to be less accurate than others (due to their low-order), but are particularly robust for problems with difficult path constraints. Orthogonal collocation methods are best for obtaining high-accuracy solutions to problems where the accuracy of the control trajectory is important. 
Some implementations have trouble with path constraints. These methods are particularly good when the solution is smooth. See also Differential game Optimal control Pursuit curve References External links Applied Trajectory Optimization Ballistics Mathematical optimization Aerospace engineering Aerodynamics Pursuit–evasion
Trajectory optimization
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
3,226
[ "Mathematical analysis", "Applied and interdisciplinary physics", "Aerodynamics", "Aerospace engineering", "Mathematical optimization", "Ballistics", "Fluid dynamics" ]
40,057,861
https://en.wikipedia.org/wiki/Systems%20and%20Synthetic%20Biology
Systems and Synthetic Biology is a peer-reviewed scientific journal covering systems and synthetic biology. It was established in 2007 and was published by Springer Science+Business Media. The editors-in-chief were Pawan K. Dhar (University of Kerala) and Ron Weiss (Massachusetts Institute of Technology). The journal's last volume was in 2015. Abstracting and indexing The journal is abstracted and indexed in: References External links Molecular and cellular biology journals Systems biology Synthetic biology English-language journals Hybrid open access journals Quarterly journals Academic journals established in 2007 Springer Science+Business Media academic journals
Systems and Synthetic Biology
[ "Chemistry", "Engineering", "Biology" ]
120
[ "Synthetic biology", "Biological engineering", "Bioinformatics", "Molecular genetics", "Molecular and cellular biology journals", "Molecular biology", "Systems biology" ]
40,058,616
https://en.wikipedia.org/wiki/Sustainable%20Transport%20Award
The Sustainable Transport Award (STA) is presented annually to a city that has shown leadership and vision in the field of sustainable transportation and urban livability in the preceding year. Nominations are accepted from anyone, and winners and honorable mentions are chosen by the Sustainable Transport Award Steering Committee. Since 2005, the award has been given out annually to a city or major jurisdiction that has implemented innovative transportation strategies, especially in several different areas of sustainable transportation. The award rewards cities for improving mobility for residents, reducing transportation greenhouse gas and air pollution emissions, and improving safety and access for bicyclists and pedestrians.The STA shows international interest in cities at the forefront of transportation policy. By highlighting successfully completed programs and emphasizing transferability, the award helps disseminate new ideas and best practices while encouraging cities worldwide to improve their own livability. Noteworthy projects include the construction or expansion of BRT or LRT systems, bike shares or bike lanes, attention to low-income access to transportation, reform of parking or zoning regulations, and linking transportation and development practices (TOD). Process Criteria STAs are awarded to cities that have demonstrated significant progress in using transportation to create a more sustainable and livable city. The Sustainable Transport Award looks for cities working in several of the following policy areas: Improvements to public transportation, such as implementing a new mass transit system (e.g., bus rapid transit), expanding the existing systems to increase accessibility and coverage, or improving customer service. Improvements to non-motorized travel, such as the implementation or expansion of bike share programs and bike lanes, the creation of pedestrian walkways, and improvements to street crossings and sidewalks. Expansion or improvement of public space often includes the creation of open plazas, creating pedestrian-only zones, installing street lamps or trees along sidewalks, and pedestrian safety measures. Implementation of travel demand management programs to reduce private car use can include car-free days or zones, changes to parking requirements or availability, the implementation or expansion of car share systems, congestion charging, and structured tolls and fees. Reducing urban sprawl by linking transportation to development (TOD) can be done through changes to zoning laws and providing incentives to developers. Reduction of transport-related air pollution and greenhouse gas emissions by creating pollution laws, mandating air quality controls, restricting vehicle access, and creating an air advisory system. To be eligible for an STA, cities must have made significant progress in the past year in addressing sustainable transit. Awards are presented for projects implemented in the previous year rather than for planned activities or simply beginning construction. Nominations Cities must be nominated to be considered for the award. Nominations can come from government agencies, including the Mayor's office, NGOs, consultants, academics, or anyone else with a close working knowledge of the city's projects. Applicants are asked to provide program details, impact, significance, outcomes, transferability, and images. 
Steering Committee Final selection of the award recipient and honorable mentions is conducted by a steering committee, composed of experts and organizations working internationally on sustainable transportation. The committee includes the Institute for Transportation and Development Policy (ITDP), World Resources Institute, World Bank, GIZ, Asian Development Bank, Clean Air Asia, ICLEI, and Despacio. The committee looks for projects completed in the previous year that demonstrate innovation and success in improving sustainable transportation. Past winners 2005: Bogotá, Colombia 2006: Seoul, South Korea 2007: Guayaquil, Ecuador 2008: Paris, France and London, United Kingdom 2009: New York City, United States 2010: Ahmedabad, India 2011: Guangzhou, China 2012: Medellín, Colombia and San Francisco, United States 2013: Mexico City, Mexico 2014: Buenos Aires, Argentina 2015: Belo Horizonte, Rio de Janeiro, and São Paulo, Brazil 2016: Yichang, China 2017: Santiago, Chile 2018: Dar es Salaam, Tanzania 2019: Fortaleza, Brazil 2020: Pune, India 2021: Jakarta, Indonesia 2022: Bogotá, Colombia 2023: Paris, France 2024: Tianjin, China References International awards Sustainable transport Urban planning Awards established in 2005 Community awards Environmental awards
Sustainable Transport Award
[ "Physics", "Engineering" ]
832
[ "Physical systems", "Transport", "Sustainable transport", "Urban planning", "Architecture" ]
40,059,683
https://en.wikipedia.org/wiki/C20H24O7
The molecular formula C20H24O7 (molar mass: 376.40 g/mol, exact mass: 376.1522 u) may refer to: Ailanthone Tripdiolide Triptolidenol Molecular formulas
C20H24O7
[ "Physics", "Chemistry" ]
67
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
40,061,096
https://en.wikipedia.org/wiki/Local%20uniformization
In algebraic geometry, local uniformization is a weak form of resolution of singularities, stating that a variety can be desingularized near any valuation, or in other words that the Zariski–Riemann space of the variety is in some sense non-singular. Local uniformization was introduced by , who separated the problem of resolving the singularities of a variety into the problem of local uniformization and the problem of combining the local uniformizations into a global desingularization. Local uniformization of a variety at a valuation of its function field means finding a projective model of the variety such that the center of the valuation is non-singular. It is weaker than resolution of singularities: if there is a resolution of singularities then this is a model such that the center of every valuation is non-singular. proved that if one can show local uniformization of a variety then one can find a finite number of models such that every valuation has a non-singular center on at least one of these models. To complete a proof of resolution of singularities, it is then sufficient to show that one can combine these finitely many models into a single model, but this seems rather hard. (Local uniformization at a valuation does not directly imply resolution at the center of the valuation: roughly speaking, it only implies resolution in a sort of "wedge" near this point, and it seems hard to combine the resolutions of different wedges into a resolution at a point.) proved local uniformization of varieties in any dimension over fields of characteristic 0, and used this to prove resolution of singularities for varieties in characteristic 0 of dimension at most 3. Local uniformization in positive characteristic seems to be much harder. proved local uniformization in all characteristics for surfaces and in characteristics at least 7 for 3-folds, and was able to deduce global resolution of singularities in these cases from this. simplified Abhyankar's long proof. extended Abhyankar's proof of local uniformization of 3-folds to the remaining characteristics 2, 3, and 5. showed that it is possible to find a local uniformization of any valuation after taking a purely inseparable extension of the function field. Local uniformization in positive characteristic for varieties of dimension at least 4 is (as of 2019) an open problem. References (1998 2nd edition) External links Algebraic geometry Singularity theory
Local uniformization
[ "Mathematics" ]
482
[ "Fields of abstract algebra", "Algebraic geometry" ]
40,065,111
https://en.wikipedia.org/wiki/Coupling%20coefficient%20of%20resonators
The coupling coefficient of resonators is a dimensionless value that characterizes the interaction of two resonators. Coupling coefficients are used in resonator filter theory. Resonators may be both electromagnetic and acoustic. Coupling coefficients, together with the resonant frequencies and external quality factors of the resonators, are the generalized parameters of filters. In order to adjust the frequency response of the filter it is sufficient to optimize only these generalized parameters. Evolution of the term This term was first introduced in filter theory by M. Dishal. To some degree it is an analog of the coupling coefficient of coupled inductors. The meaning of this term has been refined many times with progress in the theory of coupled resonators and filters. Later definitions of the coupling coefficient are generalizations or refinements of preceding definitions. Coupling coefficient considered as a positive constant Earlier well-known definitions of the coupling coefficient of resonators are given in the monograph by G. Matthaei et al. Note that these definitions are approximate because they were formulated under the assumption that the coupling between resonators is sufficiently small. The coupling coefficient for the case of two equal resonators is defined by formula (1) where are the frequencies of even and odd coupled oscillations of the unloaded pair of resonators and It is obvious that the coupling coefficient defined by formula (2) is a positive constant that characterizes the interaction of the resonators at the resonant frequency In the case when an appropriate equivalent network, having an impedance or admittance inverter loaded at both ports with resonant one-port networks, may be matched with the pair of coupled resonators with equal resonant frequencies, the coupling coefficient is defined by formula (2) for series-type resonators and by formula (3) for parallel-type resonators. Here are the impedance-inverter and admittance-inverter parameters, are the reactance slope parameters of the first and the second resonant series-type networks at the resonant frequency and are the susceptance slope parameters of the first and the second resonant parallel-type networks. When the resonators are resonant LC-circuits the coupling coefficient, in accordance with (2) and (3), takes the value (4) for circuits with inductive coupling and the value (5) for circuits with capacitive coupling. Here are the inductance and the capacitance of the first circuit, are the inductance and the capacitance of the second circuit, and are the mutual inductance and mutual capacitance. Formulas (4) and (5) have long been known in the theory of electrical networks. They represent the values of the inductive and capacitive coupling coefficients of coupled resonant LC-circuits. Coupling coefficient considered as a constant having a sign A refinement of the approximate formula (1) was carried out in. The exact formula has the form (6) Formulas (4) and (5) were used while deriving this expression. Formula (6) is now universally recognized. It is given in the highly cited monograph by J.-S. Hong. It is seen that the coupling coefficient has a negative value if In accordance with the new definition (6), the value of the inductive coupling coefficient of resonant LC-circuits is expressed by formula (4) as before. It has a positive value when and a negative value when The value of the capacitive coupling coefficient of resonant LC-circuits, by contrast, is always negative. 
In accordance with (6), the formula (5) for the capacitive coupling coefficient of resonant circuits takes a different form (7) Coupling between electromagnetic resonators may be realized by either a magnetic or an electric field. Coupling by the magnetic field is characterized by the inductive coupling coefficient and coupling by the electric field is characterized by the capacitive coupling coefficient Usually the absolute values of and decay monotonically as the distance between the resonators increases. Their decay rates may be different. However, the absolute value of their sum may either decay over the whole distance range or grow over some portion of it. Summation of the inductive and capacitive coupling coefficients is performed by formula (8) This formula is derived from the definition (6) and formulas (4) and (7). Note that the sign of the coupling coefficient itself is of no importance. The frequency response of the filter will not change if the signs of all the coupling coefficients are simultaneously reversed. However, the sign is important when comparing two coupling coefficients, and especially when summing inductive and capacitive coupling coefficients. Coupling coefficient considered as a function of the forced oscillation frequency Two coupled resonators may interact not only at the resonant frequencies. This is evidenced by their ability to transfer the energy of forced oscillations from one resonator to the other. Therefore it would be more accurate to characterize the interaction of the resonators by a continuous function of the forced-oscillation frequency rather than by a set of constants where is the order number of the resonance. It is obvious that the function must meet the condition (9) Besides, the function must become zero at those frequencies where transmission of high-frequency power from one resonator to the other is absent, i.e. it must meet the second condition (10) The transmission zero arises, in particular, in resonant circuits with mixed inductive-capacitive coupling when Its frequency is expressed by formula (11) The definition of the function that generalizes formula (6) and meets the conditions (9) and (10) was formulated using an energy-based approach in. This function is expressed by formula (8) through the frequency-dependent inductive and capacitive coupling coefficients and defined by formulas (12) (13) Here denotes the energy of the high-frequency electromagnetic field stored by both resonators. A bar over denotes the static component of the high-frequency energy, and a dot denotes the amplitude of the oscillating component of the high-frequency energy. The subscript denotes the magnetic part of the high-frequency energy, and the subscript denotes the electric part. The subscripts 11, 12 and 22 denote the parts of the stored energy that are proportional to and where is the complex amplitude of the high-frequency voltage at the first resonator port and is the complex amplitude of the voltage at the second resonator port. Explicit functions of the frequency-dependent inductive and capacitive couplings for a pair of coupled resonant circuits, obtained from (12) and (13), have the forms (14) (15) where are the resonant frequencies of the first and the second circuit perturbed by the couplings. It is seen that the values of these functions at coincide with the constants and defined by formulas (4) and (7). Besides, the function computed by formulas (8), (14) and (15) becomes zero at defined by formula (11). 
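As a small numerical illustration of the frequency-based definitions discussed above, the sketch below extracts a coupling coefficient from the two split (even and odd) resonant frequencies of a coupled pair and checks it against the circuit value M/L for two identical inductively coupled LC circuits. The expression used, k = (f2² − f1²)/(f2² + f1²), is the commonly quoted magnitude form; sign conventions and the exact expression for asynchronously tuned resonators differ between references, and the component values are arbitrary assumptions, so this is a sketch rather than a restatement of the article's numbered formulas.

```python
import numpy as np

def coupling_from_split(f1, f2):
    """Commonly used extraction: |k| from the two split (even/odd) resonant frequencies."""
    f1, f2 = sorted((f1, f2))
    return (f2**2 - f1**2) / (f2**2 + f1**2)

# Assumed example: two identical LC resonators coupled by a mutual inductance M.
L, C, M = 10e-9, 1e-12, 0.8e-9                       # henry, farad, henry (illustrative values)
f_even = 1.0 / (2 * np.pi * np.sqrt((L + M) * C))    # in-phase (even) mode
f_odd  = 1.0 / (2 * np.pi * np.sqrt((L - M) * C))    # anti-phase (odd) mode

print("extracted |k|:", round(coupling_from_split(f_even, f_odd), 4))
print("circuit  M/L :", M / L)                       # the two values agree for this simple model
```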
Coupling coefficients in filter theory Bandpass filters with inline coupling topology The theory of narrow-band microwave bandpass filters having a Chebyshev frequency response is stated in the monograph. In these filters the resonant frequencies of all the resonators are tuned to the passband center frequency Every resonator is coupled with at most two neighboring resonators. Each of the two edge resonators is coupled with one neighboring resonator and with one of the two filter ports. Such a topology of resonator couplings is called an inline one. There is only one path of microwave power transmission from the input port to the output port in filters with an inline coupling topology. The derivation of approximate formulas for the values of the coupling coefficients of neighboring resonators in filters with an inline coupling topology that meet a specified filter frequency response is given in. Here and are the order numbers of the coupled resonators in the filter. The formulas were derived using low-pass prototype filters as well as formulas (2) and (3). The frequency response of the low-pass prototype filters is characterized by a Chebyshev function of the first kind. The formulas were first published in. They have the form (16) where are the normalized prototype element values, is the order of the Chebyshev function, which is equal to the number of resonators, and are the band-edge frequencies. The prototype element values for a specified passband of the filter are computed by formulas (17) if is even, if is odd. Here the following notation is used (18) where is the required passband ripple in dB. Formulas (16) are approximate not only because the approximate definitions (2) and (3) for the coupling coefficients were used. Exact expressions for the coupling coefficients in the prototype filter were obtained in. However, both the former and the refined formulas remain approximate when designing practical filters. The accuracy depends on both the filter structure and the resonator structure. The accuracy improves as the fractional bandwidth narrows. The inaccuracy of formulas (16) and of their refined version is caused by the frequency dispersion of the coupling coefficients, which may vary to a great degree for different structures of resonators and filters. In other words, the optimal values of the coupling coefficients at frequency depend on both the specifications of the required passband and the values of the derivatives That means the exact values of the coefficients ensuring the required passband cannot be known beforehand. They may be established only after filter optimization. Therefore, formulas (16) may be used to determine initial values of the coupling coefficients before optimization of the filter. The approximate formulas (16) also allow one to ascertain a number of universal regularities concerning filters with an inline coupling topology. For example, widening the current filter passband requires an approximately proportional increase of all the coupling coefficients The coefficients are symmetrical with respect to the central resonator or the central pair of resonators, even in filters having unequal characteristic impedances of the transmission lines in the input and output ports. The value of the coefficient decreases monotonically when moving from the external pairs of resonators to the central pair. Real microwave filters with an inline coupling topology, as opposed to their prototypes, may have transmission zeros in their stopbands. Transmission zeros considerably improve filter selectivity. 
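The prototype-based formulas sketched above can be illustrated numerically; the discussion of how these transmission zeros arise continues below. The snippet uses the standard Chebyshev g-value recursion together with the widely quoted narrow-band relations k(i,i+1) ≈ FBW/√(g_i·g_{i+1}) and Qe ≈ g0·g1/FBW. These are textbook forms consistent with, but not copied from, formulas (16)–(18) of the text, and the filter order, ripple, and fractional bandwidth are arbitrary assumptions.

```python
import numpy as np

def chebyshev_g_values(n, ripple_db):
    """Standard element values g0..g_{n+1} of a Chebyshev low-pass prototype."""
    beta = np.log(1.0 / np.tanh(ripple_db / 17.37))
    gamma = np.sinh(beta / (2.0 * n))
    a = [np.sin((2 * i - 1) * np.pi / (2 * n)) for i in range(1, n + 1)]
    b = [gamma**2 + np.sin(i * np.pi / n) ** 2 for i in range(1, n + 1)]
    g = [1.0, 2.0 * a[0] / gamma]
    for i in range(2, n + 1):
        g.append(4.0 * a[i - 2] * a[i - 1] / (b[i - 2] * g[i - 1]))
    g.append(1.0 if n % 2 else 1.0 / np.tanh(beta / 4.0) ** 2)
    return g

n, ripple_db, fbw = 5, 0.1, 0.03        # assumed: 5th order, 0.1 dB ripple, 3% bandwidth
g = chebyshev_g_values(n, ripple_db)
k = [fbw / np.sqrt(g[i] * g[i + 1]) for i in range(1, n)]   # neighbor couplings k12..k45
qe = g[0] * g[1] / fbw                                      # external quality factor

print("g-values:", [round(x, 4) for x in g])
print("k(i,i+1):", [round(x, 4) for x in k])
print("Qe      :", round(qe, 1))
```

The printed coupling coefficients also exhibit the regularities noted above: they are symmetric about the central pair and decrease toward it.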
One of the reasons why the zeros arise is the frequency dispersion of the coupling coefficients for one or more pairs of resonators, expressed in their vanishing at the frequencies of the transmission zeros. Bandpass filters with cross couplings In order to generate transmission zeros in the stopbands and thereby improve filter selectivity, a number of supplementary couplings, in addition to the nearest-neighbor couplings, are often made in the filters. They are called cross couplings. These couplings give rise to several wave paths from the input port to the output port. The amplitudes of waves transmitted along different paths may cancel one another at certain frequencies while summing at the output port. Such cancellation results in transmission zeros. In filters with cross couplings, it is convenient to characterize all the filter couplings as a whole using a coupling matrix of dimension . It is symmetric. Each of its off-diagonal elements is the coupling coefficient of the ith and jth resonators. Each diagonal element is the normalized susceptance of the ith resonator. All diagonal elements in a tuned filter are equal to zero because a susceptance vanishes at the resonant frequency. An important merit of the matrix is the fact that it allows one to directly compute the frequency response of the equivalent network of inductively coupled resonant circuits. Therefore it is convenient to use this matrix when designing cross-coupled filters. The coupling matrices, in particular, are used as coarse models of filters. Using a coarse model can speed up filter optimization manyfold, because computation of the frequency response for the coarse model consumes negligible CPU time compared with computation for the real filter. Coupling coefficient in terms of the vector fields Because the coupling coefficient is a function of both the mutual inductance and the mutual capacitance, it can also be expressed in terms of the vector fields and . Hong proposed that the coupling coefficient is the sum of the normalized overlap integrals (19) where (20) and (21) On the contrary, based on a coupled-mode formalism, Awai and Zhang derived expressions for that favor using the negative sign, i.e., (22) Formulas (19) and (22) are approximate. They match the exact formula (8) only in the case of weak coupling. Formulas (20) and (21), in contrast to formulas (12) and (13), are approximate too, because they do not describe the frequency dispersion, which may often manifest itself in the form of transmission zeros in the frequency response of a multi-resonator bandpass filter. Using Lagrange's equation of motion, it was demonstrated that the interaction between two split-ring resonators, which form a meta-dimer, depends on the difference between the two terms. In this case, the coupled energy was expressed in terms of the surface charge and current densities. Recently, based on Energy Coupled Mode Theory (ECMT), a coupled-mode formalism in the form of an eigenvalue problem, it was shown that the coupling coefficient is indeed the difference between the magnetic and electric components and . Using Poynting's theorem in its microscopic form, it was shown that can be expressed in terms of the interaction energy between the resonators' modes. References External links Tyurnev, V.V. (2010) "Coupling coefficients of resonators in microwave filter theory", Progress In Electromagnetics Research B, Vol. 21, P. 47–67. Filter theory
Coupling coefficient of resonators
[ "Engineering" ]
2,786
[ "Telecommunications engineering", "Filter theory" ]
40,067,524
https://en.wikipedia.org/wiki/Oil%20industry%20in%20Poland
The oil industry in Poland began at Bóbrka Field in 1853 , followed by the first refinery in 1854. Poland was the third most productive region in the world in 1900. It now has only a small, mostly state-owned component, with production from its Permian Basin in the west, small and very old fields in the Carpathians in the south, and offshore in the Baltic Sea. For natural gas the country is almost completely dependent on legacy pipelines from the former Soviet Union. Shale gas and tight oil Production of significant quantities of natural gas or petroleum from shale or tight (low permeability) reservoirs is in large part dependent on the social acceptance and technical and commercial viability of hydraulic fracturing. As of 2013 only 3% of the Poles opposed fracking. Leasing for unconventional shale plays in Poland began in 2007. But, as of 2013, the results of exploration efforts, as well as government regulation, have been disappointing, and estimates of the size of the total resource have been substantially reduced. Data indicates a substantial resource, but the permeability of the rocks, combined with the relative complexity of the faulting in some areas, have made success elusive. In 2013, the Energy Information Administration, a U.S. agency, estimated that 146 trillion cubic feet of shale gas and of tight oil could be economically recovered from the shales in Poland using present technology. However, an estimate published in March 2013 of recoverable shale gas reserves by the Polish Geological Institute was 24.8 trillion cubic feet. It remains to be seen whether the lack of reservoir permeability can be overcome. Poland has been dependent on a Soviet era gas pipeline system which brings in only expensive Russian gas. Power generation has been based on Poland's extensive reserves of coal, principally lignite. Development of a domestic gas industry to replace Russian imports is highly desirable as would the use of gas to retire or convert coal fired generation plants. Drilling for shale resources began in June 2010. But, as of July 2013, none of the wells which have been completed have produced gas in commercial quantities. ConocoPhillips, which purchased the most prospective geological area from Lane Energy Poland, was able to produce gas and oil in sustainable volumes. But, their costs were too high to justify the project. ExxonMobil, which positioned itself in the Lublin Basin, a highly faulted area, could never get a sustainable test, despite spending huge sums on geological research. Chevron also stubbed its toe in the Lublin Basin area, after receiving some bad geological advice. Talisman Energy also failed, and Marathon Oil drilled where there was little/no prospective shale resource. All have pulled out, leaving the Polish Oil and Gas Company as the prime company in the shale gas and tight oil plays. In the absence of regulation acceptable to the drillers who have the technology and resources to engage in extensive exploration, as of 2013, the extent of the tight oil and shale gas resource in Poland remains unknown, although it is believed by some informed observers that it has the potential to supply the needs of Poland for hundreds of year. However, using current technology, it is considered likely that it will be more of a national security mandate than a commercial venture any time soon. 
Polish firms In addition to exploration for tight oil and shale gas by international firms there is a small Polish oil and gas industry with some oil and gas production: Polskie Górnictwo Naftowe i Gazownictwo (PGNiG, literally: Polish Petroleum and Gas Mining) is a Polish state-controlled oil and natural gas company, which deals with the exploration and production of natural gas and crude oil, natural gas import, storage and distribution, and sales of natural gas and crude oil. PGNiG is one of the largest companies in Poland and is listed on the Warsaw Stock Exchange. Przedsiębiorstwo Poszukiwań i Eksploatacji Złóż Ropy i Gazu "Petrobaltic" S.A. (Exploration and Mining of Petroleum and Gas Deposits Joint stock company "Petrobaltic") was set up in November 1990. On 1 January 1999, the firm was transformed into a limited liability company and the only shareholder became the State Treasury. The company is the only firm in Poland performing exploration and production of crude oil and gas in the Baltic Sea. The B3 oil field is currently in production and had a 2006 production of 1.9 million bbl/year. The company has its head office in Gdańsk. Exploration and exploitation of oil and gas deposits are performed with three drilling platforms: Petrobaltic, Baltic Beta and a jacket-type platform called "PG-1". Historical firms Polmin (English: State Factory of Mineral Oils, Polish: Panstwowa Fabryka Olejow Mineralnych) was a Polish state-owned enterprise, which controlled excavation, transport and distribution of natural gas. Founded in 1909, it was nationalized in 1927, with its main office in Lwów. Polmin operated a large oil refinery in Drohobych, which in the late 1930s employed around 3000 people. The refinery purified oil extracted from the rich fields of the southern part of the Second Polish Republic (Gorlice, Borysław, Jasło, and Drohobych). Some Polish-language sources claim that the Polmin refinery in Drohobycz was in the late 1930s the biggest in Europe. Oil and gas fields in Poland Oil fields The first oil well was drilled at Bóbrka Field in 1853; it was 7 years after the drilling of the first oil well in the Baku settlement of Bibi-Heybat in 1846 on the Apsheron peninsula. The B3 oil field is an oil and gas field in the Baltic Sea. The field is located 80 km off the Polish coastal town of Rozewie. The crude oil is also referred to as Rozewie crude. The API gravity of the crude is 42-43 and the sulfur content is 0.12 wt%. The jack-up rig Baltic Beta located on the field takes care of processing, drilling and accommodation. The associated gas is sent through a pipeline to the heat and power generating plant in Władysławowo. Most of the oil produced from B3 is shipped by tanker to Gdańsk and fed to the Gdańsk refinery as a small part of the refinery feedstock. The B8 oil field is a major oil field in the Polish sector of the Baltic Sea about 70 km north of Jastarnia. The field was discovered in 1983 and started producing oil in 2006. The field currently (2023) accounts for four per cent of Poland's oil production. The Barnówko-Mostno-Buszewo oil field is an oil field that was discovered in 1993. It began production in 1994. Its proven oil reserves are about and proven reserves of natural gas are around 350 billion cubic feet (9.9×10⁹ m³). The Dębno oil field is an oil field that was discovered in 2004. It began production in 2005. Its proven oil reserves are about and proven reserves of natural gas are around 283 billion cubic feet (8×10⁹ m³). The Lubiatów-Międzychód-Grotów oil field is an oil field that was discovered in 1993. 
It began production in 1994. Its proven oil reserves are about and proven reserves of natural gas are around 160 billion cubic feet (4.5×10⁹ m³). Gas fields The Daszawa gas field was discovered in 1950. It began production in 1950. The proven reserves of natural gas of the Daszawa gas field are around 72 billion cubic feet (2×10⁹ m³). The Dzików gas field, discovered in 1962, began production in 1965. Proven reserves are about 70 billion cubic feet (2×10⁹ m³). Jasionka Jodłówka Kościan Przemyśl Radlin Terliczka Wola Obszańska Żołynia Refining, distribution, and retailing Grupa Lotos S.A. was a vertically integrated oil company based in Gdańsk. The company was listed in the Polish index WIG 20. Its main activity branches were: crude oil production, refining and marketing of oil products. The company was a leader in lubricants on the Polish market. Grupa Lotos was a producer of unleaded gasoline, diesel, fuel oils, aviation fuels, motor and industrial lubricants, bitumens and waxes. It merged with PKN Orlen in 2022. PKN Orlen is a major Polish oil refiner and petrol retailer. The company is Central Europe's largest publicly traded firm, with major operations in Poland, the Czech Republic, Germany, Slovakia, and the Baltic States. In 2009, it was ranked in the Fortune Global 500 as the world's 31st largest oil company and the world's 249th largest company overall, and was the only Polish company ranked by Fortune. It currently (2012) ranks 297th, with a revenue of over US$36.1 billion. The Płock refinery is an oil refinery and petrochemical complex in Płock owned by PKN Orlen. The refinery has a Nelson complexity index of 9.5 and a capacity of 276 kbpd of crude oil. The Gdańsk refinery is an oil refinery located in Gdańsk, formerly owned by Grupa Lotos and since 2022 owned by PKN Orlen. The refinery capacity is 210 kbpd of crude oil and it has a Nelson complexity index of 11.1. Gaz-System, Operator Gazociągów Przesyłowych GAZ-SYSTEM S.A., is the designated natural gas transmission system operator in Poland. The company was established on 16 April 2004 as a wholly owned subsidiary of PGNiG under the name PGNiG – Przesył Sp. z o.o. On 28 April 2005, all shares of the company were transferred to the State Treasury of Poland and the current name of the company was adopted on 8 June 2005. Gaz-System owns and operates all gas transmission and distribution pipelines in Poland, except the Yamal–Europe pipeline owned by EuRoPol Gaz S.A. The company is also responsible for construction of the Polskie LNG terminal at Świnoujście and the Baltic Pipe pipeline between Poland and Denmark. Naftoport Ltd is a company which manages crude oil shipment and deliveries. It is located in Gdańsk. Naftoport Ltd was established in June 1991 by several Polish oil companies and the Marine Commercial Port in Gdańsk. The company oversees operations of the terminal for reloading of crude oil and products in the Port of Gdańsk. PERN Przyjazn SA, the joint-stock Oil Pipeline Operation Company "Przyjaźń", is one of the leading companies for oil transportation and storage in Poland. The company is based in Płock and oversees the delivery of oil and gas through Poland to eastern European markets. Pipelines from the former Soviet Union The Druzhba pipeline (also referred to as the Friendship Pipeline and the Comecon Pipeline) is the world's longest oil pipeline and in fact one of the biggest oil pipeline networks in the world. 
It carries oil some from the eastern part of the European Russia to points in Ukraine, Belarus, Poland, Hungary, Slovakia, the Czech Republic and Germany. The network also branches out into numerous pipelines to deliver its product throughout the Eastern Europe and beyond. The name "Druzhba" means "friendship", alluding to the fact that the pipeline supplied oil to the energy-hungry western regions of the Soviet Union, to its "fraternal socialist allies" in the former Soviet bloc, and to western Europe. Today, it is the largest principal artery for the transportation of Russian (and Kazakh) oil across Europe. The Odessa–Brody pipeline (also known as Sarmatia pipeline) is a crude oil pipeline between the Ukrainian cities Odessa at the Black Sea, and Brody near the Ukrainian-Polish border. There are plans to expand the pipeline to Płock, and furthermore to Gdańsk in Poland. The pipeline is operated by UkrTransNafta, Ukraine's state-owned oil pipeline company. The Yamal–Europe pipeline is a long pipeline connecting natural gas fields in Western Siberia and in the future on the Yamal peninsula, Russia, with Germany. Polski Gaz Sp z o. o. is a distributor of liquefied petroleum gas: propane and butane. Petrochemical Holding GMBH holds 100% share in Polski Gaz Sp. z o.o. Protest During summer 2013 "Occupy Chevron" protesters occupied the field near Żurawlów in the Grabowiec district where Chevron Corporation planned to drill an exploratory well. This type of activity is becoming more common. References External links and further reading History of Polish Gas Industry Mir-Babayev M.F. Brief history of the first drilled oil well; and people involved - "Oil-Industry History" (USA), 2017, v.18, #1, p. 25-34. Energy in Poland Natural gas in Poland Petroleum in Poland
Oil industry in Poland
[ "Chemistry" ]
2,695
[ "Petroleum", "Petroleum by country" ]
40,069,990
https://en.wikipedia.org/wiki/Chlorobaculum%20tepidum
Chlorobaculum tepidum, previously known as Chlorobium tepidum, is an anaerobic, thermophilic green sulfur bacterium first isolated from New Zealand. Its cells are gram-negative and non-motile rods of variable length. They contain chlorosomes and bacteriochlorophyll a and c. Natural habitat and environmental requirements Like other green sulfur bacteria, C. tepidum requires light and specific compounds to perform anoxygenic photosynthesis. C. tepidum differs from other green sulfur bacteria in that it cannot easily use H2 or Fe2+ as electron donors, relying on elemental sulfur, sulfide, and thiosulfate instead. To fulfill their metabolic requirements, they reside primarily in anaerobic, sulfur-rich environments such as the anaerobic levels of stratified lakes and lagoons, the anaerobic levels of layered organic bacterial mats, and hot springs where there is abundant sulfur. C. tepidum and other green sulfur bacteria also play a large role within the carbon and sulfur cycles. Within the sulfur cycle, they contribute to the oxidative branch by oxidizing reduced sulfur compounds. Within anaerobic sediment layers C. tepidum is able to couple carbon and sulfur cycling in a metabolically favorable way. Photosynthetic mechanism As mentioned before, C. tepidum performs anoxygenic photosynthesis. Within each cell there are 200–250 chlorosomes that are attached to the cytoplasmic side of reaction centers inserted within the inner cell membrane. The ellipsoid-shaped complexes act as light-harvesting antennae to capture energy. Within each chlorosome are 215,000 ± 80,000 bacteriochlorophyll c molecules that act as pigments and absorb unique wavelengths of light relative to their color. C. tepidum contains genes that play an important role in the methylation of the C-8 and C-12 carbons of bacteriochlorophyll c. This methylation allows BChl c levels to fluctuate in response to a change in the availability of light, resulting in a high efficiency of light harvesting and allowing C. tepidum to survive in areas of very low light intensity. Light energy is harvested by the chlorosomes and used in conjunction with H2, reduced sulfur compounds, or ferrous iron to perform redox reactions and provide energy to fix CO2 via the reverse tricarboxylic acid cycle. Genome structure C. tepidum has a genome of 2.15 Mbp containing a total of 2,337 genes (of these genes, there are 2,245 protein coding genes and 56 tRNA and rRNA coding genes). Its synthesis of chlorophyll a and bacteriochlorophylls a and c makes it a model organism used to elucidate the biosynthesis of bacteriochlorophyll c. Present in the genome of C. tepidum are a multitude of genes that protect the bacterium against the presence of oxygen. The fact that such a large part of the genome is used to encode protections against oxygen points to the possibility that C. tepidum spent a long period of its evolutionary history in proximity to oxygen, and therefore needed pathways that ensured that living in the presence of oxygen would not substantially harm the bacterium. Several of its carotenoid metabolic pathways (including a novel lycopene cyclase) have similar counterparts in cyanobacteria. See also List of bacterial orders List of bacteria genera References Further reading External links Phototrophic bacteria Chlorobiota Bacteria described in 1991
Chlorobaculum tepidum
[ "Chemistry", "Biology" ]
767
[ "Bacteria stubs", "Bacteria", "Photosynthesis", "Phototrophic bacteria" ]
34,756,609
https://en.wikipedia.org/wiki/Mertens-stable%20equilibrium
In game theory, Mertens stability is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability. Like other refinements of Nash equilibrium used in game theory stability selects subsets of the set of Nash equilibria that have desirable properties. Stability invokes stronger criteria than other refinements, and thereby ensures that more desirable properties are satisfied. Desirable Properties of a Refinement Refinements have often been motivated by arguments for admissibility, backward induction, and forward induction. In a two-player game, an admissible decision rule for a player is one that does not use any strategy that is weakly dominated by another (see Strategic dominance). Backward induction posits that a player's optimal action in any event anticipates that his and others' subsequent actions are optimal. The refinement called subgame perfect equilibrium implements a weak version of backward induction, and increasingly stronger versions are sequential equilibrium, perfect equilibrium, quasi-perfect equilibrium, and proper equilibrium. Forward induction posits that a player's optimal action in any event presumes the optimality of others' past actions whenever that is consistent with his observations. Forward induction is satisfied by a sequential equilibrium for which a player's belief at an information set assigns probability only to others' optimal strategies that enable that information to be reached. Kohlberg and Mertens emphasized further that a solution concept should satisfy the invariance principle that it not depend on which among the many equivalent representations of the strategic situation as an extensive-form game is used. Thus it should depend only on the reduced normal-form game obtained after elimination of pure strategies that are redundant because their payoffs for all players can be replicated by a mixture of other pure strategies. Mertens emphasized also the importance of the small worlds principle that a solution concept should depend only on the ordinal properties of players' preferences, and should not depend on whether the game includes extraneous players whose actions have no effect on the original players' feasible strategies and payoffs. Kohlberg and Mertens demonstrated via examples that not all of these properties can be obtained from a solution concept that selects single Nash equilibria. Therefore, they proposed that a solution concept should select closed connected subsets of the set of Nash equilibria. Properties of Stable Sets Admissibility and Perfection: Each equilibrium in a stable set is perfect, and therefore admissible. Backward Induction and Forward Induction: A stable set includes a proper equilibrium of the normal form of the game that induces a quasi-perfect and therefore a sequential equilibrium in every extensive-form game with perfect recall that has the same normal form. A subset of a stable set survives iterative elimination of weakly dominated strategies and strategies that are inferior replies at every equilibrium in the set. 
Invariance and Small Worlds: The stable sets of a game are the projections of the stable sets of any larger game in which it is embedded while preserving the original players' feasible strategies and payoffs. Decomposition and Player Splitting. The stable sets of the product of two independent games are the products of their stable sets. Stable sets are not affected by splitting a player into agents such that no path through the game tree includes actions of two agents. For two-player games with perfect recall and generic payoffs, stability is equivalent to just three of these properties: a stable set uses only undominated strategies, includes a quasi-perfect equilibrium, and is immune to embedding in a larger game. Definition of a Stable Set A stable set is defined mathematically by essentiality of the projection map from a closed connected neighborhood in the graph of the Nash equilibria over the space of perturbed games obtained by perturbing players' strategies toward completely mixed strategies. This definition requires more than every nearby game having a nearby equilibrium. Essentiality requires further that no deformation of the projection maps to the boundary, which ensures that perturbations of the fixed point problem defining Nash equilibria have nearby solutions. This is apparently necessary to obtain all the desirable properties listed above. Mertens provided several formal definitions depending on the coefficient module used for homology or cohomology. A formal definition requires some notation. For a given game let be product of the simplices of the players' of mixed strategies. For each , let and let be its topological boundary. For let be the minimum probability of any pure strategy. For any define the perturbed game as the game where the strategy set of each player is the same as in , but where the payoff from a strategy profile is the payoff in from the profile . Say that is a perturbed equilibrium of if is an equilibrium of . Let be the graph of the perturbed equilibrium correspondence over , viz., the graph is the set of those pairs such that is a perturbed equilibrium of . For , is the corresponding equilibrium of . Denote by the natural projection map from to . For , let , and for let . Finally, refers to Čech cohomology with integer coefficients. The following is a version of the most inclusive of Mertens' definitions, called *-stability. Definition of a *-stable set: is a *-stable set if for some closed subset of with it has the following two properties: Connectedness: For every neighborhood of in , the set has a connected component whose closure is a neighborhood of in . Cohomological Essentiality: is nonzero for some . If essentiality in cohomology or homology is relaxed to homotopy, then a weaker definition is obtained, which differs chiefly in a weaker form of the decomposition property. References Game theory equilibrium concepts Non-cooperative games
Mertens-stable equilibrium
[ "Mathematics" ]
1,220
[ "Game theory", "Non-cooperative games", "Game theory equilibrium concepts" ]
34,757,331
https://en.wikipedia.org/wiki/Viennot%27s%20geometric%20construction
In mathematics, Viennot's geometric construction (named after Xavier Gérard Viennot) gives a diagrammatic interpretation of the Robinson–Schensted correspondence in terms of shadow lines. It has a generalization to the Robinson–Schensted–Knuth correspondence, which is known as the matrix-ball construction. The construction Starting with a permutation , written in two-line notation, say: one can apply the Robinson–Schensted correspondence to this permutation, yielding two standard Young tableaux of the same shape, P and Q. P is obtained by performing a sequence of insertions, and Q is the recording tableau, indicating in which order the boxes were filled. Viennot's construction starts by plotting the points in the plane, and imagining there is a light that shines from the origin, casting shadows straight up and to the right. This allows consideration of the points which are not shadowed by any other point; the boundary of their shadows then forms the first shadow line. Removing these points and repeating the procedure, one obtains all the shadow lines for this permutation. Viennot's insight is then that these shadow lines read off the first rows of P and Q (in fact, even more than that; these shadow lines form a "timeline", indicating which elements formed the first rows of P and Q after the successive insertions). One can then repeat the construction, using as new points the previous unlabelled corners, which allows to read off the other rows of P and Q. Animation For example consider the permutation Then Viennot's construction goes as follows: Applications One can use Viennot's geometric construction to prove that if corresponds to the pair of tableaux P,Q under the Robinson–Schensted correspondence, then corresponds to the switched pair Q,P. Indeed, taking to reflects Viennot's construction in the -axis, and this precisely switches the roles of P and Q. See also Plactic monoid Jeu de taquin References Bruce E. Sagan. The Symmetric Group. Springer, 2001. Algebraic combinatorics
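To make the correspondence described above concrete, here is a short sketch of the Robinson–Schensted row-insertion algorithm that the shadow lines encode; the permutation used is an arbitrary example, since the article's own example is not reproduced here.

```python
from bisect import bisect_right

def robinson_schensted(perm):
    """Return the insertion tableau P and recording tableau Q of a permutation."""
    P, Q = [], []
    for step, x in enumerate(perm, start=1):
        row = 0
        while True:
            if row == len(P):                 # start a new row at the bottom
                P.append([x])
                Q.append([step])
                break
            j = bisect_right(P[row], x)       # position of leftmost entry strictly greater than x
            if j == len(P[row]):              # x is larger than everything in the row: append it
                P[row].append(x)
                Q[row].append(step)
                break
            x, P[row][j] = P[row][j], x       # bump the displaced entry down to the next row
            row += 1
    return P, Q

# Example (an arbitrary permutation in one-line notation):
P, Q = robinson_schensted([2, 4, 1, 5, 3])
print(P)   # [[1, 3, 5], [2, 4]]   insertion tableau
print(Q)   # [[1, 2, 4], [3, 5]]   recording tableau; P and Q always have the same shape
```

One can also check the property mentioned above on such examples: running the algorithm on the inverse permutation swaps the two tableaux.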
Viennot's geometric construction
[ "Mathematics" ]
437
[ "Fields of abstract algebra", "Algebraic combinatorics", "Combinatorics" ]
34,757,962
https://en.wikipedia.org/wiki/Chemically%20modified%20electrode
A chemically modified electrode is an electrical conductor that has its surface modified for different electrochemical functions. Chemically modified electrodes are made using advanced approaches to electrode systems by adding a thin film or layer of certain chemicals to change the properties of the conductor according to its targeted function. At a modified electrode, an oxidation-reduction substance accomplishes electrocatalysis by transferring electrons from the electrode to a reactant, or a reaction substrate. Modifying electrodes' surfaces has been one of the most active areas of research interest in electrochemistry since 1979, providing control over how electrodes interact with their environments. Description Chemically modified electrodes are different from other types of electrodes in that they carry a molecular monolayer, or micrometers-thick layers of film, made from a certain chemical (depending on the function of the electrode). The thin film is coated on the surface of the electrode. The outcome is a modified electrode with special new properties in terms of physical, chemical, electrochemical, optical, electrical, transport, and other useful characteristics. Chemically modified electrodes, and electrodes in general, depend heavily on electron transport: a general term for electrochemical processes in which charge is transported through the chemical film to the electrode. The term "coverage" is used to express the area-normalized amount, in mol/m^2, of a specific type of chemical site in the thin chemical film on the surface of the chemically modified electrode. Purpose of developing chemically modified electrodes As electrochemical science advanced, bare electrode surfaces became insufficient for many investigations, because the research involving electrodes required certain chemical and physical properties that do not naturally exist in the materials used as electrical conductors. To overcome this limitation, researchers used chemical modification to tailor the materials they used. Atoms, molecules, and nanoparticles are attached to the surface of materials to modify their electronic and structural properties, thereby changing their functionality. Applications of chemically modified electrodes At first, chemically modified electrodes were applied only in the technologies they were initially made for (tuning surfaces for electrochemical investigations). Subsequently, chemically modified electrodes provided powerful routes to tune the performance of electrodes. The modification of electrodes facilitated the following processes in electroanalytical chemistry: Providing selectivity of electrodes Resisting fouling Concentrating species Improving electrocatalytic properties Limiting access of interferences in complex samples It also provided a route for other purposes, such as: Researching energy conversion Researching the phenomena that influence electrochemical processes Storing energy and protecting against corrosion Developing molecular electronics Developing electrochromic devices The research fields where chemically modified electrodes are used include: Basic electrochemical investigations Electron transfer between electrodes and electrolytes. Electrostatics on electrode surfaces: stationary or slowly moving electric charges. Polymer electron transport and ionic transport: movement of electrons from one species or atom to another, with a special focus on polymers.
Design of electrochemical systems and devices: the creation of systems and devices that use chemically modified electrodes with all the required specifications of the systems or devices. Approaches to chemically modify electrodes The surface of electrodes can be modified in the following ways: (1) Adsorption (Chemisorption) A method that uses the same kind of valence forces involved in the formation of chemical compounds, where the film is strongly adsorbed, or chemisorbed, onto the surface of the electrode, yielding monolayer coverage. This approach involves substrate-coupled self-assembled monolayers (SAMs), where molecules are spontaneously chemisorbed to the surface of the electrode, resulting in a microscopic superlattice structure of layers formed on it. (2) Covalent bonding A method that uses chemical agents to create a covalent bond between one or more monomolecular layers of the chemical modifier and the electrode surface. Common agents used in this method include organosilanes and cyanuric chloride. (3) Polymer film coating A method that uses one of the following to hold electron-conductive and nonconductive polymer films on the electrode surface: Chemisorption and low solubility in the contacting solution Physical anchoring in a porous electrode This method includes removing chemical species (the substrate) from self-assembled monolayers to allow molecules to adsorb on the electrode surface independently of the original substrate structure. The polymer films can be organic, organometallic or inorganic, and they can either contain the chemical modifier or have the chemical added to the polymer in a later process. (4) Composite A method in which the chemical modifier is mixed with an electrode matrix material. An example of this method is an electron-transfer mediator (the chemical modifier) mixed with carbon particles in a carbon paste electrode (the electrode matrix). Carbon paste, glassy carbon paste, glassy carbon and similar electrodes, when modified, are termed chemically modified electrodes. Chemically modified electrodes have been employed for the analysis of organic and inorganic species. References Electrodes
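The coverage mentioned above is, in practice, often estimated from the charge Q passed when the surface-confined layer is oxidized or reduced, using the standard relation Q = nFAΓ. A minimal sketch of that arithmetic follows; the numerical values and variable names are illustrative assumptions made here, not data from this article.

```python
# Rough estimate of surface coverage Gamma = Q / (n * F * A) for a
# surface-confined redox couple; all numerical values are illustrative only.
F = 96485.0          # Faraday constant, C/mol
Q = 2.0e-6           # charge under the voltammetric wave, C (hypothetical)
n = 1                # electrons transferred per redox site (assumed)
A = 0.071e-4         # electrode area, m^2 (roughly a 3 mm diameter disc)

gamma = Q / (n * F * A)   # mol/m^2
print(f"coverage = {gamma:.2e} mol/m^2")
```

A result on the order of 10^-6 mol/m^2 (10^-10 mol/cm^2) is typical of a monolayer, consistent with the monolayer coverage discussed above.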
Chemically modified electrode
[ "Chemistry" ]
1,038
[ "Electrochemistry", "Electrodes" ]
34,769,089
https://en.wikipedia.org/wiki/Ethenolysis
In organic chemistry, ethenolysis is a chemical process in which internal olefins are degraded using ethylene (CH2=CH2) as the reagent. The reaction is an example of cross metathesis. The utility of the reaction is driven by the low cost of ethylene as a reagent and its selectivity. It produces compounds with terminal alkene functional groups (α-olefins), which are more amenable to other reactions such as polymerization and hydroformylation. The general reaction equation is: RCH=CHR' + CH2=CH2 → RCH=CH2 + CH2=CHR'. Ethenolysis is a form of methylenation, i.e., the installation of methylene (=CH2) groups. Applications Terminal alkenes Using ethenolysis, higher molecular weight internal alkenes can be converted to more valuable terminal alkenes. The Shell higher olefin process (SHOP process) uses ethenolysis on an industrial scale. SHOP α-olefin mixtures are first separated by distillation. Higher molecular weight fractions are isomerized by alkaline alumina catalysts in the liquid phase. The resulting internal olefins are reacted with ethylene to regenerate α-olefins. The large excess of ethylene moves the reaction equilibrium toward the terminal α-olefins. Catalysts are often prepared from rhenium(VII) oxide (Re2O7) supported on alumina. Perfume In one application, neohexene, a precursor to perfumes, is prepared by ethenolysis of diisobutene: (CH3)3C-CH=C(CH3)2 + CH2=CH2 → (CH3)3C-CH=CH2 + CH2=C(CH3)2. α,ω-Dienes, i.e., diolefins of the formula CH2=CH(CH2)nCH=CH2, are prepared industrially by ethenolysis of cyclic alkenes. For example, 1,5-hexadiene, a useful crosslinking agent and synthetic intermediate, is produced from 1,5-cyclooctadiene: C8H12 + 2 CH2=CH2 → 2 CH2=CHCH2CH2CH=CH2. The catalyst is derived from rhenium(VII) oxide supported on alumina. 1,9-Decadiene, a related species, is produced similarly from cyclooctene. Decenoic acid In an application directed at using renewable feedstocks, methyl oleate, derived from natural seed oils, can be converted to 1-decene and methyl 9-decenoate: CH3(CH2)7CH=CH(CH2)7CO2CH3 + CH2=CH2 → CH3(CH2)7CH=CH2 + CH2=CH(CH2)7CO2CH3. Polyethylene and polypropylene recycling Mixed polyolefins can be recycled via high-selectivity isomerizing ethenolysis using a sodium-on-alumina catalyst followed by olefin metathesis using a stream of ethylene gas flowing into a reaction chamber containing a tungsten oxide-on-silica catalyst, albeit at high temperature. Chain ends freed by the breaking of carbon-carbon bonds are capped by ethylene molecules. Polyethylene is first converted to propylene, while polypropylene is ultimately converted to a mixture of propylene and isobutylene. References Carbon-carbon bond forming reactions Organometallic chemistry Homogeneous catalysis
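As a small added sanity check, not part of the original article, the methyl oleate ethenolysis shown above can be verified to be atom-balanced from the molecular formulas of the species involved:

```python
from collections import Counter

def combined(*formulas):
    """Sum elemental compositions given as dicts like {'C': 19, 'H': 36, 'O': 2}."""
    total = Counter()
    for f in formulas:
        total.update(f)
    return dict(total)

methyl_oleate      = {'C': 19, 'H': 36, 'O': 2}   # CH3(CH2)7CH=CH(CH2)7CO2CH3
ethylene           = {'C': 2,  'H': 4}
decene_1           = {'C': 10, 'H': 20}            # 1-decene
methyl_9_decenoate = {'C': 11, 'H': 20, 'O': 2}

reactants = combined(methyl_oleate, ethylene)
products  = combined(decene_1, methyl_9_decenoate)
assert reactants == products      # C21H40O2 appears on both sides
print(reactants)                  # {'C': 21, 'H': 40, 'O': 2}
```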
Ethenolysis
[ "Chemistry" ]
608
[ "Catalysis", "Carbon-carbon bond forming reactions", "Organic reactions", "Homogeneous catalysis", "Organometallic chemistry" ]
23,122,856
https://en.wikipedia.org/wiki/Gauge%20principle
In physics, a gauge principle specifies a procedure for obtaining an interaction term from a free Lagrangian which is symmetric with respect to a continuous symmetry—the results of localizing (or gauging) the global symmetry group must be accompanied by the inclusion of additional fields (such as the electromagnetic field), with appropriate kinetic and interaction terms in the action, in such a way that the extended Lagrangian is covariant with respect to a new extended group of local transformations. See also Gauge theory Gauge covariant derivative Gauge fixing Gauge gravitation theory Kaluza–Klein theory Lie algebra Lie group Lorenz gauge Quantum chromodynamics Quantum electrodynamics Quantum field theory Quantum gauge theory Standard Model Standard Model (mathematical formulation) Symmetry breaking Symmetry in physics Yang–Mills theory Yang–Mills existence and mass gap 1964 PRL symmetry breaking papers References Gauge theories Theoretical physics
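As a concrete, textbook-style illustration added here (the standard U(1) example, not anything specific to this article), gauging the global phase symmetry of a free Dirac field proceeds as follows:

```latex
% Free Lagrangian, invariant under the global phase rotation psi -> e^{i alpha} psi:
\mathcal{L}_0 = \bar\psi\,(i\gamma^\mu \partial_\mu - m)\,\psi
% Promoting alpha to alpha(x) destroys the invariance unless the derivative is
% replaced by a covariant derivative containing a new field A_mu:
D_\mu = \partial_\mu + i e A_\mu, \qquad
\psi \to e^{i\alpha(x)}\psi, \qquad
A_\mu \to A_\mu - \tfrac{1}{e}\,\partial_\mu \alpha(x)
% The gauged Lagrangian is locally invariant and contains the interaction term
% -e\,\bar\psi\gamma^\mu\psi\,A_\mu together with a kinetic term for the gauge field:
\mathcal{L} = \bar\psi\,(i\gamma^\mu D_\mu - m)\,\psi
            - \tfrac{1}{4} F_{\mu\nu}F^{\mu\nu},
\qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
```

The interaction term generated by the covariant derivative is exactly the coupling the gauge principle is meant to produce; applying the same recipe to a non-abelian symmetry group yields Yang–Mills theory.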
Gauge principle
[ "Physics" ]
184
[ "Theoretical physics", "Quantum mechanics", "Quantum physics stubs", "Theoretical physics stubs" ]
23,123,504
https://en.wikipedia.org/wiki/Irrigation%20statistics
This page shows statistical data on irrigation of agricultural lands worldwide. Irrigation is the artificial abstraction of water from a source, followed by its distribution at scheme level, aiming at application at field level to enhance crop production when rainfall is scarce. Irrigated area The appended table gives an overview of irrigated areas in the world in 2003. Only the countries with more than 10 million ha of irrigated land are mentioned. (*) Including India, China and Pakistan. There are 4 countries with 5 to 10 million ha irrigated land: Iran (7.7), Mexico (6.3), Turkey (5.1), and Thailand (5.0). The 16 countries with 2 to 5 million ha irrigated land are: Bangladesh (4.7), Indonesia (4.5), Russia (4.5), Uzbekistan (4.3), Spain (3.8), Brazil (3.5), Iraq (3.5), Egypt (3.4), Romania (3.0), Vietnam (3.0), Italy (2.8), France (2.6), Japan (2.6), Australia (2.6), Ukraine (2.3), and Kazakhstan (2.1) See also List of countries by irrigated land area Area per application method at field level 94% of the irrigation water application at field level is of the category surface irrigation, whereby the water is spread over the field by gravity. Of the remaining 6%, the majority is irrigated by methods requiring energy, expensive hydraulic pressure techniques and pipe systems, like sprinkler irrigation and drip irrigation, for the major part in the USA. The source of irrigation water in these cases is often groundwater from aquifers. However, the exploitation of aquifers can also be combined with surface irrigation at field level. In relatively small areas one applies subirrigation, whereby the water infiltrates into the soil below the soil surface from pipes or ditches. This category includes tidal irrigation, used in the lower part of rivers where the tidal influence is felt, by permitting the river water to enter ditches at high tide and allowing it to infiltrate from there into the soil. In relatively rare cases one uses labor-intensive methods like irrigation with watering-cans and by filling dug-in porous pots (pitcher irrigation) from which the water enters the soil by capillary suction. Surface irrigation can be divided into the following types, based on the method by which water is spread over the field after it has been admitted through the inlet: Spate irrigation (in Pakistan called Rod Koh), which may occur in hilly regions in dry zones where small rivers produce spate floods; ditches and bunds are built to guide the water to the fields to be irrigated; the number of fields irrigated at each flood event depends on the duration and intensity of the flood. The sailaba system in Balochistan is an example. Flood-plain irrigation, which may occur in dry zones in larger river plains where the river has high discharges during a short season only. Bunds are constructed to retain the river floods and the lands are planted to crops when the floods recede (flood recession cropping). The molapos in the Okavango inland delta are an example. Border-strip irrigation, in which the water moves over a graded strip of land with a mild slope and infiltrates into the soil while the wetting front advances. Borders are made along the strip to prevent the water from spreading out to neighboring fields. Level-basin irrigation, in which the water is spread quickly over leveled plots and given time to infiltrate. The basins are surrounded by borders to retain the water. Basins can be used for any crop.
The combination of basins with furrows without slope is applied for relatively large plants like cotton, maize and sugar cane. For crops sown by broadcast, like some cereals (especially wheat), one uses basins with corrugations drawn in the soil. In orchards one may make relatively small (possibly circular) basins around the trees. Basin irrigation can be used both in flat areas and sloping lands. In the latter case, terraces have to be made. Spectacular terraces of this kind were made by hand, a tedious job, and some are recognized as world heritage. Rice grown in the flooded basins of terraces (paddies) is often irrigated continuously, whereby the water flows from one field to the other, making sure that the rice plants remain submerged. The main reason for the submergence is weed control, but the rice needs to be of a variety that tolerates waterlogging. Furrow irrigation, in which the water moves over the field in furrows between ridges on which the crop is planted. The water infiltrates into the soil from the furrows. The furrows can be made both in sloping and flat land. In steep sloping land they are preferably laid out at a mild slope at an angle with the topographic contour lines to avoid soil erosion. The first four forms of irrigation come in the category of flood irrigation, because the entire surface of the cropped area is wetted, whereas with furrow irrigation the surface of the ridges remains dry. The irrigation of sloping fields is called flow irrigation, as the water on the surface does not come to a standstill and the stream entering the field has to be cut back before the wetting front reaches the end of the field to avoid runoff losses. Areal growth From 1955 to 1975 the annual growth of the irrigated area was almost 3%. From 1970 to 1982 the growth rate was some 2% per year, and from 1983 to 1994 about 1.3% per year. The growth rate of irrigated area is decreasing. The following table shows the irrigation development in the world between 1955 and 1983, distinguishing developed from developing countries: The developed countries witnessed a relatively greater increase than the developing nations. Water use Irrigation schemes in the world use about 3 500 km3 water per year, of which 74% is evaporated by the crops. This is some 80% of all water used by mankind (4 400 km3 per year). The water used for irrigation is roughly 25% of the annually available water resources (14 000 km3) and 9% of all annual river discharges in the hydrological cycle. River discharges occur for the major part in regions with humid climates, far removed from the regions with (semi)arid climates, where irrigation water is most needed. Compared to the 8 600 km3 of annual river discharge in the drier climates only, the yearly water use for irrigation is 40%. Of the total irrigated area worldwide 38% is equipped for irrigation with groundwater, especially in India (39 million ha), China (19 million ha) and the United States of America (17 million ha). Total consumptive groundwater use for irrigation is estimated as 545 km3/year. Groundwater use in irrigation leads in places to exploitation of groundwater at rates above groundwater recharge, and hence to depletion of groundwater reservoirs. Economic significance The irrigated area occupies worldwide about 16% of the total agricultural area, but the crop yield is roughly 40% of the total yield. Hence, the productivity of irrigated land is 3.6 times that of unirrigated land. The monetary value of the yield of irrigated crops is some 6.6 times that of unirrigated crops.
In irrigated land one grows crops with higher market values. References External links Further reading FAO's global information system on water and agriculture: AQUASTAT Irrigation maps of the world : http://www.tropentag.de/2006/abstracts/full/211.pdf ftp://ftp.fao.org/agl/aglw/aquastat/GMIAv401lowres.pdf Land management Irrigation Hydraulic engineering
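A quick arithmetic check, added here rather than taken from the original page, shows that the headline water-use and productivity figures quoted above are mutually consistent:

```python
irrigation_use = 3500.0   # km^3/year used by irrigation schemes
total_human_use = 4400.0  # km^3/year used by mankind
available = 14000.0       # km^3/year of available water resources
dry_discharge = 8600.0    # km^3/year of river discharge in drier climates

print(round(100 * irrigation_use / total_human_use))  # 80  (% of human water use)
print(round(100 * irrigation_use / available))        # 25  (% of available resources)
print(round(100 * irrigation_use / dry_discharge))    # 41  (the page rounds this to 40%)

# Productivity ratio: 16% of the agricultural area produces 40% of the yield.
area_share, yield_share = 0.16, 0.40
ratio = (yield_share / area_share) / ((1 - yield_share) / (1 - area_share))
print(round(ratio, 1))    # 3.5, close to the quoted factor of 3.6
```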
Irrigation statistics
[ "Physics", "Engineering", "Environmental_science" ]
1,638
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
23,130,135
https://en.wikipedia.org/wiki/Lump%20sum%20turnkey
Lump sum turnkey (LSTK) is a combination of the business-contract concepts of lump sum and turnkey. Lump sum is a noun which means a complete payment consisting of a single sum of money while turnkey is an adjective of a product or service which means product or service will be ready to use upon delivery. In the construction industry, LSTK combines two concepts. The LS (lump sum) part refers to the payment of a fixed sum for the delivery under e.g. an EPC contract. The financial risk lies with the contractor. TK (turn key) specifies that the scope of work includes start-up of the facility to a level of operational status. Ultimately the scope of work will define just exactly what is needed. Progressive LSTK Very large projects may be split into phases where a fixed price (lump sum) is agreed at the start of each phase. This reduces the overall project risk taken on by the contractor at the start of the project, and increases flexibility on the part of the project owner to adapt the project to changing circumstances. References Business terms Facilities engineering
Lump sum turnkey
[ "Engineering" ]
226
[ "Building engineering", "Facilities engineering", "Mechanical engineering by discipline" ]
23,131,857
https://en.wikipedia.org/wiki/TTFA
TTFA may refer to: Thenoyltrifluoroacetone, a chemical compound used pharmacologically as a chelating agent Thallium(III) trifluoroacetate Trinidad and Tobago Football Association, the governing body of association football in Trinidad and Tobago Chelating agents
TTFA
[ "Chemistry" ]
63
[ "Chelating agents", "Process chemicals" ]
41,519,162
https://en.wikipedia.org/wiki/Dirac%20structure
In mathematics a Dirac structure is a geometric structure generalizing both symplectic structures and Poisson structures, and having several applications to mechanics. It is based on the notion of the Dirac bracket constraint introduced by Paul Dirac and was first introduced by Ted Courant and Alan Weinstein. Linear Dirac structures Let V be a real vector space, and V* its dual. A (linear) Dirac structure on V is a linear subspace L of V ⊕ V* satisfying: for all (v, α) ∈ L one has α(v) = 0, and L is maximal with respect to this property. In particular, if V is finite dimensional, then the second criterion is satisfied if dim L = dim V. Similar definitions can be made for vector spaces over other fields. An alternative (equivalent) definition often used is that L satisfies L = L^⊥, where orthogonality is with respect to the symmetric bilinear form on V ⊕ V* given by ⟨(u, α), (v, β)⟩ = α(v) + β(u). Examples If U is a vector subspace of V, then L = U ⊕ U° is a Dirac structure on V, where U° is the annihilator of U; that is, U° = {α ∈ V* : α(u) = 0 for all u ∈ U}. Let B : V → V* be a skew-symmetric linear map, then the graph of B is a Dirac structure. Similarly, if π : V* → V is a skew-symmetric linear map, then its graph is a Dirac structure. Dirac structures on manifolds A Dirac structure on a smooth manifold M is an assignment of a (linear) Dirac structure on the tangent space to M at m, for each m ∈ M. That is, for each m ∈ M, a Dirac subspace L_m of the space T_mM ⊕ T*_mM. Many authors, in particular in geometry rather than the mechanics applications, require a Dirac structure to satisfy an extra integrability condition as follows: suppose (X_i, α_i) are sections of the Dirac bundle L ⊂ TM ⊕ T*M (i = 1, 2, 3), then ⟨£_{X_1} α_2, X_3⟩ + ⟨£_{X_2} α_3, X_1⟩ + ⟨£_{X_3} α_1, X_2⟩ = 0, where £_X denotes the Lie derivative along X. In the mechanics literature this would be called a closed or integrable Dirac structure. Examples Let Δ be a smooth distribution of constant rank on a manifold M, and for each m ∈ M let L_m = Δ_m ⊕ Δ_m° ⊂ T_mM ⊕ T*_mM, then the union of these subspaces over m forms a Dirac structure on M. Let ω be a symplectic form on a manifold M, then its graph is a (closed) Dirac structure. More generally, this is true for any closed 2-form. If the 2-form is not closed, then the resulting Dirac structure is not closed. Let π be a Poisson structure on a manifold M, then its graph is a (closed) Dirac structure. Applications Port-Hamiltonian systems Nonholonomic constraints Thermodynamics References H. Bursztyn, A brief introduction to Dirac manifolds. Geometric and topological methods for quantum field theory, 4–38, Cambridge Univ. Press, Cambridge, 2013. Classical mechanics Differential geometry Symplectic geometry
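As a short worked check, added here and not part of the original article, one can verify that the graph of a skew-symmetric map is indeed a linear Dirac structure in the sense defined above:

```latex
% Symmetric pairing on V \oplus V^*:
\langle (u,\alpha),\,(v,\beta) \rangle \;=\; \alpha(v) + \beta(u).
% Let B : V \to V^* be skew-symmetric, i.e. (Bu)(v) = -(Bv)(u), and set
L_B \;=\; \{\, (v, Bv) \mid v \in V \,\} \;\subset\; V \oplus V^*.
% For any two elements of L_B,
\langle (u, Bu),\,(v, Bv) \rangle \;=\; (Bu)(v) + (Bv)(u) \;=\; 0,
% so L_B is isotropic; since \dim L_B = \dim V, it is maximal, hence a Dirac structure.
```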
Dirac structure
[ "Physics" ]
518
[ "Mechanics", "Classical mechanics" ]
29,196,163
https://en.wikipedia.org/wiki/MPIR%20%28mathematics%20software%29
Multiple Precision Integers and Rationals (MPIR) is an open-source software multiprecision integer library forked from the GNU Multiple Precision Arithmetic Library (GMP) project. It consists of much code from past GMP releases, and some original contributed code. According to the MPIR-devel mailing list, "MPIR is no longer maintained", except for building the old code on Windows using new versions of Microsoft Visual Studio. According to the MPIR developers, some of the main goals of the MPIR project were: Maintaining compatibility with GMP – so that MPIR can be used as a replacement for GMP. Providing build support for Linux, Mac OS, Solaris and Windows systems. Supporting building MPIR using Microsoft based build tools for use in 32- and 64-bit versions of Windows. MPIR is optimized for many processors (CPUs). Assembly language code exists for these : ARM, DEC Alpha 21064, 21164, and 21264, AMD K6, K6-2, Athlon, K8 and K10, Intel Pentium, Pentium Pro-II-III, Pentium 4, generic x86, Intel IA-64, Core 2, i7, Atom, Motorola-IBM PowerPC 32 and 64, MIPS R3000, R4000, SPARCv7, SuperSPARC, generic SPARCv8, UltraSPARC. Language bindings See also Arbitrary-precision arithmetic, data type: bignum GNU Multiple Precision Arithmetic Library GNU Multiple Precision Floating-Point Reliably (MPFR) Class Library for Numbers supporting GiNaC References External links GMP — official site of GNU Multiple Precision Arithmetic Library MPFR — official site of GNU Multiple Precision Floating-Point Reliably C (programming language) libraries Computer arithmetic Computer arithmetic algorithms Free software programmed in C Numerical software
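Because MPIR keeps the GMP-compatible mpz integer interface, it can be exercised from high-level languages through GMP bindings. The sketch below uses the Python binding gmpy2 purely as an illustration; whether gmpy2 is backed by MPIR rather than GMP depends on how the binding was built, which is an assumption made here and not something the article states.

```python
import gmpy2   # assumption: gmpy2 is installed, linked against GMP or MPIR

a = gmpy2.mpz(2) ** 521 - 1        # 2^521 - 1, a Mersenne prime with 157 decimal digits
b = gmpy2.mpz(10) ** 50 + 151      # another large integer, chosen arbitrarily

print(gmpy2.is_prime(a))           # True (probabilistic primality test)
print(a * b % 1_000_003)           # exact modular arithmetic on big integers
print(len(str(a)))                 # 157
```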
MPIR (mathematics software)
[ "Mathematics" ]
381
[ "Computer arithmetic", "Arithmetic", "Numerical software", "Mathematical software" ]
29,202,283
https://en.wikipedia.org/wiki/Ablomin
Ablomin is a toxin present in the venom of the Japanese Mamushi snake, which blocks L-type voltage-gated calcium channels. Etymology The protein ablomin is a component of the venom of the Japanese Mamushi snake, Gloydius blomhoffii. The name ‘ablomin’ is derived from Agkistrodon blomhoffi, an old name for this snake. Sources The protein can be found in the venom of the Japanese Mamushi snake, a member of the Viperidae family. Chemistry Ablomin is part of the Cysteine-Rich Secretory Protein (CRISP) family. CRISPs comprise a particular group of snake venom proteins distributed among the venom of several families of snakes, such as elapids, colubrids and vipers. The protein consists of 240 amino acids, encoded by an mRNA of 1336 base pairs. Structurally, it is composed of three distinct regions: an N-terminal protein domain, a hinge region and a C-terminal cysteine-rich domain. It has a molecular mass of 25 kDa. Ablomin shows great sequence homology with triflin (83.7%) and latisemin (61.5%), two other snake venom components of the CRISP family, which also target voltage-dependent calcium channels. In addition, it shows partial homology with helothermine (52.8%), a venom protein of the Mexican beaded lizard; this protein, however, targets other ion channels than ablomin. Target Ablomin reduces potassium-induced contraction of smooth muscles, suggesting that it blocks L-type voltage-gated calcium channels. Moreover, ablomin may slightly inhibit rod-type cyclic nucleotide-gated (CNGA1) channels. Toxicity Ablomin affects high potassium-induced contraction of arterial smooth muscle in rat-tails in a concentration-dependent manner. Reduction of arterial smooth muscle contraction in a rat-tail results in vasodilation of the rat-tail artery, which may lead to hypothermia. Blocking other L-type voltage-gated Ca2+ channels, for instance in the heart, may lead to arrhythmias and even cardiac arrest. See also Other snake venom proteins in the CRISP family: Piscivorin from the Eastern Cottonmouth Triflin from the Habu snake Ophanin from the King Cobra Latisemin from the Erabu snake References Ion channel toxins Neurotoxins Snake toxins
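The sequence-homology percentages quoted above come from pairwise alignments of the full-length proteins. As a toy illustration added here (the fragments are made up, not real ablomin or triflin sequences), percent identity over a pre-computed alignment can be obtained as follows:

```python
def percent_identity(seq_a, seq_b):
    """Percent of aligned positions (gap positions excluded) at which two
    equal-length, pre-aligned sequences carry the same residue."""
    assert len(seq_a) == len(seq_b)
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != '-' and b != '-']
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# Hypothetical aligned fragments, for illustration only:
print(percent_identity("MKVLLT-AG", "MKILLTSAG"))   # 87.5
```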
Ablomin
[ "Chemistry" ]
523
[ "Neurochemistry", "Neurotoxins" ]
29,205,279
https://en.wikipedia.org/wiki/Tensin
Tensin was first identified as a 220 kDa multi-domain protein localized to the specialized regions of plasma membrane called integrin-mediated focal adhesions (which are formed around a transmembrane core of an αβ integrin heterodimer). Genome sequencing and comparison have revealed the existence of four tensin genes in humans. These genes appear to be related by ancient instances of gene duplication. Tensin binds to actin filaments and contains a phosphotyrosine-binding (PTB) domain at the C-terminus, which interacts with the cytoplasmic tail of β integrins. These interactions allow tensin to link actin filaments to integrin receptors. Several factors induce tyrosine phosphorylation of tensin. Thus, tensin functions as a platform for assembly and disassembly of signaling complexes at focal adhesions by recruiting tyrosine-phosphorylated signaling molecules, and also by providing interaction sites for other proteins. Haynie, by contrast, argues in a review of tensin structure and function that experimental evidence for the specific association of tensin with actin filaments is inconclusive at best. Recent work has also demonstrated TNS3 and TNS4 to exhibit force-dependent recruitment to keratin network in epithelial cells, highlighting its novel role in mechanotransduction. It is beyond reasonable doubt, however, that tensin 1, tensin 2 and tensin 3 each contains a protein tyrosine phosphatase (PTP) domain near the N-terminus. The PTP domain is unlikely to be active in tensin 1, owing to mutation of the essential nucleophilic cysteine in the signature motif to asparagine. Nevertheless, phosphatidylinositol-3,4,5-trisphosphate 3-phosphatase, the well-studied tumor suppressor that is better known as PTEN, gets its name from homology with PTPs and tensin 1. More detailed structure comparisons have revealed that tensins 1-3, PTEN, auxilin and other proteins in animals, plants and fungi comprise a PTP-C2 superdomain. An integrated PTP domain and C2 domain, the PTP-C2 superdomain came into existence over 1 billion years ago and has functioned as a single heritable unit since then. The first tensin cDNA sequence was isolated from chicken. Analysis of knockout mice has demonstrated critical roles of tensin in renal function, muscle regeneration, and cell migration. Evidence is now emerging to suggest tensin is an important component linking the ECM, the actin cytoskeleton, and signal transduction. Therefore, tensin and its downstream signaling molecules may be targets for therapeutic interventions in renal disease, wound healing and cancer. References External links MBInfo - Tensin in Cell Adhesion Tensin review 2014 tensin review Proteins
Tensin
[ "Chemistry" ]
622
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
29,206,114
https://en.wikipedia.org/wiki/SATB2
Special AT-rich sequence-binding protein 2 (SATB2) also known as DNA-binding protein SATB2 is a protein that in humans is encoded by the SATB2 gene. SATB2 is a DNA-binding protein that specifically binds nuclear matrix attachment regions and is involved in transcriptional regulation and chromatin remodeling. SATB2 shows a restricted mode of expression and is expressed in certain cell nuclei . The SATB2 protein is mainly expressed in the epithelial cells of the colon and rectum, followed by the nuclei of neurons in the brain. Function With an average worldwide prevalence of 1/800 live births, oral clefts are one of the most common birth defects. Although over 300 malformation syndromes can include an oral cleft, non-syndromic forms represent about 70% of cases with cleft lip with or without cleft palate (CL/P) and roughly 50% of cases with cleft palate (CP) only. Non-syndromic oral clefts are considered ‘complex’ or ‘multifactorial’ in that both genes and environmental factors contribute to the etiology. Current research suggests that several genes are likely to control risk, as well as environmental factors such as maternal smoking. Re-sequencing studies to identify specific mutations suggest several different genes may control risk to oral clefts, and many distinct variants or mutations in apparently causal genes have been found reflecting a high degree of allelic heterogeneity. Although most of these mutations are extremely rare and often show incomplete penetrance (i.e., an unaffected parent or other relatives may also carry the mutation), combined they may account for up to 5% of non-syndromic oral cleft. Mutations in the SATB2 gene have been found to cause isolated cleft palates. SATB2 also likely influences brain development. This is consistent with mouse studies that show SATB2 is necessary for the proper establishment of cortical neuron connections across the corpus callosum, despite the apparently normal corpus callosum in heterozygous knockout mice. Structure SATB2 is a 733 amino-acid homeodomain-containing human protein with a molecular weight of 82.5 kDa encoded by the SATB2 gene on 2q33. The protein contains two degenerate homeodomain regions known as CUT domains (amino acid 352–437 and 482–560) and a classical homeodomain (amino acid 614–677). There is an extraordinarily high degree of sequence conservation, with only three predicted amino-acid substitutions in the 733 residue protein with I481V, A590T and I730T being amino acid differences between the human and the mouse protein. Clinical significance SATB2 has been implicated as causative in the cleft or high palate of individuals with 2q32q33 microdeletion syndrome. SATB2 was found to be disrupted in two unrelated cases with de novo apparently balanced chromosome translocations associated with cleft palate and Pierre Robin sequence. The role of SATB2 in tooth and jaw development is supported by the identification of a de novo SATB2 mutation in a male with profound intellectual disabilities and jaw and tooth abnormalities and a translocation interrupting SATB2 in an individual with Robin sequence. In addition, mouse models have demonstrated haploinsufficiency of SATB2 results in craniofacial defects that phenocopy those caused by 2q32q33 deletion in humans; moreover, full functional loss of SATB2 amplifies these defects. SATB2 expression is highly specific for cancer in the lower GI-tract and has been implicated as a cancer biomarker for colorectal cancer. 
References Further reading External links Registry of SATB2 cases http://satb2gene.com Transcription factors
SATB2
[ "Chemistry", "Biology" ]
813
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
29,210,138
https://en.wikipedia.org/wiki/Fursultiamine
Fursultiamine (INN; chemical name thiamine tetrahydrofurfuryl disulfide or TTFD; brand names Adventan, Alinamin-F, Benlipoid, Bevitol Lipophil, Judolor, Lipothiamine) is a medication and vitamin used to treat thiamine deficiency. Chemically, it is a disulfide derivative of thiamine and is similar in structure to allithiamine. It was synthesized in Japan in the 1960s from allithiamine for the purpose of developing forms of thiamine with improved lipophilicity for treating vitamin B1 deficiency (i.e., beriberi). It was subsequently commercialized not only in Japan but also in Spain, Austria, Germany, and the United States. See also Vitamin B1 analogue References Further reading Formamides Organic disulfides Aminopyrimidines Tetrahydrofurans Prodrugs Thiamine
Fursultiamine
[ "Chemistry" ]
202
[ "Chemicals in medicine", "Prodrugs" ]