id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
24,143,128 | https://en.wikipedia.org/wiki/Puncture%20resistance | Puncture resistance denotes the relative ability of a material or object to inhibit the intrusion of a foreign object. This is defined by a test method, regulation, or technical specification. It can be measured in several ways ranging from a slow controlled puncture to a rapid impact of a sharp object or a rounded probe.
Tests devised to measure puncture resistance are generally application-specific, covering items such as roofing and packaging materials, protective gloves, needle disposal facilities,
bulletproof vests, tires, etc. Puncture resistance in fabrics can be obtained through very tightly woven fabrics, small ceramic plates in a fabric coating, or tightly woven fabrics with a coating of hard crystals. All of the described methods significantly reduce the softness and flexibility of the fabric.
The puncture resistance will depend on the nature of the puncture attempt, with the two most important features being point sharpness and force. A fine sharp point such as a hypodermic needle requires a high ability to absorb and distribute the force to avoid penetration, but the total forces applied are still limited. The EN 388 glove standard uses a more pencil-like object with a flat tip of 1 mm diameter. The EN 388 test is highly dependent on the material's ability to withstand high forces through high tenacity and, to a lesser extent, on its ability to avoid cutting or separation of the material.
There is little or no correlation between the protection provided against low-force needle puncture and performance in the high-force, pencil-like EN 388 test.
Needle-resistant materials as described above are generally pierced by a force between 2 and 10 N by a 25 gauge needle held perpendicular to the fabric. The results of the EN 388 test are rated on a scale from 0 to 4 according to the puncture force withstood (0: <20 N; 1: ≥20 N; 2: ≥60 N; 3: ≥100 N; 4: ≥150 N). A newer test, ASTM F2878-10, is specifically designed to simulate common hypodermic needles in 21, 25, and 28 gauge.
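The level thresholds above translate directly into a simple lookup. The sketch below is a minimal illustration, assuming the EN 388 puncture thresholds quoted in this article; the function and variable names are illustrative and not part of any standard.

```python
def en388_puncture_level(force_newtons: float) -> int:
    """Map a measured puncture force (N) to an EN 388 performance level (0-4),
    using the thresholds quoted above (assumed here for illustration)."""
    thresholds = [(150, 4), (100, 3), (60, 2), (20, 1)]
    for minimum_force, level in thresholds:
        if force_newtons >= minimum_force:
            return level
    return 0  # below 20 N: no rated protection

print(en388_puncture_level(75))   # -> 2
print(en388_puncture_level(12))   # -> 0
```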
See also
Fracture toughness
ASTM International
Aramid
References
Puncture Resistance
Standards
ASTM D3420 Standard Test Method for Pendulum Impact Resistance of Plastic Film
ASTM D4833 Standard Test Method for Index Puncture Resistance of Geomembranes and Related Products
ASTM D5494 Standard Test Method for the Determination of Pyramid Puncture Resistance of Unprotected and Protected Geomembranes
ASTM D5602 Standard Test Method for Static Puncture Resistance of Roofing Membrane Specimens
ASTM D5635 Standard Test Method for Dynamic Puncture Resistance of Roofing Membrane Specimens
ASTM D5748 Standard Test Method for Protrusion Puncture Resistance of Stretch Wrap Film
ASTM F924 Standard Test Method for Resistance to Puncture of Cushioned Resilient Floor Coverings
ASTM F1342 Standard Test Method for Protective Clothing Material Resistance to Puncture
ASTM F2132-01(2008)e1 Standard Specification for Puncture Resistance of Materials Used in Containers for Discarded Medical Needles and Other Sharps
ASTM F2878-10 Standard Test Method for Protective Clothing Material Resistance to Hypodermic Needle Puncture
Materials science | Puncture resistance | [
"Physics",
"Materials_science",
"Engineering"
] | 640 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
6,572,909 | https://en.wikipedia.org/wiki/Gaussian%20measure | In mathematics, Gaussian measure is a Borel measure on finite-dimensional Euclidean space $\mathbb{R}^n$, closely related to the normal distribution in statistics. There is also a generalization to infinite-dimensional spaces. Gaussian measures are named after the German mathematician Carl Friedrich Gauss. One reason why Gaussian measures are so ubiquitous in probability theory is the central limit theorem. Loosely speaking, it states that if a random variable $X$ is obtained by summing a large number $N$ of independent random variables with variance 1, then $X$ has variance $N$ and its law is approximately Gaussian.
Definitions
Let $n \in \mathbb{N}$ and let $B_0(\mathbb{R}^n)$ denote the completion of the Borel $\sigma$-algebra on $\mathbb{R}^n$. Let $\lambda^n : B_0(\mathbb{R}^n) \to [0, +\infty]$ denote the usual $n$-dimensional Lebesgue measure. Then the standard Gaussian measure $\gamma^n : B_0(\mathbb{R}^n) \to [0, 1]$ is defined by
$$\gamma^n(A) = \frac{1}{(2\pi)^{n/2}} \int_A \exp\left(-\tfrac{1}{2}\|x\|_{\mathbb{R}^n}^2\right) \, \mathrm{d}\lambda^n(x)$$
for any measurable set $A \in B_0(\mathbb{R}^n)$. In terms of the Radon–Nikodym derivative,
$$\frac{\mathrm{d}\gamma^n}{\mathrm{d}\lambda^n}(x) = \frac{1}{(2\pi)^{n/2}} \exp\left(-\tfrac{1}{2}\|x\|_{\mathbb{R}^n}^2\right).$$
More generally, the Gaussian measure with mean $\mu \in \mathbb{R}^n$ and variance $\sigma^2 > 0$ is given by
$$\gamma_{\mu,\sigma^2}^n(A) = \frac{1}{(2\pi\sigma^2)^{n/2}} \int_A \exp\left(-\tfrac{1}{2\sigma^2}\|x - \mu\|_{\mathbb{R}^n}^2\right) \, \mathrm{d}\lambda^n(x).$$
Gaussian measures with mean $\mu = 0$ are known as centered Gaussian measures.
The Dirac measure $\delta_\mu$ is the weak limit of $\gamma_{\mu,\sigma^2}^n$ as $\sigma \to 0$, and is considered to be a degenerate Gaussian measure; in contrast, Gaussian measures with finite, non-zero variance are called non-degenerate Gaussian measures.
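In one dimension the defining integral can be evaluated with the error function. The following is a small numerical illustration added here (not part of the original text): it computes the standard Gaussian measure of an interval via the normal CDF and checks it against a midpoint Riemann sum of the density.

```python
import math

def gaussian_measure_interval(a: float, b: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Gaussian measure of the interval [a, b] via the normal CDF (expressed with erf)."""
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return cdf(b) - cdf(a)

def riemann_check(a: float, b: float, steps: int = 100_000) -> float:
    """Crude check against the defining integral: density integrated over [a, b]."""
    dx = (b - a) / steps
    density = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return sum(density(a + (i + 0.5) * dx) for i in range(steps)) * dx

print(gaussian_measure_interval(-1.0, 1.0))  # ~0.6827
print(riemann_check(-1.0, 1.0))              # ~0.6827
```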
Properties
The standard Gaussian measure $\gamma^n$ on $\mathbb{R}^n$
is a Borel measure (in fact, as remarked above, it is defined on the completion of the Borel sigma algebra, which is a finer structure);
is equivalent to Lebesgue measure: $\lambda^n \ll \gamma^n \ll \lambda^n$, where $\ll$ stands for absolute continuity of measures;
is supported on all of Euclidean space: $\operatorname{supp}(\gamma^n) = \mathbb{R}^n$;
is a probability measure ($\gamma^n(\mathbb{R}^n) = 1$), and so it is locally finite;
is strictly positive: every non-empty open set has positive measure;
is inner regular: $\gamma^n(A) = \sup \{ \gamma^n(K) \mid K \subseteq A, \ K \text{ compact} \}$ for all Borel sets $A$, so Gaussian measure is a Radon measure;
is not translation-invariant, but does satisfy the relation
$$\frac{\mathrm{d}(T_h)_*(\gamma^n)}{\mathrm{d}\gamma^n}(x) = \exp\left(\langle h, x \rangle_{\mathbb{R}^n} - \tfrac{1}{2}\|h\|_{\mathbb{R}^n}^2\right),$$
where the derivative on the left-hand side is the Radon–Nikodym derivative, and $(T_h)_*(\gamma^n)$ is the push forward of standard Gaussian measure by the translation map $T_h : \mathbb{R}^n \to \mathbb{R}^n$, $T_h(x) = x + h$;
is the probability measure associated to a normal probability distribution: $Z \sim \operatorname{Normal}(0, I_n)$ (the standard multivariate normal distribution) implies $\gamma^n(A) = \mathbb{P}(Z \in A)$.
Infinite-dimensional spaces
It can be shown that there is no analogue of Lebesgue measure on an infinite-dimensional vector space. Even so, it is possible to define Gaussian measures on infinite-dimensional spaces, the main example being the abstract Wiener space construction. A Borel measure $\gamma$ on a separable Banach space $E$ is said to be a non-degenerate (centered) Gaussian measure if, for every linear functional $L \in E^*$ except $L = 0$, the push-forward measure $L_*(\gamma)$ is a non-degenerate (centered) Gaussian measure on $\mathbb{R}$ in the sense defined above.
For example, classical Wiener measure on the space of continuous paths is a Gaussian measure.
See also
References
Measures (measure theory)
Stochastic processes | Gaussian measure | [
"Physics",
"Mathematics"
] | 568 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
6,573,013 | https://en.wikipedia.org/wiki/Cameron%E2%80%93Martin%20theorem | In mathematics, the Cameron–Martin theorem or Cameron–Martin formula (named after Robert Horton Cameron and W. T. Martin) is a theorem of measure theory that describes how abstract Wiener measure changes under translation by certain elements of the Cameron–Martin Hilbert space.
Motivation
The standard Gaussian measure $\gamma^n$ on $n$-dimensional Euclidean space $\mathbb{R}^n$ is not translation-invariant. (In fact, there is a unique translation-invariant Radon measure up to scale by Haar's theorem: the $n$-dimensional Lebesgue measure, denoted here $\mathrm{d}x$.) Instead, a measurable subset $A$ has Gaussian measure
$$\gamma_n(A) = \frac{1}{(2\pi)^{n/2}} \int_A \exp\left(-\tfrac{1}{2}\langle x, x \rangle_{\mathbb{R}^n}\right) \, \mathrm{d}x.$$
Here $\langle x, x \rangle_{\mathbb{R}^n}$ refers to the standard Euclidean dot product in $\mathbb{R}^n$. The Gaussian measure of the translation of $A$ by a vector $h \in \mathbb{R}^n$ is
$$\gamma_n(A - h) = \frac{1}{(2\pi)^{n/2}} \int_A \exp\left(\langle x, h \rangle_{\mathbb{R}^n} - \tfrac{1}{2}\langle h, h \rangle_{\mathbb{R}^n}\right) \exp\left(-\tfrac{1}{2}\langle x, x \rangle_{\mathbb{R}^n}\right) \, \mathrm{d}x.$$
So under translation through $h$, the Gaussian measure scales by the density appearing in the last display:
$$\exp\left(\langle x, h \rangle_{\mathbb{R}^n} - \tfrac{1}{2}\langle h, h \rangle_{\mathbb{R}^n}\right).$$
The measure that associates to the set $A$ the number $\gamma_n(A - h)$ is the pushforward measure, denoted $(T_h)_*(\gamma_n)$. Here $T_h : \mathbb{R}^n \to \mathbb{R}^n$ refers to the translation map: $T_h(x) = x + h$. The above calculation shows that the Radon–Nikodym derivative of the pushforward measure with respect to the original Gaussian measure is given by
$$\frac{\mathrm{d}(T_h)_*(\gamma_n)}{\mathrm{d}\gamma_n}(x) = \exp\left(\langle h, x \rangle_{\mathbb{R}^n} - \tfrac{1}{2}\langle h, h \rangle_{\mathbb{R}^n}\right).$$
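The finite-dimensional density above is easy to check numerically. The snippet below is an illustrative addition (not from the original text): it compares the ratio of the shifted and unshifted standard Gaussian densities at a point with the claimed Radon–Nikodym derivative.

```python
import numpy as np

def standard_gaussian_density(x: np.ndarray) -> float:
    n = x.size
    return np.exp(-0.5 * x @ x) / (2 * np.pi) ** (n / 2)

rng = np.random.default_rng(0)
n = 3
h = rng.normal(size=n)          # translation vector
x = rng.normal(size=n)          # evaluation point

# The pushforward (T_h)_* gamma_n has density at x equal to the standard density at x - h.
ratio = standard_gaussian_density(x - h) / standard_gaussian_density(x)
claimed = np.exp(h @ x - 0.5 * h @ h)
print(ratio, claimed)           # the two numbers agree
```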
The abstract Wiener measure $\gamma$ on a separable Banach space $E$, where $i : H \to E$ is an abstract Wiener space, is also a "Gaussian measure" in a suitable sense. How does it change under translation? It turns out that a similar formula to the one above holds if we consider only translations by elements of the dense subspace $i(H) \subseteq E$.
Statement of the theorem
For abstract Wiener spaces
Let $i : H \to E$ be an abstract Wiener space with abstract Wiener measure $\gamma : \operatorname{Borel}(E) \to [0, 1]$. For $h \in H$, define $T_h : E \to E$ by $T_h(x) = x + i(h)$. Then $(T_h)_*(\gamma)$ is equivalent to $\gamma$ with Radon–Nikodym derivative
$$\frac{\mathrm{d}(T_h)_*(\gamma)}{\mathrm{d}\gamma}(x) = \exp\left(\langle h, x \rangle^{\sim} - \tfrac{1}{2}\|h\|_H^2\right),$$
where $\langle h, x \rangle^{\sim}$ denotes the Paley–Wiener integral.
The Cameron–Martin formula is valid only for translations by elements of the dense subspace $i(H) \subseteq E$, called the Cameron–Martin space, and not by arbitrary elements of $E$. If the Cameron–Martin formula did hold for arbitrary translations, it would contradict the following result:
If $E$ is a separable Banach space and $\mu$ is a locally finite Borel measure on $E$ that is equivalent to its own push forward under any translation, then either $E$ has finite dimension or $\mu$ is the trivial (zero) measure. (See quasi-invariant measure.)
In fact, $\gamma$ is quasi-invariant under translation by an element $v$ if and only if $v \in i(H)$. Vectors in $i(H)$ are sometimes known as Cameron–Martin directions.
Version for locally convex vector spaces
Consider a locally convex vector space $X$, with a Gaussian measure $\gamma$ on the cylindrical $\sigma$-algebra, and let $\gamma_h(A) := \gamma(A - h)$ denote the translation by $h \in X$. For an element $f$ in the topological dual $X^*$ define the distance to the mean
$$\hat{f}(x) := f(x) - \int_X f(z) \, \mathrm{d}\gamma(z),$$
and denote the closure of $\{\hat{f} : f \in X^*\}$ in $L^2(X, \gamma)$ as $X_\gamma^*$.
Define the covariance operator $R_\gamma : X_\gamma^* \to (X^*)'$ extended to the closure as
$$R_\gamma(\hat{f})(g) := \int_X \hat{f}(x)\, \hat{g}(x) \, \mathrm{d}\gamma(x).$$
Define the norm
$$|h|_{H(\gamma)} := \sup \{ f(h) : f \in X^*, \; R_\gamma(\hat{f})(\hat{f}) \leq 1 \},$$
then the Cameron–Martin space of $\gamma$ in $X$ is
$$H(\gamma) := \{ h \in X : |h|_{H(\gamma)} < \infty \}.$$
If for $h \in X$ there exists a $g \in X_\gamma^*$ such that $h = R_\gamma(g)$, then $h \in H(\gamma)$ and $|h|_{H(\gamma)} = \|g\|_{L^2(X,\gamma)}$. Further, there is equivalence $\gamma_h \sim \gamma$ with Radon–Nikodým density
$$\frac{\mathrm{d}\gamma_h}{\mathrm{d}\gamma}(x) = \exp\left(g(x) - \tfrac{1}{2}|h|_{H(\gamma)}^2\right).$$
If $h \notin H(\gamma)$, the two measures are singular.
Integration by parts
The Cameron–Martin formula gives rise to an integration by parts formula on $E$: if $F : E \to \mathbb{R}$ has bounded Fréchet derivative $\mathrm{D}F : E \to \operatorname{Lin}(E; \mathbb{R}) = E^*$, integrating the Cameron–Martin formula with respect to Wiener measure on both sides gives
$$\int_E F(x + t\, i(h)) \, \mathrm{d}\gamma(x) = \int_E F(x) \exp\left(t \langle h, x \rangle^{\sim} - \tfrac{1}{2} t^2 \|h\|_H^2\right) \mathrm{d}\gamma(x)$$
for any $t \in \mathbb{R}$. Formally differentiating with respect to $t$ and evaluating at $t = 0$ gives the integration by parts formula
$$\int_E \mathrm{D}F(x)(i(h)) \, \mathrm{d}\gamma(x) = \int_E F(x) \langle h, x \rangle^{\sim} \, \mathrm{d}\gamma(x).$$
Comparison with the divergence theorem of vector calculus suggests
$$\operatorname{div}[V_h](x) = -\langle h, x \rangle^{\sim},$$
where $V_h : E \to E$ is the constant "vector field" $V_h(x) = i(h)$ for all $x \in E$. The wish to consider more general vector fields and to think of stochastic integrals as "divergences" leads to the study of stochastic processes and the Malliavin calculus, and, in particular, the Clark–Ocone theorem and its associated integration by parts formula.
An application
Using the Cameron–Martin theorem one may establish (see Liptser and Shiryayev 1977, p. 280) that for a symmetric non-negative definite matrix $H(t)$ whose elements $H_{j,k}(t)$ are continuous and satisfy the condition
$$\int_0^T \sum_{j,k=1}^n |H_{j,k}(t)| \, \mathrm{d}t < \infty,$$
it holds for an $n$-dimensional Wiener process $w(t)$ that
$$E\left[\exp\left(-\int_0^T w(t)^{*} H(t)\, w(t) \, \mathrm{d}t\right)\right] = \exp\left(\tfrac{1}{2} \int_0^T \operatorname{tr}(G(t)) \, \mathrm{d}t\right),$$
where $G(t)$ is an $n \times n$ nonpositive definite matrix which is the unique solution of the matrix-valued Riccati differential equation
$$\frac{\mathrm{d}G(t)}{\mathrm{d}t} = 2 H(t) - G^2(t)$$
with the boundary condition $G(T) = 0$.
In the special case of a one-dimensional Brownian motion, where $H(t) = \lambda$ with $\lambda > 0$, the unique solution is $G(t) = \sqrt{2\lambda} \tanh\left(\sqrt{2\lambda}\,(t - T)\right)$, and we have the original formula as established by Cameron and Martin:
$$E\left[\exp\left(-\lambda \int_0^T w(t)^2 \, \mathrm{d}t\right)\right] = \frac{1}{\sqrt{\cosh\left(T \sqrt{2\lambda}\right)}}.$$
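The one-dimensional formula lends itself to a quick numerical sanity check. The following is a rough Monte Carlo sketch (an illustration added here, not part of the original text): it simulates Brownian paths, approximates the time integral of $w(t)^2$ by a Riemann sum, and compares the sample mean of the exponential functional with the closed form above.

```python
import numpy as np

def cameron_martin_check(lam: float = 1.0, T: float = 1.0,
                         n_steps: int = 1000, n_paths: int = 20000,
                         seed: int = 0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Brownian increments and paths: w has shape (n_paths, n_steps + 1), w[:, 0] = 0.
    dw = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
    w = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dw, axis=1)], axis=1)
    # Riemann-sum approximation of the time integral of w(t)^2.
    integral = np.sum(w[:, :-1] ** 2, axis=1) * dt
    monte_carlo = np.mean(np.exp(-lam * integral))
    exact = 1.0 / np.sqrt(np.cosh(T * np.sqrt(2.0 * lam)))
    return monte_carlo, exact

print(cameron_martin_check())  # the two values agree to within Monte Carlo error
```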
See also
References
Probability theorems
Theorems in measure theory | Cameron–Martin theorem | [
"Mathematics"
] | 829 | [
"Theorems in mathematical analysis",
"Theorems in measure theory",
"Theorems in probability theory",
"Mathematical problems",
"Mathematical theorems"
] |
6,573,614 | https://en.wikipedia.org/wiki/Mesoamerican%20architecture | Mesoamerican architecture is the set of architectural traditions produced by pre-Columbian cultures and civilizations of Mesoamerica, traditions which are best known in the form of public, ceremonial and urban monumental buildings and structures. The distinctive features of Mesoamerican architecture encompass a number of different regional and historical styles, which however are significantly interrelated. These styles developed throughout the different phases of Mesoamerican history as a result of the intensive cultural exchange between the different cultures of the Mesoamerican culture area through thousands of years. Mesoamerican architecture is mostly noted for its pyramids, which are the largest such structures outside of Ancient Egypt.
One interesting and widely researched topic is the relation between cosmovision, religion, geography, and architecture in Mesoamerica. Much seems to suggest that many traits of Mesoamerican architecture were governed by religious and mythological ideas. For example, the layout of most Mesoamerican cities seems to be influenced by the cardinal directions and their mythological and symbolic meanings in Mesoamerican culture.
Another part of Mesoamerican architecture is its iconography. The monumental architecture of Mesoamerica was decorated with images of religious and cultural significance, and also in many cases with writing in some of the Mesoamerican writing systems. Iconographic decorations and texts on buildings are important contributors to the overall current knowledge of pre-Columbian Mesoamerican society, history and religion.
Chronology
The following tables show the different phases of Mesoamerican architecture and archeology and correlates them with the cultures, cities, styles and specific buildings that are notable from each period.
Urban planning and cosmovision
Cosmos and its replication
Symbolism
An important part of the Mesoamerican religious system was replicating their beliefs in concrete tangible forms, in effect making the world an embodiment of their beliefs. This meant that the Mesoamerican city was constructed to be a microcosm, manifesting the same division that existed in the religious, mythical geography: a division between the underworld and the human world. The underworld was represented by the direction north, and many structures and buildings related to the underworld, such as tombs, are often found in the city's northern half. The southern part represented life, sustenance, and rebirth, and often contained structures related to the continuity and daily function of the city-state, such as monuments depicting noble lineages, residential quarters, markets, etc. Between the two halves of the north/south axis was the plaza, often containing stelae resembling the world tree (the Mesoamerican axis mundi) and a ballcourt, which served as a crossing point between the two worlds.
Some Mesoamericanists argue that in religious symbolism the Mesoamerican monumental architecture pyramids were mountains, stelae were trees, and wells, ballcourts and cenotes were caves that provided access to the underworld.
Orientation
Mesoamerican architecture is often designed to align to specific celestial events. Some pyramids, temples, and other structures were designed to achieve special lighting effects on particular days important in the Mesoamerican cosmovision. A famous example is the "El Castillo" pyramid at Chichen Itza, where a light-and-shadow effect can be observed during several weeks around the equinoxes. Contrary to a common opinion, however, there is no evidence that this phenomenon was the result of a purposeful design intended to commemorate the equinoxes.
Much Mesoamerican architecture is also aligned to roughly 15° east of north. Vincent H Malmstrom has argued that this is because of a general wish to align the pyramids to face the sunset on August 13, which was the beginning date of the Maya Long Count calendar. However, recent research has shown that the earliest orientations marking sunsets on August 13 (and April 30) occur outside of the Maya area. Their purpose must have been to record the dates separated by a period of 260 days (from August 13 to April 30), equivalent to the length of the sacred Mesoamerican calendrical count. In general, the orientations in Mesoamerican architecture tend to mark the dates separated by multiples of 13 and 20 days, i.e. of basic periods of the calendrical system. The distribution of these dates in the year suggests that the orientations allowed the use of observational calendars that facilitated the prediction of agriculturally significant dates. These conclusions are supported by the results of systematic research accomplished in various Mesoamerican regions, including central Mexico, the Maya Lowlands, Oaxaca, the Gulf Coast lowlands, and western and northern Mesoamerica. While solar orientations prevail, some prominent buildings were aligned to Venus extremes, a notable example being the Governor's Palace at Uxmal. Orientations to lunar standstill positions on the horizon have also been documented; they are particularly common along the Northeast Coast of the Yucatán peninsula, where the worship of the goddess Ixchel, associated with the Moon, is known to have had an outstanding importance during the Postclassic period.
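The 260-day interval mentioned above can be verified with simple date arithmetic; the snippet below is an illustrative addition (any non-leap year works for the span shown), not part of the source text.

```python
from datetime import date

# Days from August 13 to April 30 of the following year (a non-leap year is shown).
days = (date(2023, 4, 30) - date(2022, 8, 13)).days
print(days)  # 260, the length of the sacred Mesoamerican calendrical count
```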
The Plaza
Nearly every known ancient Mesoamerican city had one or more formal public plazas. They are typically large impressive spaces, surrounded by tall pyramids, temples, and other important buildings. Activities that would take place in these plazas would include private rituals, periodic markets, mass spectator ceremonies, participatory public ceremonies, feasts, and other popular celebrations.
The size of the main plazas in Mesoamerican cities differed greatly, the largest being located in Tenochtitlan with an estimated size of 115,000 square meters. This plaza is an outlier because the city's population was so large. The next largest estimated plaza is located on the Gulf Coast in the city of Cempoala (or Zempoala), measuring 48,088 square meters. Most plazas average around 3,000 square meters, the smallest being at the site of Paxte, which is 528 square meters. Some cities contain many smaller plazas throughout, while others concentrate on a single significantly large main plaza.
Tenochtitlan
Tenochtitlan was an Aztec city that thrived from 1325 to 1521. The city was built on an island, surrounded on all sides by Lake Texcoco. It consisted of an elaborate system of canals, aqueducts, and causeways allowing the city to supply its residents. The island was about 12 square kilometers and had a population of approximately 125,000 people, making it the largest Mesoamerican city ever recorded. The main plaza of Tenochtitlan was approximately 115,000 square meters. The main temple of Tenochtitlan, known as the Templo Mayor or the Great Temple, was 100 meters by 80 meters at its base and 60 meters tall. The city ultimately fell in 1521 when it was destroyed by the Spanish conquistador Hernán Cortés. Cortés and the Spaniards raided the city for its gold supply and artifacts, leaving little behind of the Aztec civilization.
At the monumental Templo Mayor of Tenochtitlan, archaeologists discovered that the Aztec enlarged the temple seven times, with five extra façades, but always kept intact the basic dual symbolism of the rain god Tlaloc and the tribute/war god Huitzilopochtli. Mexican Archaeologist Eduardo Matos Moctezuma has shown that the symbolic and ritual life of this imperial shrine unified the patterns of forced tributary payments from hundreds of communities with the agricultural and hydraulic subsystems of food production.
Pyramids
Often the most important religious temples sat atop the towering pyramids, presumably as the closest place to the heavens. While recent discoveries point toward the extensive use of pyramids as tombs, the temples themselves seem to rarely, if ever, contain burials. Residing atop the pyramids, some over two hundred feet tall, such as those at El Mirador, the temples were impressive and decorated structures in their own right. Commonly topped with a roof comb, or superficial grandiose wall, these temples might have served as a type of propaganda.
Pyramid of the Sun
The Pyramid of the Sun is the largest structure created in the city of Teotihuacan and one of the largest structures in the entire Western Hemisphere. It stands at about 216 feet tall. The pyramid is located on the east side of the Avenue of the Dead, which runs almost directly down the center of the city of Teotihuacan. After archaeologists discovered animal remains, masks, figurines (notably one of the Aztec god Huehueteotl), and shards of clay pots in the pyramid, it was agreed that the pyramid was likely a ritual temple at one point.
Temple of the Feathered Serpent
The Temple of the Feathered Serpent was constructed after the Pyramid of the Sun and the Pyramid of the Moon had been completed. The temple marks one of the first uses of the talud-tablero architectural style. Its surfaces were decorated with murals, just like many temples built at the same time and by the same people. The tableros featured large serpent heads complete with elaborate headdresses. The feathered serpent refers to the Aztec god Quetzalcoatl.
Ballcourts
The Mesoamerican ballgame ritual was a symbolic journey between the underworld and the world of the living, and many ball courts are found in the mid-part of the city functioning as a connection between the northern and southern halves of the city. All but the earliest ball courts are masonry structures. Over 1300 ball courts have been identified, and although there is a tremendous variation in size, they all have the same general shape: a long narrow alley flanked by two walls with horizontal, sloping, and sometimes vertical faces. The later vertical faces, such as those at Chichen Itza and El Tajin, are often covered with complex iconography and scenes of human sacrifice.
Although the alleys in early ball courts were open-ended, later ball courts had enclosed end-zones, giving the structure an I-shape when viewed from above. The playing alley may be at ground level, or the ball court may be "sunken".
Ball courts were no mean feats of engineering. One of the sandstone blocks on El Tajin's South Ballcourt is 11 m long and weighs more than 10 tons.
Residential quarters and elite residences
Large and often highly decorated, the palaces usually sat close to the center of a city and housed the population's elite. Any exceedingly large royal palace, or one consisting of many chambers on different levels, might be referred to as an acropolis. However, these were often one-story buildings consisting of many small chambers and typically at least one interior courtyard; these structures appear to take into account the functionality required of a residence, as well as the decoration required for their inhabitants' stature.
Archaeologists seem to agree that many palaces are home to various tombs. At Copán, beneath over four-hundred years of later remodeling, a tomb for one of the ancient rulers has been discovered and the North Acropolis at Tikal appears to have been the site of numerous burials during the Terminal Pre-classic and Early Classic periods.
Building materials
The most surprising aspect of the great Mesoamerican structures is their lack of many advanced technologies that would seem necessary for such constructions. Lacking metal tools, Mesoamerican architecture required one thing in abundance: manpower. Yet, beyond this enormous requirement, the remaining materials seem to have been readily available. Builders most often utilized limestone, which remained pliable enough to be worked with stone tools while being quarried, and hardened only once removed from its bed. In addition to the structural use of limestone, much of the mortar consisted of crushed, burnt, and mixed limestone that mimicked the properties of cement and was used just as widely for stucco finishing as it was for mortar. However, later improvements in quarrying techniques reduced the necessity for this limestone-stucco as the stones began to fit quite perfectly, yet it remained a crucial element in some post and lintel roofs.
A common building material in central Mexico was tezontle (a light, volcanic rock). It was common for palaces and monumental structures to be made of this rough stone and then covered with stucco or with a cantera veneer. Very large and ornate architectural ornaments were fashioned from a very enduring stucco (kalk), especially in the Maya region, where a type of hydraulic limestone cement or concrete was also used. In the case of the common houses, wooden framing, adobe, and thatch were used to build homes over stone foundations. However, instances of what appear to be common houses of limestone have been discovered as well. Buildings were typically finished with high slanted roofs usually built of wood or thatch although stone roofs in these high slant fashions are also used rarely.
Styles
Megalithic
An architectural construction technique that employs large dry-laid limestone blocks (c. 1 m × 50 cm × 30 cm) covered with a thick layer of stucco. This style was common in the northern Maya lowlands from the Preclassic until the early parts of the Early Classic.
Talud-tablero
Pyramids in Mesoamerica were platformed pyramids, and many used a style called talud-tablero, which first became common in Teotihuacan. This style consists of a platform structure, or "tablero", on top of a sloped "talud". Many different variants on the talud-tablero style arose throughout Mesoamerica, developing and manifesting themselves differently among the various cultures.
Classic Period Maya styles
Palenque, Tikal, Copán, Tonina, the corbeled arch.
"Toltec" Style
Chichén Itzá, Tula Hidalgo, chacmools, Atlantean figures, Quetzalcoatl designs.
Puuc
So named after the Puuc hills in which this style developed and flourished during the latter portion of the Late Classic and throughout the Terminal Classic in the northern Maya lowlands, Puuc architecture consists of veneer facing stones applied to a concrete core. Two façades were typically built, partitioned by a ridge of stone. The blank lower façade is formed by flat cut stones and punctuated by doorways. The upper partition is richly decorated with repeating geometric patterns and iconographic elements, especially the distinctive curved-nosed Chaac masks. Carved columnettes are also common.
Technology
Corbelled arch
Mesoamerican cultures never invented the keystone, and so were unable to build true arches; instead, all of their architecture made use of the "false" or corbelled arch. These arches are built without centering, by regularly corbelling the horizontal courses of the wall masonry, and can stand without support. This type of arch supports much less weight than a true arch.
However, recent work by engineer James O'Kon suggests the Mesoamerican "arch" is technically not a corbelled arch at all but a trapezium truss system. Moreover, unlike a corbelled arch, it does not rely on overlapping layers of blocks but on cast-in-place concrete, often supported by timber thrust beams. Computer analysis reveals this to be structurally superior to a curved arch.
True arch
Scholars such as David Eccott and Gordon Ekholm argue that true arches were known in pre-Columbian times in Mesoamerica; they point to various examples of true arches at a Maya site in La Muneca, the facade of Temple A at Nukum, two low domes at Tajin in Veracruz, a sweat bath at Chichen Itza, and an arch at Oztuma. In 2010, a robot discovered a long arch-roofed passageway underneath the Pyramid of Quetzalcoatl, which stands in the ancient city of Teotihuacan north of Mexico City, dated to around 200 AD.
UNESCO World Heritage Sites
A number of important archeological sites representing Mesoamerican architecture have been categorized as "World Heritage Sites" by the UNESCO.
El Salvador
Maya site of Joya de Cerén
Honduras
Maya Site of Copán
Guatemala
Tikal National Park
Archaeological Park and Ruins of Quirigua
Mexico
Pre-Hispanic City and National Park of Palenque
Pre-Hispanic Town of Uxmal
Pre-Hispanic City of Teotihuacan
Historic Centre of Oaxaca and Archaeological Site of Monte Albán
Pre-Hispanic City of Chichen Itza
Archaeological Monuments Zone of Xochicalco, Morelos
El Tajin, Pre-Hispanic City of Veracruz
Ancient Maya City of Calakmul, Campeche
Historic Centre of Mexico City and Xochimilco (with the Templo Mayor and adjacent temples)
Agave Landscape and Ancient Industrial Facilities of Tequila (with the Guachimontones and nearby Teuchitlán culture sites)
Prehistoric Caves of Yagul and Mitla in the Central Valley of Oaxaca (with the Yagul archaeological site)
See also
Maya architecture
Mayan Revival architecture
Maya city
Buildings and structures in Mesoamerica
Triadic pyramid
Notes
Further reading
Leibsohn, Dana, and Barbara E. Mundy, "The Mechanics of the Art World", Vistas: Visual Culture in Spanish America, 1520–1820 (2015).
External links
Architectural history
Architectural styles
Architecture in Mexico
Central American architecture | Mesoamerican architecture | [
"Engineering"
] | 3,559 | [
"Architectural history",
"Architecture"
] |
6,574,091 | https://en.wikipedia.org/wiki/Gain%20scheduling | In control theory, gain scheduling is an approach to control of nonlinear systems that uses a family of linear controllers, each of which provides satisfactory control for a different operating point of the system.
One or more observable variables, called the scheduling variables, are used to determine what operating region the system is currently in and to enable the appropriate linear controller. For example, in an aircraft flight control system, the altitude and Mach number might be the scheduling variables, with different linear controller parameters available (and automatically plugged into the controller) for various combinations of these two variables. In brief, gain scheduling is a control design approach that constructs a nonlinear controller for a nonlinear plant by patching together a collection of linear controllers.
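As a concrete illustration of the idea, the sketch below interpolates pre-tuned PID gains between operating points indexed by a single scheduling variable. Mach number is used purely as a hypothetical example here; the gain values and names are invented for illustration and are not taken from any real flight control system.

```python
import bisect

# Hypothetical gain table: scheduling variable (Mach number) -> (Kp, Ki, Kd),
# each tuple tuned for a linear controller around that operating point.
GAIN_TABLE = [
    (0.3, (2.0, 0.50, 0.10)),
    (0.6, (1.4, 0.35, 0.08)),
    (0.9, (0.9, 0.20, 0.05)),
]

def scheduled_gains(mach: float):
    """Linearly interpolate PID gains between the two nearest operating points."""
    points = [m for m, _ in GAIN_TABLE]
    if mach <= points[0]:
        return GAIN_TABLE[0][1]
    if mach >= points[-1]:
        return GAIN_TABLE[-1][1]
    i = bisect.bisect_right(points, mach)
    (m0, g0), (m1, g1) = GAIN_TABLE[i - 1], GAIN_TABLE[i]
    w = (mach - m0) / (m1 - m0)
    return tuple((1 - w) * a + w * b for a, b in zip(g0, g1))

print(scheduled_gains(0.45))  # gains blended between the 0.3 and 0.6 operating points
```

In practice, each row of such a table would come from linearizing the plant about the corresponding operating point and designing a linear controller there, exactly as described above.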
A relatively broad survey of the state of the art in gain scheduling has been published as "Survey of Gain-Scheduling Analysis & Design" by D. J. Leith and W. E. Leithead.
Recently, new methodologies using machine learning, such as adaptive control based on artificial neural networks (ANNs) and reinforcement learning, have been studied.
See also
Linear parameter-varying control
References
Further reading
Nonlinear control
Control engineering
Classical control theory | Gain scheduling | [
"Engineering"
] | 229 | [
"Control engineering"
] |
6,575,865 | https://en.wikipedia.org/wiki/Ohana%20project | The Ohana project aims to use seven large telescopes on top of Mauna Kea, on Hawaii's Big Island, in an interferometer configuration. Mauna Kea is a dormant volcano whose height is 13,600 ft (4,145 m). It is a good site for telescopes which probe the universe at optical and infrared wavelengths because of its altitude and low levels of light pollution.
‘OHANA stands for Optical Hawaiian Array for Nanoradian Astronomy. In Hawaiian, ‘ohana means "family".
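To see why "nanoradian" is an appropriate scale, one can estimate the diffraction-limited angular resolution λ/B of a two-telescope interferometer. The numbers below are illustrative assumptions (a K-band wavelength of 2.2 μm and a baseline of roughly 85 m, the approximate Keck I–Keck II separation), not figures quoted by the project.

```python
import math

wavelength = 2.2e-6   # metres, assumed K-band observing wavelength
baseline = 85.0       # metres, assumed Keck I - Keck II separation

resolution_rad = wavelength / baseline
print(resolution_rad * 1e9)                          # ~26 nanoradians
print(math.degrees(resolution_rad) * 3600 * 1000)    # ~5.3 milliarcseconds
```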
Telescopes involved in the project
Among the telescopes belonging to the Mauna Kea Observatory, seven are involved in the ‘OHANA project:
Two Keck telescopes which each have a 10 m diameter primary mirror.
Subaru with an 8.2 m primary mirror.
Gemini North with an 8 m primary mirror.
Canada France Hawaii Telescope (CFHT) with a prime focus/Cassegrain configuration with a usable aperture diameter of 3.58 meters.
The Infrared Telescope Facility is a 3 m telescope.
The United Kingdom Infrared Telescope is a 3.8 m telescope.
Stages of the project
The project's main innovation is the use of optical fibers instead of mirrors to guide the beams into the interferometer's recombination system.
Before carrying out tests with the telescopes, scientists had to prove that the components used with the optical fibers were reliable (for example, the off-axis parabola used to inject the beams into the fibers). Those tests were partly carried out at the Infrared Optical Telescope Array Observatory.
The next stage was to use two 300 m optical fibers between Keck I and Keck II to look at the same source. This was achieved on June 15, 2005.
The current stage of the project is the use of the CFHT and Gemini North together. To do that, a long delay line was integrated at the Meudon Observatory for use in the CFHT Coudé room. The delay line is currently due to be shipped to Hawai'i and installed in the Coudé room.
The next stage will be the use of the two Kecks and the Subaru, then the use of Gemini North, the CFHT and UKIRT together, and finally, the use of all telescopes together.
External links
OHANA from Canada-France-Hawaii Telescope
Telescopes
Interferometers | Ohana project | [
"Astronomy",
"Technology",
"Engineering"
] | 473 | [
"Interferometers",
"Telescopes",
"Measuring instruments",
"Astronomical instruments"
] |
6,580,748 | https://en.wikipedia.org/wiki/Chloryl%20fluoride | Chloryl fluoride is the chemical compound with the formula ClO2F. It is commonly encountered as side-product in reactions of chlorine fluorides with oxygen sources. It is the acyl fluoride of chloric acid.
Preparation
ClO2F was first reported by Schmitz and Schumacher in 1942, who prepared it by the fluorination of ClO2. The compound is more conveniently prepared by reaction of sodium chlorate and chlorine trifluoride and purified by vacuum fractionation, i.e. selectively condensing this species separately from other products. This species is a gas boiling at −6 °C:
6 NaClO3 + 4 ClF3 → 6 ClO2F + 2 Cl2 + 3 O2 + 6 NaF
Structure
In contrast to O2F2, ClO2F is a pyramidal molecule. This structure is predicted by VSEPR theory. The differing structures reflect the greater tendency of chlorine to exist in positive oxidation states with oxygen and fluorine ligands. The related Cl–O–F compound perchloryl fluoride, ClO3F, is tetrahedral.
The related bromine compound bromyl fluoride (BrO2F) adopts the same structure as ClO2F, whereas iodyl fluoride (IO2F) forms a polymeric substance under standard conditions.
References
External links
WebBook page for ClO2F
Chloryl compounds
Oxyfluorides
Nonmetal halides
Gases
Substances discovered in the 1940s | Chloryl fluoride | [
"Physics",
"Chemistry"
] | 329 | [
"Statistical mechanics",
"Gases",
"Phases of matter",
"Matter"
] |
520,099 | https://en.wikipedia.org/wiki/Siteswap | Siteswap, also called quantum juggling or the Cambridge notation, is a numeric juggling notation used to describe or represent juggling patterns. The term may also be used to describe siteswap patterns, possible patterns transcribed using siteswap. Throws are represented by non-negative integers that specify the number of beats in the future when the object is thrown again: "The idea behind siteswap is to keep track of the order that balls are thrown and caught, and only that." It is an invaluable tool in determining which combinations of throws yield valid juggling patterns for a given number of objects, and has led to previously unknown patterns (such as 441). However, it does not describe body movements such as behind-the-back and under-the-leg. Siteswap assumes that "throws happen on beats that are equally spaced in time."
For example, a three-ball cascade may be notated "3", while a shower may be notated "51".
Origin
The notation was invented by Paul Klimek in Santa Cruz, California in 1981, and later developed by undergraduates Bruce "Boppo" Tiemann, Joel David Hamkins, and the late Bengt Magnusson at the California Institute of Technology in 1985, and by Mike Day, mathematician Colin Wright, and mathematician Adam Chalcraft in Cambridge, England in 1985 (whence comes an alternative name). Hamkins wrote computer code in 1985 to systematically generate siteswap patterns—the printouts were taken immediately to the Athenaeum lawn at Caltech to be tried out by himself, Tiemann, and Magnusson. The numbers derive from the number of balls used in the most common juggling patterns. Siteswap has been described as, "perhaps the most popular" name.
The name siteswap comes from the ability to generate patterns by "swapping" the landing times of any two "sites" in a siteswap using the swap property described below. For example, swapping the landing times of throws "5" and "1" in the siteswap "51" generates the siteswap "24".
Vanilla
Its simplest form, sometimes called vanilla siteswap, describes only patterns whose throws alternate hands and in which one ball is thrown from each hand at a time. If one were juggling while walking forward, something like the adjacent diagram would be seen from above, sometimes called a space-time diagram or ladder diagram. In this diagram, three balls are being juggled. Time progresses from the top to the bottom.
This pattern can be described by stating how many throws later each ball is caught. For instance, on the first throw in the diagram, the purple ball is thrown in the air (up out of the screen, towards the bottom left) by the right hand, next the blue ball, the green ball, the green ball again, and the blue ball again and then finally the purple ball is caught and thrown by the left hand on the fifth throw, this gives the first throw a count of 5. This produces a sequence of numbers which denote the height of each throw to be made. Since hands alternate, odd-numbered throws send the ball to the other hand, while even-numbered throws send the ball to the same hand. A 3 represents a throw to the opposite hand at the height of the basic three-cascade; a 4 represents a throw to the same hand at the height of the four-fountain, and so on.
There are three special throws: a 0 is a pause with an empty hand, a 1 is a quick pass straight across to the other hand, and a 2 is a momentary hold of an object. Throws longer than 9 beats are given letters starting with a. The number of beats a ball is in the air usually corresponds to how high it was thrown, so many people refer to the numbers as heights, but this is not technically correct; all that matters is the number of beats in the air, not how high it is thrown. For example, bouncing a ball takes longer than a throw in the air to the same height, and so can be a higher siteswap value while being a lower throw.
Each pattern repeats after a certain number of throws, called the period of the pattern. The period is the number of digits in the shortest non-repeating representation of a pattern. For example, the pattern diagrammed on the right is 53145305520 which has 11 digits and therefore has a period of 11. If the period is an odd number, like this one, then each time the sequence is repeated, the sequence starts with the other hand, and the pattern is symmetrical because each hand is doing the same thing (although at different times). If the period is an even number then on every repeat of the pattern, each hand does the same thing it did last time and the pattern is asymmetrical.
The number of balls used for the pattern is the average of the throw numbers in the pattern. For example, 441 is a three-object pattern because (4+4+1)/3 is 3, and 86 is a seven-object pattern. All patterns must therefore have a siteswap sequence that averages to an integer. Not all such sequences describe patterns; for example, 543 has the integer average 4, but its three throws all land at the same time, colliding.
Some hold to a convention that a siteswap is written with its highest numbers first. One drawback of doing so is evident in the pattern 51414, a 3-ball pattern which cannot be inserted into the middle of a string of 3-throws, unlike its rotation 45141, which can.
Synchronous
Siteswap notation can be extended to denote patterns containing synchronous throws from both hands. The numbers for the two throws are combined in parentheses and separated by a comma. Since synchronous throws are only thrown on even beats, only even numbers are allowed. Throws that move to the other hand are marked by an x following the number. Thus a synchronous three-prop shower is denoted (4x,2x), meaning one hand continually throws a low throw or 'zip' to the opposite hand, while the other continually makes a higher throw to the first. Sequences of bracketed pairs are written without delimiting markers. Patterns that repeat in mirror image on the opposite side can be abbreviated with a *. For example, the 3-ball box pattern (4,2x)(2x,4) can be abbreviated to (4,2x)*.
Multiplexing
A further extension allows siteswap to notate patterns involving multiple throws from either or both hands at the same time in a multiplex pattern. The numbers for multiple throws from a single hand are written together inside square brackets. For example, [33]33 is a normal 3-ball cascade, with a pair of balls always thrown together.
Passing
Simultaneous juggling: <xxx|yyy> notation means one juggler does 'xxx' while another does 'yyy'. 'p' is used to represent a passing throw. For example, <3p 3|3p 3> is a 6 prop '2 count' passing pattern, where all left hand throws are passes and right hand throws are selves. This can also be used with synchronous patterns; a two-person 'shower' is then <(4xp,2x)|(4xp,2x)>
Fractional notation
If the pattern contains fractions, e.g. <4.5 3 3 | 3 4 3.5> the juggler after the bar is supposed to be half a count later, and all fractions are passes.
Social siteswap
If both jugglers juggle the same pattern (although shifted in time), the pattern is called a social siteswap and only half of the pattern needs to be written: <4p 3| 3 4p> becomes 4p 3, and <4.5 3 3| 3 4.5 3> becomes 4.5 3 3. (Note that in the latter case, the 4.5 throws will be straight passes from one juggler and crossing passes, i.e. left to left or right to right hand, from the other juggler.)
Social siteswaps can also be created for more than 2 jugglers (e.g. 4p 3 3 or 3.7 3 for 3 jugglers, where 3.7 is meant to mean 3.66666... or 3 2/3). Each juggler should then start counting one count after the previous one.
Note that some jugglers use fractions to note multi-handed patterns.
Multi-handed
Multi-hand notation was developed by Ed Carstens in 1992 for use with his juggling program JugglePro. Siteswap notation in its simplest form ("Vanilla siteswap") assumes that only one ball is thrown at a time. It follows that any valid siteswap for two hands will also be valid for any number of hands, on the condition that the hands throw after each other. Commonly used multi-hand siteswaps are 1-handed (diabolo) siteswap, and 4-handed (passing) siteswap.
1-handed (diabolo)
The siteswap is performed by a single hand, or a diabolo player throwing diabolos at different heights.
4-handed
Valid siteswaps can be juggled by a 4-handed juggler, or for 2 jugglers coordinating 4 hands, on the condition that hands throw alternately.
In practice, this is most easily obtained if the jugglers throw by turns, one sequence being (Right hand of juggler A, right hand of juggler B, left hand of A, left hand of B).
Mixed-up notation
Some jugglers, when noting 4-handed siteswap, divide the siteswap values by the number of jugglers. This leads to a fractional notation similar to the notation for social siteswaps, but the order of the notation can be different.
State diagrams
Just after throwing a ball (or club or other juggling object), all balls are in the air and are under the influence of gravity. Assuming the balls are caught at a consistent level, the timing of when the balls land is already determined. We can mark each point in time when a ball is going to land with an x, and each point in time when there is not yet a ball scheduled to land with a -. This describes the current state and determines which throw values are possible next. For instance, the state just after the first throw in the diagram is xx--x. We can use the state to determine what can be thrown next. First we take the x off the left-hand side (that's the ball that's landing next) and shift everything else to the left, filling in a - on the right. This leaves us with x--x-. Since we caught a ball (the x we removed from the left) we can't "throw" a 0 next. We also can't throw a 1 or a 4, because there are already balls scheduled to land there. So, assuming that the highest we can accurately throw a ball is to a height of 5, we can only throw a 2, a 3, or a 5. In this diagram, the juggler threw a 3, so an x goes in the third spot, replacing the -, and we have x-xx- as the new state.
The diagram shown illustrates all possible states for someone juggling three items and a maximum height of 5. From each state one can follow the arrows and the corresponding numbers produce the siteswap. Any path which produces a cycle generates a valid siteswap, and all siteswaps can be generated this way. The diagram quickly becomes bigger when more balls or higher throws are introduced as there are more possible states and more possible throws.
Another method of representing siteswap states is represent a ball with a 1 instead of an x, and represent a spot where there's no ball scheduled to land with a 0 instead of a -. Then the state can be represented with a binary number, such as binary 10011. This format makes it possible to represent multiplex states, i.e. the number 2 represents that 2 balls land on that beat.
A siteswap state diagram can also be represented as a state-transition table, as shown on the right. To generate a siteswap, pick a starting state row. Index into the row via the corresponding throw column. The state entry at the intersection is the transitioned to state when that throw is made. From the new state, one can index into the table again. This process can be repeated so that when the original state is reached, a valid siteswap will be created.
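The state-advance rule described above translates into a few lines of code. The sketch below is an illustrative addition (not from the original article), using a set-of-future-landing-beats representation equivalent to the x/- strings described above; the function name is invented for illustration.

```python
def advance_state(state: frozenset, throw: int):
    """Advance a siteswap state by one throw.

    `state` is the set of future beats (0 = now) on which a ball is scheduled
    to land.  Returns the new state, or None if the throw is not allowed.
    """
    if 0 in state:                      # a ball lands now and must be re-thrown
        if throw == 0:
            return None                 # can't throw a 0 while holding a ball
        rest = state - {0}
    else:                               # empty hand: only a 0 (pause) is possible
        if throw != 0:
            return None
        rest = state
    if throw != 0:
        if throw in rest:
            return None                 # another ball already lands on that beat
        rest = rest | {throw}
    return frozenset(b - 1 for b in rest)   # one beat passes

# Follow the 3-ball ground state {0, 1, 2} through the pattern 531:
state = frozenset({0, 1, 2})
for t in [5, 3, 1]:
    state = advance_state(state, t)
    print(sorted(state))
# Ends back at [0, 1, 2], so 531 traces a cycle through the state graph and is valid.
```

Enumerating, for each reachable state, which throws lead to which states reproduces the state-transition table discussed above.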
Mathematical properties
Validity
Not all siteswap sequences are valid. All vanilla, synchronous, and multiplex siteswap sequences are valid if their state transitions create a cycle in their state diagram graph. Sequences that do not create a cycle are invalid. For example, the pattern 531 can be mapped to a state diagram as shown on the right. Since the transitions in this sequence create a cycle in the graph, this pattern is valid.
There are other methods of determining a sequence's validity based on the flavor of siteswap.
A vanilla siteswap sequence (s_0, s_1, ..., s_{n-1}), where n is the period of the siteswap, is valid when the set {(s_i + i) mod n : 0 ≤ i < n} (written in set-builder notation) has cardinality equal to the period n. To find out whether a pattern is valid, first create a new sequence formed by adding 0 to the first number, 1 to the second number, 2 to the third number, and so on. Second, calculate the modulus (remainder) of each number with respect to the period. If none of the numbers are duplicated in this final sequence, then the pattern is valid.
For example, the pattern 531 would produce (5+0, 3+1, 1+2), or (5, 4, 3). Since the pattern 531 has a period of 3, taking each number modulo 3 gives (2, 1, 0). In this case, 531 is valid since the numbers are all unique. As another example, 513 is an invalid pattern because the first step produces (5+0, 1+1, 3+2), or (5, 2, 5); the second step produces (2, 2, 2); and the final sequence contains at least one duplicated number, in this case 2.
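The validity test just described is short enough to state as code. The following is a minimal sketch added for illustration (the function name is invented); it implements exactly the permutation test above.

```python
def is_valid_vanilla(pattern):
    """Return True if the vanilla siteswap (a list of throw heights) is valid."""
    n = len(pattern)
    landing_slots = {(throw + i) % n for i, throw in enumerate(pattern)}
    return len(landing_slots) == n      # no two throws land on the same beat

print(is_valid_vanilla([5, 3, 1]))      # True
print(is_valid_vanilla([5, 1, 3]))      # False (two throws collide)
print(is_valid_vanilla([4, 4, 1]))      # True  (the three-ball pattern 441)
```

The number of balls is then the average of the throws, e.g. (4 + 4 + 1)/3 = 3 for 441.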
A synchronous siteswap is valid if
it only contains even numbers, and
it can be converted into a valid vanilla siteswap using the slide property (described below);
otherwise it is invalid.
Swap property
New valid vanilla sequences can be generated by swapping adjacent elements of another valid vanilla siteswap sequence, adding 1 to the throw being moved to the left (earlier) position and subtracting 1 from the throw being moved to the right (later) position. In other words, the swap property converts a valid sequence containing the adjacent throws (..., s_i, s_{i+1}, ...) into the new valid sequence (..., s_{i+1} + 1, s_i - 1, ...).
For example, the swap property performed on the inner two throws of the sequence 4413 would move the 4 to the right subtracting 1 from it to become 3 and move the 1 to the left adding 1 to it to become 2. This produces the new valid siteswap pattern 4233.
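As a small added illustration (the helper name is invented), the swap rule can be written as a one-step transformation and checked against the 4413 example above:

```python
def site_swap(pattern, i):
    """Swap the throws at positions i and i+1, adjusting their values as described above."""
    new = list(pattern)
    new[i], new[i + 1] = pattern[i + 1] + 1, pattern[i] - 1
    return new

print(site_swap([4, 4, 1, 3], 1))   # [4, 2, 3, 3], i.e. the valid pattern 4233
```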
Slide property
A valid synchronous sequence can be converted to a valid asynchronous sequence, and vice versa, using the slide property: one hand's throws are "slid" (delayed) by one beat so that the two hands throw alternately, which is how the property gets its name. A throw made by the delayed hand that lands in the non-delayed hand loses 1 (it is thrown one beat later but lands on the same beat), a throw made by the non-delayed hand that lands in the delayed hand gains 1, and throws caught by the hand that made them keep their value. Since either hand may be delayed, each synchronous sequence yields two vanilla sequences. For example, the siteswap (8x,4x)(4,4) yields the two asynchronous (vanilla) siteswaps 9344 and 5744.
Prime patterns
Siteswaps may be considered either prime or composite. A siteswap is prime if the path created in its state diagram does not traverse any state more than once. Siteswaps that are not prime are called composite.
A non-rigorous but simpler method of determining if a siteswap is prime is to try to split it into any valid shorter pattern which uses the same number of props. For example, 44404413 can be split into 4440, 441, and 3; therefore, 44404413 is a composite. Another example, 441, which uses three props, is prime, as 1, 4, 41, and 44 are not valid three prop patterns (as 1/1≠3, 4/1≠3, (4+1)/2≠3, and (4+4)/2≠3). Sometimes this process does not work; for example, 153 (better known by its rotation 531) looks like it can be split into 15 and 3, but checking that the cycle has no repeating nodes in the graph traversal indicates that it is prime by the more rigorous definition.
It has been shown empirically that the longest prime siteswaps for a given maximum throw height consist mostly of throws of the maximum height and of 0. The longest prime patterns for 3 balls (with a maximum height of 22), for 9 balls (with a maximum height of 13), and for ball counts and heights in between, were enumerated by Jack Boyce in February 1999 using a program called jdeep. The full list of longest prime siteswaps generated by jdeep (with '0' throws represented by a '-' and maximum-height throws represented by a '+') can be found here.
Mathematical connections
Connections to abstract algebra
Vanilla siteswap patterns may be viewed as certain elements of the affine symmetric group (the affine Weyl group of type Ã_{n-1}). One presentation of this group is as the set of bijective functions f on the integers such that, for a fixed n: f(i + n) = f(i) + n for all integers i. If the element f satisfies the further condition that f(i) ≥ i for all i, then f corresponds to the (infinitely repeated) siteswap pattern whose ith number is f(i) − i: that is, the ball thrown at time i will land at time f(i).
Connections to topology
A subset of these siteswap patterns naturally label strata in the positroid stratification of the Grassmannian.
List of symbols
Number: Relative duration (height) of a toss. 1, 2, 3...
Brackets []: Multiplex. [333]33.
Chevrons and vertical bar <|>: Simultaneous and passing patterns.
P: Pass. <333P|333P>
Fraction: Pass 1/y beats later. <4.5 3 3 | 3 4 3.5>
Parentheses (): Synchronous pattern.
*: Synchronous pattern that switches sides. (4,2x)(2x,4) = (4,2x)*
x: Toss to the other hand during a synchronous pattern.
Programs
There are many free computer programs available which simulate juggling patterns.
Juggling Lab animator - An open source animator which was written in Java and interprets nearly all siteswap syntax.
Jongl - 3d animator capable of displaying multihand (passing) patterns.
JoePass! works on Windows, Macintosh and Wine (For Linux)
Gunswap - A web based, open source, 3d juggling animator and pattern library.
There are also some games to play with siteswap:
Siteswap Game developed by Sebi Haushofer (for Java)
See also
List of siteswaps
Notes
References
Further reading
External links
"Symmetric Passing Patterns", PassingDB.com.
DSSS: The Diabolo Siteswap Simulator, ArtofDiabolo.com.
Juggling Lab (Downloadable animator)
Gunswap Juggling (Online animator)
TWJC Siteswap Calculator (Helpful Vanilla, Multiplex and Synchronous siteswap validator)
"Staggered Symmetric Passing Patterns for 2 jugglers" by Sean Gandini (social siteswaps)
Juggling patterns and tricks
Notation
Rhythm and meter | Siteswap | [
"Physics",
"Mathematics"
] | 4,123 | [
"Physical quantities",
"Time",
"Symbols",
"Rhythm and meter",
"Notation",
"Spacetime"
] |
521,267 | https://en.wikipedia.org/wiki/Reflection%20%28physics%29 | Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves. The law of reflection says that for specular reflection (for example at a mirror) the angle at which the wave is incident on the surface equals the angle at which it is reflected.
In acoustics, reflection causes echoes and is used in sonar. In geology, it is important in the study of seismic waves. Reflection is observed with surface waves in bodies of water. Reflection is observed with many types of electromagnetic wave, besides visible light. Reflection of VHF and higher frequencies is important for radio transmission and for radar. Even hard X-rays and gamma rays can be reflected at shallow angles with special "grazing" mirrors.
Reflection of light
Reflection of light is either specular (mirror-like) or diffuse (retaining the energy, but losing the image) depending on the nature of the interface. In specular reflection the phase of the reflected waves depends on the choice of the origin of coordinates, but the relative phase between s and p (TE and TM) polarizations is fixed by the properties of the media and of the interface between them.
A mirror provides the most common model for specular light reflection, and typically consists of a glass sheet with a metallic coating where the significant reflection occurs. Reflection is enhanced in metals by suppression of wave propagation beyond their skin depths. Reflection also occurs at the surface of transparent media, such as water or glass.
In the diagram, a light ray PO strikes a vertical mirror at point O, and the reflected ray is OQ. By projecting an imaginary line through point O perpendicular to the mirror, known as the normal, we can measure the angle of incidence, θi and the angle of reflection, θr. The law of reflection states that θi = θr, or in other words, the angle of incidence equals the angle of reflection.
In fact, reflection of light may occur whenever light travels from a medium of a given refractive index into a medium with a different refractive index. In the most general case, a certain fraction of the light is reflected from the interface, and the remainder is refracted. Solving Maxwell's equations for a light ray striking a boundary allows the derivation of the Fresnel equations, which can be used to predict how much of the light is reflected, and how much is refracted in a given situation. This is analogous to the way impedance mismatch in an electric circuit causes reflection of signals. Total internal reflection of light from a denser medium occurs if the angle of incidence is greater than the critical angle.
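As an illustration of the Fresnel equations mentioned above, the sketch below computes the reflectance for s- and p-polarized light at a planar interface between two lossless media, along with the critical angle for total internal reflection. It is a minimal example added here (not drawn from the article); it assumes non-magnetic media with real refractive indices, and note that sign conventions for the p-polarized amplitude coefficient vary between textbooks (only the reflectances |r|² are compared).

```python
import cmath
import math

def fresnel_reflectance(n1: float, n2: float, theta_i_deg: float):
    """Reflectance (R_s, R_p) for light going from index n1 into index n2."""
    theta_i = math.radians(theta_i_deg)
    cos_i = math.cos(theta_i)
    # Snell's law; cos(theta_t) becomes imaginary beyond the critical angle,
    # which automatically yields total internal reflection (R = 1).
    sin_t = n1 / n2 * math.sin(theta_i)
    cos_t = cmath.sqrt(1 - sin_t ** 2)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return abs(r_s) ** 2, abs(r_p) ** 2

# Air-to-glass at normal incidence: about 4% of the light is reflected.
print(fresnel_reflectance(1.0, 1.5, 0.0))       # (~0.04, ~0.04)
# Glass-to-air: critical angle ~41.8 degrees, then total internal reflection.
print(math.degrees(math.asin(1.0 / 1.5)))       # ~41.8
print(fresnel_reflectance(1.5, 1.0, 60.0))      # (1.0, 1.0)
```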
Total internal reflection is used as a means of focusing waves that cannot effectively be reflected by common means. X-ray telescopes are constructed by creating a converging "tunnel" for the waves. As the waves interact at low angle with the surface of this tunnel they are reflected toward the focus point (or toward another interaction with the tunnel surface, eventually being directed to the detector at the focus). A conventional reflector would be useless as the X-rays would simply pass through the intended reflector.
When light reflects off a material with a higher refractive index than the medium in which it is traveling, it undergoes a 180° phase shift. In contrast, when light reflects off a material with a lower refractive index, the reflected light is in phase with the incident light. This is an important principle in the field of thin-film optics.
Specular reflection forms images. Reflection from a flat surface forms a mirror image, which appears to be reversed from left to right because we compare the image we see to what we would see if we were rotated into the position of the image. Specular reflection at a curved surface forms an image which may be magnified or demagnified; curved mirrors have optical power. Such mirrors may have surfaces that are spherical or parabolic.
Laws of reflection
If the reflecting surface is very smooth, the reflection of light that occurs is called specular or regular reflection. The laws of reflection are as follows:
The incident ray, the reflected ray and the normal to the reflection surface at the point of the incidence lie in the same plane.
The angle which the incident ray makes with the normal is equal to the angle which the reflected ray makes to the same normal.
The reflected ray and the incident ray are on the opposite sides of the normal.
These three laws can all be derived from the Fresnel equations.
Mechanism
In classical electrodynamics, light is considered as an electromagnetic wave, which is described by Maxwell's equations. Light waves incident on a material induce small oscillations of polarisation in the individual atoms (or oscillation of electrons, in metals), causing each particle to radiate a small secondary wave in all directions, like a dipole antenna. All these waves add up to give specular reflection and refraction, according to the Huygens–Fresnel principle.
In the case of dielectrics such as glass, the electric field of the light acts on the electrons in the material, and the moving electrons generate fields and become new radiators. The refracted light in the glass is the combination of the forward radiation of the electrons and the incident light. The reflected light is the combination of the backward radiation of all of the electrons.
In metals, electrons with no binding energy are called free electrons. When these electrons oscillate with the incident light, the phase difference between their radiation field and the incident field is π (180°), so the forward radiation cancels the incident light, and backward radiation is just the reflected light.
Light–matter interaction in terms of photons is a topic of quantum electrodynamics, and is described in detail by Richard Feynman in his popular book QED: The Strange Theory of Light and Matter.
Diffuse reflection
When light strikes the surface of a (non-metallic) material it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g. the grain boundaries of a polycrystalline material, or the cell or fiber boundaries of an organic material) and by its surface, if it is rough. Thus, an 'image' is not formed. This is called diffuse reflection. The exact form of the reflection depends on the structure of the material. One common model for diffuse reflection is Lambertian reflectance, in which the light is reflected with equal luminance (in photometry) or radiance (in radiometry) in all directions, as defined by Lambert's cosine law.
The light sent to our eyes by most of the objects we see is due to diffuse reflection from their surface, so that this is our primary mechanism of physical observation.
Retroreflection
Some surfaces exhibit retroreflection. The structure of these surfaces is such that light is returned in the direction from which it came.
When flying over clouds illuminated by sunlight the region seen around the aircraft's shadow will appear brighter, and a similar effect may be seen from dew on grass. This partial retro-reflection is created by the refractive properties of the curved droplet's surface and reflective properties at the backside of the droplet.
Some animals' retinas act as retroreflectors (see tapetum lucidum for more detail), as this effectively improves the animals' night vision. Since the lenses of their eyes modify reciprocally the paths of the incoming and outgoing light, the effect is that the eyes act as a strong retroreflector, sometimes seen at night when walking in wildlands with a flashlight.
A simple retroreflector can be made by placing three ordinary mirrors mutually perpendicular to one another (a corner reflector). The image produced is the inverse of one produced by a single mirror.
A surface can be made partially retroreflective by depositing a layer of tiny refractive spheres on it or by creating small pyramid like structures. In both cases internal reflection causes the light to be reflected back to where it originated. This is used to make traffic signs and automobile license plates reflect light mostly back in the direction from which it came. In this application perfect retroreflection is not desired, since the light would then be directed back into the headlights of an oncoming car rather than to the driver's eyes.
Multiple reflections
When light reflects off a mirror, one image appears. Two mirrors placed exactly face to face give the appearance of an infinite number of images along a straight line. The multiple images seen between two mirrors that sit at an angle to each other lie on a circle. The center of that circle is located at the imaginary intersection of the mirrors. A square of four mirrors placed face to face gives the appearance of an infinite number of images arranged in a plane. The multiple images seen between four mirrors assembled into a pyramid, in which each pair of mirrors sits at an angle to each other, lie on a sphere. If the base of the pyramid is rectangular, the images spread over a section of a torus.
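One way to see why the images formed between two angled mirrors lie on a circle about their line of intersection is to generate them by repeated reflection. The sketch below is a rough illustration under assumed values (a 60 degree mirror angle and an arbitrary object position); it alternately reflects a point in two mirror lines through the origin and checks that every image sits at the same distance from the intersection.

```python
import math

def reflect_about_line(point, line_angle):
    """Reflect a 2-D point about a line through the origin at the given angle."""
    x, y = point
    c, s = math.cos(2 * line_angle), math.sin(2 * line_angle)
    return (c * x + s * y, s * x - c * y)

mirror_a, mirror_b = 0.0, math.radians(60)   # two mirrors meeting at 60 degrees (assumed)
images, current = [], (1.0, 0.4)             # arbitrary object position
for i in range(10):                          # alternate reflections A, B, A, B, ...
    current = reflect_about_line(current, mirror_a if i % 2 == 0 else mirror_b)
    images.append(current)

# Every image lies at the same distance from the mirrors' intersection (the origin),
# i.e. on a circle centred there, as described above.
radii = {round(math.hypot(x, y), 9) for x, y in images}
print(radii)   # a single value
```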
Note that these are theoretical ideals, requiring perfect alignment of perfectly smooth, perfectly flat perfect reflectors that absorb none of the light. In practice, these situations can only be approached but not achieved because the effects of any surface imperfections in the reflectors propagate and magnify, absorption gradually extinguishes the image, and any observing equipment (biological or technological) will interfere.
Complex conjugate reflection
In this process (which is also known as phase conjugation), light bounces exactly back in the direction from which it came due to a nonlinear optical process. Not only the direction of the light is reversed, but the actual wavefronts are reversed as well. A conjugate reflector can be used to remove aberrations from a beam by reflecting it and then passing the reflection through the aberrating optics a second time. If one were to look into a complex conjugating mirror, it would be black because only the photons which left the pupil would reach the pupil.
Other types of reflection
Neutron reflection
Materials that reflect neutrons, for example beryllium, are used in nuclear reactors and nuclear weapons. In the physical and biological sciences, the reflection of neutrons off atoms within a material is commonly used to determine the material's internal structure.
Sound reflection
When a longitudinal sound wave strikes a flat surface, sound is reflected in a coherent manner provided that the dimension of the reflective surface is large compared to the wavelength of the sound. Note that audible sound has a very wide frequency range (from 20 to about 17000 Hz), and thus a very wide range of wavelengths (from about 20 mm to 17 m). As a result, the overall nature of the reflection varies according to the texture and structure of the surface. For example, porous materials will absorb some energy, and rough materials (where rough is relative to the wavelength) tend to reflect in many directions—to scatter the energy, rather than to reflect it coherently. This leads into the field of architectural acoustics, because the nature of these reflections is critical to the auditory feel of a space.
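The quoted wavelength range follows from λ = v/f. The snippet below checks the figures using a nominal speed of sound in air of about 343 m/s, which is an assumed round value rather than one taken from the text.

```python
speed_of_sound = 343.0           # m/s in air at roughly 20 deg C (nominal assumed value)
for f in (20.0, 17000.0):        # audible band quoted above, in Hz
    wavelength = speed_of_sound / f
    print(f"{f:>8.0f} Hz -> {wavelength:.3f} m")
# ~17 m at 20 Hz and ~20 mm at 17 kHz, matching the range given in the text.
```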
In the theory of exterior noise mitigation, reflective surface size mildly detracts from the concept of a noise barrier by reflecting some of the sound into the opposite direction. Sound reflection can affect the acoustic space.
Seismic reflection
Seismic waves produced by earthquakes or other sources (such as explosions) may be reflected by layers within the Earth. Study of the deep reflections of waves generated by earthquakes has allowed seismologists to determine the layered structure of the Earth. Shallower reflections are used in reflection seismology to study the Earth's crust generally, and in particular to prospect for petroleum and natural gas deposits.
See also
Anti-reflective coating
Diffraction
Echo satellite
Huygens–Fresnel principle
List of reflected light sources
Negative refraction
Ocean surface wave
Reflection coefficient
Reflectivity
Refraction
Ripple tank
Signal reflection
Snell's law
Sun glitter
Two-ray ground-reflection model
References
External links
Acoustic reflection
Animations demonstrating optical reflection by QED
Simulation on Laws of Reflection of Sound By Amrita University
Physical phenomena
Geometrical optics
Physical optics
Acoustics
Sound | Reflection (physics) | [
"Physics"
] | 2,496 | [
"Physical phenomena",
"Classical mechanics",
"Acoustics"
] |
521,843 | https://en.wikipedia.org/wiki/Hydraulic%20engineering | Hydraulic engineering as a sub-discipline of civil engineering is concerned with the flow and conveyance of fluids, principally water and sewage. One feature of these systems is the extensive use of gravity as the motive force to cause the movement of the fluids. This area of civil engineering is intimately related to the design of bridges, dams, channels, canals, and levees, and to both sanitary and environmental engineering.
Hydraulic engineering is the application of the principles of fluid mechanics to problems dealing with the collection, storage, control, transport, regulation, measurement, and use of water. Before beginning a hydraulic engineering project, one must figure out how much water is involved. The hydraulic engineer is concerned with the transport of sediment by the river, the interaction of the water with its alluvial boundary, and the occurrence of scour and deposition. "The hydraulic engineer actually develops conceptual designs for the various features which interact with water such as spillways and outlet works for dams, culverts for highways, canals and related structures for irrigation projects, and cooling-water facilities for thermal power plants."
Fundamental principles
A few examples of the fundamental principles of hydraulic engineering include fluid mechanics, fluid flow, behavior of real fluids, hydrology, pipelines, open channel hydraulics, mechanics of sediment transport, physical modeling, hydraulic machines, and drainage hydraulics.
Fluid mechanics
Fundamentals of Hydraulic Engineering defines hydrostatics as the study of fluids at rest. In a fluid at rest, there exists a force, known as pressure, that acts upon the fluid's surroundings. This pressure, measured in N/m2, is not constant throughout the body of fluid. Pressure, p, in a given body of fluid, increases with an increase in depth; the upward pressure acting on the base of a submerged body at depth y can be found from the hydrostatic equation p = ρgy,
where,
ρ = density of water
g = acceleration due to gravity
y = depth of the body of liquid below the free surface
Rearranging this equation gives the pressure head y = p/(ρg). Four basic devices for pressure measurement are a piezometer, manometer, differential manometer, Bourdon gauge, as well as an inclined manometer.
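The relation above lends itself to a one-line computation. The short Python sketch below evaluates p = ρgy and the corresponding pressure head for an example depth; the numerical values are illustrative assumptions, not figures from the text.

```python
rho = 1000.0    # density of water, kg/m^3
g = 9.81        # acceleration due to gravity, m/s^2
y = 5.0         # depth below the free surface, m (arbitrary example)

p = rho * g * y                 # gauge pressure at depth y, in N/m^2 (Pa)
pressure_head = p / (rho * g)   # rearranged: head in metres of water, equal to y
print(p, pressure_head)         # 49050.0 Pa, 5.0 m
```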
As Prasuhn states:
On undisturbed submerged bodies, pressure acts along all surfaces of a body in a liquid, causing equal perpendicular forces in the body to act against the pressure of the liquid. This reaction is known as equilibrium. More advanced applications of pressure are that on plane surfaces, curved surfaces, dams, and quadrant gates, just to name a few.
Behavior of real fluids
Real and Ideal fluids
The main difference between an ideal fluid and a real fluid is that for ideal flow p1 = p2, whereas for real flow p1 > p2. An ideal fluid is incompressible and has no viscosity; a real fluid has viscosity. The ideal fluid is only an imaginary fluid, since all fluids that exist have some viscosity.
Viscous flow
A viscous fluid will deform continuously under a shear force, whereas an ideal fluid does not deform.
Laminar flow and turbulence
Depending on its response to a disturbance, a viscous flow may be stable, transitional, or unstable.
Bernoulli's equation
For an ideal fluid, Bernoulli's equation holds along streamlines.
As the flow comes into contact with the plate, the layer of fluid actually "adheres" to the solid surface. There is then a considerable shearing action between the layer of fluid on the plate surface and the second layer of fluid. The second layer is therefore forced to decelerate (though it is not quite brought to rest), creating a shearing action with the third layer of fluid, and so on. As the fluid passes farther along the plate, the zone in which shearing action occurs tends to spread further outwards. This zone is known as the "boundary layer". The flow outside the boundary layer is free of shear and viscous-related forces, so it is assumed to act as an ideal fluid. The intermolecular cohesive forces in a fluid are not great enough to hold fluid together; hence a fluid will flow under the action of the slightest stress, and flow will continue as long as the stress is present. The flow inside the boundary layer can be either laminar or turbulent, depending on the Reynolds number.
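Whether such a boundary layer is laminar or turbulent is conventionally judged from the Reynolds number, Re = ρVL/μ. The sketch below is illustrative only: the fluid properties, velocity, plate length and the flat-plate transition threshold of roughly 5 × 10^5 are assumed example values, not figures from this article.

```python
def reynolds_number(rho, velocity, length, mu):
    """Re = rho * V * L / mu for flow of a real (viscous) fluid."""
    return rho * velocity * length / mu

rho = 1000.0      # kg/m^3, water
mu = 1.0e-3       # Pa.s, dynamic viscosity of water at roughly 20 deg C
velocity = 2.0    # m/s, free-stream velocity (arbitrary example)
length = 0.5      # m, distance along the plate (arbitrary example)

re = reynolds_number(rho, velocity, length, mu)
regime = "likely turbulent" if re > 5e5 else "likely laminar"
print(f"Re = {re:.2e} -> {regime}")   # Re = 1.00e+06 -> likely turbulent
```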
Applications
Common topics of design for hydraulic engineers include hydraulic structures such as dams, levees, water distribution networks including both domestic and fire water supply, distribution and automatic sprinkler systems, water collection networks, sewage collection networks, storm water management, sediment transport, and various other topics related to transportation engineering and geotechnical engineering. Equations developed from the principles of fluid dynamics and fluid mechanics are widely utilized by other engineering disciplines such as mechanical, aeronautical and even traffic engineers.
Related branches include hydrology and rheology while related applications include hydraulic modeling, flood mapping, catchment flood management plans, shoreline management plans, estuarine strategies, coastal protection, and flood alleviation.
History
Antiquity
The earliest uses of hydraulic engineering were to irrigate crops, dating back to the Middle East and Africa. Controlling the movement and supply of water for growing food has been used for many thousands of years. One of the earliest hydraulic machines, the water clock was used in the early 2nd millennium BC. Other early examples of using gravity to move water include the Qanat system in ancient Persia and the very similar Turpan water system in ancient China as well as irrigation canals in Peru.
In ancient China, hydraulic engineering was highly developed, and engineers constructed massive canals with levees and dams to channel the flow of water for irrigation, as well as locks to allow ships to pass through. Sunshu Ao is considered the first Chinese hydraulic engineer. Another important hydraulic engineer in China, Ximen Bao, was credited with starting the practice of large-scale canal irrigation during the Warring States period (481 BC–221 BC); even today, hydraulic engineering remains a respected profession in China.
In the Archaic epoch of the Philippines, hydraulic engineering also developed, especially on the island of Luzon; the Ifugaos of the mountainous region of the Cordilleras built irrigation systems, dams and hydraulic works and the famous Banaue Rice Terraces as a way of assisting in growing crops around 1000 BC. These rice terraces are 2,000-year-old terraces that were carved into the mountains of Ifugao in the Philippines by ancestors of the indigenous people. The Rice Terraces are commonly referred to as the "Eighth Wonder of the World". It is commonly thought that the terraces were built with minimal equipment, largely by hand. The terraces are located approximately 1500 metres (5000 ft) above sea level. They are fed by an ancient irrigation system from the rainforests above the terraces. It is said that if the steps were put end to end, they would encircle half the globe.
Eupalinos of Megara was an ancient Greek engineer who built the Tunnel of Eupalinos on Samos in the 6th century BC, an important feat of both civil and hydraulic engineering. The civil engineering aspect of this tunnel was that it was dug from both ends which required the diggers to maintain an accurate path so that the two tunnels met and that the entire effort maintained a sufficient slope to allow the water to flow.
Hydraulic engineering was highly developed in Europe under the aegis of the Roman Empire where it was especially applied to the construction and maintenance of aqueducts to supply water to and remove sewage from their cities. In addition to supplying the needs of their citizens they used hydraulic mining methods to prospect and extract alluvial gold deposits in a technique known as hushing, and applied the methods to other ores such as those of tin and lead.
In the 15th century, the Somali Ajuran Empire was the only hydraulic empire in Africa. As a hydraulic empire, the Ajuran State monopolized the water resources of the Jubba and Shebelle Rivers. Through hydraulic engineering, it also constructed many of the limestone wells and cisterns of the state that are still operative and in use today. The rulers developed new systems for agriculture and taxation, which continued to be used in parts of the Horn of Africa as late as the 19th century.
Further advances in hydraulic engineering occurred in the Muslim world between the 8th and 16th centuries, during what is known as the Islamic Golden Age. Of particular importance was the 'water management technological complex' which was central to the Islamic Green Revolution. The various components of this 'toolkit' were developed in different parts of the Afro-Eurasian landmass, both within and beyond the Islamic world. However, it was in the medieval Islamic lands where the technological complex was assembled and standardized, and subsequently diffused to the rest of the Old World. Under the rule of a single Islamic caliphate, different regional hydraulic technologies were assembled into "an identifiable water management technological complex that was to have a global impact." The various components of this complex included canals, dams, the qanat system from Persia, regional water-lifting devices such as the noria, shaduf and screwpump from Egypt, and the windmill from Islamic Afghanistan. Other original Islamic developments included the saqiya with a flywheel effect from Islamic Spain, the reciprocating suction pump and crankshaft-connecting rod mechanism from Iraq, and the geared and hydropowered water supply system from Syria.
Modern times
In many respects, the fundamentals of hydraulic engineering have not changed since ancient times. Liquids are still moved for the most part by gravity through systems of canals and aqueducts, though the supply reservoirs may now be filled using pumps. The need for water has steadily increased from ancient times and the role of the hydraulic engineer is a critical one in supplying it. For example, without the efforts of people like William Mulholland the Los Angeles area would not have been able to grow as it has because it simply does not have enough local water to support its population. The same is true for many of our world's largest cities. In much the same way, the central valley of California could not have become such an important agricultural region without effective water management and distribution for irrigation. In a somewhat parallel way to what happened in California, the creation of the Tennessee Valley Authority (TVA) brought work and prosperity to the South by building dams to generate cheap electricity and control flooding in the region, making rivers navigable and generally modernizing life in the region.
Leonardo da Vinci (1452–1519) performed experiments, investigated and speculated on waves and jets, eddies and streamlining. Isaac Newton (1642–1727) by formulating the laws of motion and his law of viscosity, in addition to developing the calculus, paved the way for many great developments in fluid mechanics. Using Newton's laws of motion, numerous 18th-century mathematicians solved many frictionless (zero-viscosity) flow problems. However, most flows are dominated by viscous effects, so engineers of the 17th and 18th centuries found the inviscid flow solutions unsuitable, and by experimentation they developed empirical equations, thus establishing the science of hydraulics.
Late in the 19th century, the importance of dimensionless numbers and their relationship to turbulence was recognized, and dimensional analysis was born. In 1904 Ludwig Prandtl published a key paper, proposing that the flow fields of low-viscosity fluids be divided into two zones, namely a thin, viscosity-dominated boundary layer near solid surfaces, and an effectively inviscid outer zone away from the boundaries. This concept explained many former paradoxes and enabled subsequent engineers to analyze far more complex flows. However, we still have no complete theory for the nature of turbulence, and so modern fluid mechanics continues to be a combination of experimental results and theory.
The modern hydraulic engineer uses the same kinds of computer-aided design (CAD) tools as many of the other engineering disciplines while also making use of technologies like computational fluid dynamics to perform the calculations to accurately predict flow characteristics, GPS mapping to assist in locating the best paths for installing a system and laser-based surveying tools to aid in the actual construction of a system.
See also
References
Further reading
Vincent J. Zipparro, Hans Hasen (Eds), Davis' Handbook of Applied Hydraulics, Mcgraw-Hill, 4th Edition (1992), , at Amazon.com
Classification of Organics in Secondary Effluents. M. Rebhun, J. Manka. Environmental Science and Technology, 5, pp. 606–610, (1971). 25.
External links
International Association of Hydraulic Engineering and Research
Hydraulic Engineering in Prehistoric Mexico
Hydrologic Engineering Center
Chanson, H. (2007). Hydraulic Engineering in the 21st Century : Where to ?, Journal of Hydraulic Research, IAHR, Vol. 45, No. 3, pp. 291–301 (ISSN 0022-1686).
Hydraulics
Civil engineering
Water resources management | Hydraulic engineering | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,605 | [
"Hydrology",
"Physical systems",
"Construction",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering",
"Fluid dynamics"
] |
521,877 | https://en.wikipedia.org/wiki/Sodium%20hydride | Sodium hydride is the chemical compound with the empirical formula NaH. This alkali metal hydride is primarily used as a strong yet combustible base in organic synthesis. NaH is a saline (salt-like) hydride, composed of Na+ and H− ions, in contrast to molecular hydrides such as borane, silane, germane, ammonia, and methane. It is an ionic material that is insoluble in all solvents (other than molten sodium metal), consistent with the fact that H− ions do not exist in solution.
Basic properties and structure
NaH is colorless, although samples generally appear grey. NaH is around 40% denser than Na (0.968 g/cm3).
NaH, like LiH, KH, RbH, and CsH, adopts the NaCl crystal structure. In this motif, each Na+ ion is surrounded by six H− centers in an octahedral geometry. The ionic radii of H− (146 pm in NaH) and F− (133 pm) are comparable, as judged by the Na−H and Na−F distances.
"Inverse sodium hydride" (hydrogen sodide)
A very unusual situation occurs in a compound dubbed "inverse sodium hydride", which contains H+ and Na− ions. Na− is an alkalide, and this compound differs from ordinary sodium hydride in having a much higher energy content due to the net displacement of two electrons from hydrogen to sodium. A derivative of this "inverse sodium hydride" arises in the presence of the base [36]adamanzane. This molecule irreversibly encapsulates the H+ and shields it from interaction with the alkalide Na−. Theoretical work has suggested that even an unprotected protonated tertiary amine complexed with the sodium alkalide might be metastable under certain solvent conditions, though the barrier to reaction would be small and finding a suitable solvent might be difficult.
Preparation
Industrially, NaH is prepared by introducing molten sodium into mineral oil with hydrogen at atmospheric pressure while mixing vigorously at ~8000 rpm. The reaction is especially rapid at 250−300 °C.
The resultant suspension of NaH in mineral oil is often directly used, such as in the production of diborane.
Applications in organic synthesis
As a strong base
NaH is a base of wide scope and utility in organic chemistry. As a superbase, it is capable of deprotonating a range of even weak Brønsted acids to give the corresponding sodium derivatives. Typical "easy" substrates contain O-H, N-H, S-H bonds, including alcohols, phenols, pyrazoles, and thiols.
NaH notably deprotonates carbon acids (i.e., C-H bonds), such as 1,3-dicarbonyl compounds like malonic esters. The resulting sodium derivatives can be alkylated. NaH is widely used to promote condensation reactions of carbonyl compounds via the Dieckmann condensation, Stobbe condensation, Darzens condensation, and Claisen condensation. Other carbon acids susceptible to deprotonation by NaH include sulfonium salts and DMSO. NaH is used to make sulfur ylides, which in turn are used to convert ketones into epoxides, as in the Johnson–Corey–Chaykovsky reaction.
As a reducing agent
NaH reduces certain main group compounds, but analogous reactivity is very rare in organic chemistry (see below). Notably boron trifluoride reacts to give diborane and sodium fluoride:
6 NaH + 2 BF3 → B2H6 + 6 NaF
Si–Si and S–S bonds in disilanes and disulfides are also reduced.
A series of reduction reactions, including the hydrodecyanation of tertiary nitriles, reduction of imines to amines, and amides to aldehydes, can be effected by a composite reagent composed of sodium hydride and an alkali metal iodide (NaH⋅MI, M = Li, Na).
Hydrogen storage
Although not commercially significant, sodium hydride has been proposed for hydrogen storage for use in fuel cell vehicles. In one experimental implementation, plastic pellets containing NaH are crushed in the presence of water to release the hydrogen. One challenge with this technology is the regeneration of NaH from the NaOH formed by hydrolysis.
Practical considerations
Sodium hydride is sold as a mixture of 60% sodium hydride (w/w) in mineral oil. Such a dispersion is safer to handle and weigh than pure NaH. The compound is often used in this form but the pure grey solid can be prepared by rinsing the commercial product with pentane or tetrahydrofuran, with care being taken because the waste solvent will contain traces of NaH and can ignite in air. Reactions involving NaH usually require air-free techniques.
Safety
NaH can ignite spontaneously in air. It also reacts vigorously with water or humid air to release hydrogen, which is very flammable, and sodium hydroxide (NaOH), a quite corrosive base. In practice, most sodium hydride is sold as a dispersion in mineral oil, which can be safely handled in air. Although sodium hydride is widely used in DMSO, DMF or DMAc for SN2 type reactions there have been many cases of fires and/or explosions from such mixtures.
References
Cited sources
Metal hydrides
Reagents for organic chemistry
Sodium compounds
Superbases
Rock salt crystal structure
Semiconductor materials | Sodium hydride | [
"Chemistry"
] | 1,182 | [
"Superbases",
"Inorganic compounds",
"Semiconductor materials",
"Reducing agents",
"Metal hydrides",
"Reagents for organic chemistry",
"Bases (chemistry)"
] |
521,933 | https://en.wikipedia.org/wiki/Radon%20measure | In mathematics (specifically in measure theory), a Radon measure, named after Johann Radon, is a measure on the σ-algebra of Borel sets of a Hausdorff topological space that is finite on all compact sets, outer regular on all Borel sets, and inner regular on open sets. These conditions guarantee that the measure is "compatible" with the topology of the space, and most measures used in mathematical analysis and in number theory are indeed Radon measures.
Motivation
A common problem is to find a good notion of a measure on a topological space that is compatible with the topology in some sense. One way to do this is to define a measure on the Borel sets of the topological space. In general there are several problems with this: for example, such a measure may not have a well defined support. Another approach to measure theory is to restrict to locally compact Hausdorff spaces, and only consider the measures that correspond to positive linear functionals on the space of continuous functions with compact support (some authors use this as the definition of a Radon measure). This produces a good theory with no pathological problems, but does not apply to spaces that are not locally compact. If there is no restriction to non-negative measures and complex measures are allowed, then Radon measures can be defined as the continuous dual space on the space of continuous functions with compact support. If such a Radon measure is real then it can be decomposed into the difference of two positive measures. Furthermore, an arbitrary Radon measure can be decomposed into four positive Radon measures, where the real and imaginary parts of the functional are each the differences of two positive Radon measures.
The theory of Radon measures has most of the good properties of the usual theory for locally compact spaces, but applies to all Hausdorff topological spaces. The idea of the definition of a Radon measure is to find some properties that characterize the measures on locally compact spaces corresponding to positive functionals, and use these properties as the definition of a Radon measure on an arbitrary Hausdorff space.
Definitions
Let μ be a measure on the σ-algebra of Borel sets of a Hausdorff topological space X.
The measure μ is called inner regular or tight if, for every open set U, μ(U) equals the supremum of μ(K) over all compact subsets K of U.
The measure μ is called outer regular if, for every Borel set B, μ(B) equals the infimum of μ(U) over all open sets U containing B.
The measure μ is called locally finite if every point of X has a neighborhood U for which μ(U) is finite.
If μ is locally finite, then it follows that μ is finite on compact sets, and for locally compact Hausdorff spaces, the converse holds, too. Thus, in this case, local finiteness may be equivalently replaced by finiteness on compact subsets.
The measure μ is called a Radon measure if it is inner regular and locally finite. In many situations, such as finite measures on locally compact spaces, this also implies outer regularity (see also Radon spaces).
(It is possible to extend the theory of Radon measures to non-Hausdorff spaces, essentially by replacing the word "compact" by "closed compact" everywhere. However, there seem to be almost no applications of this extension.)
Radon measures on locally compact spaces
When the underlying measure space is a locally compact topological space, the definition of a Radon measure can be expressed in terms of continuous linear functionals on the space of continuous functions with compact support. This makes it possible to develop measure and integration in terms of functional analysis, an approach taken by Bourbaki and a number of other authors.
Measures
In what follows denotes a locally compact topological space. The continuous real-valued functions with compact support on form a vector space , which can be given a natural locally convex topology. Indeed, is the union of the spaces of continuous functions with support contained in compact sets . Each of the spaces carries naturally the topology of uniform convergence, which makes it into a Banach space. But as a union of topological spaces is a special case of a direct limit of topological spaces, the space can be equipped with the direct limit locally convex topology induced by the spaces ; this topology is finer than the topology of uniform convergence.
If is a Radon measure on then the mapping
is a continuous positive linear map from to . Positivity means that whenever is a non-negative function. Continuity with respect to the direct limit topology defined above is equivalent to the following condition: for every compact subset of there exists a constant such that, for every continuous real-valued function on with ,
Conversely, by the Riesz–Markov–Kakutani representation theorem, each linear form on arises as integration with respect to a unique regular Borel measure.
A real-valued Radon measure is defined to be continuous linear form on ; they are precisely the differences of two Radon measures. This gives an identification of real-valued Radon measures with the dual space of the locally convex space . These real-valued Radon measures need not be signed measures. For example, is a real-valued Radon measure, but is not even an extended signed measure as it cannot be written as the difference of two measures at least one of which is finite.
Some authors use the preceding approach to define positive Radon measures to be the positive linear forms on . In this set-up it is common to use a terminology in which Radon measures in the above sense are called positive measures and real-valued Radon measures as above are called (real) measures.
Integration
To complete the buildup of measure theory for locally compact spaces from the functional-analytic viewpoint, it is necessary to extend measure (integral) from compactly supported continuous functions. This can be done for real or complex-valued functions in several steps as follows:
Definition of the upper integral of a lower semicontinuous positive (real-valued) function as the supremum (possibly infinite) of the positive numbers for compactly supported continuous functions ;
Definition of the upper integral for an arbitrary positive (real-valued) function as the infimum of upper integrals for lower semi-continuous functions ;
Definition of the vector space as the space of all functions on for which the upper integral of the absolute value is finite; the upper integral of the absolute value defines a semi-norm on , and is a complete space with respect to the topology defined by the semi-norm;
Definition of the space of integrable functions as the closure inside of the space of continuous compactly supported functions.
Definition of the integral for functions in as extension by continuity (after verifying that is continuous with respect to the topology of );
Definition of the measure of a set as the integral (when it exists) of the indicator function of the set.
It is possible to verify that these steps produce a theory identical with the one that starts from a Radon measure defined as a function that assigns a number to each Borel set of .
The Lebesgue measure on can be introduced in a few ways in this functional-analytic set-up. First, it is possible to rely on an "elementary" integral such as the Daniell integral or the Riemann integral for integrals of continuous functions with compact support, as these are integrable for all the elementary definitions of integrals. The measure (in the sense defined above) defined by elementary integration is precisely the Lebesgue measure. Second, if one wants to avoid reliance on Riemann or Daniell integral or other similar theories, it is possible to develop first the general theory of Haar measures and define the Lebesgue measure as the Haar measure on that satisfies the normalisation condition .
Examples
The following are all examples of Radon measures:
Lebesgue measure on Euclidean space (restricted to the Borel subsets);
Haar measure on any locally compact topological group;
Dirac measure on any topological space;
Gaussian measure on Euclidean space with its Borel sigma algebra;
Probability measures on the σ-algebra of Borel sets of any Polish space. This example not only generalizes the previous example, but includes many measures on non-locally compact spaces, such as Wiener measure on the space of real-valued continuous functions on the interval [0, 1].
A measure on is a Radon measure if and only if it is a locally finite Borel measure.
The following are not examples of Radon measures:
Counting measure on Euclidean space is an example of a measure that is not a Radon measure, since it is not locally finite.
The space of ordinals at most equal to ω1, the first uncountable ordinal, with the order topology is a compact topological space. The measure which equals 1 on any Borel set that contains an uncountable closed subset of [0, ω1), and 0 otherwise, is Borel but not Radon, as the one-point set {ω1} has measure zero but any open neighbourhood of it has measure 1.
Let be the interval equipped with the topology generated by the collection of half open intervals . This topology is sometimes called Sorgenfrey line. On this topological space, standard Lebesgue measure is not Radon since it is not inner regular, since compact sets are at most countable.
Let be a Bernstein set in (or any Polish space). Then no measure which vanishes at points on is a Radon measure, since any compact set in is countable.
Standard product measure on (0, 1)^κ for uncountable κ is not a Radon measure, since any compact set is contained within a product of uncountably many closed intervals, each of which is shorter than 1.
We note that, intuitively, the Radon measure is useful in mathematical finance, particularly for working with Lévy processes, because it has the properties of both Lebesgue and Dirac measures: unlike the Lebesgue measure, a Radon measure on a single point is not necessarily of measure zero.
Basic properties
Moderated Radon measures
Given a Radon measure m on a space X, we can define another measure M (on the Borel sets) by putting M(B) equal to the infimum of m(U) over all open sets U containing B.
The measure M is outer regular, locally finite, and inner regular for open sets. It coincides with m on compact and open sets, and m can be reconstructed from M as the unique inner regular measure that is the same as M on compact sets. The measure m is called moderated if M is σ-finite; in this case the measures m and M are the same. (If m is σ-finite this does not imply that M is σ-finite, so being moderated is stronger than being σ-finite.)
On a hereditarily Lindelöf space every Radon measure is moderated.
An example of a measure that is σ-finite but not moderated is as follows. The topological space has as underlying set the subset of the real plane given by the -axis of points together with the points with , positive integers. The topology is given as follows. The single points are all open sets. A base of neighborhoods of the point is given by wedges consisting of all points in of the form with for a positive integer . This space is locally compact. The measure is given by letting the -axis have measure and letting the point have measure . This measure is inner regular and locally finite, but is not outer regular as any open set containing the -axis has measure infinity. In particular the -axis has -measure but -measure infinity.
Radon spaces
A topological space is called a Radon space if every finite Borel measure is a Radon measure, and strongly Radon if every locally finite Borel measure is a Radon measure. Any Suslin space is strongly Radon, and moreover every Radon measure is moderated.
Duality
On a locally compact Hausdorff space, Radon measures correspond to positive linear functionals on the space of continuous functions with compact support. This is not surprising as this property is the main motivation for the definition of Radon measure.
Metric space structure
The pointed cone of all (positive) Radon measures on can be given the structure of a complete metric space by defining the Radon distance between two measures to be
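For reference, a common formulation of the Radon distance between two (positive) Radon measures m1 and m2 on a space X is sketched below in LaTeX; it is reproduced from standard treatments of the subject and should be checked against the cited references rather than read as the article's own displayed formula.

```latex
\rho(m_1, m_2) := \sup\Big\{ \int_X f(x)\,(m_1 - m_2)(\mathrm{d}x) \;:\;
  f \colon X \to [-1, 1] \text{ continuous} \Big\}
```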
This metric has some limitations. For example, the space of Radon probability measures on X,
is not sequentially compact with respect to the Radon metric: i.e., it is not guaranteed that any sequence of probability measures will have a subsequence that is convergent with respect to the Radon metric, which presents difficulties in certain applications. On the other hand, if X is a compact metric space, then the Wasserstein metric turns the space of Radon probability measures on X into a compact metric space.
Convergence in the Radon metric implies weak convergence of measures:
but the converse implication is false in general. Convergence of measures in the Radon metric is sometimes known as strong convergence, as contrasted with weak convergence.
See also
Notes
References
. Functional-analytic development of the theory of Radon measure and integral on locally compact spaces.
. Haar measure; Radon measures on general Hausdorff spaces and equivalence between the definitions in terms of linear functionals and locally finite inner regular measures on the Borel sigma-algebra.
. Contains a simplified version of Bourbaki's approach, specialised to measures defined on separable metrizable spaces.
External links
Measures (measure theory)
Integral representations
Lp spaces | Radon measure | [
"Physics",
"Mathematics"
] | 2,676 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
522,062 | https://en.wikipedia.org/wiki/Engineering%20tolerance | Engineering tolerance is the permissible limit or limits of variation in:
a physical dimension;
a measured value or physical property of a material, manufactured object, system, or service;
other measured values (such as temperature, humidity, etc.);
in engineering and safety, a physical distance or space (tolerance), as in a truck (lorry), train or boat under a bridge as well as a train in a tunnel (see structure gauge and loading gauge);
in mechanical engineering, the space between a bolt and a nut or a hole, etc.
Dimensions, properties, or conditions may have some variation without significantly affecting functioning of systems, machines, structures, etc. A variation beyond the tolerance (for example, a temperature that is too hot or too cold) is said to be noncompliant, rejected, or exceeding the tolerance.
Considerations when setting tolerances
A primary concern is to determine how wide the tolerances may be without affecting other factors or the outcome of a process. This can be by the use of scientific principles, engineering knowledge, and professional experience. Experimental investigation is very useful to investigate the effects of tolerances: Design of experiments, formal engineering evaluations, etc.
A good set of engineering tolerances in a specification, by itself, does not imply that compliance with those tolerances will be achieved. Actual production of any product (or operation of any system) involves some inherent variation of input and output. Measurement error and statistical uncertainty are also present in all measurements. With a normal distribution, the tails of measured values may extend well beyond plus and minus three standard deviations from the process average. Appreciable portions of one (or both) tails might extend beyond the specified tolerance.
The process capability of systems, materials, and products needs to be compatible with the specified engineering tolerances. Process controls must be in place and an effective quality management system, such as Total Quality Management, needs to keep actual production within the desired tolerances. A process capability index is used to indicate the relationship between tolerances and actual measured production.
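To make the interplay between tolerances, process variation and the capability index concrete, the sketch below estimates the fraction of a normally distributed output falling outside two-sided tolerance limits and the corresponding Cpk value; all numbers are invented for the example, and the Cpk target mentioned in the comment is a common rule of thumb rather than a requirement stated here.

```python
from statistics import NormalDist

lower_spec, upper_spec = 99.0, 101.0   # tolerance limits (example values)
mean, sigma = 100.2, 0.3               # measured process average and standard deviation (example)

dist = NormalDist(mean, sigma)
out_of_tolerance = dist.cdf(lower_spec) + (1.0 - dist.cdf(upper_spec))
cpk = min(upper_spec - mean, mean - lower_spec) / (3.0 * sigma)

print(f"fraction out of tolerance: {out_of_tolerance:.4%}")
print(f"Cpk: {cpk:.2f}")   # Cpk >= 1.33 is a common rule-of-thumb target
```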
The choice of tolerances is also affected by the intended statistical sampling plan and its characteristics such as the Acceptable Quality Level. This relates to the question of whether tolerances must be extremely rigid (high confidence in 100% conformance) or whether some small percentage of being out-of-tolerance may sometimes be acceptable.
An alternative view of tolerances
Genichi Taguchi and others have suggested that traditional two-sided tolerancing is analogous to "goal posts" in a football game: It implies that all data within those tolerances are equally acceptable. The alternative is that the best product has a measurement which is precisely on target. There is an increasing loss which is a function of the deviation or variability from the target value of any design parameter. The greater the deviation from target, the greater is the loss. This is described as the Taguchi loss function or quality loss function, and it is the key principle of an alternative system called inertial tolerancing.
Research and development work conducted by M. Pillet and colleagues at the Savoy University has resulted in industry-specific adoption. Recently the publishing of the French standard NFX 04-008 has allowed further consideration by the manufacturing community.
Mechanical component tolerance
Dimensional tolerance is related to, but different from fit in mechanical engineering, which is a designed-in clearance or interference between two parts. Tolerances are assigned to parts for manufacturing purposes, as boundaries for acceptable build. No machine can hold dimensions precisely to the nominal value, so there must be acceptable degrees of variation. If a part is manufactured, but has dimensions that are out of tolerance, it is not a usable part according to the design intent. Tolerances can be applied to any dimension. The commonly used terms are:
Basic size: The nominal diameter of the shaft (or bolt) and the hole. This is, in general, the same for both components.
Lower deviation: The difference between the minimum possible component size and the basic size.
Upper deviation: The difference between the maximum possible component size and the basic size.
Fundamental deviation: The minimum difference in size between a component and the basic size.
This is identical to the upper deviation for shafts and the lower deviation for holes. If the fundamental deviation is greater than zero, the bolt will always be smaller than the basic size and the hole will always be wider. Fundamental deviation is a form of allowance, rather than tolerance.
International Tolerance grade: This is a standardised measure of the maximum difference in size between the component and the basic size (see below).
For example, if a shaft with a nominal diameter of 10mm is to have a sliding fit within a hole, the shaft might be specified with a tolerance range from 9.964 to 10 mm (i.e., a zero fundamental deviation, but a lower deviation of 0.036 mm) and the hole might be specified with a tolerance range from 10.04 mm to 10.076 mm (0.04 mm fundamental deviation and 0.076 mm upper deviation). This would provide a clearance fit of somewhere between 0.04 mm (largest shaft paired with the smallest hole, called the Maximum Material Condition - MMC) and 0.112 mm (smallest shaft paired with the largest hole, Least Material Condition - LMC). In this case the size of the tolerance range for both the shaft and hole is chosen to be the same (0.036 mm), meaning that both components have the same International Tolerance grade but this need not be the case in general.
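The clearance figures in the example above follow from simple arithmetic on the limit dimensions. The helper below is an illustrative sketch; the function name is an assumption, and the input values are those quoted in the example.

```python
def clearance_range(shaft_min, shaft_max, hole_min, hole_max):
    """Return (min, max) clearance for a shaft/hole pair.

    Minimum clearance: largest shaft in the smallest hole (MMC pairing).
    Maximum clearance: smallest shaft in the largest hole (LMC pairing).
    """
    return hole_min - shaft_max, hole_max - shaft_min

# Values from the example above (all in mm).
c_min, c_max = clearance_range(9.964, 10.000, 10.040, 10.076)
print(round(c_min, 3), round(c_max, 3))   # 0.04 and 0.112
```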
When no other tolerances are provided, the machining industry uses the following standard tolerances:
International Tolerance grades
When designing mechanical components, a system of standardized tolerances called International Tolerance grades are often used. The standard (size) tolerances are divided into two categories: hole and shaft. They are labelled with a letter (capitals for holes and lowercase for shafts) and a number. For example: H7 (hole, tapped hole, or nut) and h7 (shaft or bolt). H7/h6 is a very common standard tolerance which gives a tight fit. The tolerances work in such a way that for a hole H7 means that the hole should be made slightly larger than the base dimension (in this case for an ISO fit 10+0.015−0, meaning that it may be up to 0.015 mm larger than the base dimension, and 0 mm smaller). The actual amount bigger/smaller depends on the base dimension. For a shaft of the same size, h6 would mean 10+0−0.009, which means the shaft may be as small as 0.009 mm smaller than the base dimension and 0 mm larger. This method of standard tolerances is also known as Limits and Fits and can be found in ISO 286-1:2010 (Link to ISO catalog).
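The H7/h6 example can be worked the same way: starting from the deviations quoted above for a 10 mm basic size, the limit dimensions and the resulting fit follow directly. The sketch below uses only the values given in the text; in practice the deviations would be looked up in the ISO 286-1 tables.

```python
basic = 10.0                       # basic size, mm
hole_h7 = (0.000, +0.015)          # lower/upper deviation for H7 at 10 mm (from the text)
shaft_h6 = (-0.009, 0.000)         # lower/upper deviation for h6 at 10 mm (from the text)

hole_limits = (basic + hole_h7[0], basic + hole_h7[1])     # (10.000, 10.015)
shaft_limits = (basic + shaft_h6[0], basic + shaft_h6[1])  # (9.991, 10.000)

min_clearance = hole_limits[0] - shaft_limits[1]   # 0.000 mm, a snug fit is possible
max_clearance = hole_limits[1] - shaft_limits[0]   # 0.024 mm
print(hole_limits, shaft_limits, min_clearance, round(max_clearance, 3))
```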
The table below summarises the International Tolerance (IT) grades and the general applications of these grades:
An analysis of fit by statistical interference is also extremely useful: It indicates the frequency (or probability) of parts properly fitting together.
Electrical component tolerance
An electrical specification might call for a resistor with a nominal value of 100 Ω (ohms), but will also state a tolerance such as "±1%". This means that any resistor with a value in the range 99–101Ω is acceptable. For critical components, one might specify that the actual resistance must remain within tolerance within a specified temperature range, over a specified lifetime, and so on.
Many commercially available resistors and capacitors of standard types, and some small inductors, are often marked with coloured bands to indicate their value and the tolerance. High-precision components of non-standard values may have numerical information printed on them.
Low tolerance means only a small deviation from the component's given value, when new, under normal operating conditions and at room temperature. Higher tolerance means the component will have a wider range of possible values.
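A small helper, shown below, captures the resistor example: given a nominal value and a percentage tolerance it reports the acceptance band and checks a measured value. The function name and the measured values are illustrative assumptions.

```python
def within_tolerance(measured, nominal, tolerance_percent):
    """Return (low, high, ok): the acceptance band and whether 'measured' falls inside it."""
    half_width = nominal * tolerance_percent / 100.0
    low, high = nominal - half_width, nominal + half_width
    return low, high, (low <= measured <= high)

print(within_tolerance(100.4, 100.0, 1.0))   # (99.0, 101.0, True)
print(within_tolerance(101.3, 100.0, 1.0))   # (99.0, 101.0, False)
```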
Difference between allowance and tolerance
The terms are often confused but sometimes a difference is maintained. See .
Clearance (civil engineering)
In civil engineering, clearance refers to the difference between the loading gauge and the structure gauge in the case of railroad cars or trams, or the difference between the size of any vehicle and the width/height of doors, the width/height of an overpass or the diameter of a tunnel as well as the air draft under a bridge, the width of a lock or diameter of a tunnel in the case of watercraft. In addition there is the difference between the deep draft and the stream bed or sea bed of a waterway.
See also
Backlash (engineering)
Geometric dimensioning and tolerancing
Engineering fit
Key relevance
Loading gauge
Margin of error
Precision engineering
Probabilistic design
Process capability
Slack action
Specification (technical standard)
Statistical process control
Statistical tolerance
Structure gauge
Taguchi methods
Tolerance coning
Tolerance interval
Tolerance stacks
Verification and validation
Notes
Further reading
Pyzdek, T, "Quality Engineering Handbook", 2003,
Godfrey, A. B., "Juran's Quality Handbook", 1999,
ASTM D4356 Standard Practice for Establishing Consistent Test Method Tolerances
External links
Tolerance Engineering Design Limits & Fits
Online calculation of fits
Index of ISO Hole and Shaft tolerances/limits pages
Quality
Engineering concepts
Statistical deviation and dispersion
Mechanical standards
Metrology
Metalworking terminology
Approximations | Engineering tolerance | [
"Mathematics",
"Engineering"
] | 1,903 | [
"Mechanical standards",
"Mathematical relations",
"nan",
"Mechanical engineering",
"Approximations"
] |
522,081 | https://en.wikipedia.org/wiki/Transamination | Transamination is a chemical reaction that transfers an amino group to a ketoacid to form new amino acids. This pathway is responsible for the deamination of most amino acids. This is one of the major degradation pathways which convert essential amino acids to non-essential amino acids (amino acids that can be synthesized de novo by the organism).
Transamination in biochemistry is accomplished by enzymes called transaminases or aminotransferases. α-ketoglutarate acts as the predominant amino-group acceptor and produces glutamate as the new amino acid.
Aminoacid + α-ketoglutarate ↔ α-keto acid + glutamate
Glutamate's amino group, in turn, is transferred to oxaloacetate in a second transamination reaction yielding aspartate.
Glutamate + oxaloacetate ↔ α-ketoglutarate + aspartate
Mechanism of action
Transamination catalyzed by aminotransferase occurs in two stages. In the first step, the α amino group of an amino acid is transferred to the enzyme, producing the corresponding α-keto acid and the aminated enzyme. During the second stage, the amino group is transferred to the keto acid acceptor, forming the amino acid product while regenerating the enzyme. The chirality of an amino acid is determined during transamination. For the reaction to complete, aminotransferases require participation of aldehyde containing coenzyme, pyridoxal-5'-phosphate (PLP), a derivative of Pyridoxine (Vitamin B6). The amino group is accommodated by conversion of this coenzyme to pyridoxamine-5'-phosphate (PMP). PLP is covalently attached to the enzyme via a Schiff Base linkage formed by the condensation of its aldehyde group with the ε-amino group of an enzymatic Lys residue. The Schiff base, which is conjugated to the enzyme's pyridinium ring, is the focus of the coenzyme activity.
The products of transamination reactions depend on the availability of α-keto acids. The products usually are either alanine, aspartate or glutamate, since their corresponding alpha-keto acids are produced through metabolism of fuels. Although transamination is a major amino acid degradation pathway, lysine, proline and threonine are the only three amino acids that do not always undergo transamination, instead using their respective dehydrogenases.
Alternative Mechanism
A second type of transamination reaction can be described as a nucleophilic substitution of one amine or amide anion on an amine or ammonium salt. For example, the attack of a primary amine by a primary amide anion can be used to prepare secondary amines:
RNH2 + R'NH− → RR'NH + NH2−
Symmetric secondary amines can be prepared using Raney nickel (2RNH2 → R2NH + NH3). And finally, quaternary ammonium salts can be dealkylated using ethanolamine:
R4N+ + NH2CH2CH2OH → R3N + RN+H2CH2CH2OH
Aminonaphthalenes also undergo transaminations.[2]
Types of aminotransferase
Transamination is mediated by several types of aminotransferase enzymes. An aminotransferase may be specific for an individual amino acid, or it may be able to process any member of a group of similar ones, for example the branched-chain amino acids, which comprises valine, isoleucine, and leucine. The two common types of aminotransferases are alanine aminotransferase (ALT) and aspartate aminotransferase (AST).
References
Smith, M. B. and March, J. Advanced Organic Chemistry: Reactions, Mechanisms, and Structure, 5th ed. Wiley, 2001, p. 503.
Gerald Booth, "Naphthalene Derivatives" in Ullmann's Encyclopedia of Industrial Chemistry, 2005, Wiley-VCH, Weinheim. doi:10.1002/14356007.a17_009
Voet & Voet. "Biochemistry" Fourth edition
External links
Amino Acid Biosynthesis
The chemical logic behind aminoacid degradation and the urea cycle
Amino acids
Biochemical reactions
Organic reactions | Transamination | [
"Chemistry",
"Biology"
] | 943 | [
"Biomolecules by chemical classification",
"Biochemical reactions",
"Amino acids",
"Organic reactions",
"Biochemistry"
] |
522,130 | https://en.wikipedia.org/wiki/Protonation | In chemistry, protonation (or hydronation) is the adding of a proton (or hydron, or hydrogen cation), usually denoted by H+, to an atom, molecule, or ion, forming a conjugate acid. (The complementary process, when a proton is removed from a Brønsted–Lowry acid, is deprotonation.) Some examples include
The protonation of water by sulfuric acid:
H2SO4 + H2O H3O+ +
The protonation of isobutene in the formation of a carbocation:
(CH3)2C=CH2 + HBF4 (CH3)3C+ +
The protonation of ammonia in the formation of ammonium chloride from ammonia and hydrogen chloride:
NH3(g) + HCl(g) → NH4Cl(s)
Protonation is a fundamental chemical reaction and is a step in many stoichiometric and catalytic processes. Some ions and molecules can undergo more than one protonation and are labeled polybasic, which is true of many biological macromolecules. Protonation and deprotonation (removal of a proton) occur in most acid–base reactions; they are the core of most acid–base reaction theories. A Brønsted–Lowry acid is defined as a chemical substance that protonates another substance. Upon protonating a substrate, the mass and the charge of the species each increase by one unit, making it an essential step in certain analytical procedures such as electrospray mass spectrometry. Protonating or deprotonating a molecule or ion can change many other chemical properties, not just the charge and mass, for example solubility, hydrophilicity, reduction potential or oxidation potential, and optical properties can change.
Rates
Protonations are often rapid, partly because of the high mobility of protons in many solvents. The rate of protonation is related to the acidity of the protonating species: protonation by weak acids is slower than protonation of the same base by strong acids. The rates of protonation and deprotonation can be especially slow when protonation induces significant structural changes.
Enantioselective protonations, which are under kinetic control, are of considerable interest in organic synthesis. They are also relevant to various biological processes.
Reversibility and catalysis
Protonation is usually reversible, and the structure and bonding of the conjugate base are normally unchanged on protonation. In some cases, however, protonation induces isomerization, for example cis-alkenes can be converted to trans-alkenes using a catalytic amount of protonating agent. Many enzymes, such as the serine hydrolases, operate by mechanisms that involve reversible protonation of substrates.
See also
Acid dissociation constant
Deprotonation (or dehydronation)
Molecular autoionization
References
Chemical reactions
Reaction mechanisms | Protonation | [
"Chemistry"
] | 602 | [
"Reaction mechanisms",
"nan",
"Chemical kinetics",
"Physical organic chemistry"
] |
522,274 | https://en.wikipedia.org/wiki/Deprotonation | Deprotonation (or dehydronation) is the removal (transfer) of a proton (or hydron, or hydrogen cation), (H+) from a Brønsted–Lowry acid in an acid–base reaction. The species formed is the conjugate base of that acid. The complementary process, when a proton is added (transferred) to a Brønsted–Lowry base, is protonation (or hydronation). The species formed is the conjugate acid of that base.
A species that can either accept or donate a proton is referred to as amphiprotic. An example is the H2O (water) molecule, which can gain a proton to form the hydronium ion, H3O+, or lose a proton, leaving the hydroxide ion, OH−.
The relative ability of a molecule to give up a proton is measured by its pKa value. A low pKa value indicates that the compound is acidic and will easily give up its proton to a base. The pKa of a compound is determined by many aspects, but the most significant is the stability of the conjugate base. This is primarily determined by the ability (or inability) of the conjugate base to stabilize negative charge. One of the most important ways of assessing a conjugate base's ability to distribute negative charge is using resonance. Electron withdrawing groups (which can stabilize the molecule by increasing charge distribution) or electron donating groups (which destabilize by decreasing charge distribution) present on a molecule also determine its pKa. The solvent used can also assist in the stabilization of the negative charge on a conjugate base.
Bases used to deprotonate depend on the pKa of the compound. When the compound is not particularly acidic, and, as such, the molecule does not give up its proton easily, a base stronger than the commonly known hydroxides is required. Hydrides are one of the many types of powerful deprotonating agents. Common hydrides used are sodium hydride and potassium hydride. The hydride forms hydrogen gas with the liberated proton from the other molecule. The hydrogen is dangerous and could ignite with the oxygen in the air, so the chemical procedure should be done in an inert atmosphere (e.g., nitrogen).
Deprotonation can be an important step in a chemical reaction. Acid–base reactions typically occur faster than any other step which may determine the product of a reaction. The conjugate base is more electron-rich than the molecule which can alter the reactivity of the molecule. For example, deprotonation of an alcohol forms the negatively charged alkoxide, which is a much stronger nucleophile.
To determine whether or not a given base will be sufficient to deprotonate a specific acid, compare the conjugate base with the original base. A conjugate base is formed when the acid is deprotonated by the base. Consider, for example, hydroxide acting as a base to deprotonate a carboxylic acid; the conjugate base formed is the carboxylate anion. In this case, hydroxide is a strong enough base to deprotonate the carboxylic acid because the resulting conjugate base is more stable than the original base, its negative charge being delocalized over two electronegative oxygen atoms rather than one. In terms of pKa values, the carboxylic acid has a pKa of approximately 4, while that of the conjugate acid, water, is 15.7. Because acids with higher pKa values are less likely to donate their protons, the equilibrium will favor their formation, so the side of the equation containing water is formed preferentially. If, for example, water rather than hydroxide were used to deprotonate the carboxylic acid, the equilibrium would not favor formation of the carboxylate. This is because the conjugate acid, hydronium, has a pKa of −1.74, which is lower than that of the carboxylic acid; in this case, the equilibrium would favor the carboxylic acid.
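As a worked restatement of the comparison above (using the approximate pKa values quoted in this section), the proton-transfer equilibrium constant follows from the difference between the pKa of the acid and that of the conjugate acid of the base:

```latex
\mathrm{RCO_2H} + \mathrm{OH^-} \;\rightleftharpoons\; \mathrm{RCO_2^-} + \mathrm{H_2O},
\qquad
K_{\mathrm{eq}}
= \frac{K_a(\mathrm{RCO_2H})}{K_a(\mathrm{H_2O})}
= 10^{\,\mathrm{p}K_a(\mathrm{H_2O}) - \mathrm{p}K_a(\mathrm{RCO_2H})}
\approx 10^{15.7 - 4} \approx 5 \times 10^{11}
```

With water as the base instead, the relevant conjugate acid is hydronium (pKa ≈ −1.74), giving Keq ≈ 10^(−1.74 − 4) ≈ 2 × 10^−6, so the equilibrium lies on the side of the undissociated carboxylic acid, consistent with the discussion above.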
References
Acid–base chemistry
Chemical reactions
Reaction mechanisms | Deprotonation | [
"Chemistry"
] | 872 | [
"Reaction mechanisms",
"Acid–base chemistry",
"Equilibrium chemistry",
"Physical organic chemistry",
"nan",
"Chemical kinetics"
] |
522,329 | https://en.wikipedia.org/wiki/Glycosylamine | Glycosylamines are a class of biochemical compounds consisting of a glycosyl group attached to an amino group, -NR2. They are also known as N-glycosides, as they are a type of glycoside. Glycosyl groups can be derived from carbohydrates. The glycosyl group and amino group are connected with a β-N-glycosidic bond, forming a cyclic hemiaminal ether bond (α-aminoether).
Examples include nucleosides such as adenosine.
References
Biomolecules
Glycosides | Glycosylamine | [
"Chemistry",
"Biology"
] | 133 | [
"Carbohydrates",
"Natural products",
"Glycosides",
"Biotechnology stubs",
"Biochemistry stubs",
"Organic compounds",
"Biomolecules",
"Structural biology",
"Biochemistry",
"Glycobiology",
"Molecular biology"
] |
522,356 | https://en.wikipedia.org/wiki/Indium%20phosphide | Indium phosphide (InP) is a binary semiconductor composed of indium and phosphorus. It has a face-centered cubic ("zincblende") crystal structure, identical to that of GaAs and most of the III-V semiconductors.
Manufacturing
Indium phosphide can be prepared from the reaction of white phosphorus and indium iodide at 400 °C, by direct combination of the purified elements at high temperature and pressure, or by thermal decomposition of a mixture of a trialkyl indium compound and phosphine.
Applications
The application fields of InP split into three main areas: it is used as the basis for optoelectronic components, high-speed electronics, and photovoltaics.
High-speed optoelectronics
InP is used as a substrate for epitaxial optoelectronic devices based on other semiconductors, such as indium gallium arsenide. The devices include pseudomorphic heterojunction bipolar transistors that could operate at 604 GHz.
InP itself has a direct bandgap, making it useful for optoelectronics devices like laser diodes and photonic integrated circuits for the optical telecommunications industry, to enable wavelength-division multiplexing applications. It is used in high-power and high-frequency electronics because of its superior electron velocity with respect to the more common semiconductors silicon and gallium arsenide.
Optical Communications
InP is used in lasers, sensitive photodetectors and modulators in the wavelength window typically used for telecommunications, i.e., 1550 nm wavelengths, as it is a direct bandgap III-V compound semiconductor material. The wavelength between about 1510 nm and 1600 nm has the lowest attenuation available on optical fibre (about 0.2 dB/km). Further, O-band and C-band wavelengths supported by InP facilitate single-mode operation, reducing effects of intermodal dispersion.
Photovoltaics and optical sensing
InP can be used in photonic integrated circuits that can generate, amplify, control and detect laser light.
Optical sensing applications of InP include
Air pollution control by real-time detection of gases (CO, CO2, NOX [or NO + NO2], etc.).
Quick verification of traces of toxic substances in gases and liquids, including tap water, or surface contaminations.
Spectroscopy for non-destructive control of product, such as food. Researchers of Eindhoven University of Technology and MantiSpectra have already demonstrated the application of an integrated near-infrared spectral sensor for milk. In addition, it has been proven that this technology can also be applied to plastics and illicit drugs.
References
Cited sources
External links
Extensive site on the physical properties of indium phosphide (Ioffe institute)
Band structure and carrier concentration of InP.
InP conference series at IEEE
Indium Phosphide: Transcending frequency and integration limits. Semiconductor TODAY Compounds&AdvancedSilicon • Vol. 1 • Issue 3 • September 2006
Phosphides
Indium compounds
Inorganic phosphorus compounds
Optoelectronics
III-V semiconductors
III-V compounds
IARC Group 2A carcinogens
Zincblende crystal structure | Indium phosphide | [
"Chemistry"
] | 670 | [
"Inorganic compounds",
"Semiconductor materials",
"III-V compounds",
"III-V semiconductors",
"Inorganic phosphorus compounds"
] |
522,442 | https://en.wikipedia.org/wiki/Mutarotation | In stereochemistry, mutarotation is the change in optical rotation of a chiral material in a solution due to a change in proportion of the two constituent anomers (i.e. the interconversion of their respective stereocenters) until equilibrium is reached. Cyclic sugars show mutarotation as α and β anomeric forms interconvert.
The optical rotation of the solution depends on the optical rotation of each anomer and their ratio in the solution.
Mutarotation was discovered by French chemist Augustin-Pierre Dubrunfaut in 1844, when he noticed that the specific rotation of aqueous sugar solution changes with time.
Measurement
The α and β anomers are diastereomers of each other and usually have different specific rotations. A solution or liquid sample of a pure α anomer will rotate plane polarised light by a different amount and/or in the opposite direction than the pure β anomer of that compound. The optical rotation of the solution depends on the optical rotation of each anomer and their ratio in the solution.
For example, if β-D-glucopyranose is dissolved in water, its specific optical rotation will be +18.7°. Over time, some of the β-D-glucopyranose will undergo mutarotation to become α-D-glucopyranose, which has an optical rotation of +112.2°. The rotation of the solution will increase from +18.7° to an equilibrium value of +52.7° as some of the β form is converted to the α form. The equilibrium mixture is about 64% β-D-glucopyranose and about 36% α-D-glucopyranose, though there are also traces of other forms, including the furanoses and the open-chain form.
The observed rotation of the sample is the weighted sum of the optical rotation of each anomer weighted by the amount of that anomer present. Therefore, one can use a polarimeter to measure the rotation of a sample and then calculate the ratio of the two anomers present from the enantiomeric excess, as long as one knows the rotation of each pure anomer. One can monitor the mutarotation process over time or determine the equilibrium mixture by observing the optical rotation and how it changes.
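A minimal numerical sketch of that calculation, using the D-glucopyranose rotations quoted above (the linear mole-fraction weighting is the only modelling assumption):

```python
# Estimate the anomer ratio of a glucose solution from its observed optical rotation,
# assuming the observed rotation is the mole-fraction-weighted average of the two anomers.

ROT_ALPHA = 112.2   # specific rotation of pure alpha-D-glucopyranose (degrees)
ROT_BETA = 18.7     # specific rotation of pure beta-D-glucopyranose (degrees)

def alpha_fraction(observed_rotation: float) -> float:
    """Mole fraction of the alpha anomer implied by the observed rotation."""
    return (observed_rotation - ROT_BETA) / (ROT_ALPHA - ROT_BETA)

f_alpha = alpha_fraction(52.7)   # equilibrium rotation quoted above
print(f"alpha: {f_alpha:.0%}, beta: {1 - f_alpha:.0%}")   # ~36% alpha, ~64% beta
```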
Reaction mechanism
See also
Anomer
Carbohydrate
Monosaccharide
Polysaccharide
Stereochemistry
References
External links
Carbohydrate chemistry
Carbohydrates
Organic chemistry
Stereochemistry | Mutarotation | [
"Physics",
"Chemistry"
] | 539 | [
"Glycobiology",
"Biomolecules by chemical classification",
"Carbohydrates",
"Stereochemistry",
"Organic compounds",
"Space",
"Carbohydrate chemistry",
"Chemical synthesis",
"nan",
"Spacetime"
] |
522,476 | https://en.wikipedia.org/wiki/Stereocenter | In stereochemistry, a stereocenter of a molecule is an atom (center), axis or plane that is the focus of stereoisomerism; that is, when having at least three different groups bound to the stereocenter, interchanging any two different groups creates a new stereoisomer. Stereocenters are also referred to as stereogenic centers.
A stereocenter is geometrically defined as a point (location) in a molecule; a stereocenter is usually but not always a specific atom, often carbon. Stereocenters can exist on chiral or achiral molecules; stereocenters can contain single bonds or double bonds. The number of hypothetical stereoisomers can be predicted by using 2^n, with n being the number of tetrahedral stereocenters; however, exceptions such as meso compounds can reduce the prediction to below the expected 2^n.
Chirality centers are a type of stereocenter with four different substituent groups; chirality centers are a specific subset of stereocenters because they can only have sp3 hybridization, meaning that they can only have single bonds.
Location
Stereocenters can exist on chiral or achiral molecules. They are defined as a location (point) within a molecule, rather than a particular atom, in which the interchanging of two groups creates a stereoisomer. A stereocenter can have either four different attachment groups, or three different attachment groups where one group is connected by a double bond. Since stereocenters can exist on achiral molecules, stereocenters can have either sp3 or sp2 hybridization.
Possible number of stereoisomers
Stereoisomers are compounds that are identical in composition and connectivity but have a different spatial arrangement of atoms around the central atom. A molecule having multiple stereocenters will produce many possible stereoisomers. In compounds whose stereoisomerism is due to tetrahedral (sp3) stereogenic centers, the total number of hypothetically possible stereoisomers will not exceed 2^n, where n is the number of tetrahedral stereocenters. However, this is an upper bound because molecules with symmetry frequently have fewer stereoisomers.
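A trivial code sketch of that counting rule (2^n is only the upper bound; as discussed below, meso compounds and symmetry can reduce the actual number):

```python
def max_stereoisomers(n_tetrahedral_stereocenters: int) -> int:
    """Upper bound (2**n) on the number of stereoisomers for n tetrahedral stereocenters."""
    return 2 ** n_tetrahedral_stereocenters

print(max_stereoisomers(1), max_stereoisomers(2), max_stereoisomers(3))  # 2 4 8
```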
The stereoisomers produced by the presence of multiple stereocenters can be defined as enantiomers (non-superposable mirror images) and diastereomers (non-superposable, non-identical, non-mirror image molecules). Enantiomers and diastereomers are produced due to differing stereochemical configurations of molecules containing the same composition and connectivity (bonding); the molecules must have multiple (two or more) stereocenters to be classified as enantiomers or diastereomers. Enantiomers and diastereomers will produce individual stereoisomers that contribute to the total number of possible stereoisomers.
However, the stereoisomers produced may also give a meso compound, which is an achiral compound that is superposable on its mirror image; the presence of a meso compound will reduce the number of possible stereoisomers. Since a meso compound is superposable on its mirror image, the two "stereoisomers" are actually identical. Resultantly, a meso compound will reduce the number of stereoisomers to below the hypothetical 2^n amount due to symmetry.
Additionally, certain configurations may not exist due to steric reasons. Cyclic compounds with chiral centers may not exhibit chirality due to the presence of a two-fold rotation axis. Planar chirality may also provide for chirality without having an actual chiral center present.
Configuration
Configuration is defined as the arrangement of atoms around a stereocenter. The Cahn-Ingold-Prelog (CIP) system uses R and S designations to define the configuration of atoms about any stereocenter. A designation of R denotes a clockwise direction of substituent priority around the stereocenter, while a designation of S denotes a counter-clockwise direction of substituent priority.
Chirality centers
A chirality center (chiral center) is a type of stereocenter. A chirality center is defined as an atom holding a set of four different ligands (atoms or groups of atoms) in a spatial arrangement which is non-superposable on its mirror image. Chirality centers must be sp3 hybridized, meaning that a chirality center can only have single bonds. In organic chemistry, a chirality center usually refers to a carbon, phosphorus, or sulfur atom, though it is also possible for other atoms to be chirality centers, especially in areas of organometallic and inorganic chemistry.
The concept of a chirality center generalizes the concept of an asymmetric carbon atom (a carbon atom bonded to four different entities) to a broader definition of any atom with four different attachment groups in which an interchanging of any two attachment groups gives rise to an enantiomer.
Stereogenic on carbon
A carbon atom that is attached to four different substituent groups is called an asymmetric carbon atom or chiral carbon. Chiral carbons are the most common type of chirality center.
Stereogenic on other atoms
Chirality is not limited to carbon atoms, though carbon atoms are often centers of chirality due to their ubiquity in organic chemistry. Nitrogen and phosphorus atoms can also form bonds in a tetrahedral configuration. A nitrogen in an amine may be a stereocenter if all three groups attached are different because the electron pair of the amine functions as a fourth group. However, nitrogen inversion, a form of pyramidal inversion, causes racemization which means that both epimers at that nitrogen are present under normal circumstances. Racemization by nitrogen inversion may be restricted (such as quaternary ammonium or phosphonium cations), or slow, which allows the existence of chirality.
Metal atoms with tetrahedral or octahedral geometries may also be chiral due to having different ligands. For the octahedral case, several chiralities are possible. Having three ligands of two types, the ligands may be lined up along the meridian, giving the mer-isomer, or forming a face—the fac isomer. Having three bidentate ligands of only one type gives a propeller-type structure, with two different enantiomers denoted Λ and Δ.
Chirality and stereocenters
As mentioned earlier, the requirement for an atom to be a chirality center is that the atom must be sp3 hybridized with four different attachments. Because of this, all chirality centers are stereocenters. However, only under some conditions is the reverse true. Recall that a point can be considered a stereocenter with a minimum of three attachment points; stereocenters can be either sp3 or sp2 hybridized, as long as interchanging any two different groups creates a new stereoisomer. This means that although all chirality centers are stereocenters, not every stereocenter is a chirality center.
Stereocenters are important identifiers for chiral or achiral molecules. As a general rule, if a molecule has no stereocenters, it is considered achiral. If it has at least one stereocenter, the molecule has the potential for chirality. However, there are some exceptions like meso compounds that make molecules with multiple stereocenters considered achiral.
See also
Cahn–Ingold–Prelog priority rules for nomenclature
Descriptor (chemistry)
References
Stereochemistry | Stereocenter | [
"Physics",
"Chemistry"
] | 1,579 | [
"Spacetime",
"Stereochemistry",
"Space",
"nan"
] |
522,786 | https://en.wikipedia.org/wiki/Homotopy%20lifting%20property | In mathematics, in particular in homotopy theory within algebraic topology, the homotopy lifting property (also known as an instance of the right lifting property or the covering homotopy axiom) is a technical condition on a continuous function from a topological space E to another one, B. It is designed to support the picture of E "above" B by allowing a homotopy taking place in B to be moved "upstairs" to E.
For example, a covering map has a property of unique local lifting of paths to a given sheet; the uniqueness is because the fibers of a covering map are discrete spaces. The homotopy lifting property will hold in many situations, such as the projection in a vector bundle, fiber bundle or fibration, where there need be no unique way of lifting.
Formal definition
Assume all maps are continuous functions between topological spaces. Given a map π : E → B and a space X, one says that (X, π) has the homotopy lifting property, or that π has the homotopy lifting property with respect to X, if:
for any homotopy f : X × [0,1] → B, and
for any map g0 : X → E lifting f0 (where f0(x) := f(x, 0)), i.e., so that f0 = π ∘ g0,
there exists a homotopy g : X × [0,1] → E lifting f (i.e., so that f = π ∘ g) which also satisfies g(x, 0) = g0(x).
The following diagram depicts this situation:
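In the notation fixed above, a sketch of the square (typeset here with the tikz-cd LaTeX package, an assumed convention; the dashed diagonal arrow is the lift whose existence the property asserts):

```latex
% requires \usepackage{tikz-cd}
\begin{tikzcd}
X \times \{0\} \arrow[r, "g_0"] \arrow[d, hook] & E \arrow[d, "\pi"] \\
X \times [0,1] \arrow[r, "f"'] \arrow[ru, dashed, "g"] & B
\end{tikzcd}
```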
The outer square (without the dotted arrow) commutes if and only if the hypotheses of the lifting property are true. A lifting corresponds to a dotted arrow making the diagram commute. This diagram is dual to that of the homotopy extension property; this duality is loosely referred to as Eckmann–Hilton duality.
If the map π satisfies the homotopy lifting property with respect to all spaces X, then π is called a fibration, or one sometimes simply says that π has the homotopy lifting property.
A weaker notion of fibration is Serre fibration, for which homotopy lifting is only required for all CW complexes X.
Generalization: homotopy lifting extension property
There is a common generalization of the homotopy lifting property and the homotopy extension property. Given a pair of spaces (X, Y), with Y ⊆ X, for simplicity we denote T := (X × {0}) ∪ (Y × [0,1]) ⊆ X × [0,1]. Given additionally a map π : E → B, one says that (X, Y, π) has the homotopy lifting extension property if:
For any homotopy f : X × [0,1] → B, and
For any lifting h : T → E of f|T, there exists a homotopy g : X × [0,1] → E which covers f (i.e., such that π ∘ g = f) and extends h (i.e., such that g|T = h).
The homotopy lifting property of (X, π) is obtained by taking Y = ∅, so that T above is simply X × {0}.
The homotopy extension property of (X, Y) is obtained by taking π to be a constant map, so that π is irrelevant in that every map to E is trivially the lift of a constant map to the image point of π.
See also
Covering space
Fibration
Notes
References
.
Jean-Pierre Marquis (2006) "A path to Epistemology of Mathematics: Homotopy theory", pages 239 to 260 in The Architecture of Modern Mathematics, J. Ferreiros & J.J. Gray, editors, Oxford University Press
External links
Homotopy theory
Algebraic topology | Homotopy lifting property | [
"Mathematics"
] | 631 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
523,076 | https://en.wikipedia.org/wiki/Excitable%20medium | An excitable medium is a nonlinear dynamical system which has the capacity to propagate a wave of some description, and which cannot support the passing of another wave until a certain amount of time has passed (known as the refractory time).
A forest is an example of an excitable medium: if a wildfire burns through the forest, no fire can return to a burnt spot until the vegetation has gone through its refractory period and regrown. In chemistry, oscillating reactions are excitable media, for example the Belousov–Zhabotinsky reaction and the Briggs–Rauscher reaction. Cell excitability is the change in membrane potential that is necessary for cellular responses in various tissues. The resting potential forms the basis of cell excitability and these processes are fundamental for the generation of graded and action potentials. Normal and pathological activities in the heart and brain can be modelled as excitable media. A group of spectators at a sporting event are an excitable medium, as can be observed in a Mexican wave (so-called from its initial appearance in the 1986 World Cup in Mexico).
Modelling excitable media
Excitable media can be modelled using both partial differential equations and cellular automata.
With cellular automata
Cellular automata provide a simple model to aid in the understanding of excitable media. Perhaps the simplest such model is the Greenberg-Hastings cellular automaton; see that article for the model.
Each cell of the automaton is made to represent some section of the medium being modelled (for example, a patch of trees in a forest, or a segment of heart tissue). Each cell can be in one of the three following states:
Quiescent or excitable — the cell is unexcited, but can be excited. In the forest fire example, this corresponds to the trees being unburnt.
Excited — the cell is excited. The trees are on fire.
Refractory — the cell has recently been excited and is temporarily not excitable. This corresponds to a patch of land where the trees have burnt and the vegetation has yet to regrow.
As in all cellular automata, the state of a particular cell in the next time step depends on the state of the cells around it—its neighbours—at the current time. In the forest fire example the simple rules given in Greenberg-Hastings cellular automaton might be modified as follows:
If a cell is quiescent, then it remains quiescent unless one or more of its neighbours is excited. In the forest fire example, this means a patch of land only burns if a neighbouring patch is on fire.
If a cell is excited, it becomes refractory at the next iteration. After trees have finished burning, the patch of land is left barren.
If a cell is refractory, then its remaining refractory period is lessened at the next period, until it reaches the end of the refractory period and becomes excitable once more. The trees regrow.
This function can be refined according to the particular medium. For example, the effect of wind can be added to the model of the forest fire.
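A minimal code sketch of such an automaton (a Greenberg–Hastings-style model on a periodic square grid with a Moore neighbourhood; the grid size, refractory period and seeding are illustrative assumptions, not part of the article):

```python
# A minimal excitable-medium cellular automaton.
SIZE = 50
REFRACTORY_PERIOD = 3
QUIESCENT, EXCITED = 0, 1           # values above 1 encode remaining refractory time

def step(grid):
    """Advance the automaton by one time step and return the new grid."""
    new = [[QUIESCENT] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            state = grid[i][j]
            if state == EXCITED:
                new[i][j] = 1 + REFRACTORY_PERIOD          # excited cells become refractory
            elif state > EXCITED:
                new[i][j] = state - 1 if state > 2 else QUIESCENT   # refractory cells recover
            else:
                # quiescent cells become excited if any neighbour is excited
                excited_neighbour = any(
                    grid[(i + di) % SIZE][(j + dj) % SIZE] == EXCITED
                    for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)
                )
                new[i][j] = EXCITED if excited_neighbour else QUIESCENT
    return new

grid = [[QUIESCENT] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = EXCITED                        # a single "lightning strike"
for _ in range(10):
    grid = step(grid)
print(sum(cell == EXCITED for row in grid for cell in row), "cells excited after 10 steps")
```

The excitation spreads outward as an expanding ring, since cells behind the wavefront are refractory and cannot be re-excited until they recover.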
Geometries of waves
One-dimensional waves
It is most common for a one-dimensional medium to form a closed circuit, i.e. a ring. For example, the Mexican wave can be modeled as a ring going around the stadium. If the wave moves in one direction it will eventually return to where it started. If, upon a wave's return to the origin, the original spot has gone through its refractory period, then the wave will propagate along the ring again (and will do so indefinitely). If, however, the origin is still refractory upon the wave's return, the wave will be stopped.
In the Mexican wave, for example, if for some reason, the originators of the wave are still standing upon its return it will not continue. If the originators have sat back down then the wave can, in theory, continue.
Two-dimensional waves
Several forms of waves can be observed in a two-dimensional medium.
A spreading wave will originate at a single point in the medium and spread outwards. For example, a forest fire could start from a lightning strike at the centre of a forest and spread outwards.
A spiral wave will again originate at a single point, but will spread in a spiral circuit. Spiral waves are believed to underlie phenomena such as tachycardia and fibrillation.
Spiral waves constitute one of the mechanisms of fibrillation when they organize in long-lasting reentrant activities named rotors.
See also
Autowave
Notes
References
Leon Glass and Daniel Kaplan, Understanding Nonlinear Dynamics.
External links
An introduction to excitable media
Java applets that show excitable media in 0, 1, and 2D
CAPOW software by Rudy Rucker contains several excitable media models.
Dynamical systems
Biophysics
Nonlinear systems
Mathematical modeling | Excitable medium | [
"Physics",
"Mathematics",
"Biology"
] | 1,020 | [
"Mathematical modeling",
"Applied and interdisciplinary physics",
"Applied mathematics",
"Nonlinear systems",
"Mechanics",
"Biophysics",
"Dynamical systems"
] |
25,543,876 | https://en.wikipedia.org/wiki/Sup45p | Sup45p is the Saccharomyces cerevisiae (a yeast) eukaryotic translation termination factor. More specifically, it is the yeast eukaryotic release factor 1 (eRF1). Its job is to recognize stop codons in RNA and bind to them. It binds to the Sup35p protein and then takes on the shape of a tRNA molecule so that it can safety incorporate itself into the A site of the Ribosome to disruptits flow and "release" the protein and end translation.
Notes
Saccharomyces cerevisiae genes
Protein biosynthesis | Sup45p | [
"Chemistry"
] | 130 | [
"Protein biosynthesis",
"Protein stubs",
"Gene expression",
"Biochemistry stubs",
"Biosynthesis"
] |
25,545,661 | https://en.wikipedia.org/wiki/Chiral%20media | The term chiral describes an object, especially a molecule, which has or produces a non-superposable mirror image of itself. In chemistry, such a molecule is called an enantiomer or is said to exhibit chirality or enantiomerism. The term "chiral" comes from the Greek word for the human hand, which itself exhibits such non-superimposeability of the left hand precisely over the right. Due to the opposition of the fingers and thumbs, no matter how the two hands are oriented, it is impossible for both hands to exactly coincide. Helices, chiral characteristics (properties), chiral media, order, and symmetry all relate to the concept of left- and right-handedness.
Types of chirality
Chirality describes that something is different from its mirror image. Chirality can be defined in two or three dimensions. It can be an intrinsic property of an object, such as a molecule, crystal or metamaterial. It can also arise from the relative position and orientation of different components, such as the propagation direction of a beam of light relative to the structure of an achiral material.
Intrinsic 3d chirality
Any object that cannot be superimposed with its mirror image by translation or rotation in three dimensions has intrinsic 3d chirality. Intrinsic means that the chirality is a property of the object. In most contexts, materials described as chiral have intrinsic 3d chirality. Typical examples are homogeneous/homogenizable chiral materials that have a chiral structure on the subwavelength scale. For example, an isotropic chiral material can comprise a random dispersion of handed molecules or inclusions, such as a liquid consisting of chiral molecules. Handedness can also be present at the macroscopic level in structurally chiral materials. For example, the molecules of cholesteric liquid crystals are randomly positioned but macroscopically they exhibit a helicoidal orientational order. Other examples of structurally chiral materials can be fabricated either as stacks of uniaxial laminas or using sculptured thin films. Remarkably, artificial examples of both types of chiral materials were produced by J. C. Bose more than 11 decades ago. 3D chirality causes the electromagnetic effects of optical activity and linear conversion dichroism.
Extrinsic 3d chirality
Any arrangement that cannot be superimposed with its mirror image by translation or rotation in three dimensions has extrinsic 3d chirality. Extrinsic means that the chirality is a consequence of the arrangement of different components, rather than an intrinsic property of the components themself. For example, the propagation direction of a beam of light through an achiral crystal (or metamaterial) can form an experimental arrangement that is different from its mirror image. In particular, oblique incidence onto any planar structure that does not possess two-fold rotational symmetry results in a 3D-chiral experimental arrangement, except for the special case when the structure has a line of mirror symmetry in the plane of incidence.
Bunn predicted in 1945 that extrinsic 3d chirality would cause optical activity and the effect was later detected in liquid crystals.
Extrinsic 3d chirality causes large optical activity and linear conversion dichroism in metamaterials. These effects are inherently tuneable by changing the relative orientation of incident wave and material. Both extrinsic 3d chirality and the resulting optical activity are reversed for opposite angles of incidence.
Intrinsic 2d chirality
Any object that cannot be superimposed with its mirror image by translation or rotation in two dimensions has intrinsic 2d chirality, also known as planar chirality. Intrinsic means that the chirality is a property of the object. Any planar pattern that does not have a line of mirror symmetry is 2d-chiral, and examples include flat spirals and letters such as S, G, P. In contrast to 3d-chiral objects, the perceived sense of twist of 2d-chiral patterns is reversed for opposite directions of observation. 2d chirality is associated with circular conversion dichroism, which causes directionally asymmetric transmission (reflection and absorption) of circularly polarized electromagnetic waves.
Extrinsic 2d chirality
Also 2d chirality can arise from the relative arrangement of different (achiral) components. In particular, oblique illumination of any planar periodic structure will result in extrinsic 2d chirality, except for the special cases where the plane of incidence is either parallel or perpendicular to a line of mirror symmetry of the structure. Strong circular conversion dichroism due to extrinsic 2d chirality has been observed in metamaterials.
Handedness of electromagnetic waves
Electromagnetic waves can have handedness associated with their polarization. Polarization of an electromagnetic wave is the property that describes the orientation, i.e., the time-varying direction and amplitude, of the electric field vector. For example, the electric field vectors of left-handed or right-handed circularly polarized waves form helices of opposite handedness in space as illustrated by the adjacent animation.
Polarizations are described in terms of the figures traced by the electric field vector as a function of time at a fixed position in space. In general, polarization is elliptical and is traced in a clockwise or counterclockwise sense. If, however, the major and minor axes of the ellipse are equal, then the polarization is said to be circular . If the minor axis of the ellipse is zero, the polarization is said to be linear. Rotation of the electric vector in a clockwise sense is designated right-hand polarization, and rotation in a counterclockwise sense is designated left-hand polarization. When deciding whether the rotation is clockwise or counterclockwise, a convention is needed. Optical physicists tend to determine handedness from the perspective of an observer looking towards the source from within the wave, like an astronomer looking at a star. Engineers tend to determine handedness looking along the wave from behind the source, like an engineer standing behind a radiating antenna. Both conventions yield opposite definitions of left-handed and right-handed polarizations and therefore care must be taken to understand which convention is being followed.
Mathematically, an elliptically polarized wave may be described as the vector sum of two waves of equal wavelength but unequal amplitude, and in quadrature (having their respective electric vectors at right angles and π/2 radians out of phase).
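A hedged restatement of that decomposition in standard notation (field observed at a fixed point in space, angular frequency ω; unequal amplitudes Ex ≠ Ey trace an ellipse, equal amplitudes a circle):

```latex
\mathbf{E}(t) \;=\; \hat{\mathbf{x}}\,E_x\cos(\omega t)
\;+\; \hat{\mathbf{y}}\,E_y\cos\!\left(\omega t-\tfrac{\pi}{2}\right)
\;=\; \hat{\mathbf{x}}\,E_x\cos(\omega t)\;+\;\hat{\mathbf{y}}\,E_y\sin(\omega t)
```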
Circular polarization
Circular polarization, regarding electromagnetic wave propagation, is polarization such that the tip of the electric field vector describes a helix. The magnitude of the electric field vector is constant. The projection of the tip of the electric field vector upon any fixed plane intersecting, and normal to, the direction of propagation, describes a circle. A circularly polarized wave may be resolved into two linearly polarized waves in phase quadrature with their planes of polarization at right angles to each other. Circular polarization may be referred to as "right-hand" or "left-hand," depending on whether the helix describes the thread of a right-hand or left-hand screw, respectively
Optical activity
3D-chiral materials can exhibit optical activity, which manifests itself as circular birefringence, causing polarization rotation for linearly polarized waves, and circular dichroism, causing different attenuation of left- and right-handed circularly polarized waves. The former can be exploited to realize polarization rotators, while the latter can be used to realize circular polarizers. Optical activity is weak in natural chiral materials, but it can be enhanced by orders of magnitude in artificial chiral materials, i.e., chiral metamaterials.
Just like the perceived sense of twist of a helix is the same for opposite directions of observation, optical activity is the same for opposite directions of wave propagation.
Circular birefringence
In 3d-chiral media, circularly polarized electromagnetic waves of opposite handedness can propagate with different speed. This phenomenon is known as circular birefringence and described by different real parts of refractive indices for left- and right-handed circularly polarized waves. As a consequence, left- and right-handed circularly polarized waves accumulate different amounts of phase upon propagation through a chiral medium. This phase difference causes rotation of the polarization state of linearly polarized waves, which may be thought of as superposition of left- and right-handed circularly polarized waves. Circular birefringence can yield a negative index of refraction for waves of one handedness when the effect is sufficiently large.
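As a standard quantitative sketch (not taken from this article): over a propagation distance d at vacuum wavelength λ0, the accumulated phase difference between the two circular components rotates the azimuth of a linearly polarized wave by

```latex
\theta \;=\; \frac{\pi d}{\lambda_0}\,\bigl(n_{\mathrm{L}} - n_{\mathrm{R}}\bigr)
```

where nL and nR are the real parts of the refractive indices for left- and right-handed circularly polarized waves; the sign of the rotation follows the handedness convention chosen.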
Circular dichroism
In 3d-chiral media, circularly polarized electromagnetic waves of opposite handedness can propagate with different losses. This phenomenon is known as circular dichroism and described by different imaginary parts of refractive indices for left- and right-handed circularly polarized waves.
Specular optical activity
While optical activity is normally observed for transmitted light, polarization rotation and different attenuation of left-handed and right-handed circularly polarized waves can also occur for light reflected by chiral substances. These phenomena of specular circular birefringence and specular circular dichroism are jointly known as specular optical activity. Specular optical activity is weak in natural materials. Extrinsic 3d chirality associated with oblique illumination of metasurfaces lacking two-fold rotational symmetry leads to large specular optical activity.
Nonlinear optical activity
Optical activity that depends on the intensity of light has been predicted and then observed in lithium iodate crystals.
In comparison to lithium iodate, extrinsic 3d chirality associated with oblique illumination of metasurfaces lacking two-fold rotational symmetry was found to lead to 30 million times stronger nonlinear optical activity in the optical part of the spectrum. At microwave frequencies, a 12 orders of magnitude stronger effect than in lithium iodate was observed for an intrinsically 3d-chiral structure.
Circular conversion dichroism
2D chirality is associated with directionally asymmetric transmission (reflection and absorption) of circularly polarized electromagnetic waves. 2D-chiral materials, which are also anisotropic and lossy, exhibit different total transmission (reflection and absorption) levels for the same circularly polarized wave incident on their front and back. The asymmetric transmission phenomenon arises from different, e.g. left-to-right, circular polarization conversion efficiencies for opposite propagation directions of the incident wave, and therefore the effect is referred to as circular conversion dichroism.
Like the twist of a 2d-chiral pattern appears reversed for opposite directions of observation, 2d-chiral materials have interchanged properties for left-handed and right-handed circularly polarized waves that are incident on their front and back. In particular left-handed and right-handed circularly polarized waves experience opposite directional transmission (reflection and absorption) asymmetries.
Circular conversion dichroism with almost ideal efficiency has been achieved in metamaterial-based chiral mirrors. In contrast to conventional mirrors, a chiral mirror reflects circularly polarized waves of one handedness without handedness change, while absorbing circularly polarized waves of the other handedness. Chiral mirrors can be realized by placing a 2d-chiral metamaterial in front of a conventional mirror. The concept has been exploited in holography to realize independent holograms for left-handed and right-handed circularly polarized electromagnetic waves. Active chiral mirrors that can be switched between left and right, or chiral mirror and conventional mirror, have been reported.
Linear conversion dichroism
3D chirality of anisotropic structures is associated with directionally asymmetric transmission (reflection and absorption) of linearly polarized electromagnetic waves. Different levels of total transmission (reflection and absorption) for the same linearly polarized wave incident on their front and back arise from different, e.g. x-to-y, linear polarization conversion efficiencies for opposite propagation directions of the incident wave and therefore the effect is referred to as linear conversion dichroism. The x-to-y and y-to-x polarization conversion efficiencies are interchanged for opposite directions of wave propagation. Linear conversion dichroism has been observed in metamaterials with intrinsic and extrinsic 3d chirality. Active metamaterials, where the effect can be turned on and off have been realized by controlling 3d chirality with phase transitions.
See also
Bi isotropic
Metamaterial
Chirality (chemistry)
Planar chirality
Circular Polarization
References
from Ames Laboratory
in support of the series on U.S. military standards relating to telecommunications, MIL-STD-188
Further reading
External links
Ames Laboratory. Press release archives. accessed:2010-06-28.
Electromagnetism
Metamaterials | Chiral media | [
"Physics",
"Materials_science",
"Engineering"
] | 2,688 | [
"Electromagnetism",
"Physical phenomena",
"Metamaterials",
"Materials science",
"Fundamental interactions"
] |
25,546,470 | https://en.wikipedia.org/wiki/Seven-dimensional%20space | In mathematics, a sequence of n real numbers can be understood as a location in n-dimensional space. When n = 7, the set of all such locations is called 7-dimensional space. Often such a space is studied as a vector space, without any notion of distance. Seven-dimensional Euclidean space is seven-dimensional space equipped with a Euclidean metric, which is defined by the dot product.
More generally, the term may refer to a seven-dimensional vector space over any field, such as a seven-dimensional complex vector space, which has 14 real dimensions. It may also refer to a seven-dimensional manifold such as a 7-sphere, or a variety of other geometric constructions.
Seven-dimensional spaces have a number of special properties, many of them related to the octonions. An especially distinctive property is that a cross product can be defined only in three or seven dimensions. This is related to Hurwitz's theorem, which prohibits the existence of algebraic structures like the quaternions and octonions in dimensions other than 1, 2, 4, and 8. The first exotic spheres ever discovered were seven-dimensional.
Geometry
7-polytope
A polytope in seven dimensions is called a 7-polytope. The most studied are the regular polytopes, of which there are only three in seven dimensions: the 7-simplex, 7-cube, and 7-orthoplex. A wider family are the uniform 7-polytopes, constructed from fundamental symmetry domains of reflection, each domain defined by a Coxeter group. Each uniform polytope is defined by a ringed Coxeter-Dynkin diagram. The 7-demicube is a unique polytope from the D7 family, and the 321, 231, and 132 polytopes come from the E7 family.
6-sphere
The 6-sphere or hypersphere in seven-dimensional Euclidean space is the six-dimensional surface equidistant from a point, e.g. the origin. It has symbol S^6, with formal definition for the 6-sphere with radius r of
S^6 = { x ∈ R^7 : ‖x‖ = r }.
The volume of the space bounded by this 6-sphere is
V = (16/105) π^3 r^7,
which is approximately 4.72477 × r^7, or 0.0369 of the 7-cube that contains the 6-sphere.
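A small numerical check of the constants quoted above, using the general n-ball volume formula (standard mathematics, not specific to this article):

```python
from math import pi, gamma

def ball_volume(n: int, r: float = 1.0) -> float:
    """Volume of the n-dimensional ball of radius r."""
    return pi ** (n / 2) / gamma(n / 2 + 1) * r ** n

v7 = ball_volume(7)       # equals 16*pi**3/105 for n = 7
cube7 = 2.0 ** 7          # volume of the 7-cube of side 2 containing the unit 6-sphere
print(round(v7, 5), round(v7 / cube7, 4))   # 4.72477 0.0369
```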
Applications
Cross product
A cross product, that is a vector-valued, bilinear, anticommutative and orthogonal product of two vectors, is defined in seven dimensions. Along with the more usual cross product in three dimensions it is the only such product, except for trivial products.
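A sketch of one explicit construction, using one standard octonion-style set of structure constants (the particular oriented index triples are an assumed convention, not the only valid choice). The script numerically checks the two properties that characterize a cross product: orthogonality to both factors and the norm identity |x×y|² = |x|²|y|² − (x·y)².

```python
import random

# One standard set of oriented triples (1-indexed): e_a x e_b = e_c cyclically.
TRIPLES = [(1, 2, 4), (2, 3, 5), (3, 4, 6), (4, 5, 7), (5, 6, 1), (6, 7, 2), (7, 1, 3)]

# Structure constants: F[(i, j)] = (sign, k) means e_i x e_j = sign * e_k (0-indexed).
F = {}
for a, b, c in TRIPLES:
    for i, j, k in ((a, b, c), (b, c, a), (c, a, b)):
        F[(i - 1, j - 1)] = (1, k - 1)
        F[(j - 1, i - 1)] = (-1, k - 1)

def cross7(x, y):
    """Seven-dimensional cross product of two length-7 vectors."""
    z = [0.0] * 7
    for i in range(7):
        for j in range(7):
            if (i, j) in F:
                sign, k = F[(i, j)]
                z[k] += sign * x[i] * y[j]
    return z

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(7)]
y = [random.uniform(-1, 1) for _ in range(7)]
z = cross7(x, y)
# Orthogonality to both factors, and the norm identity characterizing cross products:
print(abs(dot(z, x)) < 1e-12, abs(dot(z, y)) < 1e-12)
print(abs(dot(z, z) - (dot(x, x) * dot(y, y) - dot(x, y) ** 2)) < 1e-12)
```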
Exotic spheres
In 1956, John Milnor constructed an exotic sphere in 7 dimensions and showed that there are at least 7 differentiable structures on the 7-sphere. In 1963 he showed that the exact number of such structures is 28.
See also
Euclidean geometry
List of geometry topics
List of regular polytopes
References
H.S.M. Coxeter: Regular Polytopes. Dover, 1973
J.W. Milnor: On manifolds homeomorphic to the 7-sphere. Annals of Mathematics 64, 1956
External links
Dimension
Multi-dimensional geometry
7 (number) | Seven-dimensional space | [
"Physics"
] | 637 | [
"Geometric measurement",
"Dimension",
"Physical quantities",
"Theory of relativity"
] |
1,967,236 | https://en.wikipedia.org/wiki/Antarctic%20Technology%20Offshore%20Lagoon%20Laboratory | The Antarctic Technology Offshore Lagoon Laboratory (ATOLL) was a floating oceanographic laboratory for in situ observation experiments. This facility also tested instruments and equipment for polar expeditions. The ATOLL hull was the largest fiberglass structure ever built at that time. It was in operation from 1982 to 1995.
Structure and infrastructure
The ATOLL was composed of three curved fiberglass elements, each long and having a draught of only . For towing, the elements could be assembled in a long S-shape; in operation, the elements would form a horseshoe shape surrounding water surface. The lab provided ample space for twelve researchers. The laboratory contained a lab, storage and supply facilities, a dormitory, computer room, and a fireplace.
The laboratory was installed and operated in the Baltic Sea (and the Bay of Kiel in particular) at the initiative and under the direction of Uwe Kils, at the Institute of Oceanography (Institut für Meereskunde) of the University of Kiel. The fiberglass hulls themselves were bought from Waki Zöllner's "Atoll" company.
The onboard computer was a NeXT and the first versions of a virtual microscope of Antarctic krill for interactive dives into their morphology and behavior were developed here, finding later mention in Science magazine. The lab was connected to the Internet via a radio link, and the first images of ocean critters on the internet came from this NeXT. The first ever in situ videos of Atlantic herring feeding on copepods were recorded from this lab.
An underwater observation and experimentation room allowed direct observation and manipulation through large portholes.
The technical equipment included an ultra-high-resolution scanning sonar that was used for locating schools of juvenile herring, for guiding a ROV, which was controlled via a cyberhelmet and glove, and for determining positions, distances, and speeds. Probes measured the water salinity, temperature, and oxygen levels. Special instruments could measure plankton-, particle-, and bubble-concentrations and their size distributions. Imaging equipment included low-light still and high speed video cameras using shuttered Laser-sheet or infrared LED illumination. An endoscope-system for non-invasive optical measurements called ecoSCOPE, which could also be mounted on an ROV, was developed and used to record the microscale dynamics and behavior of the highly evasive herring.
Research
Scientific investigations aboard the ATOLL concentrated on one of the most important food chain transitions: the linkages between the early life stages of herring (Clupea harengus) and their principal prey, the copepods. A major hypothesis of fisheries ecologists is that the microdistribution of prey, the microturbulence of the ocean, or the retention conditions are normally not suited to allow strong year classes of fish to develop. In most years more than 99% of herring larvae do not survive. Occasionally, however, physical and biotic conditions are favorable, larval survival is high, and large year-classes result. Research work at the ATOLL investigated the effects of small-scale dynamics on fish feeding and predator avoidance and their correlation to year-class strength.
Research questions investigated by students during courses and their thesis work at the Laboratory included: What are the effects of the natural light gradient on predator-prey interactions? How can the predator best see the prey without being seen? How does the oscillating light regime created by the focussing of small waves influence camouflage and attack strategy? What are the influences of the different frequencies of microturbulences? How do such effects change at the moment when herring larvae join into schools? What role does the phenomenon of aggregation play? Do ocean physics create or alter organism-aggregations? Can the dynamics of aggregations affect ocean physics at the microscales? Are there effects of the surface waves? What are the distribution and dynamics of microbubbles caused by turbulences and gas-oversaturations? How can the organisms orient with respect to micro-gradients of the ocean physics? How do they survive in the direct vicinity of undulating anoxia and hypoxia? Why are eelpouts, sticklebacks and herrings so extremely successful in the Baltic while cod is not? What are the effects and functions of schooling for feeding and microscale-orientation? What is the behavior of fish in netcages, and how much food is lost from the cages?
The ATOLL mainly served as a test bed for the development and field testing of equipment such as developing ROVs that were to be used later in Antarctic expeditions, e.g. for in situ imaging of transparent organisms of krill size under the ice.
References
External links
ATOLL laboratory
Kils, U.: The ATOLL Laboratory and other Instruments Developed at Kiel; U.S. GLOBEC NEWS Technology Forum Number 8: 6–9; 1995. Also available as a PDF file.
Waki Zoellner Zweites Deutsches Fernsehen, ZDF, 1978, Drehscheibe: Waki Zöllner und sein Atoll vor Travemünde. Norddeutsches Fernsehen, NDR, 1978, Atoll vor Travemünde. Bayerisches Fernsehen, BR3, 1980, Samstags-Club: Maria Schell und Waki Zöllner. Zweites Deutsches Fernsehen, ZDF, 1990, Teleillustrierte: Waki Zöllner und sein Floatel
Newspaper article
Newspaper article2
Oceanography
Science and technology in Antarctica | Antarctic Technology Offshore Lagoon Laboratory | [
"Physics",
"Environmental_science"
] | 1,108 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
1,967,797 | https://en.wikipedia.org/wiki/Satin%20glass | Satin glass is glass that has been chemically treated to give it a misty-looking finish. The term "satin glass" is frequently used to refer to a collectible type of pressed glass.
Satin glass can be used for decorative items. However, satin glass is also used to provide privacy where the full transparency of glass is undesirable.
The satin finish is produced by treating the glass with hydrofluoric acid or hydrofluoric acid fumes.
Satin glass was first made as decorative pressed glass in England and the United States during the 1880s. Many companies have produced this type of satin glass.
It is similar to milk glass in that it is opaque and has decorative surface patterns molded into it; however, satin glass has a satin rather than glossy surface. Satin glass is typically tinted with a pastel color; blue is the most common.
It was produced by the Fenton Art Glass Company between 1972 and 1984 in large quantities.
Satin glass, like milk glass and carnival glass, is considered a collectible. Due to recent high production volume, prices commanded by satin glass are relatively low. However, certain large pieces produced in low volume can command high prices, especially if in perfect condition.
Burnishing a piece of satin glass will polish the satin finish away, leaving a glossy spot and greatly reducing the value as a collectable. Even friction from repeated ordinary handling, such as dusting with a cloth, will eventually add glossy spots to the finish, so the most desirable pieces become more rare even without breakage and chipping.
See also
Glass etching
References
Collecting
Glass coating and surface modification | Satin glass | [
"Chemistry"
] | 325 | [
"Glass chemistry",
"Coatings",
"Glass coating and surface modification"
] |
1,967,838 | https://en.wikipedia.org/wiki/Radiosensitivity | Radiosensitivity is the relative susceptibility of cells, tissues, organs or organisms to the harmful effect of ionizing radiation.
Cells types affected
Cells are least sensitive when in the S phase, then the G1 phase, then the G2 phase, and most sensitive in the M phase of the cell cycle. This is described by the 'law of Bergonié and Tribondeau', formulated in 1906: X-rays are more effective on cells which have a greater reproductive activity.
From their observations, they concluded that quickly dividing tumor cells are generally more sensitive than the majority of body cells. This is not always true: tumor cells can be hypoxic and therefore less sensitive to X-rays, because most of the effects of X-rays are mediated by free radicals produced by ionizing oxygen.
It has meanwhile been shown that the most sensitive cells are those that are undifferentiated, well nourished, dividing quickly and highly active metabolically. Amongst the body cells, the most sensitive are spermatogonia, erythroblasts, epidermal stem cells, and gastrointestinal stem cells. The least sensitive are nerve cells and muscle fibers.
Very sensitive cells are also oocytes and lymphocytes, although they are resting cells and do not meet the criteria described above. The reasons for their sensitivity are not clear.
There also appears to be a genetic basis for the varied vulnerability of cells to ionizing radiation. This has been demonstrated across several cancer types and in normal tissues.
Cell damage classification
The damage to the cell can be lethal (the cell dies) or sublethal (the cell can repair itself). Cell damage can ultimately lead to health effects which can be classified as either Tissue Reactions or Stochastic Effects according to the International Commission on Radiological Protection.
Tissue reactions
Tissue reactions have a threshold of irradiation under which they do not appear and above which they typically appear. Fractionation of dose, dose rate, the application of antioxidants and other factors may affect the precise threshold at which a tissue reaction occurs. Tissue reactions include skin reactions (epilation, erythema, moist desquamation), cataracts, circulatory disease, and other conditions. Seven proteins were discovered in a systematic review, which correlated with radiosensitivity in normal tissues: γH2AX, TP53BP1, VEGFA, CASP3, CDKN2A, IL6, and IL1B.
Stochastic effects
Stochastic effects do not have a threshold of irradiation, are coincidental, and cannot be avoided. They can be divided into somatic and genetic effects. Among the somatic effects, secondary cancer is the most important. It develops because radiation causes DNA mutations directly and indirectly. Direct effects are those caused by ionizing particles and rays themselves, while the indirect effects are those that are caused by free radicals, generated especially in water radiolysis and oxygen radiolysis. The genetic effects confer the predisposition of radiosensitivity to the offspring. The process is not well understood yet.
Target structures
For decades, the main cellular target for radiation induced damage was thought to be the DNA molecule. This view has been challenged by data indicating that in order to increase survival, the cells must protect their proteins, which in turn repair the damage in the DNA. An important part of protection of proteins (but not DNA) against the detrimental effects of reactive oxygen species (ROS), which are the main mechanism of radiation toxicity, is played by non-enzymatic complexes of manganese ions and small organic metabolites. These complexes were shown to protect the proteins from oxidation in vitro and also increased radiation survival in mice. An application of the synthetically reconstituted protective mixture with manganese was shown to preserve the immunogenicity of viral and bacterial epitopes at radiation doses far above those necessary to kill the microorganisms, thus opening a possibility for a quick whole-organism vaccine production. The intracellular manganese content and the nature of complexes it forms (both measurable by electron paramagnetic resonance) were shown to correlate with radiosensitivity in bacteria, archaea, fungi and human cells. An association was also found between total cellular manganese contents and their variation, and clinically inferred radioresponsiveness in different tumor cells, a finding that may be useful for more precise radiodosages and improved treatment of cancer patients.
See also
Background radiation
Cell death
Lethal dose, LD50
LNT model, Linear no-threshold response model for ionizing radiation
Radiation sensitivity, the susceptibility of a material to physical or chemical changes induced by radiation
References
Radiobiology
Radioactivity
Radiation health effects
Oncology | Radiosensitivity | [
"Physics",
"Chemistry",
"Materials_science",
"Biology"
] | 972 | [
"Radiation health effects",
"Radiobiology",
"Nuclear physics",
"Radiation effects",
"Radioactivity"
] |
1,967,967 | https://en.wikipedia.org/wiki/Department%20of%20Defense%20Architecture%20Framework | The Department of Defense Architecture Framework (DoDAF) is an architecture framework for the United States Department of Defense (DoD) that provides visualization infrastructure for specific stakeholders concerns through viewpoints organized by various views. These views are artifacts for visualizing, understanding, and assimilating the broad scope and complexities of an architecture description through tabular, structural, behavioral, ontological, pictorial, temporal, graphical, probabilistic, or alternative conceptual means. The current release is DoDAF 2.02.
This Architecture Framework is especially suited to large systems with complex integration and interoperability challenges, and it is apparently unique in its employment of "operational views". These views offer overview and details aimed to specific stakeholders within their domain and in interaction with other domains in which the system will operate.
Overview
The DoDAF provides a foundational framework for developing and representing architecture descriptions that ensure a common denominator for understanding, comparing, and integrating architectures across organizational, joint, and multinational boundaries. It establishes data element definitions, rules, and relationships and a baseline set of products for consistent development of systems, integrated, or federated architectures. These architecture descriptions may include families of systems (FoS), systems of systems (SoS), and net-centric capabilities for interoperating and interacting in the non-combat environment.
DoD Components are expected to conform to DoDAF to the maximum extent possible in development of architectures within the department. Conformance ensures that reuse of information, architecture artifacts, models, and viewpoints can be shared with common understanding. All major U.S. DoD weapons and information technology system acquisitions are required to develop and document an enterprise architecture (EA) using the views prescribed in the DoDAF. While it is clearly aimed at military systems, DoDAF has broad applicability across the private, public and voluntary sectors around the world, and represents one of a large number of systems architecture frameworks.
The purpose of DoDAF is to define concepts and models usable in DoD's six core processes:
Joint Capabilities Integration and Development (JCIDS)
Planning, Programming, Budgeting, and Execution (PPBE)
Defense Acquisition System (DAS)
Systems Engineering (SE)
Operational Planning (OPLAN)
Capability Portfolio Management (CPM)
In addition, DoDAF 2.0's specific goals were to:
Establish guidance for architecture content as a function of purpose – “fit for purpose”
Increase utility and effectiveness of architectures via a rigorous data model – the DoDAF Meta Model (DM2) -- so the architectures can be integrated, analyzed, and evaluated with more precision.
History
The first version of DoDAF was developed in the 1990s under the name C4ISR Architecture Framework. In the same period the reference model TAFIM, which was initiated in 1986, was further developed. The first C4ISR Architecture Framework v1.0, released 7 June 1996, was created in response to the passage of the Clinger-Cohen Act. It addressed the 1995 Deputy Secretary of Defense directive that a DoD-wide effort be undertaken to define and develop a better means and process for ensuring that C4ISR capabilities were interoperable and met the needs of the warfighter. Continued development effort resulted in December 1997 in the second version, C4ISR Architecture Framework v2.0.
In August 2003 the DoDAF v1.0 was released, which restructured the C4ISR Framework v2.0 to offer guidance, product descriptions, and supplementary information in two volumes and a Desk Book. It broadened the applicability of architecture tenets and practices to all Mission Areas rather than just the C4ISR community. This document addressed usage, integrated architectures, DoD and Federal policies, value of architectures, architecture measures, DoD decision support processes, development techniques, analytical techniques, and the CADM v1.01, and moved towards a repository-based approach by placing emphasis on architecture data elements that comprise architecture products. In February 2004 the documentation of Version 1.0 was released with volume "I: Definitions and Guidelines", "II: Product Descriptions" and a "Deskbook". In April 2007 the Version 1.5 was released with a documentation of "Definitions and Guidelines", "Product Descriptions" and "Architecture Data Description". This period further developed the concepts and terms that have since been replaced with different approaches. For example, a Mission Needs Statement (MNS) was a U.S. Department of Defense type of document which identified capability needs for a program to satisfy by a combination of solutions (DOTMLPF) to resolve a mission deficiency or to enhance operational capability. This type of document has been superseded by the description of capability needs called an Initial Capabilities Document, as of CJCSI 3170.01E. The CJCSI 3170.01 and 6212.01 were superseded by the CJCSI 5123.01 Series.
This term was introduced as a fundamental step in CJCSI 3170.01B (Apr 2001), 6212.01D (Apr 2005), and the Interim Defense Acquisition Guidebook (Oct 2004).
On May 28, 2009, DoDAF v2.0 was approved by the Department of Defense. The current version is DoDAF 2.02.
DoDAF V2.0 is published on a public website.
Other derivative frameworks based on DoDAF include the NATO Architecture Framework (NAF) and Ministry of Defence Architecture Framework. Like other EA approaches, for example The Open Group Architecture Framework (TOGAF), DoDAF is organized around a shared repository to hold work products. The repository is defined by the common database schema Core Architecture Data Model 2.0 and the DoD Architecture Registry System (DARS). A key feature of DoDAF is interoperability, which is organized as a series of levels, called Levels of Information System Interoperability (LISI). The developing system must not only meet its internal data needs but also those of the operational framework into which it is set.
Capabilities and mission
See the diagram for a depiction of the Capabilities Emphasis, as tied in with mission/course of action, threads, activities, and architectures.
The DoD has moved toward a focus on the delivery of capabilities, which are the reason for creating the system/service.
The Capability Models describe capability taxonomy and capability evolution.
A capability thread would equate to the specific activities, rules, and systems that are linked to that particular capability.
The concept of capability, as defined by its Meta-model Data Group, allows one to answer questions such as:
How does a particular capability or capabilities support the overall mission/vision?
What outcomes are expected to be achieved by a particular capability or set of capabilities?
What services are required to support a capability?
What is the functional scope and organizational span of a capability or set of capabilities?
What is our current set of capabilities that we are managing as part of a portfolio?
The Mission or Course of Action is described by a Concept of Operations (CONOPS), and is organized by Capabilities.
Capabilities are described by Threads.
Threads are described by Activities executed in serial or parallel.
Activities are grouped into Mission Areas. Activities define operations for an Architecture.
Architectures are organized by mission areas. Architectures provide proper resourcing of capabilities required by the Mission or Course of Action.
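The mission-capability-thread-activity chain just listed is essentially a containment hierarchy. A minimal sketch of that hierarchy as Python data classes is shown below; the class and field names are illustrative only and are not defined by DoDAF itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Activity:
    name: str                      # an operation executed in serial or parallel within a thread

@dataclass
class Thread:
    name: str
    activities: List[Activity] = field(default_factory=list)   # threads are described by activities

@dataclass
class Capability:
    name: str
    threads: List[Thread] = field(default_factory=list)        # capabilities are described by threads

@dataclass
class CourseOfAction:
    conops: str                                                 # the Concept of Operations for the mission
    capabilities: List[Capability] = field(default_factory=list)

# Toy instance: one mission organized by a single capability with one thread of two activities.
coa = CourseOfAction(
    conops="Illustrative CONOPS",
    capabilities=[Capability(
        name="Example capability",
        threads=[Thread(
            name="Example capability thread",
            activities=[Activity("Activity A"), Activity("Activity B")],
        )],
    )],
)
```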
Version 1.5 views
The DoDAF V1.5 defines a set of products, a view model, that act as mechanisms for visualizing, understanding, and assimilating the broad scope and complexities of an architecture description through graphic, tabular, or textual means. These products are organized under four views:
All view (AV)
Operational view (OV)
Systems view (SV)
Technical standards view (TV)
Each view depicts certain perspectives of an architecture as described below. Only a subset of the full DoDAF viewset is usually created for each system development. The figure represents the information that links the operational view, systems and services view, and technical standards view. The three views and their interrelationships – driven by common architecture data elements – provide the basis for deriving measures such as interoperability or performance, and for measuring the impact of the values of these metrics on operational mission and task effectiveness.
All view
All view (AV) products provide overarching descriptions of the entire architecture and define the scope and context of the architecture. The DoDAF V1.5 AV products are defined as:
AV-1 Overview and Summary Information
Scope, purpose, intended users, environment depicted, analytical findings (if applicable)
AV-2 Integrated Dictionary
Definitions of all terms used in all products.
Operational view
Operational View (OV) products provide descriptions of the tasks and activities, operational elements, and information exchanges required to accomplish DoD missions. The OV provides textual and graphical representations of operational nodes and elements, assigned tasks and activities, and information flows between nodes. It defines the type of information exchanged, the frequency of exchanges, the tasks and activities supported by these exchanges and the nature of the exchanges. The DoDAF V1.5 OV products are defined as:
OV-1 High Level Operational Concept Graphic
High level graphical and textual description of operational concept (high level organizations, missions, geographic configuration, connectivity, etc.).
OV-2 Operational Node Connectivity Description
Operational nodes, activities performed at each node, and connectivities and information flow between nodes.
OV-3 Operational Information Exchange Matrix
Information exchanged between nodes and the relevant attributes of that exchange such as media, quality, quantity, and the level of interoperability required.
OV-4 Organizational Relationships Chart
Command, control, coordination, and other relationships among organizations.
OV-5 Operational Activity Model
Activities, relationships among activities, inputs and outputs. In addition, overlays can show cost, performing nodes, or other pertinent information.
OV-6a Operational Rules Model
One of the three products used to describe operational activity sequence and timing that identifies the business rules that constrain the operation.
OV-6b Operational State Transition Description
One of the three products used to describe operational activity sequence and timing that identifies responses of a business process to events.
OV-6c Operational Event-Trace Description
One of the three products used to describe operational activity sequence and timing that traces the actions in a scenario or critical sequence of events.
OV-7 Logical Data Model
Documentation of the data requirements and structural business process rules of the Operational View. (In DoDAF V1.5. This corresponds to DIV-2 in DoDAF V2.0.)
Systems and services view
Systems and services view (SV) is a set of graphical and textual products that describe systems and services and interconnections providing for, or supporting, DoD functions. SV products focus on specific physical systems with specific physical (geographical) locations. The relationship between architecture data elements across the SV to the OV can be exemplified as systems are procured and fielded to support organizations and their operations. The DoDAF V1.5 SV products are:
SV-1 Systems/Services Interface Description
Depicts systems nodes and the systems resident at these nodes to support organizations/human roles represented by operational nodes of the OV-2. SV-1 also identifies the interfaces between systems and systems nodes.
SV-2 Systems/Services Communications Description
Depicts pertinent information about communications systems, communications links, and communications networks. SV-2 documents the kinds of communications media that support the systems and implements their interfaces as described in SV-1. Thus, SV-2 shows the communications details of SV-1 interfaces that automate aspects of the needlines represented in OV-2.
SV-3 Systems-Systems, Services-Systems, Services-Services Matrices
provides detail on the interface characteristics described in SV-1 for the architecture, arranged in matrix form.
SV-4a/SV-4b Systems/Services Functionality Description
The SV-4a documents system functional hierarchies and system functions, and the system data flows between them. The SV-4 from DoDAF v1.0 is designated as 'SV-4a' in DoDAF v1.5. Although there is a correlation between OV-5 or business-process hierarchies and the system functional hierarchy of SV-4a, it need not be a one-to-one mapping, hence, the need for the Operational Activity to Systems Function Traceability Matrix (SV-5a), which provides that mapping.
SV-5a, SV-5b, SV-5c Operational Activity to Systems Function, Operational Activity to Systems and Services Traceability Matrices
The SV-5a and SV-5b are specifications of the relationships between the set of operational activities applicable to an architecture and the set of system functions applicable to that architecture. The SV-5 and the extension to the SV-5 from DoDAF v1.0 are designated as 'SV-5a' and 'SV-5b' in DoDAF v1.5, respectively.
SV-6 Systems/Services Data Exchange Matrix
Specifies the characteristics of the system data exchanged between systems. This product focuses on automated information exchanges (from OV-3) that are implemented in systems. Non-automated information exchanges, such as verbal orders, are captured in the OV products only.
SV-7 Systems/Services Performance Parameters Matrix
Specifies the quantitative characteristics of systems and system hardware/software items, their interfaces (system data carried by the interface as well as communications link details that implement the interface), and their functions. It specifies the current performance parameters of each system, interface, or system function, and the expected or required performance parameters at specified times in the future. Performance parameters include all technical performance characteristics of systems for which requirements can be developed and specification defined. The complete set of performance parameters may not be known at the early stages of architecture definition, so it should be expected that this product will be updated throughout the system’s specification, design, development, testing, and possibly even its deployment and operations life-cycle phases.
SV-8 Systems/Services Evolution Description
Captures evolution plans that describe how the system, or the architecture in which the system is embedded, will evolve over a lengthy period of time. Generally, the timeline milestones are critical for a successful understanding of the evolution timeline.
SV-9 Systems/Services Technology Forecast
Defines the underlying current and expected supporting technologies that have been targeted using standard forecasting methods. Expected supporting technologies are those that can be reasonably forecast given the current state of technology and expected improvements. New technologies should be tied to specific time periods, which can correlate against the time periods used in SV-8 milestones.
SV-10a Systems/Services Rules Model
Describes the rules under which the architecture or its systems behave under specified conditions.
SV-10b Systems/Services State Transition Description
A graphical method of describing a system (or system function) response to various events by changing its state. The diagram basically represents the sets of events to which the systems in the architecture will respond (by taking an action to move to a new state) as a function of its current state. Each transition specifies an event and an action.
SV-10c Systems/Services Event-Trace Description
Provides a time-ordered examination of the system data elements exchanged between participating systems (external and internal), system functions, or human roles as a result of a particular scenario. Each event-trace diagram should have an accompanying description that defines the particular scenario or situation. SV-10c in the Systems and Services View may reflect system-specific aspects or refinements of critical sequences of events described in the Operational View.
SV-11 Physical Schema
One of the architecture products closest to actual system design in the Framework. The product defines the structure of the various kinds of system data that are utilized by the systems in the architecture. (In DoDAF V1.5. This corresponds to DIV-3 in DoDAF V2.0.)
Technical standards view
Technical standards view (TV) products define technical standards, implementation conventions, business rules and criteria that govern the architecture. The DoDAF V1.5 TV products are as follows:
TV-1 Technical Standards Profile - Extraction of standards that apply to the given architecture. (In DoDAF V1.5; renamed to StdV-1 in DoDAF V2.0.)
TV-2 Technical Standards Forecast - Description of emerging standards that are expected to apply to the given architecture, within an appropriate set of timeframes. (In DoDAF V1.5; renamed to StdV-2 in DoDAF V2.0.)
Version 2.0 viewpoints
In DoDAF V2.0, architectural viewpoints are composed of data that has been organized to facilitate understanding. To align with ISO Standards, where appropriate, the terminology has changed from Views to Viewpoint (e.g., the Operational View is now the Operational Viewpoint).
All Viewpoint (AV)
Describes the overarching aspects of architecture context that relate to all viewpoints.
Capability Viewpoint (CV)
New in DoDAF V2.0. Articulates the capability requirements, the delivery timing, and the deployed capability.
Data and Information Viewpoint (DIV)
New in DoDAF V2.0. Articulates the data relationships and alignment structures in the architecture content for the capability and operational requirements, system engineering processes, and systems and services.
Operational Viewpoint (OV)
Includes the operational scenarios, activities, and requirements that support capabilities.
Project Viewpoint (PV)
New in DoDAF V2.0. Describes the relationships between operational and capability requirements and the various projects being implemented. The Project Viewpoint also details dependencies among capability and operational requirements, system engineering processes, systems design, and services design within the Defense Acquisition System process.
Services Viewpoint (SvcV)
New in DoDAF V2.0. Presents the design for solutions articulating the Performers, Activities, Services, and their Exchanges, providing for or supporting operational and capability functions.
Standards Viewpoint (StdV)
Renamed from Technical Standards View. Articulates the applicable operational, business, technical, and industry policies, standards, guidance, constraints, and forecasts that apply to capability and operational requirements, system engineering processes, and systems and services.
Systems Viewpoint (SV)
Articulates, for legacy support, the design for solutions articulating the systems, their composition, interconnectivity, and context providing for or supporting operational and capability functions. Note, System has changed in DoDAF V2.0 from DoDAF V1.5: System is not just computer hardware and computer software. System is now defined in the general sense of an assemblage of components - machine, human - that perform activities (since they are subtypes of Performer) and are interacting or interdependent. This could be anything, i.e., anything from small pieces of equipment that have interacting or interdependent elements, to Family of Systems (FoS) and System of Systems (SoS). Note that Systems are made up of Materiel (e.g., equipment, aircraft, and vessels) and Personnel Types.
The architectures for DoDAF V1.0 and DoDAF V1.5 may continue to be used. When appropriate (usually indicated by policy or by the decision-maker), DoDAF V1.0 and V1.5 architectures will need to update their architecture. When pre-DoDAF V2.0 architecture is compared with DoDAF V2.0 architecture, concept differences (such as Node) must be defined or explained for the newer architecture. In regard to DoDAF V1.5 products, they have been transformed into parts of the DoDAF V2.0 models. In most cases, the DoDAF V2.0 Meta-model supports the DoDAF V1.5 data concepts, with one notable exception: Node. Node is a complex, logical concept that is represented with more concrete concepts.
All Viewpoint (AV)
AV-1 Overview and Summary Information
Describes a Project's Visions, Goals, Objectives, Plans, Activities, Events, Conditions, Measures, Effects (Outcomes), and produced objects.
AV-2 Integrated Dictionary
An architectural data repository with definitions of all terms used throughout
Capability Viewpoint (CV)
CV-1 Vision
Addresses the enterprise concerns associated with the overall vision for transformational endeavours and thus defines the strategic context for a group of capabilities. The purpose of the CV-1 is to provide a strategic context for the capabilities described in the Architecture Description.
CV-2 Capability Taxonomy
Captures capability taxonomies. The model presents a hierarchy of capabilities. These capabilities may be presented in the context of a timeline. The CV-2 specifies all the capabilities that are referenced throughout one or more architectures.
CV-3 Capability Phasing
The planned achievement of capability at different points in time or during specific periods of time. The CV-3 shows the capability phasing in terms of the activities, conditions, desired effects, rules complied with, resource consumption and production, and measures, without regard to the performer and location solutions
CV-4 Capability Dependencies
The dependencies between planned capabilities and the definition of logical groupings of capabilities.
CV-5 Capability to Organizational Development Mapping
The fulfillment of capability requirements shows the planned capability deployment and interconnection for a particular Capability Phase. The CV-5 shows the planned solution for the phase in terms of performers and locations and their associated concepts.
CV-6 Capability to Operational Activities Mapping
A mapping between the capabilities required and the operational activities that those capabilities support.
CV-7 Capability to Services Mapping
A mapping between the capabilities and the services that these capabilities enable.
Data and Information Viewpoint (DIV)
DIV-1 Conceptual Data Model
The required high-level data concepts and their relationships.
DIV-2 Logical Data Model
The documentation of the data requirements and structural business process (activity) rules. In DoDAF V1.5, this was the OV-7.
DIV-3 Physical Data Model
The physical implementation format of the Logical Data Model entities, e.g., message formats, file structures, physical schema. In DoDAF V1.5, this was the SV-11.
Note, see Logical data model for discussion of the relationship of these three DIV data models, with comparison of the Conceptual, Logical & Physical Data Models.
Operational Viewpoint (OV)
OV-1 High-Level Operational Concept Graphic
The high-level graphical/textual description of the operational concept.
OV-2 Operational Resource Flow Description
A description of the Resource Flows exchanged between operational activities.
OV-3 Operational Resource Flow Matrix
A description of the resources exchanged and the relevant attributes of the exchanges.
OV-4 Organizational Relationships Chart
The organizational context, role or other relationships among organizations.
OV-5a Operational Activity Decomposition Tree
The capabilities and activities (operational activities) organized in a hierarchical structure.
OV-5b Operational Activity Model
The context of capabilities and activities (operational activities) and their relationships among activities, inputs, and outputs. Additional data can show cost, performers, or other pertinent information.
OV-6a Operational Rules Model
One of three models used to describe activity (operational activity). It identifies business rules that constrain operations.
OV-6b State Transition Description
One of three models used to describe operational activity (activity). It identifies business process (activity) responses to events (usually, very short activities).
OV-6c Event-Trace Description
One of three models used to describe activity (operational activity). It traces actions in a scenario or sequence of events.
Project Viewpoint (PV)
PV-1 Project Portfolio Relationships
It describes the dependency relationships between the organizations and projects and the organizational structures needed to manage a portfolio of projects.
PV-2 Project Timelines
A timeline perspective on programs or projects, with the key milestones and interdependencies.
PV-3 Project to Capability Mapping
A mapping of programs and projects to capabilities to show how the specific projects and program elements help to achieve a capability.
Services Viewpoint (SvcV)
SvcV-1 Services Context Description
The identification of services, service items, and their interconnections.
SvcV-2 Services Resource Flow Description
A description of Resource Flows exchanged between services.
SvcV-3a Systems-Services Matrix
The relationships among or between systems and services in a given Architectural Description.
SvcV-3b Services-Services Matrix
The relationships among services in a given Architectural Description. It can be designed to show relationships of interest, (e.g., service-type interfaces, planned vs. existing interfaces).
SvcV-4 Services Functionality Description
The functions performed by services and the service data flows among service functions (activities).
SvcV-5 Operational Activity to Services Traceability Matrix
A mapping of services (activities) back to operational activities (activities).
SvcV-6 Services Resource Flow Matrix
It provides details of service Resource Flow elements being exchanged between services and the attributes of that exchange.
SvcV-7 Services Measures Matrix
The measures (metrics) of Services Model elements for the appropriate timeframe(s).
SvcV-8 Services Evolution Description
The planned incremental steps toward migrating a suite of services to a more efficient suite or toward evolving current services to a future implementation.
SvcV-9 Services Technology & Skills Forecast
The emerging technologies, software/hardware products, and skills that are expected to be available in a given set of time frames and that will affect future service development.
SvcV-10a Services Rules Model
One of three models used to describe service functionality. It identifies constraints that are imposed on systems functionality due to some aspect of system design or implementation.
SvcV-10b Services State Transition Description
One of three models used to describe service functionality. It identifies responses of services to events.
SvcV-10c Services Event-Trace Description
One of three models used to describe service functionality. It identifies service-specific refinements of critical sequences of events described in the Operational Viewpoint.
Standards Viewpoint (StdV)
StdV-1 Standards Profile
The listing of standards that apply to solution elements. In DoDAF V1.5, this was the TV-1.
StdV-2 Standards Forecast
The description of emerging standards and potential impact on current solution elements, within a set of time frames. In DoDAF V1.5, this was the TV-2.
Systems Viewpoint (SV)
SV-1 Systems Interface Description
The identification of systems, system items, and their interconnections.
SV-2 Systems Resource Flow Description
A description of Resource Flows exchanged between systems.
SV-3 Systems-Systems Matrix
The relationships among systems in a given Architectural Description. It can be designed to show relationships of interest, (e.g., system-type interfaces, planned vs. existing interfaces).
SV-4 Systems Functionality Description
The functions (activities) performed by systems and the system data flows among system functions (activities).
SV-5a Operational Activity to Systems Function Traceability Matrix
A mapping of system functions (activities) back to operational activities (activities).
SV-5b Operational Activity to Systems Traceability Matrix
A mapping of systems back to capabilities or operational activities (activities).
SV-6 Systems Resource Flow Matrix
Provides details of system resource flow elements being exchanged between systems and the attributes of that exchange.
SV-7 Systems Measures Matrix
The measures (metrics) of Systems Model elements for the appropriate timeframe(s).
SV-8 Systems Evolution Description
The planned incremental steps toward migrating a suite of systems to a more efficient suite, or toward evolving a current system to a future implementation.
SV-9 Systems Technology & Skills Forecast
The emerging technologies, software/hardware products, and skills that are expected to be available in a given set of time frames and that will affect future system development.
SV-10a Systems Rules Model
One of three models used to describe system functionality. It identifies constraints that are imposed on systems functionality due to some aspect of system design or implementation.
SV-10b Systems State Transition Description
One of three models used to describe system functionality. It identifies responses of systems to events.
SV-10c Systems Event-Trace Description
One of three models used to describe system functionality. It identifies system-specific refinements of critical sequences of events described in the Operational Viewpoint.
Creating an integrated architecture using DoDAF
The DODAF 2.0 Architects Guide repeated the DOD Instruction 4630.8 definition of an integrated architecture as "An architecture consisting of multiple views facilitating integration and promoting interoperability across capabilities and among integrated architectures. For the purposes of architecture development, the term integrated means that data required in more than one of the architectural models is commonly defined and understood across those models. Integrated architectures are a property or design principle for architectures at all levels: Capability, Component, Solution, and Enterprise (in the context of the DoD Enterprise Architecture (EA) being a federation [of] architectures). In simpler terms, integration is seen in the connection from items common among architecture products, where items shown in one architecture product (such as sites used or systems interfaced or services provided) should have the identical number, name, and meaning appear in related architecture product views."
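As a toy illustration of that integration rule (the same item carrying the identical identifier, name, and meaning in every view that shows it), the sketch below cross-checks two architecture products represented as plain dictionaries. The data, product names, and function are hypothetical and are not part of any DoDAF tool or repository.

```python
def find_integration_mismatches(view_a: dict, view_b: dict) -> list:
    """Return (identifier, name_in_a, name_in_b) for items named differently in the two views."""
    mismatches = []
    for item_id in view_a.keys() & view_b.keys():   # items that appear in both products
        if view_a[item_id] != view_b[item_id]:
            mismatches.append((item_id, view_a[item_id], view_b[item_id]))
    return sorted(mismatches)

# Hypothetical node lists from two related products, keyed by a shared identifier.
ov2_nodes = {"N1": "Operations Center", "N2": "Field Unit"}
sv1_nodes = {"N1": "Operations Center", "N2": "Field Node"}   # the name has drifted

print(find_integration_mismatches(ov2_nodes, sv1_nodes))
# [('N2', 'Field Unit', 'Field Node')]  -> the two products are not integrated on item N2
```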
There are many different approaches for creating an integrated architecture using DoDAF and for determining which products are required. The approach depends on the requirements and the expected results; i.e., what the resulting architecture will be used for.
As one example, the DoDAF v1.0 listed the following products as the "minimum set of products required to satisfy the definition of an OV, SV and TV." One note: while the DoDAF does not list the OV-1 artifact as a core product, its development is strongly encouraged. The sequence of the artifacts listed below gives a suggested order in which the artifacts could be developed. The actual sequence of view generation and their potential customization is a function of the application domain and the specific needs of the effort.
AV-1 : Overview and Summary Information
AV-2 : Integrated Dictionary
OV-1 : High Level Operational Concept Graphic
OV-5 : Operational Activity Model
OV-2 : Operational Node Connectivity Description
OV-3 : Operational Informational Exchange Matrix
SV-1 : System Interface Description
TV-1 : Technical Standards Profile
One concern about the DoDAF is how well these products meet actual stakeholder concerns for any given system of interest. One can view DoDAF products, or at least the 3 views, as ANSI/IEEE 1471-2000 or ISO/IEC 42010 viewpoints. But to build an architecture description that corresponds to ANSI/IEEE 1471-2000 or ISO/IEC 42010, it is necessary to clearly identify the stakeholders and their concerns that map to each selected DoDAF product. Otherwise there is the risk of producing products with no customers.
The figure "DoDAF V1.5 Products Matrix" shows how the DoD Chairman of the Joint Chiefs of Staff Instruction (CJCSI) 6212.01E specifies which DoDAF V1.5 products are required for each type of analysis, in the context of the Net-Ready Key Performance Parameter (NR-KPP):
Initial Capabilities Document (ICD). Documents the need for a materiel solution to a specific capability gap derived from an initial analysis of alternatives executed by the operational user and, as required, an independent analysis of alternatives. It defines the capability gap in terms of the functional area, the relevant range of military operations, desired effects, and time.
Capability Development Document (CDD). A document that captures the information necessary to develop a proposed program(s), normally using an evolutionary acquisition strategy. The CDD outlines an affordable increment of militarily useful, logistically supportable and technically mature capability.
Capability Production Document (CPD). A document that addresses the production elements specific to a single increment of an acquisition program.
Information Support Plan (ISP). The identification and documentation of information needs, infrastructure support, IT and NSS interface requirements and dependencies focusing on net-centric, interoperability, supportability and sufficiency concerns (DODI 4630.8).
Tailored Information Support Plan (TISP). The purpose of the TISP process is to provide a dynamic and efficient vehicle for certain programs (ACAT II and below) to produce requirements necessary for I&S Certification. Select program managers may request to tailor the content of their ISP (ref ss). For programs not designated OSD special interest by ASD (NII)/DOD CIO, the component will make final decision on details of the tailored plan subject to minimums specified in the TISP procedures linked from the CJCSI 6212 resource page and any special needs identified by the J-6 for the I&S certification process.
Representation
Representations for the DoDAF products may be drawn from many diagramming techniques including:
tables
IDEF
Entity-relationship diagrams (ERDs)
UML
SysML
There is a UPDM (Unified Profile for DoDAF and MODAF) effort within the OMG to standardize the representation of DoDAF products when UML is used.
DoDAF describes, in general terms, the representation of the artifacts to be generated, but allows considerable flexibility regarding the specific formats and modeling techniques. The DoDAF deskbook provides examples using traditional systems engineering and data engineering techniques, as well as UML. DoDAF allows latitude in work product format, without favoring one diagramming technique over another.
In addition to graphical representation, there is typically a requirement to provide metadata to the Defense Information Technology Portfolio Repository (DITPR) or other architectural repositories.
Meta-model
DoDAF has a meta-model underpinning the framework, defining the types of modelling elements that can be used in each view and the relationships between them. DoDAF versions 1.0 through 1.5 used the CADM meta-model, which was defined in IDEF1X (then later in UML) with an XML Schema derived from the resulting relational database. From version 2.0, DoDAF has adopted the IDEAS Group foundation ontology as the basis for its new meta-model. This new meta-model is called "DM2", an acronym for "DoDAF Meta-Model". The DM2 is defined at three levels, each of which is important to a particular viewer of Departmental processes:
The conceptual level or Conceptual Data Model (CDM) defines the high-level data constructs from which Architectural Descriptions are created in non-technical terms, so that executives and managers at all levels can understand the data basis of Architectural Description. Represented in the DoDAF V2.0 DIV-1 Viewpoint.
The Logical Data Model (LDM) adds technical information, such as attributes to the CDM and, when necessary, clarifies relationships into an unambiguous usage definition. Represented in the DoDAF V2.0 DIV-2 Viewpoint.
The Physical Exchange Specification (PES) consists of the LDM with general data types specified and implementation attributes (e.g., source, date) added, and then generated as an XSD. Represented in the DoDAF V2.0 DIV-3 Viewpoint.
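As a generic illustration of the conceptual-to-logical-to-physical progression described above (the entity, attribute, and element names here are invented; the actual content of the CDM, LDM, and PES is defined by the DoDAF specification), a single notional entity might move through the three levels like this:

```python
import xml.etree.ElementTree as ET

# Conceptual level: only the high-level concept and its relationships, no technical detail.
conceptual = {"Capability": ["is realized by Performer"]}

# Logical level: attributes and types are added so the concept has an unambiguous definition.
logical = {"Capability": {"name": "string", "description": "string"}}

# Physical level: a concrete exchange format, e.g. XML generated from the logical definition.
def to_exchange_xml(entity: str, values: dict) -> str:
    root = ET.Element(entity)
    for attribute, value in values.items():
        ET.SubElement(root, attribute).text = value
    return ET.tostring(root, encoding="unicode")

print(to_exchange_xml("Capability",
                      {"name": "Example capability",
                       "description": "Invented for illustration only"}))
# <Capability><name>Example capability</name><description>Invented for illustration only</description></Capability>
```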
The purposes of the DM2 are:
Establish and define the constrained vocabulary for description and discourse about DoDAF models (formerly “products”) and their usage in the 6 core processes
Specify the semantics and format for federated EA data exchange between architecture development and analysis tools and architecture databases across the DoD Enterprise Architecture (EA) Community of Interest (COI), and with other authoritative data sources
Support discovery and understandability of EA data:
Discovery of EA data using DM2 categories of information
Understandability of EA data using DM2's precise semantics augmented with linguistic traceability (aliases)
Provide a basis for semantic precision in architectural descriptions to support heterogeneous architectural description integration and analysis in support of core process decision making.
The DM2 defines architectural data elements and enables the integration and federation of Architectural Descriptions. It establishes a basis for semantic (i.e., understanding) consistency within and across Architectural Descriptions. In this manner, the DM2 supports the exchange and reuse of architectural information among JCAs, Components, and Federal and Coalition partners, thus facilitating the understanding and implementation of interoperability of processes and systems. As the DM2 matures to meet the ongoing data requirements of process owners, decision makers, architects, and new technologies, it will evolve to a resource that more completely supports the requirements for architectural data, published in a consistently understandable way, and will enable greater ease for discovering, sharing, and reusing architectural data across organizational boundaries.
To facilitate the use of information at the data layer, the DoDAF describes a set of models for visualizing data through graphic, tabular, or textual means. These views relate to stakeholder requirements for producing an Architectural Description.
Relationship to other architecture frameworks
The UPDM (Unified Profile for DoDAF and MODAF) is an OMG initiative to standardize UML and SysML usage for USA and UK defense architecture frameworks. In addition, the multi-national IDEAS Group, which is supported by Australia, Canada, Sweden, UK, USA, with NATO observers, has launched an initiative to develop a formal ontology for enterprise architectures.
See also
Federal Enterprise Architecture Framework (FEAF)
IDEAS Group
IUID
MODAF Meta-Model
NCOW
References
Further reading
Dennis E. Wisnosky and Joseph Vogel. Dodaf Wizdom: a Practical Guide to Planning, Managing and Executing Projects to Build Enterprise Architectures using the Department of Defense Architecture Framework. Wizdom Systems, Inc., 2004.
Dr. Steven H. Dam (2015). DoD Architecture Framework 2.0: A Guide to Applying Systems Engineering to Develop Integrated, Executable Architectures. CreateSpace Independent Publishing Platform, 2015.
External links
DoDAF Homepage at DoD CIO
DODAF 2.02 pdf, Aug 2010
Volume (Vol) I: Overview and Concepts – Manager’s Guide
Vol II: Architectural Data and Models – Architect’s Guide
Vol III: Meta-model Ontology Foundation and Physical Exchange Specification – Developer’s Guide
Vol IV: Journal - Best Practices
DoDAF v1.5, 23 Apr 2007
Vol I: Definitions and Guidelines pdf
Vol II: Product Descriptions pdf
Vol III: Architecture Data Description pdf
DoDAF V1, 9 Feb 2004
Deskbook
Vol I: Definitions and Guidelines
DoDAF section of Architecture Framework Forum Information resource dedicated to DoDAF as it relates to other architecture frameworks
DoD CMO Business Enterprise Architecture (BEA)
DoD BEA 10.0 Architecture Product Guide
Two Presentations on DoDAF 2.0 from Integrated EA Conferences 2008 and 2009
Department of Defense Information Enterprise Architecture
Metadata Registry
CJCSI 6212.01 Series
CJCSI 6212.01F document
European Space Agency Architectural Framework (ESAAF) - a framework for European space-based Systems of Systems
Architecture Framework
Enterprise architecture frameworks
Systems engineering | Department of Defense Architecture Framework | [
"Engineering"
] | 7,997 | [
"Systems engineering"
] |
1,968,544 | https://en.wikipedia.org/wiki/Cartan%E2%80%93K%C3%A4hler%20theorem | In mathematics, the Cartan–Kähler theorem is a major result on the integrability conditions for differential systems, in the case of analytic functions, for differential ideals I. It is named for Élie Cartan and Erich Kähler.
Meaning
It is not true that merely having dI contained in I is sufficient for integrability. There is a problem caused by singular solutions. The theorem computes certain constants that must satisfy an inequality in order that there be a solution.
Statement
Let (M, I) be a real analytic EDS. Assume that P ⊆ M is a connected, k-dimensional, real analytic, regular integral manifold of I with r(P) ≥ 0 (i.e., the tangent spaces T_pP are "extendable" to higher dimensional integral elements).
Moreover, assume there is a real analytic submanifold R ⊆ M of codimension r(P) containing P and such that T_pR ∩ H(T_pP) has dimension k + 1 for all p ∈ P.
Then there exists a (locally) unique connected, (k + 1)-dimensional, real analytic integral manifold X of I that satisfies P ⊆ X ⊆ R.
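The same statement, written out in LaTeX with the notation used above (here H(T_pP) denotes the polar space of T_pP and r(P) the associated integer from Cartan–Kähler theory):

```latex
\textbf{Theorem (Cartan--K\"ahler).}
Let $(M, I)$ be a real analytic exterior differential system and let
$P \subseteq M$ be a connected, $k$-dimensional, real analytic, regular
integral manifold of $I$ with $r(P) \ge 0$.
Suppose there is a real analytic submanifold $R \subseteq M$ of
codimension $r(P)$ with $P \subseteq R$ such that
\[
  \dim\bigl( T_p R \cap H(T_p P) \bigr) = k + 1
  \qquad \text{for all } p \in P .
\]
Then there exists a (locally) unique connected, $(k+1)$-dimensional,
real analytic integral manifold $X$ of $I$ with
$P \subseteq X \subseteq R$.
```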
Proof and assumptions
The Cauchy-Kovalevskaya theorem is used in the proof, so the analyticity is necessary.
References
Jean Dieudonné, Eléments d'analyse, vol. 4, (1977) Chapt. XVIII.13
R. Bryant, S. S. Chern, R. Gardner, H. Goldschmidt, P. Griffiths, Exterior Differential Systems, Springer Verlag, New York, 1991.
External links
R. Bryant, "Nine Lectures on Exterior Differential Systems", 1999
E. Cartan, "On the integration of systems of total differential equations," transl. by D. H. Delphenich
E. Kähler, "Introduction to the theory of systems of differential equations," transl. by D. H. Delphenich
Partial differential equations
Theorems in analysis | Cartan–Kähler theorem | [
"Mathematics"
] | 363 | [
"Mathematical analysis",
"Theorems in mathematical analysis",
"Mathematical theorems",
"Mathematical problems"
] |
1,968,588 | https://en.wikipedia.org/wiki/Hypervelocity | Hypervelocity is very high velocity, approximately over 3,000 meters per second (11,000 km/h, 6,700 mph, 10,000 ft/s, or Mach 8.8). In particular, hypervelocity is velocity so high that the strength of materials upon impact is very small compared to inertial stresses. Thus, metals and fluids behave alike under hypervelocity impact. An impact under extreme hypervelocity results in vaporization of the impactor and target. For structural metals, hypervelocity is generally considered to be over 2,500 m/s (5,600 mph, 9,000 km/h, 8,200 ft/s, or Mach 7.3). Meteorite craters are also examples of hypervelocity impacts.
Overview
The term "hypervelocity" refers to velocities in the range from a few kilometers per second to some tens of kilometers per second. This is especially relevant in the field of space exploration and military use of space, where hypervelocity impacts (e.g. by space debris or an attacking projectile) can result in anything from minor component degradation to the complete destruction of a spacecraft or missile. The impactor, as well as the surface it hits, can undergo temporary liquefaction. The impact process can generate plasma discharges, which can interfere with spacecraft electronics.
Hypervelocity usually occurs during meteor showers and deep space reentries, as carried out during the Zond, Apollo and Luna programs. Given the intrinsic unpredictability of the timing and trajectories of meteors, space capsules are prime data gathering opportunities for the study of thermal protection materials at hypervelocity (in this context, hypervelocity is defined as greater than escape velocity). Given the rarity of such observation opportunities since the 1970s, the Genesis and Stardust Sample Return Capsule (SRC) reentries as well as the recent Hayabusa SRC reentry have spawned observation campaigns, most notably at NASA's Ames Research Center.
Hypervelocity collisions can be studied by examining the results of naturally occurring collisions (between micrometeorites and spacecraft, or between meteorites and planetary bodies), or they may be performed in laboratories. Currently, the primary tool for laboratory experiments is a light-gas gun, but some experiments have used linear motors to accelerate projectiles to hypervelocity. The properties of metals under hypervelocity have been applied in weapons such as the explosively formed penetrator. The vaporization upon impact and liquefaction of surfaces allow metal projectiles formed under hypervelocity forces to penetrate vehicle armor better than conventional bullets.
NASA studies the effects of simulated orbital debris at the White Sands Test Facility Remote Hypervelocity Test Laboratory (RHTL). Objects smaller than a softball cannot be detected on radar. This has prompted spacecraft designers to develop shields to protect spacecraft from unavoidable collisions. At RHTL, micrometeoroid and orbital debris (MMOD) impacts are simulated on spacecraft components and shields allowing designers to test threats posed by the growing orbital debris environment and evolve shield technology to stay one step ahead. At RHTL, four two-stage light-gas guns propel diameter projectiles to velocities as fast as .
Hypervelocity reentry events
Other definitions of hypervelocity
According to the United States Army, hypervelocity can also refer to the muzzle velocity of a weapon system, with the exact definition dependent upon the weapon in question. When discussing small arms a muzzle velocity of 5,000 ft/s (1524 m/s) or greater is considered hypervelocity, while for tank cannons the muzzle velocity must meet or exceed 3,350 ft/s (1021.08 m/s) to be considered hypervelocity, and the threshold for artillery cannons is 3,500 ft/s (1066.8 m/s).
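These thresholds are straightforward unit conversions (the international foot is defined as exactly 0.3048 m); a quick check:

```python
FT_TO_M = 0.3048   # exact definition of the international foot

muzzle_velocity_thresholds_fps = {
    "small arms": 5000,
    "tank cannon": 3350,
    "artillery cannon": 3500,
}

for weapon, fps in muzzle_velocity_thresholds_fps.items():
    print(f"{weapon}: {fps} ft/s = {fps * FT_TO_M:.2f} m/s")

# small arms: 5000 ft/s = 1524.00 m/s
# tank cannon: 3350 ft/s = 1021.08 m/s
# artillery cannon: 3500 ft/s = 1066.80 m/s
```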
See also
2009 satellite collision
Hypersonic aircraft
Hypersonic flight
Hypersonic
Hypervelocity star
Impact depth#Newton's approximation for the impact depth
Kinetic energy penetrator
Terminal velocity
References
Collision
Materials science
Physical quantities
Space hazards
Spaceflight concepts
Velocity | Hypervelocity | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 871 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Physical quantities",
"Quantity",
"Materials science",
"Motion (physics)",
"Mechanics",
"Vector physical quantities",
"nan",
"Velocity",
"Wikipedia categories named after physical quantities",
"Collision",
"Physical properties"
... |
1,969,364 | https://en.wikipedia.org/wiki/Activated%20alumina | Activated alumina is manufactured from aluminium hydroxide by dehydroxylating it in a way that produces a highly porous material; this material can have a surface area significantly over 200 m2/g. The compound is used as a desiccant (to keep things dry by adsorbing water from the air) and as a filter of fluoride, arsenic and selenium in drinking water. It is made of aluminium oxide (alumina; Al2O3). It has a very high surface-area-to-weight ratio, due to the many "tunnel like" pores that it has. Activated alumina in its phase composition can be represented only by metastable forms (gamma-Al2O3 etc.). Corundum (alpha-Al2O3), the only stable form of aluminum oxide, does not have such a chemically active surface and is not used as a sorbent.
Uses
Catalyst applications
Activated alumina is used for a wide range of adsorbent and catalyst applications including the adsorption of catalysts in polyethylene production, in hydrogen peroxide production, as a selective adsorbent for many chemicals including arsenic, fluoride, in sulfur removal from fluid streams (Claus Catalyst process).
Desiccant
Used as a desiccant, it works by a process called adsorption. The water in the air actually sticks to the alumina itself in between the tiny passages as the air passes through them. The water molecules become trapped so that the air is dried out as it passes through the filter. This process is reversible. If the alumina desiccant is heated to ~200 °C, it will release the trapped water. This process is called regenerating the desiccant.
Fluoride adsorbent
Activated alumina is also widely used to remove fluoride from drinking water. In the US, there are widespread programs to fluoridate drinking water. However, in certain regions, such as the Rajasthan region of India, there is enough fluoride in the water to cause fluorosis. A study from the Harvard School of Public Health found that exposure to high levels of fluoride as a child correlated with lower IQ.
Activated alumina filters can easily reduce fluoride levels from 10 ppm to less than 1 ppm. The amount of fluoride removed from the water being filtered depends on how long the water is actually in contact with the alumina filter media. Basically, the more alumina in the filter, the less fluoride will be in the final, filtered water. Lower temperature water and lower pH water (acidic water) are also filtered more effectively. The ideal pH for treatment is 5.5, which allows for up to a 95% removal rate.
According to research conducted by V. K. Chhabra, Chief Chemist (retd.), P.H.E.D. Rajasthan, activated alumina used as a fluoride filter under field conditions is best regenerated with a solution of lye (sodium hydroxide; NaOH) and sulfuric acid (H2SO4).
The fluoride uptake capacity (FUC) of commercial activated alumina can be up to 700 mg/kg. The FUC using V.K. Chhabra's method can be determined as follows:
Fluoride solution: Dissolve 22.1 g anhydrous NaF in distilled water and dilute to 1,000 mL.
1 mL = 10 mg fluoride.
10 mL/L = 100 mg/L fluoride.
Procedure:
Add 10 g of the AA under test to one litre of simulated distilled water containing 100 mg/L of fluoride and agitate at 100 rpm using the jar test machine. After one hour, switch off the machine and take out the solution. After 5 minutes, carefully decant the supernatant solution and determine the fluoride concentration. Calculate the difference between the original and treated water fluoride concentrations. Multiply the difference by 100 to give the fluoride uptake capacity of the AA in mg/kg.
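Numerically, the procedure reduces to a single mass balance: the fluoride removed from 1 L of test water (in mg) divided by the 0.010 kg of alumina, which is where the "multiply the difference by 100" rule comes from. A short sketch; the 93 mg/L treated-water concentration is an assumed value chosen only so the result matches the roughly 700 mg/kg capacity quoted above:

```python
def fluoride_uptake_capacity(initial_mg_per_L: float, treated_mg_per_L: float,
                             volume_L: float = 1.0, alumina_g: float = 10.0) -> float:
    """Fluoride uptake capacity in mg of fluoride per kg of activated alumina."""
    removed_mg = (initial_mg_per_L - treated_mg_per_L) * volume_L
    return removed_mg / (alumina_g / 1000.0)

# 1 L of 100 mg/L test water and 10 g of AA; assume 93 mg/L fluoride remains after one hour.
print(fluoride_uptake_capacity(100.0, 93.0))   # 700.0 mg/kg
# Equivalently, (100 - 93) x 100 = 700 mg/kg, the same "multiply the difference by 100" shortcut.
```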
Vacuum systems
In high vacuum applications, activated alumina is used as a charge material in fore-line traps to prevent oil generated by rotary vane pumps from back streaming into the system. A baffle of activated alumina can also replace the refrigerated trap often required for diffusion pumps, though this is rarely used.
Biomaterial
Its mechanical properties and non-reactivity in the biological environment allow it to be a suitable material used to cover surfaces in friction in body prostheses (e.g. hip or shoulder prostheses).
Defluoridation
Defluoridation is the downward adjustment of the level of fluoride in drinking water. The activated alumina process is a widely used adsorption method for the defluoridation of drinking water.
See also
Activated carbon
Silica gel
Synthetic Magnesium Silicate
References
Filters
Desiccants | Activated alumina | [
"Physics",
"Chemistry",
"Engineering"
] | 1,055 | [
"Chemical equipment",
"Filters",
"Desiccants",
"Materials",
"Filtration",
"Matter"
] |
1,970,514 | https://en.wikipedia.org/wiki/Tadelakt | Tadelakt () is a waterproof plaster surface used in Moroccan architecture to make baths, sinks, water vessels, interior and exterior walls, ceilings, roofs, and floors. It is made from lime plaster, which is rammed, polished, and treated with soap to make it waterproof and water-repellent. Tadelakt is labour-intensive to install, but durable. Since it is applied as a paste, tadelakt has a soft, undulating character, it can form curves, and it is seamless. Pigment can be added to give it any colour, but deep red is traditional. It may have a shiny or matte finish.
Etymology and history
The term tadelakt, meaning "to rub in", is an Amazighified expression derived from an Arabic word meaning "to rub or massage."
Tadelakt is thought to have evolved from qadad, a similar plaster used in Yemen for millennia that is treated with calcium hydroxide and oils and fats instead of soaps.
Constituents and chemistry
The basic constituents of tadelakt plaster are:
lime plaster (not Portland cement)
in some cases, marble or limestone sand (but not other aggregates)
natural soap (often "black" or olive oil soap) to speed carbonation of the surface and impart water-resistance.
The soap chemically reacts with the lime plaster, forming lime (calcium) soaps. Calcium soaps are insoluble in water, and fairly hard. They are familiar, in areas with calcium-rich ("hard") water, as deposits in bathtubs, sinks, and showers; when soap is mixed with the water's dissolved calcium carbonate/lime, calcium soaps form.
2 C17H35COO−Na+ + Ca2+ → (C17H35COO)2Ca + 2 Na+
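As a quick arithmetic check that the equation above is balanced (writing stearate, C17H35COO−, out in full as C18H35O2−), the following sketch simply counts atoms on each side:

```python
from collections import Counter

def atoms(formula: dict, multiplier: int = 1) -> Counter:
    """Scale an element-count dictionary by a stoichiometric coefficient."""
    return Counter({element: count * multiplier for element, count in formula.items()})

stearate = {"C": 18, "H": 35, "O": 2}          # C17H35COO- written out in full

# Left-hand side: 2 C17H35COO-Na+  +  Ca2+
left = atoms({**stearate, "Na": 1}, 2) + atoms({"Ca": 1})

# Right-hand side: (C17H35COO)2Ca  +  2 Na+
right = atoms(stearate, 2) + atoms({"Ca": 1}) + atoms({"Na": 1}, 2)

print(left == right)   # True: every element appears in equal numbers on both sides
```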
Techniques
Traditional application includes polishing with a river stone and treatment with oleic acid, in the form of olive oil soap, to lend it its final appearance and water resistance.
In Morocco, the traditional application technique:
plaster powder is mixed with water for 12 to 15 hours prior to the addition of pigment.
the plaster is applied in one thick coat with a wooden float, and smoothed with the same.
before the plaster sets, a flat, smooth, hard stone is used to compress the plaster, then a plastic trowel used for the final polish.
it is mechanically polished using stones or abrasives harder than the plaster, providing a smooth, sometimes shiny, finish.
lastly, an olive-oil soap solution is used to seal the plaster
Long-term maintenance of tadelakt requires regularly re-sealing the surface with a soap solution; in the case of qadad roofs, this was traditionally done every few years.
Uses
Tadelakt is the traditional coating of the hammams and bathrooms of palaces and riad residences in Morocco. The restoration of riads in Morocco has led to a resurgence in its use.
In modern times, it has been used outside.
See also
References
External links
Culture of Morocco
Moroccan handicrafts
Architecture in Morocco
Moorish architecture
Islamic architectural elements
Building materials
Interior design
Wallcoverings
Plastering
Arab inventions
Moisture protection | Tadelakt | [
"Physics",
"Chemistry",
"Engineering"
] | 655 | [
"Building engineering",
"Coatings",
"Architecture",
"Construction",
"Materials",
"Plastering",
"Matter",
"Building materials"
] |
1,970,679 | https://en.wikipedia.org/wiki/Saucisson%20%28pyrotechnics%29 | In early military engineering, a saucisson (French for a large, dry-filled sausage) was a primitive type of fuse, consisting of a long tube or hose of cloth or leather, typically about an inch and a half in diameter (37 mm), damp-proofed with pitch and filled with black powder. It was normally laid in a protective wooden trough, and ignited by use of a torch or slow match. Saucissons were used to fire fougasses, petards, mines and camouflets.
Very long fascines were also called saucissons.
Later, in early 20th century mining jargon, a saucisson referred to the flexible casings used for explosives in mine operations.
References
Explosives
Fuses
Military engineering
Mining terminology
Mining equipment
Pyrotechnics | Saucisson (pyrotechnics) | [
"Chemistry",
"Engineering"
] | 166 | [
"Mining equipment",
"Construction",
"Military engineering",
"Explosives",
"Explosions"
] |
1,970,691 | https://en.wikipedia.org/wiki/Aptamer | Aptamers are oligomers of artificial ssDNA, RNA, XNA, or peptide that bind a specific target molecule, or family of target molecules. They exhibit a range of affinities (KD in the pM to μM range), with variable levels of off-target binding and are sometimes classified as chemical antibodies. Aptamers and antibodies can be used in many of the same applications, but the nucleic acid-based structure of aptamers, which are mostly oligonucleotides, is very different from the amino acid-based structure of antibodies, which are proteins. This difference can make aptamers a better choice than antibodies for some purposes (see antibody replacement).
Aptamers are used in biological lab research and medical tests. If multiple aptamers are combined into a single assay, they can measure large numbers of different proteins in a sample. They can be used to identify molecular markers of disease, or can function as drugs, drug delivery systems and controlled drug release systems. They also find use in other molecular engineering tasks.
Most aptamers originate from SELEX, a family of test-tube experiments for finding useful aptamers in a massive pool of different DNA sequences. This process is much like natural selection, directed evolution or artificial selection. In SELEX, the researcher repeatedly selects for the best aptamers from a starting DNA library made of about a quadrillion different randomly generated pieces of DNA or RNA. After SELEX, the researcher might mutate or change the chemistry of the aptamers and do another selection, or might use rational design processes to engineer improvements. Non-SELEX methods for discovering aptamers also exist.
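The "about a quadrillion" figure follows directly from the combinatorics of the randomized region: a library whose members carry N random bases can contain up to 4^N distinct sequences, so a random region of roughly 25 bases already exceeds 10^15. The lengths below are chosen only to show the arithmetic; actual SELEX library designs vary.

```python
def max_library_diversity(random_region_length: int) -> int:
    """Upper bound on the number of distinct sequences for N random positions (A, C, G, T)."""
    return 4 ** random_region_length

for n in (20, 25, 30, 40):
    print(f"{n} random bases -> up to {max_library_diversity(n):.2e} distinct sequences")

# 20 random bases -> up to 1.10e+12 distinct sequences
# 25 random bases -> up to 1.13e+15 distinct sequences   (~a quadrillion)
# 30 random bases -> up to 1.15e+18 distinct sequences
# 40 random bases -> up to 1.21e+24 distinct sequences
```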
Researchers optimize aptamers to achieve a variety of beneficial features. The most important feature is specific and sensitive binding to the chosen target. When aptamers are exposed to bodily fluids, as in serum tests or aptamer therapeutics, it is often important for them to resist digestion by DNA- and RNA-destroying proteins. Therapeutic aptamers often must be modified to clear slowly from the body. Aptamers that change their shape dramatically when they bind their target are useful as molecular switches to turn a sensor on and off. Some aptamers are engineered to fit into a biosensor or in a test of a biological sample. It can be useful in some cases for the aptamer to accomplish a pre-defined level or speed of binding. As the yield of the synthesis used to produce known aptamers shrinks quickly for longer sequences, researchers often truncate aptamers to the minimal binding sequence to reduce the production cost.
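The cost point at the end of the paragraph can be made concrete. In solid-phase oligonucleotide synthesis, each base addition succeeds with some per-step coupling efficiency p, so the fraction of full-length product falls roughly as p^(n-1) for an n-mer; the 99% efficiency used below is an assumed, typical-order value rather than a quoted specification.

```python
def full_length_fraction(n_bases: int, step_efficiency: float = 0.99) -> float:
    """Approximate fraction of full-length product after n - 1 coupling steps."""
    return step_efficiency ** (n_bases - 1)

for length in (20, 40, 80, 100):
    print(f"{length}-mer: ~{full_length_fraction(length):.0%} full-length product")

# 20-mer: ~83% full-length product
# 40-mer: ~68% full-length product
# 80-mer: ~45% full-length product
# 100-mer: ~37% full-length product
```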
Etymology
The word "aptamer" is a neologism coined by Andrew Ellington and Jack Szostak in their first publication on the topic. They did not provide a precise definition, stating "We have termed these individual RNA sequences 'aptamers', from the Latin 'aptus', to fit."
The word itself, however, derives from the Greek word ἅπτω, to connect or fit (as used by Homer (c. 8th century BC)), and μέρος, a component of something larger.
Classification
A typical aptamer is a synthetically generated ligand exploiting the combinatorial diversity of DNA, RNA, XNA, or peptide to achieve strong, specific binding for a particular target molecule or family of target molecules. Aptamers are occasionally classified as "chemical antibodies" or "antibody mimics". However, most aptamers are small, with a molecular weight of 6-30 kDa, in contrast to the 150 kDa size of antibodies, and contain one binding site rather than the two matching antigen binding regions of a typical antibody.
History
Since their first application in 1967, directed evolution methodologies have been used to develop biomolecules with new properties and functions. Early examples include the modification of the bacteriophage Qbeta replication system and the generation of ribozymes with modified cleavage activity.
In 1990, two teams independently developed and published SELEX (Systematic Evolution of Ligands by EXponential enrichment) methods and generated RNA aptamers: the lab of Larry Gold, using the term SELEX for their process of selecting RNA ligands against T4 DNA polymerase and the lab of Jack Szostak, selecting RNA ligands against various organic dyes. Two years later, the Szostak lab and Gilead Sciences, acting independently of one another, used in vitro selection schemes to generate DNA aptamers for organic dyes and human thrombin, respectively. In 2001, SELEX was automated by J. Colin Cox in the Ellington lab, reducing the duration of a weeks-long selection experiment to just three days.
In 2002, two groups led by Ronald Breaker and Evgeny Nudler published the first definitive evidence for a riboswitch, a nucleic acid-based genetic regulatory element, the existence of which had previously been suspected. Riboswitches possess similar molecular recognition properties to aptamers. This discovery added support to the RNA World hypothesis, a postulated stage in time in the origin of life on Earth.
Properties
Structure
Most aptamers are based on a specific oligomer sequence of 20-100 bases and 3-20 kDa. Some have chemical modifications for functional enhancements or compatibility with larger engineered molecular systems. DNA, RNA, XNA, and peptide aptamer chemistries can each offer distinct profiles in terms of shelf stability, durability in serum or in vivo, specificity and sensitivity, cost, ease of generation, amplification, and characterization, and familiarity to users. Typically, DNA- and RNA-based aptamers exhibit low immunogenicity, are amplifiable via Polymerase Chain Reaction (PCR), and have complex secondary structure and tertiary structure. DNA- and XNA-based aptamers exhibit superior shelf stability. XNA-based aptamers can introduce additional chemical diversity to increase binding affinity or greater durability in serum or in vivo.
As 22 genetically-encoded and over 500 naturally-occurring amino acids exist, peptide aptamers, as well as antibodies, have much greater potential combinatorial diversity per unit length relative to the four nucleotide bases of DNA or RNA. Chemical modifications of nucleic acid bases or backbones increase the chemical diversity of standard nucleic acid bases.
Split aptamers are composed of two or more DNA strands that are pieces of a larger parent aptamer that has been broken in two by a molecular nick. The ability of each component strand to bind targets will depend on the location of the nick, as well as the secondary structures of the daughter strands. The presence of a target molecule supports the joining of DNA fragments. This can be used as the basis for biosensors. Once assembled, the two separate DNA strands can be ligated into a single strand.
Unmodified aptamers are cleared rapidly from the bloodstream, with a half-life of seconds to hours. This is mainly due to nuclease degradation, which physically destroys the aptamers, as well as clearance by the kidneys, a result of the aptamer's low molecular weight and size. Several modifications, such as 2'-fluorine-substituted pyrimidines and polyethylene glycol (PEG) linkage, permit a serum half-life of days to weeks. PEGylation can add sufficient mass and size to prevent clearance by the kidneys in vivo. Unmodified aptamers can treat coagulation disorders. The problem of clearance and nuclease digestion is diminished when they are applied to the eye, where there is a lower concentration of nuclease and the rate of clearance is lower. Rapid clearance from serum can also be useful in some applications, such as in vivo diagnostic imaging.
In a study on aptamers designed to bind with proteins associated with Ebola infection, a comparison was made among three aptamers isolated for their ability to bind the target protein EBOV sGP. Although these aptamers vary in both sequence and structure, they exhibit remarkably similar relative affinities for sGP from EBOV and SUDV, as well as EBOV GP1.2. Notably, these aptamers demonstrated a high degree of specificity for the GP gene products. One aptamer, in particular, proved effective as a recognition element in an electrochemical sensor, enabling the detection of sGP and GP1.2 in solution, as well as GP1.2 within a membrane context.
The results of this research point to the intriguing possibility that certain regions on protein surfaces may possess aptatropic qualities. Identifying the key features of such sites, in conjunction with improved 3-D structural predictions for aptamers, holds the potential to enhance the accuracy of predicting aptamer interaction sites on proteins. This, in turn, may help identify aptamers with a heightened likelihood of binding proteins with high affinity, as well as shed light on protein mutations that could significantly impact aptamer binding.
This comprehensive understanding of the structure-based interactions between aptamers and proteins is vital for refining the computational predictability of aptamer-protein binding. Moreover, it has the potential to eventually eliminate the need for the experimental SELEX protocol.
Targets
Aptamer targets can include small molecules and heavy metal ions, larger ligands such as proteins, and even whole cells. These targets include lysozyme, thrombin, human immunodeficiency virus trans-acting responsive element (HIV TAR), hemin, interferon γ, vascular endothelial growth factor (VEGF), prostate specific antigen (PSA), dopamine, and the non-classical oncogene, heat shock factor 1 (HSF1).
Aptamers have been generated against cancer cells, prions, bacteria, and viruses. Viral targets of aptamers include influenza A and B viruses, Respiratory syncytial virus (RSV), SARS coronavirus (SARS-CoV) and SARS-CoV-2.
Aptamers may be particularly useful for environmental science proteomics. Antibodies, like other proteins, are more difficult to sequence than nucleic acids. They are also costly to maintain and produce, and are at constant risk of contamination, as they are produced via cell culture or are harvested from animal serum. For this reason, researchers interested in little-studied proteins and species may find that companies will not produce, maintain, or adequately validate the quality of antibodies against their target of interest. By contrast, aptamers are simple to sequence and cost nothing to maintain, as their exact structure can be stored digitally and synthesized on demand. This may make them more economically feasible as research tools for underfunded biological research subjects. Aptamers exist for plant compounds, such as theophylline (found in tea) and abscisic acid (a plant immune hormone). An aptamer against α-amanitin (the toxin that causes lethal Amanita poisoning) has been developed, an example of an aptamer against a mushroom target.
Aptamer applications can be roughly grouped into sensing, therapeutic, reagent production, and engineering categories. Sensing applications are important in environmental, biomedical, epidemiological, biosecurity, and basic research applications, where aptamers act as probes in assays, imaging methods, diagnostic assays, and biosensors. In therapeutic applications and precision medicine, aptamers can function as drugs, as targeted drug delivery vehicles, as controlled release mechanisms, and as reagents for drug discovery via high-throughput screening for small molecules and proteins. Aptamers have application for protein production monitoring, quality control, and purification. They can function in molecular engineering applications as a way to modify proteins, such as enhancing DNA polymerase to make PCR more reliable.
Because the affinity of the aptamer also affects its dynamic range and limit of detection, aptamers with a lower affinity may be desirable when assaying high concentrations of a target molecule. Affinity chromatography also depends on the ability of the affinity reagent, such as an aptamer, to bind and release its target, and lower affinities may aid in the release of the target molecule. Hence, specific applications determine the useful range for aptamer affinity.
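As a simple way to see how affinity sets the useful concentration range, the sketch below evaluates the standard single-site binding isotherm, fraction bound = [T] / (KD + [T]); the dissociation constants and target concentrations are arbitrary example values, not data for any particular aptamer.

```python
def fraction_bound(target_nM, kd_nM):
    """Single-site equilibrium binding isotherm: theta = [T] / (Kd + [T])."""
    return target_nM / (kd_nM + target_nM)

# Example values only: a 1 nM-affinity aptamer is nearly saturated at 10 nM target,
# while a 1 uM-affinity aptamer still responds across much higher concentrations.
for kd in (1.0, 1000.0):
    print(kd, [round(fraction_bound(c, kd), 2) for c in (1.0, 10.0, 100.0, 1000.0)])
```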
Antibody replacement
Aptamers can replace antibodies in many biotechnology applications. In laboratory research and clinical diagnostics, they can be used in aptamer-based versions of immunoassays including enzyme-linked immunosorbent assay (ELISA), western blot, immunohistochemistry (IHC), and flow cytometry. As therapeutics, they can function as agonists or antagonists of their ligand. While antibodies are a familiar technology with a well-developed market, aptamers are a relatively new technology to most researchers, and aptamers have been generated against only a fraction of important research targets. Unlike antibodies, unmodified aptamers are more susceptible to nuclease digestion in serum and renal clearance in vivo. Aptamers are much smaller in size and mass than antibodies, which could be a relevant factor in choosing which is best suited for a given application. When aptamers are available for a particular application, their advantages over antibodies include potentially lower immunogenicity, greater replicability and lower cost, a greater level of control due to the in vitro selection conditions, and capacity to be efficiently engineered for durability, specificity, and sensitivity.
In addition, aptamers contribute to reduction of research animal use. While antibodies often rely on animals for initial discovery, as well as for production in the case of polyclonal antibodies, both the selection and production of aptamers is typically animal-free. However, phage display methods allow for selection of antibodies in vitro, followed by production from a monoclonal cell line, avoiding the use of animals entirely.
Controlled release of therapeutics
The ability of aptamers to reversibly bind molecules such as proteins has generated increasing interest in using them to facilitate controlled release of therapeutic biomolecules, such as growth factors. This can be accomplished by tuning the binding strength to passively release the growth factors, along with active release via mechanisms such as hybridization of the aptamer with complementary oligonucleotides or unfolding of the aptamer due to cellular traction forces.
Tissue Engineering Application
Aptamers, known for their ability to bind specific molecules reversibly, have been used in 3D bioprinting tissues to precisely deliver growth factors to promote vascularization. This controlled delivery allows growth factors to be released at the right place and time, encouraging the formation of localized and complex vascular networks. Additionally, the properties of these networks can be fine-tuned by adjusting how growth factors are released over time, making this approach a powerful tool for creating vascularized engineered tissues.
AptaBiD
AptaBiD (Aptamer-Facilitated Biomarker Discovery) is an aptamer-based method for biomarker discovery.
Peptide Aptamers
While most aptamers are based on DNA, RNA, or XNA, peptide aptamers are artificial proteins selected or engineered to bind specific target molecules.
Structure
Peptide aptamers consist of one or more peptide loops of variable sequence displayed by a protein scaffold. Derivatives known as tadpoles, in which peptide aptamer "heads" are covalently linked to unique sequence double-stranded DNA "tails", allow quantification of scarce target molecules in mixtures by PCR (using, for example, the quantitative real-time polymerase chain reaction) of their DNA tails. The peptides that form the aptamer variable regions are synthesized as part of the same polypeptide chain as the scaffold and are constrained at their N and C termini by linkage to it. This double structural constraint decreases the diversity of the 3D structures that the variable regions can adopt, and this reduction in structural diversity lowers the entropic cost of molecular binding when interaction with the target causes the variable regions to adopt a uniform structure.
Selection
The most common peptide aptamer selection system is the yeast two-hybrid system. Peptide aptamers can also be selected from combinatorial peptide libraries constructed by phage display and other surface display technologies such as mRNA display, ribosome display, bacterial display and yeast display. These experimental procedures are also known as biopanning. All the peptides panned from combinatorial peptide libraries have been stored in the MimoDB database.
Applications
Libraries of peptide aptamers have been used as "mutagens", in studies in which an investigator introduces a library that expresses different peptide aptamers into a cell population, selects for a desired phenotype, and identifies those aptamers that cause the phenotype. The investigator then uses those aptamers as baits, for example in yeast two-hybrid screens to identify the cellular proteins targeted by those aptamers. Such experiments identify particular proteins bound by the aptamers, and protein interactions that the aptamers disrupt, to cause the phenotype. In addition, peptide aptamers derivatized with appropriate functional moieties can cause specific post-translational modification of their target proteins, or change the subcellular localization of the targets.
Industry and Research Community
Commercial products and companies based on aptamers include the drug Macugen (pegaptanib) and the clinical diagnostic company SomaLogic. The International Society on Aptamers (INSOAP), a professional society for the aptamer research community, publishes a journal devoted to the topic, Aptamers. Apta-index is a current database cataloging and simplifying the ordering process for over 700 aptamers.
See also
References
Further reading
Cho EJ, Lee JW, Ellington, AD
Genetics techniques
Nucleic acids
Peptides
Biotechnology | Aptamer | [
"Chemistry",
"Engineering",
"Biology"
] | 3,713 | [
"Genetics techniques",
"Biomolecules by chemical classification",
"Genetic engineering",
"Biotechnology",
"nan",
"Molecular biology",
"Peptides",
"Nucleic acids"
] |
1,971,410 | https://en.wikipedia.org/wiki/Electronic%20organizer | An electronic organizer (or electric organizer) is a small calculator-sized computer, often with an built-in diary application and other functions such as an address book and calendar, replacing paper-based personal organizers. Typically, it has a small alphanumeric keypad and an LCD screen of one, two, or three lines.
They were very popular, especially with businessmen, during the 1990s, but because of the advent of personal digital assistants and palmtop PCs in the 1990s, as well as smartphones in the 2010s, all of which have a larger set of features, electronic organizers are today mostly of interest for research purposes. One of the leading research topics is the study of how such devices can help people with mental disabilities in their daily lives. Electronic organizers have more recently been used to give people with Alzheimer's disease a visual representation of their schedule.
History
The electronic diary or organizer was first patented by Indian businessman Satyanarayan Pitroda in 1975.
Casio digital diary
Casio digital diaries were produced by Casio in the early and mid 1990s, but have since been entirely superseded by mobile phones and PDAs.
Other electronic organizers
While Casio was a major player in the field of electronic organizers, there were many other ideas, patent applications, and manufacturers. Rolodex, widely known for its index card holders in the 1980s; Sharp Electronics, mostly known for its printers and audio-visual equipment; and Royal Electronics were all large contributors to the electronic organizer in its heyday.
Features
Telephone directory
Schedule keeper: Keep track of appointments.
Memo function: Store text data such as price lists, airplane schedules, and more.
To do list: Keep track of daily tasks, checking off items as you complete them.
World time: Find out the current time in virtually any location on the globe.
Secret memory area: The secret memory area keeps personal data private. Once a password is registered, data is locked away until the password is used to access the secret area.
Alarm
Metric conversion function: Conversion between metric units and another measurement unit.
Currency conversion function
Game: Some machines included a game such as Poker or Blackjack.
See also
Pocket electronic dictionary
Personal digital assistant (PDA)
Smartphone
Pocket computer
References
External links
Mobile computers | Electronic organizer | [
"Technology"
] | 460 | [
"Mobile computer stubs",
"Mobile technology stubs"
] |
1,972,009 | https://en.wikipedia.org/wiki/3-Hydroxy-3-methylglutaryl-CoA%20lyase%20deficiency | 3-Hydroxy-3-methylglutaryl-CoA lyase deficiency, (HMGCLD) also known as HMGCL deficiency, HMG-CoA lyase deficiency, or hydroxymethylglutaric aciduria, is an uncommon autosomal recessive inborn error in ketone body production and leucine breakdown caused by HMGCL gene mutations. HMGCL, located on chromosome 1p36.11's short arm, codes for HMG-CoA lyase, which aids in the metabolism of dietary proteins by converting HMG-CoA into acetyl-CoA and acetoacetate.
3-Hydroxy-3-methylglutaryl-CoA lyase deficiency presents in various ways, from severe neonatal symptoms to adult symptoms. Symptoms include frequent vomiting, convulsions, and decreased alertness. Laboratory results include higher plasma/serum transaminase activity, hyperammonemia, acidosis, hypoglycemia, and an increased anion gap.
3-Hydroxy-3-methylglutaryl-CoA lyase deficiency can be identified during newborn screening using tandem mass spectrometry, and is confirmed by enzyme activity testing in lymphocytes, immortalized lymphoblastoid cells, or fibroblasts, as well as HMGCL gene mutation studies.
There are no controlled treatment studies for 3-Hydroxy-3-methylglutaryl-CoA lyase deficiency, making it difficult to determine the need for specific diet or carnitine supplements. The main therapy is avoiding fasting, with L-carnitine supplementation potentially detoxifying and preventing secondary insufficiency.
Signs and symptoms
3-Hydroxy-3-methylglutaryl-CoA lyase deficiency can appear in a variety of ways in terms of clinical presentation, from a severe neonatal onset with potentially fatal consequences to an adult presentation. Clinical signs of 3-Hydroxy-3-methylglutaryl-CoA lyase deficiency appear either early in the neonatal stage or later in the first year of life. Typically, nonspecific symptoms such as frequent vomiting, convulsions, and decreased alertness are displayed by patients. Typical laboratory results include higher plasma/serum transaminase activity, hyperammonemia, acidosis, hypoglycemia, and an increased anion gap.
Causes
3-Hydroxy-3-methylglutaryl-CoA lyase deficiency is the result of HMGCL gene mutations. HMGCL is found at band 1p36.11 on the short arm of chromosome 1 and codes for the enzyme 3-hydroxy-3-methylglutaryl-coenzyme A lyase (HMG-CoA lyase). This mitochondrial enzyme contributes to the metabolism of dietary proteins by converting HMG-CoA into acetyl-CoA and acetoacetate, which is the last stage of the breakdown of leucine and fat for energy. As a result, the body is unable to produce ketone bodies, which are necessary for generating energy during fasting. 3-Hydroxy-3-methylglutaryl-CoA lyase deficiency is passed down as an autosomal recessive trait.
Mechanism
The pathophysiology of 3-Hydroxy-3-methylglutaryl-CoA lyase deficiency, like that of many other inborn errors of metabolism, can be explained by the accumulation of potentially harmful metabolites (from leucine) and a lack of products (ketone bodies). Counterregulatory compensation for hypoglycemia is severely impaired because the enzyme defect affects both leucine catabolism and fat oxidation, which results in secondary metabolic dysfunction. Metabolite levels in the leucine oxidation pathway may be significantly raised, including 3-MGL and 3-HIVA. Additionally, MRI spectroscopy in patients has shown 3-HIVA and 3-HMG, suggesting that these proximal metabolites may play a role in pathogenesis. Intramitochondrial buildup of acetyl-CoA can also deplete the free coenzyme A available for other activities. The relationship between 3-MGC accumulation as a measure of mitochondrial malfunction and leucine oxidation in terms of symptomatology is still unknown.
Diagnosis
Since 3-hydroxy isovaleryl carnitine (C5-OH) is typically elevated in this condition, 3-Hydroxy-3-methylglutaryl-CoA lyase deficiency can be identified during newborn screening by testing it using tandem mass spectrometry methodology. Enzyme activity testing in lymphocytes, immortalized lymphoblastoid cells, or fibroblasts, as well as HMGCL gene mutation studies, may confirm the diagnosis of 3-Hydroxy-3-methylglutaryl-CoA lyase deficiency.
Treatment
As with other uncommon inherited metabolic illnesses, there are no controlled treatment studies for 3-Hydroxy-3-methylglutaryl-CoA lyase deficiency. Consequently, it is impossible to make any judgments about whether a particular diet or carnitine supplements are required. Clinical reports and pathobiochemical considerations suggest that the mainstay of therapy is avoiding fasting. L-carnitine supplementation may have detoxifying properties, prevent intracellular loss of free coenzyme A, and prevent secondary L-carnitine insufficiency.
Outlook
The overall mortality rate of 3-Hydroxy-3-methylglutaryl-CoA lyase deficiency is 16%.
Epidemiology
The incidence of 3-Hydroxy-3-methylglutaryl-CoA lyase deficiency is fewer than 1/100,000 live births.
History
3-Hydroxy-3-methylglutaryl-CoA lyase deficiency was initially reported in 1976, and the gene was discovered and cloned in 1993. The first case in the literature was published in Western Australia in 1976, with usual findings of hypoglycemia and acidosis.
See also
HMG-CoA
Mevalonate pathway
References
Further reading
External links
Amino acid metabolism disorders
Cholesterol and steroid metabolism disorders
Lipid metabolism
Rare diseases | 3-Hydroxy-3-methylglutaryl-CoA lyase deficiency | [
"Chemistry"
] | 1,297 | [
"Lipid biochemistry",
"Lipid metabolism",
"Metabolism"
] |
303,101 | https://en.wikipedia.org/wiki/Structural%20genomics | Structural genomics seeks to describe the 3-dimensional structure of every protein encoded by a given genome. This genome-based approach allows for a high-throughput method of structure determination by a combination of experimental and modeling approaches. The principal difference between structural genomics and traditional structural prediction is that structural genomics attempts to determine the structure of every protein encoded by the genome, rather than focusing on one particular protein. With full-genome sequences available, structure prediction can be done more quickly through a combination of experimental and modeling approaches, especially because the availability of large number of sequenced genomes and previously solved protein structures allows scientists to model protein structure on the structures of previously solved homologs.
Because protein structure is closely linked with protein function, structural genomics has the potential to inform knowledge of protein function. In addition to elucidating protein functions, structural genomics can be used to identify novel protein folds and potential targets for drug discovery. Structural genomics involves taking a large number of approaches to structure determination, including experimental methods using genomic sequences or modeling-based approaches based on sequence or structural homology to a protein of known structure or based on chemical and physical principles for a protein with no homology to any known structure.
As opposed to traditional structural biology, the determination of a protein structure through a structural genomics effort often (but not always) comes before anything is known regarding the protein function. This raises new challenges in structural bioinformatics, i.e. determining protein function from its 3D structure.
Structural genomics emphasizes high throughput determination of protein structures. This is performed in dedicated centers of structural genomics.
While most structural biologists pursue structures of individual proteins or protein groups, specialists in structural genomics pursue structures of proteins on a genome wide scale. This implies large-scale cloning, expression and purification. One main advantage of this approach is economy of scale. On the other hand, the scientific value of some resultant structures is at times questioned. A Science article from January 2006 analyzes the structural genomics field.
One advantage of structural genomics, such as the Protein Structure Initiative, is that the scientific community gets immediate access to new structures, as well as to reagents such as clones and protein. A disadvantage is that many of these structures are of proteins of unknown function and do not have corresponding publications. This requires new ways of communicating this structural information to the broader research community. The Bioinformatics core of the Joint Center for Structural Genomics (JCSG) has recently developed a wiki-based approach, the Open Protein Structure Annotation Network (TOPSAN), for annotating protein structures emerging from high-throughput structural genomics centers.
Goals
One goal of structural genomics is to identify novel protein folds. Experimental methods of protein structure determination require proteins that express and/or crystallize well, which may inherently bias the kinds of proteins folds that this experimental data elucidate. A genomic, modeling-based approach such as ab initio modeling may be better able to identify novel protein folds than the experimental approaches because they are not limited by experimental constraints.
Protein function depends on 3-D structure and these 3-D structures are more highly conserved than sequences. Thus, the high-throughput structure determination methods of structural genomics have the potential to inform our understanding of protein functions. This also has potential implications for drug discovery and protein engineering. Furthermore, every protein that is added to the structural database increases the likelihood that the database will include homologous sequences of other unknown proteins. The Protein Structure Initiative (PSI) is a multifaceted effort funded by the National Institutes of Health with various academic and industrial partners that aims to increase knowledge of protein structure using a structural genomics approach and to improve structure-determination methodology.
Methods
Structural genomics takes advantage of completed genome sequences in several ways in order to determine protein structures. The gene sequence of the target protein can also be compared to a known sequence and structural information can then be inferred from the known protein's structure. Structural genomics can be used to predict novel protein folds based on other structural data. Structural genomics can also take modeling-based approach that relies on homology between the unknown protein and a solved protein structure.
de novo methods
Completed genome sequences allow every open reading frame (ORF), the part of a gene that is likely to contain the sequence for the messenger RNA and protein, to be cloned and expressed as protein. These proteins are then purified and crystallized, and then subjected to one of two types of structure determination: X-ray crystallography and nuclear magnetic resonance (NMR). The whole genome sequence allows for the design of every primer required in order to amplify all of the ORFs, clone them into bacteria, and then express them. By using a whole-genome approach to this traditional method of protein structure determination, all of the proteins encoded by the genome can be expressed at once. This approach allows for the structural determination of every protein that is encoded by the genome.
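As a toy illustration of the first step, the sketch below scans a single reading frame of one DNA strand for ATG-to-stop stretches; real ORF-calling pipelines examine all six frames and apply further length and annotation filters, so this is only a schematic.

```python
def find_orfs(seq, min_codons=30):
    """Return (start, end) indices of ATG...stop stretches in frame 0 of one strand."""
    stops = {"TAA", "TAG", "TGA"}
    orfs, start = [], None
    for i in range(0, len(seq) - 2, 3):
        codon = seq[i:i + 3]
        if start is None and codon == "ATG":
            start = i
        elif start is not None and codon in stops:
            if (i - start) // 3 >= min_codons:
                orfs.append((start, i + 3))
            start = None
    return orfs

# A made-up sequence: one ORF of 41 codons plus its stop codon.
print(find_orfs("ATG" + "GCT" * 40 + "TAA", min_codons=10))  # [(0, 126)]
```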
Modelling-based methods
ab initio modeling
This approach uses protein sequence data and the chemical and physical interactions of the encoded amino acids to predict the 3-D structures of proteins with no homology to solved protein structures. One highly successful method for ab initio modeling is the Rosetta program, which divides the protein into short segments and arranges short polypeptide chain into a low-energy local conformation. Rosetta is available for commercial use and for non-commercial use through its public program, Robetta.
Sequence-based modeling
This modeling technique compares the gene sequence of an unknown protein with sequences of proteins with known structures. Depending on the degree of similarity between the sequences, the structure of the known protein can be used as a model for solving the structure of the unknown protein. Highly accurate modeling is considered to require at least 50% amino acid sequence identity between the unknown protein and the solved structure. 30–50% sequence identity gives a model of intermediate accuracy, and sequence identity below 30% gives low-accuracy models. It has been predicted that at least 16,000 protein structures will need to be determined in order for all structural motifs to be represented at least once and thus allowing the structure of any unknown protein to be solved accurately through modeling. One disadvantage of this method, however, is that structure is more conserved than sequence and thus sequence-based modeling may not be the most accurate way to predict protein structures.
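A minimal sketch of the percent-identity criterion described above, computed over a pair of already-aligned sequences (real pipelines first build the alignment, for example with BLAST); the example strings are invented, and the accuracy bands simply echo the thresholds quoted in the text.

```python
def percent_identity(aligned_a, aligned_b):
    """Percent identity over aligned positions, ignoring gap-gap columns."""
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b) if not (a == "-" and b == "-")]
    matches = sum(a == b and a != "-" for a, b in pairs)
    return 100.0 * matches / len(pairs)

def model_accuracy(identity):
    """Rough accuracy bands quoted in the text (>=50% high, 30-50% intermediate)."""
    if identity >= 50:
        return "high"
    if identity >= 30:
        return "intermediate"
    return "low"

pid = percent_identity("MKT-LLIV", "MKTALLIA")
print(round(pid, 1), model_accuracy(pid))  # 75.0 high
```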
Threading
Threading bases structural modeling on fold similarities rather than sequence identity. This method may help identify distantly related proteins and can be used to infer molecular functions.
Examples of structural genomics
There are currently a number of on-going efforts to solve the structures for every protein in a given proteome.
Thermotoga maritima proteome
One current goal of the Joint Center for Structural Genomics (JCSG), a part of the Protein Structure Initiative (PSI) is to solve the structures for all the proteins in Thermotoga maritima, a thermophillic bacterium. T. maritima was selected as a structural genomics target based on its relatively small genome consisting of 1,877 genes and the hypothesis that the proteins expressed by a thermophilic bacterium would be easier to crystallize.
Lesley et al. used Escherichia coli to express all the open reading frames (ORFs) of T. maritima. These proteins were then crystallized and structures were determined for successfully crystallized proteins using X-ray crystallography. Among other structures, this structural genomics approach allowed for the determination of the structure of the TM0449 protein, which was found to exhibit a novel fold as it did not share structural homology with any known protein.
Mycobacterium tuberculosis proteome
The goal of the TB Structural Genomics Consortium is to determine the structures of potential drug targets in Mycobacterium tuberculosis, the bacterium that causes tuberculosis. The development of novel drug therapies against tuberculosis are particularly important given the growing problem of multi-drug-resistant tuberculosis.
The fully sequenced genome of M. tuberculosis has allowed scientists to clone many of these protein targets into expression vectors for purification and structure determination by X-ray crystallography. Studies have identified a number of target proteins for structure determination, including extracellular proteins that may be involved in pathogenesis, iron-regulatory proteins, current drug targets, and proteins predicted to have novel folds. So far, structures have been determined for 708 of the proteins encoded by M. tuberculosis.
Protein structure databases and classifications
Protein Data Bank (PDB): repository for protein sequence and structural information
UniProt: provides sequence and functional information
Structural Classification of Proteins (SCOP Classifications): hierarchical-based approach
Class, Architecture, Topology and Homologous superfamily (CATH): hierarchical-based approach
See also
Hi-C (genomic analysis technique)
Genomics
Omics
Structural proteomics
Protein Structure Initiative
References
Further reading
External links
Protein Structure Initiative (PSI)
PSI Structural Biology Knowledgebase: A Nature Gateway
Genomics
Genome projects
Structural biology
Bioinformatics | Structural genomics | [
"Chemistry",
"Engineering",
"Biology"
] | 1,857 | [
"Biological engineering",
"Bioinformatics",
"Genome projects",
"Structural biology",
"Biochemistry"
] |
303,544 | https://en.wikipedia.org/wiki/Lawrence%20Bragg | Sir William Lawrence Bragg (31 March 1890 – 1 July 1971), known as Lawrence Bragg, was an Australian-born British physicist and X-ray crystallographer, discoverer (1912) of Bragg's law of X-ray diffraction, which is basic for the determination of crystal structure. He was joint recipient (with his father, William Henry Bragg) of the Nobel Prize in Physics in 1915, "For their services in the analysis of crystal structure by means of X-rays"; an important step in the development of X-ray crystallography.
Bragg was knighted in 1941. As of 2024, he is the youngest ever Nobel laureate in physics, or in any science category, having received the award at the age of 25. Bragg was the director of the Cavendish Laboratory, Cambridge, when the discovery of the structure of DNA was reported by James D. Watson and Francis Crick in February 1953.
Biography
Early years
Bragg was born in Adelaide, South Australia to Sir William Henry Bragg (1862–1942), Elder Professor of Mathematics and Physics at the University of Adelaide, and Gwendoline (1869–1929), daughter of Sir Charles Todd, government astronomer of South Australia.
In 1900, Bragg was a student at Queen's School, North Adelaide, followed by five years at St Peter's College, Adelaide. He went to the University of Adelaide at the age of 16 to study mathematics, chemistry and physics, graduating in 1908. In the same year his father accepted the Cavendish chair of physics at the University of Leeds, and brought the family to England. Bragg entered Trinity College, Cambridge in the autumn of 1909 and received a major scholarship in mathematics, despite taking the exam while in bed with pneumonia. After initially excelling in mathematics, he transferred to the physics course in the later years of his studies, and graduated with first class honours in 1911. In 1914 Bragg was elected to a Fellowship at Trinity College – a Fellowship at a Cambridge college involves the submission and defence of a thesis.
Among Bragg's other interests was shell collecting; his personal collection amounted to specimens from some 500 species; all personally collected from South Australia. He discovered a new species of cuttlefish – Sepia braggi, named for him by Joseph Verco.
Career
X-rays and the Bragg equation
The nature of X-rays was then unknown: his father argued that X-rays are streams of particles, while others argued that they are waves. Max von Laue directed an X-ray beam at a crystal in front of a photographic plate; alongside the spot where the beam struck there were additional spots from deflected rays – hence X-rays are waves. In 1912, as a first-year research student at Cambridge, W. L. Bragg, while strolling by the river, had the insight that crystals made from parallel sheets of atoms would not diffract X-ray beams striking their surface at most angles, because X-rays deflected by collisions with atoms would be out of phase and cancel one another out. However, when the beam struck at an angle for which the extra distance travelled between successive atomic sheets equalled a whole number of wavelengths, the deflected rays would be in phase and produce a spot on a nearby film. From this insight he wrote the simple Bragg equation, which relates the wavelength of the X-ray and the distance between atomic sheets in a simple crystal to the angles at which an impinging X-ray beam is reflected.
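As a minimal illustration of this relation (Bragg's law, n·λ = 2·d·sin θ), the sketch below solves for the spacing d between atomic sheets; the wavelength and angle are placeholder values, not Bragg's original measurements.

```python
import math

def bragg_spacing(wavelength_nm, theta_deg, order=1):
    """Interplanar spacing d from Bragg's law: n * wavelength = 2 * d * sin(theta)."""
    return order * wavelength_nm / (2.0 * math.sin(math.radians(theta_deg)))

# Illustrative values only: Cu K-alpha radiation (~0.154 nm) reflected at theta = 22.5 degrees.
print(round(bragg_spacing(0.154, 22.5), 3), "nm")  # ~0.201 nm
```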
His father built an apparatus in which a crystal could be rotated to precise angles while measuring the energy of reflections. This enabled father and son to measure the distances between the atomic sheets in a number of simple crystals. They calculated the spacing of the atoms from the weight of the crystal and the Avogadro constant, which enabled them to measure the wavelengths of the X-rays produced by different metallic targets in the X-ray tubes. W. H. Bragg reported their results at meetings and in a paper, giving credit to "his son" (unnamed) for the equation but not listing him as a co-author, which gave his son "some heartaches" that he never overcame.
Work on sound ranging
Bragg was commissioned early in World War I in the Royal Horse Artillery as a second lieutenant of the Leicestershire battery. In 1915 he was seconded to the Royal Engineers to develop a method to localise enemy artillery from the boom of their firing. On 2 September 1915 his brother was killed during the Gallipoli Campaign. Shortly afterwards, he and his father were awarded the Nobel Prize in Physics. He was 25 years old and remains the youngest science laureate. The problem with sound ranging was that the heavy guns boomed at too low a frequency to be detected by a microphone. After months of frustrating failure he and his group devised a hot wire air wave detector that solved the problem. In this work he was aided by Charles Galton Darwin, William Sansome Tucker, Harold Roper Robinson, Edward Andrade and Henry Harold Hemming. British sound ranging was very effective; there was a unit in every British Army and their system was adopted by the Americans when they entered the war. For his work during the war he was awarded the Military Cross and appointed Officer of the Order of the British Empire. He was also Mentioned in Despatches on 16 June 1916, 4 January 1917 and 7 July 1919.
Hot wire sound ranging was used in World War II during which he served as a civilian adviser.
Between the wars, from 1919 to 1937, he worked at the Victoria University of Manchester as Langworthy Professor of Physics. He became the director of the National Physical Laboratory in Teddington in 1937.
After World War II, Bragg returned to Cambridge, splitting the Cavendish Laboratory into research groups. He believed that "the ideal research unit is one of six to twelve scientists and a few assistants".
University of Manchester (1919–1937)
When demobilised he returned to crystallography at Cambridge. He and his father had agreed that the father would study organic crystals and the son would investigate inorganic compounds. In 1919, when Ernest Rutherford, a long-time family friend, moved to Cambridge, Lawrence Bragg replaced him as Langworthy Professor of Physics at the Victoria University of Manchester. He recruited an excellent faculty, including former sound rangers, but he believed that his knowledge of physics was weak and he had no classroom experience. The students, many veterans, were critical and rowdy. He was deeply shaken, but with family support he pulled himself together and prevailed. He and R. W. James measured the absolute energy of reflected X-rays, which validated a formula derived by C. G. Darwin before the war. Now they could determine the number of electrons in the reflecting targets, and they were able to decipher the structures of more complicated crystals like silicates. It was still difficult, requiring repeated guessing and retrying. In the late 1920s they eased the analysis by using Fourier transforms on the data.
In 1930, he became deeply disturbed while weighing a job offer from Imperial College, London. His family rallied around and he recovered his balance while they spent 1931 in Munich, where he did research.
National Physical Laboratory (1937–1938)
He became director of the National Physical Laboratory in Teddington in 1937, bringing some co-workers along. However, administration and committees took much of his time away from the workbench.
University of Cambridge (1938–1954)
Rutherford died and the search committee named Lawrence Bragg as next in the line of the Cavendish Professors who direct the Cavendish Laboratory. The Laboratory had an eminent history in atomic physics and some members were wary of a crystallographer, which Bragg surmounted by even-handed administration. He worked on improving the interpretation of diffraction patterns. In the small crystallography group was a refugee research student without a mentor: Max Perutz. He showed Bragg X-ray diffraction data from haemoglobin, which suggested that the structure of giant biological molecules might be deciphered. Bragg appointed Perutz as his research assistant and within a few months obtained additional support with a grant from the Rockefeller Foundation. The work was suspended during the Second World War when Perutz was interned as an enemy alien and then worked in military research.
During the war the Cavendish offered a shortened graduate course which emphasised the electronics needed for radar. Bragg worked on the structure of metals and consulted on sonar and sound ranging, for which the Tucker microphone was still used. Bragg was knighted and became Sir Lawrence in 1941. After his father died in 1942, Bragg served for six months as Scientific Liaison Officer to Canada. He also organised periodic conferences on X-ray analysis, which was widely used in military research.
After the war Bragg led in the formation of the International Union of Crystallography and was elected its first president. He reorganised the Cavendish into units to reflect his conviction that "the ideal research unit is one of six to twelve scientists and a few assistants, helped by one or more first-class instrument mechanics and a workshop in which the general run of apparatus can be constructed." Senior members of staff now had offices, telephones, and secretarial support. The scope of the department was enlarged with a new unit on radio astronomy. Bragg's own work focused on the structure of metals, using both X-rays and the electron microscope. In 1947 he persuaded the Medical Research Council (MRC) to support what he described as the "gallant attempt" to determine protein structure as the Laboratory of Molecular Biology, initially consisting of Perutz, John Kendrew and two assistants. Bragg worked with them and by 1960 they had resolved the structure of myoglobin to the atomic level. After this Bragg was less involved; their analysis of haemoglobin was easier after they incorporated two mercury atoms as markers in each molecule. The first monumental triumph of the MRC was decoding the structure of DNA by James Watson and Francis Crick. Bragg announced the discovery at a Solvay conference on proteins in Belgium on 8 April 1953, though it went unreported by the press. He then gave a talk at Guy's Hospital Medical School in London on Thursday, 14 May 1953, which resulted in an article by Ritchie Calder in the News Chronicle of London on Friday, 15 May 1953, entitled "Why You Are You. Nearer Secret of Life". Bragg nominated Crick, Watson and Maurice Wilkins for the 1962 Nobel Prize in Physiology or Medicine; Wilkins' share recognised the contribution of X-ray crystallographers at King's College London. Among them was Rosalind Franklin, whose "photograph 51" showed that DNA was a double helix, not the triple helix that Linus Pauling had proposed. Franklin died before the prize (which only goes to living people) was awarded.
The Royal Institution (1954–1971)
In 1953 the Braggs moved into the elegant flat for the Resident Professor in the Royal Institution in London, the position his father had occupied when he died. In 1934 and 1961 Lawrence had delivered the Royal Institution Christmas Lecture and since 1938 he had been Professor of Natural Philosophy in the Institution, delivering an annual lecture. His father's successors had weakened the Institution, so Bragg had to rebuild it. He bolstered finances by enlisting corporate sponsors, the traditional Friday Evening Discourses were followed by a dinner party for the speaker and carefully selected possible patrons, more than 120 of them each year. "Two of these Discourses in 1965 gave him particular pleasure. On 7 May, Lady Bragg, who had been a member of the Royal Commission on Marriage and Divorce (1951–55) and was Chairman of the National Marriage Guidance Council, lectured on 'Changing patterns in marriage and divorce'; and on 15 November, Bragg listened with evident pride to the Discourse on 'Oscillations and noise in jet engines' given by his engineer-son Stephen, who was then Chief Scientist at Rolls-Royce Ltd and later became Vice-Chancellor of Brunel University." He also introduced a programme of highly regarded Schools' Lectures, enlivened by the elaborate demonstrations that were a hallmark of the Institution. He gave three of these lectures on "electricity".
He continued research in the Institution by recruiting a small group to work in the Davy-Faraday Laboratory in the basement and in the adjoining house, supported by grants he obtained. A visitor to the laboratory succeeded in inserting heavy metals into the enzyme lysozyme; the structure of its crystal was solved in 1965 at the Royal Institution by D. C. Phillips and his coworkers, with the computations on the 9,040 reflections performed on the digital computer at the University of London, which greatly facilitated the work. Two of the illustrations of the positioning of amino acids in the chain were drawn by Bragg. Unlike myoglobin, in which nearly 80 per cent of the amino-acid residues are in the alpha-helix conformation, in lysozyme the alpha-helix content is only about 40 per cent of the amino-acid residues found in four main stretches. Other stretches are of the 3₁₀ helix, a conformation that they had proposed earlier. In this conformation, every third peptide is hydrogen-bonded back to the first peptide, thus forming a ring containing ten atoms. They had the complete structure of an enzyme in time for Bragg's 75th birthday. He became Professor Emeritus in 1966.
X-ray analysis of protein structure flourished in subsequent years, determining the structures of scores of proteins in laboratories around the world. Twenty eight Nobel Prizes have been awarded for work using X-ray analysis. The disadvantage of the method is that it must be done on crystals, which precludes seeing changes in shape when enzymes bind substrates and the like. This problem was solved by the development of another line Bragg had initiated, using modified electron microscopes to image single frozen molecules: cryo-electron microscopy.
In his long association with the Royal Institution he was:
Professor of Natural Philosophy, 1938–1953
Fullerian Professor of Chemistry, 1954–1966
Superintendent of the House, 1954–1966
Director of the Davy-Faraday Research Laboratory, 1954–1966
Director of the Royal Institution, 1965–1966
Emeritus Professor, 1966–1971
Personal life
In 1921 he married Alice Hopkinson (1899–1989), a cousin of Cecil Hopkinson (1891–1917), who shared rooms with Bragg and was one of his closest friends whilst they were both studying at Cambridge. Cecil was the son of John Hopkinson, who was Alice's uncle.
They had four children, the engineer Stephen Lawrence (1923–2014), David William (1926–2005), Margaret Alice (1931–2022) (who married the diplomat Mark Heath), and Patience Mary (1935–2020) (who married David, the son of Sir George Thomson the Nobel prize winning physicist). Alice was on the staff at Withington Girls' School until Bragg was appointed director of the National Physical Laboratory in 1937. She was active in a number of public bodies and served as Mayor of Cambridge from 1945 to 1946.
Bragg's hobbies included drawing – family letters were illustrated with lively sketches – painting, literature and a lifelong interest in gardening. When he moved to London, he missed having a garden and so worked as a part-time gardener, unrecognised by his employer, until a guest at the house expressed surprise at seeing him there. He died at a hospital near his home at Waldringfield, Ipswich, Suffolk. He was buried in Trinity College, Cambridge; his son David is buried in the Parish of the Ascension Burial Ground in Cambridge, his grave is within a few paces of that of Bragg's close friend, Rudolph Cecil Hopkinson, who incurred a severe head wound in the 1914–19 war and died a few months after being invalided back to the UK.
In August 2013, Bragg's relative, the broadcaster Melvyn Bragg, presented a BBC Radio 4 programme ("Bragg on the Braggs") on the 1915 Nobel Prize in Physics winners.
Honours and awards
Bragg was elected a Fellow of the Royal Society (FRS) in 1921 – "a qualification that makes other ones irrelevant". He was knighted by King George VI in the 1941 New Year Honours, and received both the Copley Medal and the Royal Medal of the Royal Society. Although Graeme Hunter, in his book on Bragg, Light is a Messenger, argued that he was more a crystallographer than a physicist, Bragg's lifelong activity showed otherwise: he was more of a physicist than anything else. Thus, from 1939 to 1943, he served as President of the Institute of Physics, London. In the 1967 New Year Honours he was appointed Member of the Order of the Companions of Honour by Queen Elizabeth II.
Since 1967 the Institute of Physics has awarded the Lawrence Bragg Medal and Prize. Additionally since 1992, the Australian Institute of Physics has awarded the Bragg Gold Medal for Excellence in Physics to commemorate Lawrence Bragg (in front on the medal) and his father, William Bragg, for the best PhD thesis by a student at an Australian university.
Nobel Prize (1915)
Matteucci Medal (1915)
Hughes Medal (1931)
Royal Medal (1946)
Guthrie Lecture (1952)
Copley Medal (1966)
The Electoral district of Bragg, in the South Australian House of Assembly, was created in 1970, and was named after both William and Lawrence Bragg.
See also
List of Nobel laureates in Physics
Tactical artillery terms from World War I
Tube Alloys
References
Further reading
(This book is about the MRC Laboratory of Molecular Biology, Cambridge.)
External links
First press stories on DNA
Nobelprize.org – The Nobel Prize for Physics in 1915
including the Nobel Lecture, September 6, 1922 The Diffraction of X-Rays by Crystals
A collection of digitised materials related to Bragg's and Linus Pauling's structural chemistry research.
Key Participants: Sir William Lawrence Bragg – Linus Pauling and the Race for DNA: A Documentary History
NOVA Episode on Photograph 51
Oral History interview transcript with William Lawrence Bragg on 20 June 1969, American Institute of Physics, Niels Bohr Library and Archives
Bragg, Lawrence (Sir) (1890–1971) National Library of Australia, Trove, People and Organisation record for William Lawrence Bragg
The Nature of Things: Oil, Soap and Detergent, Ri Channel video, November 1959
The Nature of Things: Atoms and Molecules, Ri Channel video, October 1959
The Nature of Things: Solids, Liquids and Gases, Ri Channel video, November 1959
1890 births
1971 deaths
Military personnel from Adelaide
British Army personnel of World War I
Royal Horse Artillery officers
20th-century British physicists
Academics of the Victoria University of Manchester
Alumni of Trinity College, Cambridge
Australian Nobel laureates
Nobel laureates in Physics
Australian people of English descent
20th-century Australian physicists
Experimental physicists
Optical physicists
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Australian Knights Bachelor
Australian Members of the Order of the Companions of Honour
Australian Officers of the Order of the British Empire
Scientists from Adelaide
Scientists from Ipswich
Recipients of the Copley Medal
Australian recipients of the Military Cross
Royal Medal winners
University of Adelaide alumni
People educated at St Peter's College, Adelaide
Presidents of the Institute of Physics
X-ray crystallography
British crystallographers
English Nobel laureates
British Nobel laureates
Honorary Fellows of the Royal Society of Edinburgh
Foreign fellows of the Indian National Science Academy
Directors of the National Physical Laboratory (United Kingdom)
Directors of the Royal Institution
Members of the German Academy of Sciences at Berlin
Leeds Blue Plaques
Recipients of the Matteucci Medal
Recipients of the Dalton Medal
Manchester Literary and Philosophical Society
Cavendish Professors of Physics
Presidents of the International Union of Crystallography
Members of the American Philosophical Society | Lawrence Bragg | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,996 | [
"X-ray crystallography",
"Crystallography",
"Experimental physics",
"Experimental physicists"
] |
303,802 | https://en.wikipedia.org/wiki/Phase%20rule | In thermodynamics, the phase rule is a general principle governing multi-component, multi-phase systems in thermodynamic equilibrium. For a system without chemical reactions, it relates the number of freely varying intensive properties () to the number of components (), the number of phases (), and number of ways of performing work on the system ():
Examples of intensive properties that count toward F are the temperature and pressure. For simple liquids and gases, pressure–volume work is the only type of work, in which case N = 1 and the rule reduces to the familiar form F = C − P + 2.
The rule was derived by American physicist Josiah Willard Gibbs in his landmark paper titled On the Equilibrium of Heterogeneous Substances, published in parts between 1875 and 1878.
The number of degrees of freedom (also called the variance) is the number of independent intensive properties, i.e., the largest number of thermodynamic parameters such as temperature or pressure that can be varied simultaneously and independently of each other.
An example of a one-component system (C = 1) is a pure chemical. A two-component system (C = 2) has two chemically independent components, like a mixture of water and ethanol. Examples of phases that count toward P are solids, liquids and gases.
Foundations
A phase is a form of matter that is homogeneous in chemical composition and physical state. Typical phases are solid, liquid and gas. Two immiscible liquids (or liquid mixtures with different compositions) separated by a distinct boundary are counted as two different phases, as are two immiscible solids.
The number of components (C) is the number of chemically independent constituents of the system, i.e. the minimum number of independent species necessary to define the composition of all phases of the system.
The number of degrees of freedom (F) in this context is the number of intensive variables which are independent of each other.
The basis for the rule is that equilibrium between phases places a constraint on the intensive variables. More rigorously, since the phases are in thermodynamic equilibrium with each other, the chemical potentials of the phases must be equal. The number of equality relationships determines the number of degrees of freedom. For example, if the chemical potentials of a liquid and of its vapour depend on temperature (T) and pressure (p), the equality of chemical potentials will mean that each of those variables will be dependent on the other. Mathematically, the equation μliquid(T, p) = μvapour(T, p), where μ is the chemical potential, defines temperature as a function of pressure or vice versa. (Caution: do not confuse p, the pressure, with P, the number of phases.)
To be more specific, the composition of each phase is determined by C − 1 intensive variables (such as mole fractions) in each phase. The total number of variables is (C − 1)P + 2, where the extra two are temperature T and pressure p. The number of constraints is C(P − 1), since the chemical potential of each component must be equal in all phases. Subtract the number of constraints from the number of variables to obtain the number of degrees of freedom as F = (C − 1)P + 2 − C(P − 1) = C − P + 2.
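To make the counting concrete, the following short sketch (illustrative only, assuming only pressure–volume work) evaluates the rule for a few familiar cases of a pure substance.

```python
def degrees_of_freedom(components, phases, work_types=1):
    """Gibbs phase rule: F = C - P + 1 + N, with N = 1 for pressure-volume work only."""
    return components - phases + 1 + work_types

# Pure substance (C = 1): a single phase leaves 2 degrees of freedom,
# liquid-vapour coexistence leaves 1, and the triple point leaves 0.
for phases in (1, 2, 3):
    print(phases, degrees_of_freedom(1, phases))
```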
The rule is valid provided the equilibrium between phases is not influenced by gravitational, electrical or magnetic forces, or by surface area, and only by temperature, pressure, and concentration.
Consequences and examples
Pure substances (one component)
For pure substances C = 1, so that F = 3 − P. In a single-phase (P = 1) condition of a pure-component system, two variables (F = 2), such as temperature and pressure, can be chosen independently to be any pair of values consistent with the phase. However, if the temperature and pressure combination ranges to a point where the pure component undergoes a separation into two phases (P = 2), F decreases from 2 to 1. When the system enters the two-phase region, it is no longer possible to independently control temperature and pressure.
In the phase diagram to the right, the boundary curve between the liquid and gas regions maps the constraint between temperature and pressure when the single-component system has separated into liquid and gas phases at equilibrium. The only way to increase the pressure on the two phase line is by increasing the temperature. If the temperature is decreased by cooling, some of the gas condenses, decreasing the pressure. Throughout both processes, the temperature and pressure stay in the relationship shown by this boundary curve unless one phase is entirely consumed by evaporation or condensation, or unless the critical point is reached. As long as there are two phases, there is only one degree of freedom, which corresponds to the position along the phase boundary curve.
The critical point is the black dot at the end of the liquid–gas boundary. As this point is approached, the liquid and gas phases become progressively more similar until, at the critical point, there is no longer a separation into two phases. Above the critical point and away from the phase boundary curve, F = 2 and the temperature and pressure can be controlled independently. Hence there is only one phase, and it has the physical properties of a dense gas, but is also referred to as a supercritical fluid.
Of the other two boundary curves, one is the solid–liquid boundary or melting point curve which indicates the conditions for equilibrium between these two phases, and the other at lower temperature and pressure is the solid–gas boundary.
Even for a pure substance, it is possible that three phases, such as solid, liquid and vapour, can exist together in equilibrium (P = 3). If there is only one component, there are no degrees of freedom (F = 0) when there are three phases. Therefore, in a single-component system, this three-phase mixture can only exist at a single temperature and pressure, which is known as a triple point. Here there are two equations, μ_sol(T, p) = μ_liq(T, p) = μ_vap(T, p), which are sufficient to determine the two variables T and p. In the diagram for CO2 the triple point is the point at which the solid, liquid and gas phases come together, at 5.2 bar and 217 K. It is also possible for other sets of phases to form a triple point, for example in the water system there is a triple point where ice I, ice III and liquid can coexist.
If four phases of a pure substance were in equilibrium (P = 4), the phase rule would give F = -1, which is meaningless, since there cannot be -1 independent variables. This explains the fact that four phases of a pure substance (such as ice I, ice III, liquid water and water vapour) are not found in equilibrium at any temperature and pressure. In terms of chemical potentials there are now three equations, which cannot in general be satisfied by any values of the two variables T and p, although in principle they might be solved in a special case where one equation is mathematically dependent on the other two. In practice, however, the coexistence of more phases than allowed by the phase rule normally means that the phases are not all in true equilibrium.
Two-component systems
For binary mixtures of two chemically independent components, C = 2 so that F = 4 - P. In addition to temperature and pressure, the other degree of freedom is the composition of each phase, often expressed as mole fraction or mass fraction of one component.
As an example, consider the system of two completely miscible liquids such as toluene and benzene, in equilibrium with their vapours. This system may be described by a boiling-point diagram which shows the composition (mole fraction) of the two phases in equilibrium as functions of temperature (at a fixed pressure).
Four thermodynamic variables which may describe the system include temperature (T), pressure (p), mole fraction of component 1 (toluene) in the liquid phase (x1L), and mole fraction of component 1 in the vapour phase (x1V). However, since two phases are present (P = 2) in equilibrium, only two of these variables can be independent (F = 2). This is because the four variables are constrained by two relations: the equality of the chemical potentials of liquid toluene and toluene vapour, and the corresponding equality for benzene.
For given T and p, there will be two phases at equilibrium when the overall composition of the system (system point) lies in between the two curves. A horizontal line (isotherm or tie line) can be drawn through any such system point, and intersects the curve for each phase at its equilibrium composition. The quantity of each phase is given by the lever rule (expressed in the variable corresponding to the x-axis, here mole fraction).
For the analysis of fractional distillation, the two independent variables are instead considered to be liquid-phase composition (x1L) and pressure. In that case the phase rule implies that the equilibrium temperature (boiling point) and vapour-phase composition are determined.
Liquid–vapour phase diagrams for other systems may have azeotropes (maxima or minima) in the composition curves, but the application of the phase rule is unchanged. The only difference is that the compositions of the two phases are equal exactly at the azeotropic composition.
Aqueous solution of 4 kinds of salts
Consider an aqueous solution containing sodium chloride (NaCl), potassium chloride (KCl), sodium bromide (NaBr), and potassium bromide (KBr), in equilibrium with their respective solid phases. Each salt, in solid form, is a different phase, because each possesses a distinct crystal structure and composition. The aqueous solution itself is another phase, because it forms a homogeneous liquid phase separate from the solid salts, with its own distinct composition and physical properties. Thus we have P = 5 phases.
There are 6 elements present (H, O, Na, K, Cl, Br), but we have 2 constraints:
The stoichiometry of water: n(H) = 2n(O).
Charge balance in the solution: n(Na) + n(K) = n(Cl) + n(Br).
giving C = 6 - 2 = 4 components. The Gibbs phase rule states that F = 1. So, for example, if we plot the P-T phase diagram of the system, there is only one line at which all phases coexist. Any deviation from the line would either cause one of the salts to completely dissolve or one of the ions to completely precipitate from the solution.
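The same bookkeeping for this salt example can be sketched as follows (illustrative Python; the variable names are assumptions, not from the source):

```python
elements = 6      # H, O, Na, K, Cl, Br
constraints = 2   # water stoichiometry and charge balance
phases = 5        # four solid salts plus the saturated aqueous solution

components = elements - constraints      # C = 4
freedom = components - phases + 2        # Gibbs phase rule: F = C - P + 2
print(components, freedom)               # -> 4 1
```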
Phase rule at constant pressure
For applications in materials science dealing with phase changes between different solid structures, pressure is often imagined to be constant (for example at 1 atmosphere), and is ignored as a degree of freedom, so the formula becomes: F = C - P + 1.
This is sometimes incorrectly called the "condensed phase rule", but it is not applicable to condensed systems subject to high pressures (for example, in geology), since the effects of these pressures are important.
Phase rule in colloidal mixtures
In colloidal mixtures, quintuple and sextuple points have been described in violation of the Gibbs phase rule, but it is argued that in these systems the rule can be generalized by adding a term that accounts for additional parameters of interaction among the components, such as the diameter of one type of particle relative to the diameters of the other particles in the solution.
References
Further reading
Chapter 9. Thermodynamics Aspects of Stability
Equilibrium chemistry
Laws of thermodynamics | Phase rule | [
"Physics",
"Chemistry"
] | 2,269 | [
"Equilibrium chemistry",
"Thermodynamics",
"Laws of thermodynamics"
] |
304,604 | https://en.wikipedia.org/wiki/Mechatronics | Mechatronics engineering, also called mechatronics, is an interdisciplinary branch of engineering that focuses on the integration of mechanical engineering, electrical engineering, electronic engineering and software engineering, and also includes a combination of robotics, computer science, telecommunications, systems, control, automation and product engineering.
As technology advances over time, various subfields of engineering have succeeded in both adapting and multiplying. The intention of mechatronics is to produce a design solution that unifies each of these various subfields. Originally, the field of mechatronics was intended to be nothing more than a combination of mechanics, electrical and electronics, hence the name being a portmanteau of the words "mechanics" and "electronics"; however, as the complexity of technical systems continued to evolve, the definition had been broadened to include more technical areas.
The word mechatronics originated in Japanese-English and was created by Tetsuro Mori, an engineer at Yaskawa Electric Corporation. The company registered the word as a trademark in Japan in 1971, under registration number "46-32714", and later released the right to use the word to the public, after which it came into worldwide use. The word is now translated into many languages and is considered an essential term for advanced automated industry.
Many people treat mechatronics as a modern buzzword synonymous with automation, robotics and electromechanical engineering.
French standard NF E 01-010 gives the following definition: "approach aiming at the synergistic integration of mechanics, electronics, control theory, and computer science within product design and manufacturing, in order to improve and/or optimize its functionality".
History
Yaskawa registered the word as a trademark in Japan in 1971 (registration number "46-32714") and later released the right to use it to the public, after which the term spread globally.
With the advent of information technology in the 1980s, microprocessors were introduced into mechanical systems, improving performance significantly. By the 1990s, advances in computational intelligence were applied to mechatronics in ways that revolutionized the field.
Description
A mechatronics engineer unites mechanical, electrical, electronic, and computing principles to generate a simpler, more economical and reliable system.
Engineering cybernetics deals with the question of control engineering of mechatronic systems. It is used to control or regulate such a system (see control theory). Through collaboration, the mechatronic modules achieve the production goals and inherit flexible and agile manufacturing properties in the production scheme. Modern production equipment consists of mechatronic modules that are integrated according to a control architecture. The best-known architectures involve hierarchy, polyarchy, heterarchy, and hybrid. The methods for achieving a technical effect are described by control algorithms, which might or might not utilize formal methods in their design. Hybrid systems important to mechatronics include production systems, synergy drives,
exploration rovers, automotive subsystems such as anti-lock braking systems and spin-assist, and everyday equipment such as autofocus cameras, video, hard disks, CD players and phones.
Subdisciplines
Mechanical
Mechanical engineering is an important part of mechatronics engineering. It includes the study of the mechanical behaviour of an object: the mechanical structure, mechanisms, thermo-fluid, and hydraulic aspects of a mechatronics system, drawing on thermodynamics, dynamics, fluid mechanics, pneumatics and hydraulics. A mechatronics engineer working as a mechanical engineer can specialize in hydraulic and pneumatic systems, for example in the automotive industry, and can also contribute to vehicle design given a combined mechanical and electronics background. Knowledge of software applications such as computer-aided design and computer-aided manufacturing is essential for designing products. Mechatronics covers a part of the mechanical engineering syllabus that is widely applied in the automobile industry.
Mechatronic systems represent a large part of the functions of an automobile. The control loop formed by sensor—information processing—actuator—mechanical (physical) change is found in many systems. The system size can be very different. The anti-lock braking system (ABS) is a mechatronic system. The brake itself is also one. And the control loop formed by driving control (for example cruise control), engine, vehicle driving speed in the real world and speed measurement is a mechatronic system, too. The great importance of mechatronics for automotive engineering is also evident from the fact that vehicle manufacturers often have development departments with "Mechatronics" in their names.
Electronics and electricals
Electronics and telecommunication engineering specializes in the electronic and telecom devices of a mechatronics system. A mechatronics engineer specialized in electronics and telecommunications has knowledge of computer hardware devices. The transmission of signals is the main application of this subfield of mechatronics, in which digital and analog systems also form an important part. Telecommunications engineering deals with the transmission of information across a medium.
Electronics engineering is related to computer engineering and electrical engineering. Control engineering has a wide range of electronic applications, from the flight and propulsion systems of commercial airplanes to the cruise control present in many modern cars. VLSI design is important for creating integrated circuits. Mechatronics engineers have deep knowledge of microprocessors, microcontrollers, microchips and semiconductors. In the electronics manufacturing industry, mechatronics engineers can conduct research and development on consumer electronic devices such as mobile phones, computers and cameras. For mechatronics engineers it is necessary to learn computer applications such as MATLAB and Simulink for designing and developing electronic products.
Mechatronics engineering is an interdisciplinary course that includes concepts of both electrical and mechanical systems. A mechatronics engineer may engage in designing high-power transformers or radio-frequency transmitter modules.
Avionics
Avionics is also considered a variant of mechatronics, as it combines fields such as electronics and telecommunications with aerospace engineering. It is a subdiscipline of mechatronics engineering and aerospace engineering that focuses on the electronic systems of aircraft. The word avionics is a blend of aviation and electronics. The electronic systems of an aircraft include the aircraft communications addressing and reporting system, air navigation, the aircraft flight control system, aircraft collision avoidance systems, the flight recorder, weather radar and lightning detectors. These can be as simple as a searchlight for a police helicopter or as complicated as the tactical system for an airborne early warning platform.
Advanced mechatronics
Another variant is motion control for advanced mechatronics, presently recognized as a key technology in mechatronics. The robustness of motion control can be represented as a function of stiffness and serves as a basis for practical realization. The target motion is parameterized by a control stiffness which can be varied according to the task reference, but robust motion control always requires very high stiffness in the controller.
Industrial
The industrial branch includes the design of machinery and of the assembly and process lines of various manufacturing industries, and is somewhat similar to automation and robotics. Mechatronics engineers who work as industrial engineers design and develop the infrastructure of a manufacturing plant; in that sense they are architects of machines. One can work as an industrial designer, laying out and planning a manufacturing facility, or as an industrial technician, overseeing the technical requirements and repairs of a particular factory.
Robotics
Robotics is one of the newest emerging subfields of mechatronics. It is the study of robots and how they are manufactured and operated. Since 2000, this branch of mechatronics has attracted a growing number of aspirants. Robotics is closely related to automation because, here too, little human intervention is required. In a large number of factories, especially automobile factories, robots are found on assembly lines, where they perform drilling, installation and fitting. Programming skills are necessary for specialization in robotics; knowledge of programming languages such as ROBOTC is important for making robots function. An industrial robot is a prime example of a mechatronics system; it includes aspects of electronics, mechanics and computing to do its day-to-day jobs.
Computer
The Internet of things (IoT) is the inter-networking of physical devices, embedded with electronics, software, sensors, actuators, and network connectivity which enable these objects to collect and exchange data. IoT and mechatronics are complementary. Many of the smart components associated with the Internet of Things will be essentially mechatronic. The development of the IoT is forcing mechatronics engineers, designers, practitioners and educators to research the ways in which mechatronic systems and components are perceived, designed and manufactured. This allows them to face up to new issues such as data security, machine ethics and the human-machine interface.
Knowledge of programming is very important. A mechatronics engineer has to program at different levels – for example, PLC programming, drone programming, hardware programming, CNC programming, etc. Because the field combines electronics engineering with computing, software skills are also important. Useful programming languages for a mechatronics engineer to learn include Java, Python, Rust, C++ and C.
See also
Automation engineering
Cybernetics
Control theory
Ecomechatronics
Electromechanics
Materials engineering
Mechanical engineering technology
Robotics
Systems engineering
Biomechatronics
References
Sources
Bradley, Dawson et al., Mechatronics, Electronics in products and processes, Chapman and Hall Verlag, London, 1991.
Karnopp, Dean C., Donald L. Margolis, Ronald C. Rosenberg, System Dynamics: Modeling and Simulation of Mechatronic Systems, 4th Edition, Wiley, 2006. Bestselling system dynamics book using bond graph approach.
Cetinkunt, Sabri, Mechatronics, John Wiley & Sons, Inc, 2007
Zhang, Jianhua. Mechatronics and Automation Engineering. Proceedings of the International Conference on Mechatronics and Automation Engineering (ICMAE2016). Xiamen, China, 2016.
Further reading
Bishop, Robert H., Mechatronics: an introduction. CRC Press, 2006.
De Silva, Clarence W., Mechatronics: an integrated approach. CRC Press, 2005
Onwubolu, Godfrey C., Mechatronics: principles and applications. Butterworth-Heinemann, 2005.
Rankers, Adrian M., Machine Dynamics in Mechatronic Systems. University Twente, 1997
External links
IEEE/ASME Transactions on Mechatronics.
Mechatronics Journal – Elsevier
mechatronic applications and realisation List of publications concerning examples
Institution of Mechanical Engineers - Mechatronics, Informatics and Control Group (MICG)
NF E 01-010 2008 – AFNOR (French standard NF E 01-010)
XP E 01-013 2009 – AFNOR (French standard NF E 01-013)
Embedded systems
Electromechanical engineering
Wasei-eigo | Mechatronics | [
"Technology",
"Engineering"
] | 2,307 | [
"Computer engineering",
"Embedded systems",
"Computer systems",
"Computer science",
"Mechanical engineering by discipline",
"Electromechanical engineering",
"Electrical engineering"
] |
304,774 | https://en.wikipedia.org/wiki/Coincidence%20circuit | In physics and electrical engineering, a coincidence circuit or coincidence gate is an electronic device with one output and two (or more) inputs. The output activates only when the circuit receives signals within a time window accepted as at the same time and in parallel at both inputs. Coincidence circuits are widely used in particle detectors and in other areas of science and technology.
Walther Bothe shared the Nobel Prize for Physics in 1954 "...for his discovery of the method of coincidence and the discoveries subsequently made by it." Bruno Rossi invented the electronic coincidence circuit for implementing the coincidence method.
History
Bothe and Geiger, 1924-1925
In his Nobel Prize lecture, Bothe described how he had implemented the coincidence method in an experiment on Compton scattering in 1924. The experiment aimed to check whether Compton scattering produces a recoil electron simultaneously with the scattered gamma ray. Bothe used two point discharge counters connected to separate fibre electrometers and recorded the fibre deflections on a moving photographic film. On the film record he could discern coincident discharges with a time resolution of approximately 1 millisecond.
Bothe and Kohlhörster, 1929
In 1929, Walther Bothe and Werner Kolhörster published the description of a coincidence experiment with tubular discharge counters that Hans Geiger and Walther Müller had invented in 1928. The Bothe-Kohlhörster experiment showed penetrating charged particles in cosmic rays. They used the same mechanical-photographic method for recording simultaneous discharges which, in this experiment, signalled the passage of a charged cosmic ray particle through both counters and through a thick wall of lead and iron that surrounded the counters. Their paper, entitled "Das Wesen der Höhenstrahlung" ("The Nature of Cosmic Radiation"), was published in the Zeitschrift für Physik v. 56, p. 751 (1929).
Rossi, 1930
Bruno Rossi, at the age of 24, was in his first job as assistant in the Physics Institute of the University of Florence when he read the Bothe-Kohlhörster paper. It inspired him to begin his own research on cosmic rays. He fabricated Geiger tubes according to the published recipe, and he invented the first practical electronic coincident circuit. It employed several triode vacuum tubes, and could register coincident pulses from any number of counters with a tenfold improvement in time resolution over the mechanical method of Bothe. Rossi described his invention in a paper entitled "Method of Registering Multiple Simultaneous Impulses of Several Geiger Counters", published in Nature v.125, p.636 (1930). The Rossi coincidence circuit was rapidly adopted by experimenters around the world. It was the first practical AND circuit, precursor of the AND logic circuits of electronic computers.
To detect the voltage pulse produced by the coincidence circuit when a coincidence event occurred, Rossi first used earphones and counted the ‘clicks’, and soon an electro-mechanical register to count the coincidence pulses automatically. Rossi used a triple-coincidence version of his circuit with various configurations of Geiger counters in a series of experiments during the period from 1930 to 1943 that laid an essential part of the foundations of cosmic-ray and particle physics.
About the same time, and independently of Rossi, Bothe devised a less practical electronic coincidence device. It used a single pentode vacuum tube and could register only twofold coincidences.
Probability
The main idea of 'coincidence detection' in signal processing is that if a detector detects a signal pulse in the midst of random noise pulses inherent in the detector, there is a certain probability, p, that the detected pulse is actually a noise pulse. But if two detectors detect the signal pulse simultaneously, the probability that it is a noise pulse in the detectors is p². Suppose, for example, p = 0.1; then p² = 0.01. Thus the chance of a false detection is reduced by the use of coincidence detection.
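A short illustrative sketch of this calculation (Python; the per-detector probability value is an assumed example, not from the source):

```python
def false_coincidence_probability(p: float, n: int = 2) -> float:
    """Probability that an n-fold coincidence is entirely noise,
    assuming independent detectors with per-detector noise probability p."""
    return p ** n

p = 0.1  # illustrative value only
print(false_coincidence_probability(p, 2))  # ~0.01, twofold coincidence
print(false_coincidence_probability(p, 3))  # ~0.001, e.g. a triple-coincidence circuit
```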
See also
Coincidence detection in neurobiology
References
Computer-related introductions in 1924
History of computing hardware
Experimental particle physics
Neuroethology concepts
Coincidence | Coincidence circuit | [
"Physics",
"Technology"
] | 798 | [
"Experimental physics",
"Particle physics",
"History of computing hardware",
"Experimental particle physics",
"History of computing"
] |
304,799 | https://en.wikipedia.org/wiki/Equivalent%20potential%20temperature | Equivalent potential temperature, commonly referred to as theta-e , is a quantity that is conserved during changes to an air parcel's pressure (that is, during vertical motions in the atmosphere), even if water vapor condenses during that pressure change. It is therefore more conserved than the ordinary potential temperature, which remains constant only for unsaturated vertical motions (pressure changes).
is the temperature a parcel of air would reach if all the water vapor in the parcel were to condense, releasing its latent heat, and the parcel was brought adiabatically to a standard reference pressure, usually 1000 hPa (1000 mbar) which is roughly equal to atmospheric pressure at sea level.
Use in estimating atmospheric stability
Stability of incompressible fluid
Like a ball balanced on top of a hill, denser fluid lying above less dense fluid would be dynamically unstable: overturning motions (convection) can lower the center of gravity, and thus will occur spontaneously, rapidly producing a stable stratification (see also stratification (water)) which is thus the observed condition almost all the time. The condition for stability of an incompressible fluid is that density decreases monotonically with height.
Stability of compressible air: Potential temperature
If a fluid is compressible like air, the criterion for dynamic stability instead involves potential density, the density of the fluid at a fixed reference pressure. For an ideal gas (see gas laws), the stability criterion for an air column is that potential temperature increases monotonically with height.
To understand this, consider dry convection in the atmosphere, where the vertical variation in pressure is substantial and adiabatic temperature change is important: As a parcel of air moves upward, the ambient pressure drops, causing the parcel to expand. Some of the internal energy of the parcel is used up in doing the work required to expand against the atmospheric pressure, so the temperature of the parcel drops, even though it has not lost any heat. Conversely, a sinking parcel is compressed and becomes warmer even though no heat is added.
Air at the top of a mountain is usually colder than the air in the valley below, but the arrangement is not unstable: if a parcel of air from the valley were somehow lifted up to the top of the mountain, when it arrived it would be even colder than the air already there, due to adiabatic cooling; it would be heavier than the ambient air, and would sink back toward its original position. Similarly, if a parcel of cold mountain-top air were to make the trip down to the valley, it would arrive warmer and lighter than the valley air, and would float back up the mountain.
So cool air lying on top of warm air can be stable, as long as the temperature decrease with height is less than the adiabatic lapse rate; the dynamically important quantity is not the temperature, but the potential temperature—the temperature the air would have if it were brought adiabatically to a reference pressure. The air around the mountain is stable because the air at the top, due to its lower pressure, has a higher potential temperature than the warmer air below.
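To make the adiabatic-cooling argument concrete, a small illustrative calculation (Python; the gravitational acceleration, specific heat, valley temperature and mountain height are standard or assumed illustrative values, not figures from this article):

```python
# Dry adiabatic cooling of a lifted parcel: the dry adiabatic lapse rate
# is g / c_p (standard textbook values assumed here).
G = 9.81        # m/s^2, gravitational acceleration
C_P = 1005.7    # J/(kg K), specific heat of dry air at constant pressure

lapse_rate = G / C_P * 1000          # K per km, roughly 9.8
valley_temp_c = 25.0                 # illustrative valley temperature
mountain_height_km = 2.0             # illustrative lift

parcel_temp_at_top = valley_temp_c - lapse_rate * mountain_height_km
print(round(lapse_rate, 2), round(parcel_temp_at_top, 1))   # ~9.75 K/km, ~5.5 C
```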
Effects of water condensation: Equivalent potential temperature
A rising parcel of air containing water vapor, if it rises far enough, reaches its lifted condensation level: it becomes saturated with water vapor (see Clausius–Clapeyron relation). If the parcel of air continues to rise, water vapor condenses and releases its latent heat to the surrounding air, partially offsetting the adiabatic cooling. A saturated parcel of air therefore cools less than a dry one would as it rises (its temperature changes with height at the moist adiabatic lapse rate, which is smaller than the dry adiabatic lapse rate). Such a saturated parcel of air can achieve buoyancy, and thus accelerate further upward, a runaway condition (instability) even if potential temperature increases with height. The sufficient condition for an air column to be absolutely stable, even with respect to saturated convective motions, is that the equivalent potential temperature must increase monotonically with height.
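Stated compactly, with z denoting height, the stability criteria described above are:

\frac{\partial \theta}{\partial z} > 0 \quad \text{(stability for unsaturated motions)}, \qquad \frac{\partial \theta_e}{\partial z} > 0 \quad \text{(absolute stability, including saturated motions)}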
Formula
The definition of the equivalent potential temperature is:

\theta_e = T \left( \frac{p_0}{p_d} \right)^{R_d / (c_{pd} + r_t c_l)} \; H^{- r_v R_v / (c_{pd} + r_t c_l)} \; \exp\!\left[ \frac{L_v\, r_v}{(c_{pd} + r_t c_l)\, T} \right]

where p_d is the partial pressure of dry air (the total pressure p less the water vapour pressure).
Where:
T is the temperature [K] of air at pressure p,
p_0 is a reference pressure that is taken as 1000 hPa,
p is the pressure at the point,
R_d and R_v are the specific gas constants of dry air and of water vapour, respectively,
c_pd and c_l are the specific heat capacities of dry air and of liquid water, respectively,
r_t and r_v are the total water and water vapour mixing ratios, respectively,
H is the relative humidity,
L_v is the latent heat of vapourisation of water.
A number of approximate formulations are used for calculating equivalent potential temperature, since it is not easy to compute integrations along the motion of the parcel. Bolton (1980) gives a review of such procedures with estimates of error; his best approximation formula is used when accuracy is needed:
Where:
θ_L is the (dry) potential temperature [K] at the lifted condensation level (LCL),
T_L is the (approximated) temperature [K] at the LCL,
T_D is the dew point temperature at pressure p,
e is the water vapor pressure (used to obtain p_d = p − e for dry air),
R_d / c_pd is the ratio of the specific gas constant to the specific heat of dry air at constant pressure (0.2854),
r is the mixing ratio of water vapor mass per mass [kg/kg] (sometimes the value is given in [g/kg], which should then be divided by 1000).
A somewhat more theoretical formula, commonly used in the literature (for example, Holton 1972) when a theoretical explanation is important, is:

\theta_e \approx \theta \exp\!\left( \frac{L_v\, r_s}{c_{pd}\, T_L} \right)
Where:
r_s is the saturated mixing ratio of water at temperature T_L, the temperature at the saturation level of the air,
L_v is the latent heat of evaporation at the relevant temperature (2406 kJ/kg at 40 °C to 2501 kJ/kg at 0 °C), and
c_pd is the specific heat of dry air at constant pressure (1005.7 J/(kg·K)).
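A rough numerical sketch of the Holton-style approximation above, for an already-saturated parcel (Python; the constants and the Magnus-type saturation vapour pressure fit are standard textbook values assumed here, not taken from this article):

```python
import math

R_D = 287.04      # J/(kg K), specific gas constant of dry air
C_PD = 1005.7     # J/(kg K), specific heat of dry air at constant pressure
L_V = 2.501e6     # J/kg, latent heat of vaporisation near 0 degrees C
P0 = 1000.0       # hPa, reference pressure

def saturation_mixing_ratio(T_kelvin: float, p_hpa: float) -> float:
    """Approximate saturation mixing ratio [kg/kg] using a Magnus-type
    fit for saturation vapour pressure (an assumption, for illustration)."""
    t_c = T_kelvin - 273.15
    e_s = 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))   # hPa
    return 0.622 * e_s / (p_hpa - e_s)

def theta_e_holton(T_kelvin: float, p_hpa: float) -> float:
    """theta_e ~ theta * exp(L_v * r_s / (c_pd * T)) for a saturated parcel."""
    theta = T_kelvin * (P0 / p_hpa) ** (R_D / C_PD)        # dry potential temperature
    r_s = saturation_mixing_ratio(T_kelvin, p_hpa)
    return theta * math.exp(L_V * r_s / (C_PD * T_kelvin))

print(round(theta_e_holton(293.15, 900.0), 1))  # roughly 348 K for a saturated parcel at 20 C and 900 hPa
```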
A further simplified formula is used (in, for example, Stull 1988 §13.1 p. 546) for simplicity, if it is desirable to avoid a fuller calculation:

\theta_e = T_e \left( \frac{p_0}{p} \right)^{R_d / c_{pd}}
Where:
T_e = equivalent temperature
R_d = specific gas constant for air (287.04 J/(kg·K))
Usage
This applies on the synoptic scale for characterisation of air masses. For instance, in a study of the North American Ice Storm of 1998, professors Gyakum (McGill University, Montreal) and Roebber (University of Wisconsin-Milwaukee) have demonstrated that the air masses involved originated from high Arctic at an altitude of 300 to 400 hPa the previous week, went down toward the surface as they moved to the Tropics, then moved back up along the Mississippi Valley toward the St. Lawrence Valley. The back trajectories were evaluated using the constant equivalent potential temperatures.
In the mesoscale, equivalent potential temperature is also a useful measure of the static stability of the unsaturated atmosphere. Under normal, stably stratified conditions, the potential temperature increases with height,
and vertical motions are suppressed. If the equivalent potential temperature decreases with height,
the atmosphere is unstable to vertical motions, and convection is likely. Situations in which the equivalent potential temperature decreases with height, indicating instability in saturated air, are quite common.
See also
Meteorology
Moist static energy
Potential temperature
Weather forecasting
Bibliography
M K Yau and R.R. Rogers, Short Course in Cloud Physics, Third Edition, published by Butterworth-Heinemann, January 1, 1989, 304 pages.
References
Atmospheric thermodynamics
Equivalent units | Equivalent potential temperature | [
"Mathematics"
] | 1,548 | [
"Equivalent units",
"Quantity",
"Equivalent quantities",
"Units of measurement"
] |
304,840 | https://en.wikipedia.org/wiki/San%20Joaquin%20River | The San Joaquin River ( ; ) is the longest river of Central California. The long river starts in the high Sierra Nevada and flows through the rich agricultural region of the northern San Joaquin Valley before reaching Suisun Bay, San Francisco Bay, and the Pacific Ocean. An important source of irrigation water as well as a wildlife corridor, the San Joaquin is among the most heavily dammed and diverted of California's rivers.
People have inhabited the San Joaquin Valley for more than 8,000 years, and it was one of the major population centers of pre-Columbian California. Starting in the late 18th century, successive waves of explorers then settlers, mainly Spanish and American, emigrated to the San Joaquin basin. When Spain colonized the area, they sent soldiers from Mexico, who were usually of mixed native Mexican and Spanish birth, led by Spanish officers. Franciscan missionaries from Spain came with expeditions to evangelize the natives by teaching them about the Catholic faith.
Once an inland sea, most of the San Joaquin Valley has a very uniform topography, and much of the lower river formed a huge flood basin. In the 20th century, many levees and dams were built on the San Joaquin and all of its major tributaries. These engineering works changed the fluctuating nature of the river forever and cut off the Tulare Basin from the rest of the San Joaquin watershed. Once a habitat for hundreds of thousands of spawning salmon and millions of migratory birds, today the river is subject to tremendous water supply, navigation, and regulation works by various federal agencies, which have dramatically reduced the flow of the river since the 20th century.
Name
The river was called many different names; at times different parts of the river were known by different names. The southern Yokuts called the river, Tihshachu (Tih-shah-choo), meaning salmon-spearing place. The present name of the river dates to 1805–1808, when Spanish explorer Gabriel Moraga was surveying east from Mission San José in order to find possible sites for a mission. Moraga named a tributary of the river (it is not known which one) for Saint Joachim, husband of Saint Anne and father of Mary, the mother of Jesus. The name Moraga chose was later applied to the entire river; it was in common use by 1810.
In 1827, Jedediah Smith wrote in his journal that an unknown group of Native Americans called the river the Peticutry, a name which is listed as a variant in the U.S. Geological Survey (USGS) Geographic Names Information System.
In the Mono language, the river is called typici h huu''', which means "important or great river."
An earlier name for the lower section of the San Joaquin was Rio de San Francisco, which was the name Father Juan Crespí gave to the river he could see entering the Sacramento-San Joaquin Delta from the south. A member of the Pedro Fages party in 1772, Crespi's vantage point was the hilltops behind modern Antioch. Another early name was Rio San Juan Bautista, the origin of which is unknown.
Course
The river's source is located in the Ansel Adams Wilderness, in the south-central Sierra Nevada at the confluence of three major affluents: the Middle Fork, which rises from Thousand Island Lake at almost above sea level, meets the North Fork, which starts southeast of Mount Lyell, and the South Fork, which begins at Martha Lake in Kings Canyon National Park and flows through Florence Lake, joins a short distance downstream. The Middle Fork is considered the largest of the 3 forks. From the mountainous alpine headwaters, the San Joaquin flows generally south into the foothills of the Sierra, passing through four hydroelectric dams. It eventually emerges from the foothills at what was once the town of Millerton, the location of Friant Dam since 1942, which forms Millerton Lake.
Below Friant Dam (RM 267), the San Joaquin flows west-southwest out into the San Joaquin Valley – the southern part of the Great Central Valley – passing north of Fresno. With most of its water diverted into aqueducts, the river frequently runs dry in a 150-mile section. This lack of river water begins in the reach between Friant Dam and Mendota, where it is replenished only by the Delta-Mendota Canal (RM 205) and, when the Kings River is flooding, by the Fresno Slough. From Mendota, the San Joaquin swings northwest, passing through many different channels, some natural and some man-made. Northeast of Dos Palos, it is joined by the Fresno and Chowchilla Rivers only when they reach flood stage. Farther downstream, the Merced River empties into an otherwise dry San Joaquin (RM 118).
The majority of the river flows through quiet agricultural bottom lands, and as a result its meandering course manages to avoid most of the urban areas and cities in the San Joaquin Valley. About west of Modesto, the San Joaquin meets its largest tributary, the Tuolumne. Near Vernalis, it is joined by another major tributary, the Stanislaus River. The river passes between Manteca and Tracy, where a pair of distributaries – the Old River and Middle River – split off from the main stem just above the Sacramento-San Joaquin Delta, a huge inverted river delta formed by sediment deposits of the Sacramento and San Joaquin Rivers.
About from the mouth, the river draws abreast of the western flank of Stockton, one of the basin's largest cities. From here to the mouth, the river is dredged as part of a navigation project, the Stockton Deep Water Ship Channel. Past the head of tide, amid the many islands of the delta, the San Joaquin is joined by two more tributaries: the Calaveras River and the larger Mokelumne. The river grows to almost wide before ending at its confluence with the Sacramento River, in Antioch, forming the head of Suisun Bay. The combined waters from the two rivers then flow west through the Carquinez Strait and San Francisco Bay into the Pacific.
Discharge
The natural annual discharge of the San Joaquin before agricultural development is believed to have been between 6–7.9 million acre feet (7.4–9.7 million m3), equaling a flow of roughly . Some early estimates even range as high as 14 million acre-feet (17.3 million m3), or more than . The numerous tributaries of the San Joaquin – the Fresno, Chowchilla, Merced, Tuolumne, Mariposa Creek, Calaveras, Mokelumne, and others – flowed freely across alluvial flood plains to join the river. All of the major tributaries of the river originate in the Sierra Nevada; most of the streams that start in the Coast Range are intermittent, and contribute little to the flow of the San Joaquin. During the winter, spring, and early summer, storms and snowmelt swell the river; in 1914 – before the development of major dams and irrigation diversions – the California Department of Engineering estimated the river's flow in full flood at . In late summer and autumn, there is little water left over to replenish stream flow. Historically, groundwater seepage from Tulare Lake maintained a significant base flow in the river during the dry months – some accounts suggest over 50 percent.
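For readers who want to relate annual volumes to mean flows, a small conversion sketch (Python; the 6–7.9 million acre-foot range is from the text, and the conversion constants are standard definitions):

```python
ACRE_FOOT_FT3 = 43_560                 # cubic feet in one acre-foot
SECONDS_PER_YEAR = 365.25 * 86_400     # seconds in an average year

def maf_per_year_to_cfs(million_acre_feet: float) -> float:
    """Convert an annual volume in million acre-feet to a mean flow in cubic feet per second."""
    return million_acre_feet * 1e6 * ACRE_FOOT_FT3 / SECONDS_PER_YEAR

for maf in (6.0, 7.9):
    print(f"{maf} Maf/yr ~ {maf_per_year_to_cfs(maf):,.0f} cfs")
# roughly 8,300 cfs for 6.0 Maf/yr and 10,900 cfs for 7.9 Maf/yr
```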
Under present conditions, the average discharge of the San Joaquin River at Friant Dam is about 0.234 million acre-feet per year. The highest average annual discharge between 1941 and 2015 occurred in 1983. Water temperatures at Friant have ranged from 7 °C (44.5 °F) in February 2006 to a recorded high on October 4, 2014. This range is less extreme than that below Vernalis, where temperatures have ranged from 2 °C (35.6 °F) on December 26, 1987 to a high on August 9, 1990. The river typically ends above the Mendota Pool. Larger flows in the fall may allow the river to extend farther towards the ocean, but in recent years this has been a rare occurrence; the water is largely held behind Friant Dam.
The typical monthly flow of the San Joaquin River near the Sack Dam is 0. There have been seepage concerns below this part of the river, so current flows are restricted below the Sack Dam.
The present annual flow of the San Joaquin River near Vernalis is about 4.5 million acre-feet (5.6 million m3) per year. According to USGS stream gauge #11303500 at Vernalis, above Suisun Bay and below the mouth of the Stanislaus River, the average discharge of the San Joaquin River between 1924 and 2011 was 3.3 million acre-feet (4.0 million m3) per year. The highest recorded annual mean, 15.4 million acre-feet (19.0 million m3), occurred in 1983, while the lowest occurred in 1977. The maximum peak flow occurred on December 9, 1950, and a record low flow was recorded on August 10, 1961.
San Joaquin River monthly discharges at Vernalis
Geology
In a geologic context, the San Joaquin River can be divided into two major segments. The upper above Friant Dam in the Sierra is characterized as a steep-gradient, rocky mountain stream. Over millions of years, the upper San Joaquin, as well as the upper reaches of most of its tributaries, have eroded enormous amounts of rock and sediment from the mountains. Most of the Sierras are underlain by granitic igneous and metamorphic rock dating back to the Mesozoic Era (250–66 MYA); in addition many of the San Joaquin's tributaries flow across a foothills region of metamorphosed volcanic rock more famously known as the Mother Lode Gold Belt.
The lower part of the river, in sharp contrast, is a meandering stream flowing over Cenozoic alluvial deposits (66 MYA-present), which together comprise the flat floor of the Central Valley. The tremendous volume of sediments that underlie the lower San Joaquin River range from deep, with distance to bedrock generally increasing in a northerly direction. Prior to the uplift of the California Coast Ranges, more than of sediments were deposited at the foot of the Sierras by tidal activity, and the ancestral San Joaquin and its tributaries flowed west over this alluvial plain to the sea, dumping their own sediments onto the marine deposits. Compressional forces along the boundary of the North American and Pacific Plates between 2–4 MYA resulted in the uplift of the Coast Ranges, creating an enclosed basin today known as the Central Valley and resulted in the San Joaquin's present path to the sea.
Because of its highly permeable nature, the San Joaquin River's valley is underlain by one of the largest aquifers in the Western United States. The aquifer underlying the San Joaquin River and Tulare Basin is estimated to hold nearly of water, of which about half can be pumped economically or is clean enough for human use. The aquifer receives in excess of of inflow per year, mostly from precipitation and irrigation water seepage. Concentration of chloride and other minerals generally increases from east to west across the basin.
History
Indigenous people
Archaeological finds near the southern end of the San Joaquin Valley suggested that humans first arrived in the region as early as 12,000 but no later than 5,000 years ago. The two major ethnic groups were the Miwok people, who inhabited the northern end of the San Joaquin Valley and the Sacramento-San Joaquin Delta region, and the Yokuts tribes, scattered around the more arid southern portion of the basin. During these pre-European times, the San Joaquin River flowed through rich grasslands and sprawling marshes, flooding every few years and transforming much of the valley into lakes. At the southern end of the valley lay the vast Tulare Lake, formerly the largest freshwater lake in the western United States, which connected with the San Joaquin through a series of marshes and sloughs. The rich vegetation and wildlife surrounding these bodies of water made the San Joaquin Valley a favored home as well as a stopping-off place for other nomadic peoples. The native people, mostly hunter-gatherers, lived off this land of abundance; during the 18th century, the population of the San Joaquin Valley was estimated at more than 69,000, representing one of the greatest concentrations of native people anywhere in North America.
Before European colonization, the Yokuts occupied the entire San Joaquin Valley, from the Sacramento-San Joaquin River Delta south to the Tehachapi Mountains, including the adjacent foothills of the Sierra Nevada to the east and parts of the Coast Range to the west. In contrast, the Miwok occupied land deeper within the Sierra Nevada stretching north from the Merced River to the Mokelumne or the American, a tributary of the Sacramento, and west to the Delta region. Most of the Miwok people in the watershed were part of the appropriately named Sierra Miwok group.
The Yokuts were unique among California natives in that they were divided into true tribes. Each had a name, a language, and a territory.
Of the about 63 known Yokuts tribes, 33 lived along or around the San Joaquin River and its tributaries.
The staple food for San Joaquin Valley inhabitants was the acorn, which when ground, could be made into various foods such as cakes. Grinding the acorns was a simple process where they crushed the nuts using rocks in natural granite depressions. Many of the surviving examples of acorn milling areas can still be found in the foothills, especially around the Kaweah River area.
Spanish and Mexican influence
The first recorded non-indigenous person to see the San Joaquin River was Don Pedro Fages in 1772. Fages, accompanied by Father Juan Crespí, reached Mount Diablo near Suisun Bay on March 30 and there gazed upon the merging courses of the Sacramento, San Joaquin and Mokelumne Rivers.Johnson, Haslam and Dawson, p. 28 Another narrative does not mention Fages' name, but does say that Crespí was the one who reached Suisun Bay in 1772. During this visit, Crespí called the San Joaquin River "El Rio de San Francisco", a name that was not used widely due to the river's remoteness but persisted until the early 19th century.
In the autumn of 1772 Fages set out from Mission San Luis Obispo de Tolosa in pursuit of deserters from the Spanish army, and traveled east then north over the Tehachapi Mountains through Tejón Pass, which today carries Interstate 5 into the San Joaquin valley. After crossing the mountains, he came upon the shore of Buena Vista Lake at the southern end of the San Joaquin Valley, and gave the name Buena Vista ("beautiful view") to the pass and a nearby Native American village. However, Fages did not venture farther north, and thus did not further explore the main stem of the San Joaquin River.
The San Joaquin River region remained largely unknown except for the fact of its existence until 1806, when Spanish explorer Gabriel Moraga led the first of several subsequent expeditions into the Central Valley, in order to find potential mission sites. Moraga started out from Mission San Juan Bautista, in present-day San Benito County, on September 21 of that year and traveled east into the San Joaquin Valley. The group skirted the western foothills of the Sierra and christened many place names that remain in use today. In 1807 and 1808, Moraga set out again to the San Joaquin Valley. It was during one of these expeditions that he gave the river its present name, after St. Joachim. He also gave names to many tributaries of the river, such as the Merced River (El Río de Nuestra Señora de Merced, "River of our Lady of Mercy").
Relations between the Spanish and the Native Americans in the earlier expeditions to the valley were initially friendly, and the indigenous people began to grow accustomed to the Spanish that later came to the San Joaquin River region. As early as the 1807 Moraga expedition, it was reported that some natives were hostile and attempting to steal their horses. Indeed, when the natives began to rustle cattle and horses for food, the Spanish retaliated by burning camps and villages. Such conflict created enormous cultural loss, and violence continually escalated between the two sides, with no apparent end in sight.Pritzker, p. 157
California became part of Mexico in 1821. The new government secularized the Spanish missions and as a result the converts in the missions were no longer protected by the missionaries from exploitation. The Mexican government began to tax the missions excessively. From 1820, El Camino Viejo, a route between Los Angeles and the San Francisco Bay along the west side of the San Joaquin Valley, brought settlements from the United States into the valley. During Mexican rule, the mission lands in the San Joaquin Valley were subdivided among wealthy landowners (rancheros). The mission lands that were supposed to be given to the natives were also fraudulently taken over by American settlers. A famous leader of the natives was the Yokuts Estanislao, who led revolts against the Mexicans in the late 1820s until finally defeated in 1829 on the Stanislaus River, which bears his name today.
Early American era
The first American known to see the San Joaquin River was likely Jedediah Smith, a renowned mountain man, fur trapper and explorer. In 1826, Smith arrived in Mission San Gabriel Arcángel, California, when the region was under control of the Mexican government. As this was in violation of a law which prevented foreigners from entering California, and he could have been arrested for spying, he traveled north into the San Joaquin Valley, searching for populations of beaver. Smith noted the fertility and natural beauty of the area, and the apparent peace of the Native Americans living in the villages he passed. His expedition then turned east in an attempt to cross the Sierra Nevada. They tried to summit the range by way of both the Kings River and the American River (a tributary of the Sacramento), but it was early spring and the snow was too deep. They crossed the mountains along the Stanislaus River canyon, becoming the first recorded whites to cross the Sierra Nevada on foot. It is still disputed over whether Smith's party discovered gold on the San Joaquin or one of its tributaries. Although some of his men confirmed it, Smith did not make any mention in his journal.
In the early 1830s, a few fur trappers from the Pacific Northwest exploring southwards into the San Joaquin Valley saw an epidemic of smallpox and malaria brought unintentionally by the Europeans that had swept down the San Joaquin River corridor during the summer of 1833, killing between 50 and 75 percent of the entire native population in the valley.Exploring the Great Rivers of North America, p. 128 The outbreak continued year after year with diminishing acuteness until about 50,000–60,000 indigenous people were dead. Explorer Kit Carson noted in 1839 that "... cholera or some other fearful scourge broke out among them and raged with such fearful fatality that they were unable either to bury or burn their dead, and the air was filled with the stench of their decaying bodies."
During the time Mexico was in control of California, the San Joaquin River region was only sparsely populated, and used almost exclusively for cattle ranching. When California won independence from Mexico in 1846, becoming part of the United States the following month, a flood of American settlers descended upon the valley. Just a year before, Benjamin Davis Wilson "drove a herd of cattle from his Riverside rancho through the San Joaquin Valley to Stockton and reported seeing not a single white man". After the Americans took over, emigrants began trickling in increased numbers, establishing the towns of Kingston City, Millerton, and Fresno City. The newcomers also included a group of Mormons led by Samuel Brannan who established a settlement at the confluence of the San Joaquin and the Stanislaus, called New Hope or San Joaquin City.Rose, p. 26
The real influx came in 1848, when a gold strike on the American set off the California Gold Rush. Within one year, the population of the San Joaquin Valley increased by more than 80,000. The city of Stockton, on the lower San Joaquin, quickly grew from a sleepy backwater to a thriving trading center, the stopping-off point for miners headed to the gold fields in the foothills of the Sierra. Rough roads such as the Millerton Road, which later became the Stockton - Los Angeles Road, quickly extended the length of the valley, some following old cattle routes and Native American trails, and were served by mule teams and covered wagons. Riverboat navigation quickly became an important transportation link on the San Joaquin River, and during the "June Rise", as boat operators called the San Joaquin's annual high water levels during snowmelt, large craft could make it as far upstream as Fresno. During the peak years of the gold rush, the river in the Stockton area was reportedly crowded with hundreds of abandoned oceangoing craft, whose crews had deserted for the gold fields. The multitude of idle ships was such a blockade that on several occasions they were burned just to clear a way for riverboat traffic.
Irrigation era
Although the gold rush attracted tens of thousands of newcomers to the San Joaquin River area, deposits of the precious mineral petered out within a few years, especially on the upper reaches of the San Joaquin and its tributaries which were only suitable for placer mining. Many of these people settled in the San Joaquin Valley, most in the existing towns such as Stockton, Fresno and Bakersfield but some establishing new settlements. These included San Joaquin City, near the confluence of the San Joaquin with the Stanislaus, probably the largest of these post-gold-rush boomtowns. Established in 1851, the town maintained considerable size until 1880, when trade competition from nearby Stockton caused it to diminish. Another notable but much smaller settlement was Las Juntas, near present-day Mendota. This was a haven for criminals and fugitives, and was frequented by the infamous bandits Joaquín Murrieta and Tiburcio Vásquez.
It was in the mid-1860s that the San Joaquin River and its surrounds underwent a substantial change: the introduction of irrigated agriculture. As early as 1863, small irrigation canals were built in the Centerville area, southeast of Fresno, but were destroyed in subsequent floods. The vulnerability of the small local infrastructure led to the establishment of irrigation districts, which were formed to construct and maintain canals in certain areas of the valley. One of the first was the Robla Canal Company in the Merced River area, which went into operation in March 1876, but was soon surpassed by the Farmers Canal Company. The district built a diversion dam on the Merced, sending its water into a pair of canals still in use today.
One of the most powerful early irrigation empires was the Kern County Land and Water Company, established in 1873 by land speculator James Ben Ali Haggin, which grew to supply over through their canal system. Haggin soon ran into conflicts with other landowners over riparian water rights, as the larger districts, including his, had more financial reserves and engineering expertise, and were the first to build dams and diversions on a large scale. This resulted in the drying out of streams and rivers before they reached downstream users and sparked conflict over how much water could be allotted to whom. In Haggin's case, his company ran into problems with the Miller & Lux Corporation, run by Henry Miller and Charles Lux, who owned more than throughout the San Joaquin Valley, Tulare Basin, and other regions of California. The court battle that resulted would change water laws and rights in the San Joaquin River valley, and ended up promoting large-scale agribusiness over small farmers.
Miller and Lux were not any newer to the San Joaquin Valley than had been Haggin, but were the driving influence on valley agribusiness until well into the early 20th century. The corporation had begun acquiring land in the valley in 1858, eventually holding sway over an enormous swath reaching from the Kern River in the south to the Chowchilla River in the north. Much of the land that Miller and Lux acquired was swamp and marsh, which was considered virtually worthless. However, with their huge capital, they could afford to drain thousands of acres of it, beginning an enormous environmental change that eventually resulted in the loss of over 95 percent of the wetlands adjoining the San Joaquin River and Tulare Basin.
Henry Miller exercised enormous political power in the state, and most San Joaquin Valley inhabitants either were avid supporters of him or despised him. When Miller died in 1916, his company's holdings in the San Joaquin Valley alone included hundreds of miles of well-developed, maintained irrigation canals. As Tom Mott, the son of Miller and Lux's irrigation superintendent, put it, "Miller realized you couldn't do anything with the land unless you had the water to go with it." Perhaps more than any other individual, Miller had a lasting impact on the San Joaquin River.
By the early 20th century, so much water was being diverted off the San Joaquin River and its tributaries that the river was no longer suitable for navigational purposes. As a result, commercial navigation began a decline starting in the late 19th century and was completely gone by 1911. With over under irrigation along the river by 1900 – this figure has only grown hugely since then – the river and its tributaries became much narrower, siltier and shallower, with large consequences for the natural environment, for sustainability of water supplies in its valley, and also huge changes for water politics in the state. The San Joaquin and its tributaries seemed to give rise to just about every single possible argument over water, including such cases as "When is a river not a river?" referring to the difference between a slough and a marsh. It has been said that fights over the river have caused "some of the most bitter and longest running lawsuits ever to clog the courts. Arguably, it is the most litigated river in America."
Dams, diversions and engineering
Hydroelectric development in the early 1900s
By the early 20th century, Californian cities as far south as Los Angeles were looking to new sources for electricity because of their rapidly growing populations and industries. Two visionaries, railroad baron Henry E. Huntington and engineer John S. Eastwood, established a fledgling power company in 1902, known today as Southern California Edison, and acquired water rights to the upper San Joaquin River from the Miller and Lux Corporation. During that year, Huntington and Eastwood devised plans to utilize the water of the San Joaquin River and some of its headwaters tributaries in what would become one of the most extensive hydroelectric systems in the world, known as the Big Creek Hydroelectric Project.
Construction of the system's facilities, which included Mammoth Pool and Redinger Dams on the San Joaquin and four additional reservoirs on its tributaries with a total storage capacity of , started in 1911. A total of eight dams and tunnels, the longest of which stretches , and nine powerhouses with a total installed capacity of 1,014 MW were built in stages spanning the 20th century, with the last powerhouse coming on line in 1987. The consistent use and reuse of the waters of the San Joaquin River, its South Fork, and the namesake of the project, Big Creek, over a vertical drop of , have over the years inspired a nickname, "The Hardest Working Water in the World".
Early 1900s hydroelectric development in the upper San Joaquin River basin was not limited to Southern California Edison. In 1910, the San Joaquin Electric Company built a dam on Willow Creek, a tributary of the San Joaquin River, forming Bass Lake as part of the Crane Valley hydroelectric project. Water from Bass Lake was diverted to a powerhouse on the San Joaquin River beginning in 1917, and two more powerhouses were added in 1919, increasing the total generating capacity to about 28 MW. The Crane Valley project and San Joaquin Electric were purchased by the San Joaquin Light and Power Company in 1909, which was in turn purchased by Pacific Gas and Electric Company (PG&E) in 1936. In 1920, Kerckhoff Dam was completed on the San Joaquin River about southwest of Big Creek as part of PG&E's Kerckhoff hydroelectric project. The dam initially operated with a capacity of 38 MW in the Kerckhoff No. 1 Powerhouse on Millerton Lake. In 1983, the 155 MW Kerckhoff No. 2 Powerhouse was added, bringing the total capacity to 193 MW.
Stockton Ship Channel
The San Joaquin was once navigable by steamboats as far upstream as Fresno, but agricultural diversions have greatly slowed and shallowed the river. In addition, the slower current has caused the river to deposit in its bed large amounts of silt that formerly washed out to sea, further reducing the depth. In the late 19th century, the city of Stockton, once an important seaport for the San Joaquin Valley, found itself increasingly landlocked because the San Joaquin River, its main connection to the sea, was rapidly silting up. The early 20th century saw proposals to maintain a minimum depth in the lower river by dredging, but these were interrupted by the onset of World War I. In 1925 the city put forth a $1.3 million bond for dredging the lower San Joaquin from its mouth to the Port of Stockton – a distance of by river (Rose, p. 109).
In 1926, Stockton pooled its finances with the federal and state governments for a total of $8.2 million. Construction on the channel, which included widening and deepening the riverbed and cutting off meanders and oxbow lakes, began in earnest in 1928. These included major cuts at Hog Island, Venice Island and Mandeville Island, plus five smaller straightening projects. The navigation project shortened the river length by and deepened it to . Additional deepening work was carried out in 1968 and 1982. Today, the navigation channel, known as the Stockton Deep Water Ship Channel, can handle fully loaded vessels of up to and up to long. However, the navigation works have unexpectedly led to low dissolved oxygen levels in the lower San Joaquin River, which has hurt fish populations. This is believed to be a result of the combination of the abrupt geometry change from the shallow river upstream of Stockton to the deep water channel, in addition to pollution from the harbor and city and poor tidal mixing.
Federal and state projects
Central Valley Project
As early as the 1870s, state and federal agencies were already looking at the Central Valley as an area in need of a large water transport project. In 1931 California's Department of Water Resources came up with the State Water Plan, which entailed the construction of dams and canals to transport water from the Sacramento River to the rapidly dwindling San Joaquin. The project was still in its planning stages when the Great Depression hit the United States, and California was unable to raise the funds necessary for building the various facilities. As a result, the project was transferred to the federal government and switched hands several times between the U.S. Army Corps of Engineers (USACE) and U.S. Bureau of Reclamation (USBR) before finally being authorized in the Rivers and Harbors Act of 1937 as a USBR undertaking and part of the New Deal, a series of large-scale reforms and construction projects intended to provide jobs for the millions of unemployed during the Depression.
Construction of Friant Dam, the main dam on the San Joaquin River, began in 1937 and was completed in 1942. The dam serves for irrigation and flood control storage, but its main purpose is to divert water into the Madera Canal, which runs northwest along the San Joaquin Valley to the Chowchilla River, and the Friant-Kern Canal, which carries San Joaquin water all the way south into the Tulare Basin, terminating at the Kern River. Both are irrigation and municipal supply canals serving primarily agricultural interests. Water from the Friant Dam supplies almost of farmland in Fresno, Kern, Madera, and Tulare counties. The diversion of water off the San Joaquin at Friant Dam leaves little more than a trickle below the dam in most years, except for releases serving farms in the immediate area downstream of Friant.
A key point for irrigation water distribution on the San Joaquin, despite its small size, is Mendota Dam. Built in 1871 at the juncture of the San Joaquin River and Fresno Slough, it initially served to divert water into the Main Canal, an irrigation canal for the riverside bottomlands in the San Joaquin Valley. In 1951, Mendota became the terminus of the Delta-Mendota Canal, a USBR project which conveys up to for from the mouth of the Sacramento River to the usually dry San Joaquin at that point. Water from Mendota is distributed in two directions: released into the San Joaquin for downstream diversions at the Sack Dam, another small diversion dam; and into Fresno Slough during the dry season, when no water is flowing in from the Kings River. The latter sends water into the Tulare Basin through the natural channel of the slough, which in essence conveys water in either direction depending on the time of year – north into the San Joaquin during the rainy season, south into the Tulare during the dry months.
Eastside Bypass
Even with the presence of Friant and numerous other flood control dams, large floods still caused significant damage along the San Joaquin River all the way through the late 1950s. The passage of the Flood Control Act of 1944 included provisions for the construction of a levee system along part of the San Joaquin River, but valley farmers were not entirely satisfied. After years of lobbying, farmers convinced the state government to authorize a massive flood-control system of diversion channels and levees whose main component is the Eastside Bypass, so named because of its location east of and parallel to the San Joaquin. Groundbreaking of the huge project was in 1959 and construction was finished in 1966.
The bypass system starts with the Chowchilla Canal Bypass, which can divert up to off the San Joaquin, a few miles above Mendota. After intercepting the flow of the Fresno River, the system is known as the Eastside Bypass, which runs northwest, crossing numerous tributaries: Berenda and Ash Sloughs, the Chowchilla River, Owens Creek and Bear Creek. Near the terminus, the bypass channel has a capacity of roughly . The Eastside Bypass ends just upstream of the Merced River confluence, where the San Joaquin levee system is better designed. However, the levees on the bypass channel are generally better built than those on the San Joaquin mainstem, and thus the channel of the San Joaquin runs dry in some places where the entire flow has been diverted to the bypass system.
Proposed dams
Although fairly large with a capacity of , Millerton Lake, the reservoir of Friant Dam, is small compared to other reservoirs in the San Joaquin basin, such as Don Pedro and Pine Flat. The Bureau of Reclamation in conjunction with the California Department of Water Resources has proposed the construction of a new dam on the San Joaquin, Temperance Flat Dam, a few miles upstream of Friant. The proposed $1.2-3.5 billion dam would stand high and create a reservoir of , well over twice the capacity of Millerton Lake. Proponents of the project cite numerous benefits: flood control, increased storage, hydroelectric potential, and capacity to provide a greater flow in the downstream river during the dry season. It would also give dam operators the advantage of being able to maintain the river at a lower temperature due to the reservoir's great depth. The new reservoir would provide an estimated annual yield of . In November 2014 the dam received $171 million of state funding from Proposition 1A, though project backers had sought $1 billion in funding.
However, opponents assert that the upper San Joaquin River, which is already controlled by dozens of smaller reservoirs upstream of Millerton Lake, will not provide sufficient discharge to fill the reservoir except in very wet years. The new reservoir would flood of the San Joaquin River including whitewater runs, fishing areas and historic sites. It would also inundate two Big Creek Hydro powerhouses, causing a potential net loss of electricity generation. There would be significant evaporation losses from the reservoir, and the water required to fill it would put further pressure on already stressed water resources in the San Joaquin River basin.
Ecology and environment
Hundreds of years ago, the San Joaquin flowed freely through a grass and marsh-dominated region variously known as the "California prairie", "California annual grassland", or "Central Valley grasslands". It is widely believed that the dominant grass species throughout the San Joaquin River valley and Tulare Basin, as well as the Sacramento Valley, the Sierra foothills and Coast Ranges, was Nassella pulchra, a type of bunchgrass more commonly known as purple needlegrass. Today, this vegetation community exists only in isolated pockets because of development of the valley for agriculture, and in much of the remnant open areas where it once thrived, now grows introduced flora such as annual rye and wild oat. The vegetation communities created by the introduced grasses are sometimes referred to as "valley grassland", which is highly seasonal but is spread throughout the Central Valley from near Redding to south of Bakersfield. These grasses all thrive in the Mediterranean climate that dominates much of the San Joaquin Valley.
The San Joaquin River and its marshes and wetlands provide a critical resting and breeding stop for migratory birds along the Pacific Flyway. Once, the seasonal bird populations in the San Joaquin basin were immense, especially in the now-dry Tulare Lake region: "It took something different, though, to capture the sound of the blue sky as it turned dark and deafening from the wings and cries of millions of native and migratory birds – Canada geese, mallards, swans, pelicans, cranes, teal, and curlews. How to mimic the sudden flight of flocks so immense they extinguished the sun? One of the first white men to camp along the lake could think of only one noise, the roar of a freight train, that compared with the takeoff of the birds." Historically, the grasslands and the fringes of the great marshes and lakes provided habitat for large grazing animals including pronghorn, mule deer and the endemic tule elk, as well as predators such as the San Joaquin kit fox; all of these species have seen dramatic population declines as their native habitat has fallen under the plow.
Human activities have replaced or altered over 95 percent of the historic wetlands as well as the California oak woodland habitat, which originally occurred along stream and river corridors in the foothills, and the tule grass that once thrived in huge stands on the edges of marshes and lakes. Some of the richest remaining marsh habitats are in the Sacramento-San Joaquin Delta, which, despite significant agricultural and infrastructure development, has retained many of its original swamps and backwaters. Before the 19th century, the delta was a region of numerous islands of nutrient-rich peat, alluvial deposits, winding channels and waterways. Since then, most of the Delta islands have been cultivated, and consistent use of the land has resulted in subsidence, in some cases up to . Water diversions from the rivers feeding it have increased salinity, which has in turn caused declines in fish populations that once thrived in the region.
As defined by the World Wildlife Fund, the San Joaquin River watershed is part of the Sacramento-San Joaquin freshwater ecoregion, which supports almost 40 species of freshwater fish. These include several types of lampreys, sturgeons, sunfish, perch and various anadromous fish species such as salmon and steelhead. Some of these fish are believed to have descended from fishes of the Columbia Basin in geologically ancient times, when the upper reaches of the Sacramento River watershed were connected with that of the Snake River. Up to 75 percent of the historic species were endemic to the Sacramento-San Joaquin basin. Most native fish stocks have suffered because of predation by introduced species and dam construction. In a study from 1993 to 1995, it was found that the main stem of the San Joaquin River was mainly populated by fathead minnow, red shiner, threadfin shad and inland silverside, all of which were introduced. Downstream portions of the river's main tributaries were populated mainly by largemouth and smallmouth bass, redear sunfish and white catfish, while native species have survived relatively well in the upper reaches of the river and its tributaries, which also play host to introduced brown trout.
Pollution
For its size, the San Joaquin River is one of the most polluted rivers in the United States, especially in its lower course. Years of pesticides and fertilizers being applied to the surrounding lands as well as municipal runoff have led to elevated levels of selenium, fluoride, nitrates and other substances in the river and its tributaries; pesticide pollution is now considered "ubiquitous" to the San Joaquin River system. The selenium is believed to originate from soils on the west side of the valley and in the Coast Ranges, which are rich with the element. Additionally, the San Joaquin is suffering chronic salinity problems due to high levels of minerals being washed off the land by irrigation practices. Abandoned mines contribute toxic acid mine drainage to some tributaries of the river. One of the worst environmental disasters was at Kesterson Reservoir, a disposal site for agricultural drain water which doubled as a wildlife refuge. Initially, animals and plants thrived in the artificial wetlands that were created here, but in 1983, it was found that birds had suffered severe deformities and deaths due to steadily increasing levels of chemicals and toxins. In the next few years, all the fish species died except for the mosquito fish, and algae blooms proliferated in the foul water. These pollution problems have not only harmed San Joaquin River ecology, but also degrade the sources of most of the large aqueducts in the state – the California and Delta-Mendota Canals, for example.
Salmon
Before irrigation development, the San Joaquin River and its tributaries once supported the third largest run of Pacific salmon in California, including prodigious spring, summer, fall, and late-fall runs of chinook salmon. The California Department of Fish and Game estimated in the 1930s that the historic salmon run was likely in the vicinity of 200,000 to 500,000 spawners annually, but by the mid-20th century, reduction in river flows led to the run dropping to about 3,000-7,000. Some sources put the historic populations as high as three hundred thousand, but this is highly unlikely because of the limited habitat available in the watershed.
Construction of dams in the San Joaquin watershed has led to nearly all of the good spawning streams, located higher in the mountains near the headwaters of the San Joaquin and its tributaries, being cut off to the salmon. The remaining areas left for the salmon to spawn in are undesirable. Prior to the building of the Friant Dam in 1944, the San Joaquin River was believed to have over 6,000 miles of streambed suitable and available to spawning salmon. However, in 1993 it was estimated that there was just 300 miles.
This significant loss of suitable spawning habitat results from a lack of spawning-sized gravel recruitment from lateral and upstream sources, caused by the installation of manmade barriers that block this gravel from passing naturally downstream. Without recruitment, high flow releases scour gravel from spawning beds, so that the beds gradually become smaller and smaller. The smaller pieces of gravel are more easily picked up and moved downstream, leaving behind larger pieces that the salmon are unable to move to bury their eggs in a process called redd construction. This further crowds the salmon into smaller and smaller suitable spawning beds, which increases the likelihood of redd superimposition. This is when spawners construct their redds, or pockets in the gravel to lay their eggs in, on top of preexisting redds, thereby killing or burying some of the eggs in the pre-existing redds. The EPA has also stated that the remaining gravel is "probably fairly highly embedded and therefore of reduced quality and unavailable to fish for spawning". The temperature of the water can also affect the level of spawning. The best temperatures for spring- and fall-run Chinook salmon are between 42 °F and 57 °F. However, in 77% of the years studied by the San Joaquin River Restoration Program, water temperatures were elevated above 57 °F and abnormalities and egg morbidity rates increased.
The introduction of predatory fish species such as bluegill and various types of bass that prey on salmon smolt is also a major contributor to the decline. The San Joaquin's point of divergence with the distributary Old River has historically created a sort of bottleneck for out-migrating salmon because the Old River in turn branches into many side-channels and sloughs of the delta as well as various canal intakes. Recent years have seen work by the California Department of Water Resources and the California Department of Fish and Game to construct and manage temporary rock barriers at the head of the Old River in order to keep fish in the main channel of the San Joaquin River. In the fall of 2009, just 2,236 salmon returned to the entire river system to spawn; this has led to a government ban on salmon fishing off the coasts of northern California and Oregon. Recent years have seen efforts to bring back some of the salmon population to the San Joaquin and some of its tributaries; these include the establishment of fish hatcheries. Plans have been finalized for a $14.5 million hatchery near Friant Dam as part of a $20 million federal restoration project for the fish.
In late 2012, even though dams still prevented salmon from navigating the entire river, attempts were made to get salmon to lay eggs in the upper portion of the river near Friant Dam.
The closest example of a revived salmon population is Butte Creek near Chico.
Groundwater overdraft
Because of the heavy demand of water for agriculture and the insufficient flow of the San Joaquin River, groundwater in the San Joaquin Valley's rich aquifer has been an important source of irrigation supply since the late 19th century. Historically, surface water was able to provide all the needed supply, but as agricultural lands spread across more and more of the valley, groundwater pumping became increasingly common. Also, with the introduction of better technology that allowed farmers to dig deeper wells and install electrical pumps, groundwater was seen as an often cheaper and easily accessible source compared to the river. Water imported from Northern California through the Central Valley Project managed to stave off increasing rates of withdrawal for several years, until the 1977 California drought heavily decreased water supplies from these sources for a while and caused many farmers to return to groundwater pumping.
Groundwater withdrawal reached its peak during the 1960s with more than being drawn from the aquifer each year – over twice the present flow of the San Joaquin River – which accounted for 69.6% of all the groundwater pumped within the Central Valley, and nearly 14 percent of all the groundwater withdrawn in the United States. In a 1970 study, it was found that more than of the San Joaquin River valley had been affected by land subsidence in excess of . The maximum drop in elevation was near Mendota, the northward bend of the San Joaquin River, at over . In some areas, the water table has declined more than vertically, forcing farmers to sink their wells as deep as to hit more abundant pockets of the underground water.
Even after better practices and new sources of water through federal projects, groundwater pumping continues at a tremendous rate. The San Joaquin Valley aquifer lost about from 1961 to 2003, which despite being a dramatic reduction from the 1960s, still amounts to nearly per year. The subsidence caused by groundwater withdrawal has threatened infrastructure in the San Joaquin Valley, including the California Aqueduct, a State Water Project facility that conveys water from the delta region to coastal-central and southern California. Subsidence has also damaged highways and power lines as well as causing some areas to be more susceptible to flooding.
Reconnecting to the ocean
In 2009, the Bureau of Reclamation began to release water from Friant Dam in an effort to restore two once-dry stretches of the San Joaquin of about . These two reaches are from below the dam to Mendota Pool, and from the Sack Dam, a diversion dam approximately downstream, to the confluence with the Merced River. The flows were initially in that year. The increased flows will not only begin to restore large areas of desiccated riverside habitat below the dam, but will serve the primary purpose of restoring salmon runs in the upper San Joaquin watershed. The restoration flows, however, will cause a 12 to 15 percent reduction of water provided by the Friant Division of the CVP with complaints from irrigators in the valley as a result. There has also been a lawsuit regarding damage to farmland west of the San Joaquin, near the town of Los Baños, claiming that the flows are seeping through levees that have not seen use in years due to the drying out of the river channel.
In addition to the dry reaches, the higher discharge in the San Joaquin will help restore a total of of river, figuring in stretches with low or polluted flows. It is also hoped that the water will help dilute contaminants in the river caused by pesticides and fertilizers that are applied to the surrounding farmland. In turn, the boost in flow could assist restoration efforts and help flush out saline water in the Sacramento-San Joaquin Delta, where water is pumped into state aqueducts which provide water supplies to two-thirds of Californians.
Phase One improvements include the following: the capacity of the San Joaquin River east of Los Banos will be increased to route up to of water by December 2011; salmon will be re-introduced to the river in 2012; most habitat conservation work on the San Joaquin River will be completed by the end of 2013; interim flows of up to will be provided; channel capacity will be increased to handle up to from Friant Dam to Mendota Dam by December 2013; and a fish ladder will be added to the Sack Dam.
Phase Two improvements include the following: the USBR will begin releasing up to , or "full restoration flows", from Friant Dam in 2014, with flows depending on whether the year is wet, dry, or intermediate in precipitation. Additional rehabilitation work is projected to continue until 2016 to ensure that the river can carry of water, from rainfall and from releases at Friant Dam, all the way to Bear Creek and the Eastside Bypass. The higher channel capacity is also intended to accommodate rainfall and prevent flooding.
The total cost of the restoration, one of the largest river recovery efforts in the United States, could be up to $800 million or even a billion dollars. Approximately $330 million will be paid by Central Valley farmers, and the state and federal governments will provide the remaining funding.
The San Joaquin River was last connected to the ocean in 2010. "Reconnecting" refers to restoring a continuous flow through the portion from Friant Dam to where the San Joaquin River joins the Merced River above Vernalis; this may happen in 2016 or 2017.
Wetlands
Efforts have been under way to restore wetlands along the San Joaquin River as well as on the historic shores of Tulare Lake. These primarily entail the cleanup of existing wetlands and procuring additional water supplies, rather than converting agricultural land back into the original swamp and marsh. Since wetlands can provide a natural form of flood control – wetlands act like sponges, absorbing flood flows during the rainy season and releasing the accumulated water during the dry season – and also can filter out many forms of toxins, especially from fertilizers, they are important to maintaining good water quality in the San Joaquin River. Wetlands preservation group Ducks Unlimited was awarded a $1 million grant in 2005 as part of the North American Wetlands Conservation Act in order to conduct restoration work on of swamps and marshes throughout the San Joaquin Valley.
Watershed
At , the San Joaquin River watershed proper drains a fair swath of inland central California, an area comparable to the size of the Upper Peninsula of Michigan. If combined with the Tulare Basin, which historically (and still rarely) experiences northerly outflow to the San Joaquin River, it would be the largest single drainage basin entirely in the state. The San Joaquin River basin is roughly synonymous with the San Joaquin Valley, and is bounded by the Sierra Nevada to the east, the Coast Ranges on the west, and the Tehachapi Mountains on the south.
The San Joaquin Valley's major southeast–northwest axis runs roughly parallel with the Pacific coast of California; it measures covering all or parts of seventeen California counties stretching from north of Lodi to well south of Bakersfield. Most of the elevation change in the San Joaquin occurs within the first above Friant Dam. Its highest headwaters are at over , but by the time the river reaches the foothills, it is a mere above sea level.
On the west and northwest, the San Joaquin watershed borders those of rivers draining into the Pacific, while beyond almost all of the other divides lie endorheic basins, mostly of the Great Basin. To the north, a low series of ridges separates the San Joaquin River basin from that of the Sacramento River. The Coast Ranges bound the watershed on the west and border on the drainages of the Pajaro River, Salinas River and the endorheic Carrizo Plain. On the south, the Tehachapis wall off the Tulare Basin from the Mojave Desert. To the east, the Sierra Nevada separate the San Joaquin drainage from those of multiple smaller rivers that terminate in various Great Basin lakes. From north to south, these are the Carson, Walker, and Owens Rivers. The sloping alluvial fan of the Kings River divides the northern San Joaquin Valley from the Tulare Basin.
The overwhelming majority of the economic base in the San Joaquin River watershed is provided by agriculture. The valley is widely considered one of the most productive farming regions in the world, and the top four U.S. counties ranked by agricultural production are all located in the San Joaquin watershed and Tulare Basin. The crops grown in these four counties alone are valued at over $12.6 billion annually, while the production of the entire valley is estimated at more than $14.4 billion. The main crop in the valley by annual sales is cotton, but more than 200 types of produce are grown along the San Joaquin River and in the Tulare Basin, including rice, almonds and lettuce. Livestock raising is also a major business in the valley. This prodigious output has earned the basin many names, including the "breadbasket" or "salad bowl" of the United States.
The San Joaquin River watershed has a human population of roughly 4,039,000, about 1.9 million of whom live within the section of the watershed not including the Tulare Basin. The largest cities are Bakersfield, near the south end of the valley on the Kern River; Fresno, roughly in the geographic center; Modesto, on the Tuolumne River; and Stockton, on the southeast fringe of the Delta region. Other major cities include Visalia, Tulare, Hanford, Porterville, Madera, Merced, Turlock, Manteca and Lodi. Population growth is among the highest in the state of California and more than twice the U.S. average. In Madera County, near the geographic middle of the basin, growth was 51.5% between 1990 and 2003, the highest in the San Joaquin basin. Most of the major cities lie on the State Route 99 corridor, which runs along the entirety of the San Joaquin Valley and forms the primary thoroughfare for the valley. Interstate 5 provides the major transportation route for the west side of the valley.
Land cover in the watershed is predominantly agriculture and forest, and large expanses of shrubland and semiarid foothill terrain also occupy portions of the basin in addition to a growing urban percentage. Irrigated land covers 30% of the watershed, followed by forested areas, including national forest and park land, which encompass 26.8% of the total land area. Built-up areas use a much smaller percentage of the watershed, at just 1.9%. In the San Joaquin's direct drainage region (not including the Tulare Basin) agricultural cover was 19.2%, forests covered 28.4%, and urban areas occupied 2.4% as of 1995. The basin's main population centers are in the north and south and population density generally increases from west to east. In fact, despite the relatively small percentage of developed areas, more than 50% of the population lives in the watershed's four largest cities: Fresno, Bakersfield, Stockton and Modesto.
Tributaries
Seven major tributaries flow directly into the San Joaquin River, all of which run from the Sierra Nevada westwards into the main stem. In addition, some of the discharge of the Kings River also enters the San Joaquin directly (but seasonally) through a distributary. Of these, the Tuolumne River is the largest in any respect: longest, greatest drainage basin, and highest discharge. The Merced River is the second largest by length and drainage basin, but the Mokelumne River has a greater flow. Tributaries are listed below proceeding from the mouth upstream, with their respective main-stem length, watershed and discharge noted. Rivers of the Tulare Basin are noted below the San Joaquin's direct tributaries with their individual data. Most of the tributaries had much larger flows before irrigation diversions – for example, the Tuolumne's historic discharge was almost 48% higher than it is now.
Tributaries in the Tulare Lake basin
See also
List of rivers of California
List of most-polluted rivers
Fine Gold Creek
San Joaquin River National Wildlife Refuge
Notes
References
Works cited
Gudde, Erwin G.; Bright, William (2004). California Place Names: The Origin and Etymology of Current Geographical Names. University of California Press.
External links
California Central Valley Grasslands
Great Flood of 1862
Tulare Lake Restoration - Fiction or Fact?
Using Tulare Lake For Water Storage
California Water Plan Update 2005 San Joaquin Hydrologic Region Chapter published by the California Department of Water Resources
Map of San Joaquin Reclamation and Levee Districts
San Joaquin River TMDL Description of DO problems on the San Joaquin River
SanJoaquinBasin.com - A website devoted to the San Joaquin Basin
2006 Agreement
San Joaquin Valley
San Joaquin
Tributaries of San Pablo Bay
Geography of the San Joaquin Valley
Geography of the San Francisco Bay Area
San Francisco Bay watershed
Rivers of Contra Costa County, California
Rivers of Fresno County, California
Rivers of Kern County, California
Rivers of Madera County, California
Rivers of Merced County, California
Rivers of Stanislaus County, California
Rivers of San Joaquin County, California
Rivers of Sacramento County, California
Central Valley Project
Geography of the Central Valley (California)
Rivers of Northern California
Rivers of Southern California | San Joaquin River | [
"Engineering"
] | 12,094 | [
"Irrigation projects",
"Central Valley Project"
] |
304,942 | https://en.wikipedia.org/wiki/Heart%20rate | Heart rate is the frequency of the heartbeat measured by the number of contractions of the heart per minute (beats per minute, or bpm). The heart rate varies according to the body's physical needs, including the need to absorb oxygen and excrete carbon dioxide. It is also modulated by numerous factors, including (but not limited to) genetics, physical fitness, stress or psychological status, diet, drugs, hormonal status, environment, and disease/illness, as well as the interaction between these factors. It is usually equal or close to the pulse rate measured at any peripheral point.
The American Heart Association states the normal resting adult human heart rate is 60–100 bpm. An ultra-trained athlete would have a resting heart rate of 37–38 bpm. Tachycardia is a high heart rate, defined as above 100 bpm at rest. Bradycardia is a low heart rate, defined as below 60 bpm at rest. When a human sleeps, a heartbeat with rates around 40–50 bpm is common and considered normal. When the heart is not beating in a regular pattern, this is referred to as an arrhythmia. Abnormalities of heart rate sometimes indicate disease.
Physiology
While heart rhythm is regulated entirely by the sinoatrial node under normal conditions, heart rate is regulated by sympathetic and parasympathetic input to the sinoatrial node. The accelerans nerve provides sympathetic input to the heart by releasing norepinephrine onto the cells of the sinoatrial node (SA node), and the vagus nerve provides parasympathetic input to the heart by releasing acetylcholine onto sinoatrial node cells. Therefore, stimulation of the accelerans nerve increases heart rate, while stimulation of the vagus nerve decreases it.
As water and blood are incompressible fluids, one of the physiological ways to deliver more blood to an organ is to increase heart rate. Normal resting heart rates range from 60 to 100 bpm. Bradycardia is defined as a resting heart rate below 60 bpm. However, heart rates from 50 to 60 bpm are common among healthy people and do not necessarily require special attention. Tachycardia is defined as a resting heart rate above 100 bpm, though persistent rest rates between 80 and 100 bpm, mainly if they are present during sleep, may be signs of hyperthyroidism or anemia (see below).
Central nervous system stimulants such as substituted amphetamines increase heart rate.
Central nervous system depressants or sedatives generally decrease the heart rate, although some atypical agents, such as ketamine, can cause stimulant-like effects including tachycardia.
There are many ways in which the heart rate speeds up or slows down. Most involve the release of stimulant-like endorphins and hormones in the brain, some of which are triggered by the ingestion and processing of drugs such as cocaine or atropine.
This section discusses target heart rates for healthy persons, which would be inappropriately high for most persons with coronary artery disease.
Influences from the central nervous system
Cardiovascular centres
The heart rate is rhythmically generated by the sinoatrial node. It is also influenced by central factors through sympathetic and parasympathetic nerves. Nervous influence over the heart rate is centralized within the two paired cardiovascular centres of the medulla oblongata. The cardioaccelerator regions stimulate activity via sympathetic stimulation of the cardioaccelerator nerves, and the cardioinhibitory centers decrease heart activity via parasympathetic stimulation as one component of the vagus nerve. During rest, both centers provide slight stimulation to the heart, contributing to autonomic tone. This is a similar concept to tone in skeletal muscles. Normally, vagal stimulation predominates as, left unregulated, the SA node would initiate a sinus rhythm of approximately 100 bpm.
Both sympathetic and parasympathetic stimuli flow through the paired cardiac plexus near the base of the heart. The cardioaccelerator center also sends additional fibers, forming the cardiac nerves via sympathetic ganglia (the cervical ganglia plus superior thoracic ganglia T1–T4) to both the SA and AV nodes, plus additional fibers to the atria and ventricles. The ventricles are more richly innervated by sympathetic fibers than parasympathetic fibers. Sympathetic stimulation causes the release of the neurotransmitter norepinephrine (also known as noradrenaline) at the neuromuscular junction of the cardiac nerves. Norepinephrine opens chemical- or ligand-gated sodium and calcium ion channels, allowing an influx of positively charged ions; this shortens the repolarization period, speeding the rate of depolarization and contraction and resulting in an increased heart rate.
Norepinephrine binds to the beta-1 adrenergic receptor. Some high blood pressure medications (beta blockers) block these receptors and so reduce the heart rate.
Parasympathetic stimulation originates from the cardioinhibitory region of the brain with impulses traveling via the vagus nerve (cranial nerve X). The vagus nerve sends branches to both the SA and AV nodes, and to portions of both the atria and ventricles. Parasympathetic stimulation releases the neurotransmitter acetylcholine (ACh) at the neuromuscular junction. ACh slows HR by opening chemical- or ligand-gated potassium ion channels to slow the rate of spontaneous depolarization, which extends repolarization and increases the time before the next spontaneous depolarization occurs. Without any nervous stimulation, the SA node would establish a sinus rhythm of approximately 100 bpm. Since resting rates are considerably less than this, it becomes evident that parasympathetic stimulation normally slows HR. This is similar to an individual driving a car with one foot on the brake pedal. To speed up, one need merely remove one's foot from the brake and let the engine increase speed. In the case of the heart, decreasing parasympathetic stimulation decreases the release of ACh, which allows HR to increase up to approximately 100 bpm. Any increases beyond this rate would require sympathetic stimulation.
Input to the cardiovascular centres
The cardiovascular centres receive input from a series of visceral receptors with impulses traveling through visceral sensory fibers within the vagus and sympathetic nerves via the cardiac plexus. Among these receptors are various proprioceptors, baroreceptors, and chemoreceptors, plus stimuli from the limbic system, which normally enable the precise regulation of heart function via cardiac reflexes. Increased physical activity results in increased rates of firing by various proprioceptors located in muscles, joint capsules, and tendons. The cardiovascular centres monitor these increased rates of firing, suppressing parasympathetic stimulation or increasing sympathetic stimulation as needed in order to increase blood flow.
Similarly, baroreceptors are stretch receptors located in the aortic sinus, carotid bodies, the venae cavae, and other locations, including pulmonary vessels and the right side of the heart itself. Rates of firing from the baroreceptors represent blood pressure, level of physical activity, and the relative distribution of blood. The cardiac centers monitor baroreceptor firing to maintain cardiac homeostasis, a mechanism called the baroreceptor reflex. With increased pressure and stretch, the rate of baroreceptor firing increases, and the cardiac centers decrease sympathetic stimulation and increase parasympathetic stimulation. As pressure and stretch decrease, the rate of baroreceptor firing decreases, and the cardiac centers increase sympathetic stimulation and decrease parasympathetic stimulation.
There is a similar reflex, called the atrial reflex or Bainbridge reflex, associated with varying rates of blood flow to the atria. Increased venous return stretches the walls of the atria where specialized baroreceptors are located. However, as the atrial baroreceptors increase their rate of firing and as they stretch due to the increased blood pressure, the cardiac center responds by increasing sympathetic stimulation and inhibiting parasympathetic stimulation to increase HR. The opposite is also true.
Increased metabolic byproducts associated with increased activity, such as carbon dioxide, hydrogen ions, and lactic acid, plus falling oxygen levels, are detected by a suite of chemoreceptors innervated by the glossopharyngeal and vagus nerves. These chemoreceptors provide feedback to the cardiovascular centers about the need for increased or decreased blood flow, based on the relative levels of these substances.
The limbic system can also significantly impact HR related to emotional state. During periods of stress, it is not unusual to identify higher than normal HRs, often accompanied by a surge in the stress hormone cortisol. Individuals experiencing extreme anxiety may manifest panic attacks with symptoms that resemble those of heart attacks. These events are typically transient and treatable. Meditation techniques have been developed to ease anxiety and have been shown to lower HR effectively. Doing simple deep and slow breathing exercises with one's eyes closed can also significantly reduce this anxiety and HR.
Factors influencing heart rate
Using a combination of autorhythmicity and innervation, the cardiovascular center is able to provide relatively precise control over the heart rate, but other factors can impact on this. These include hormones, notably epinephrine, norepinephrine, and thyroid hormones; levels of various ions including calcium, potassium, and sodium; body temperature; hypoxia; and pH balance.
Epinephrine and norepinephrine
The catecholamines, epinephrine and norepinephrine, secreted by the adrenal medulla form one component of the extended fight-or-flight mechanism. The other component is sympathetic stimulation. Epinephrine and norepinephrine have similar effects: binding to the beta-1 adrenergic receptors, and opening sodium and calcium ion chemical- or ligand-gated channels. The rate of depolarization is increased by this additional influx of positively charged ions, so the threshold is reached more quickly and the period of repolarization is shortened. However, massive releases of these hormones coupled with sympathetic stimulation may actually lead to arrhythmias. There is no parasympathetic stimulation to the adrenal medulla.
Thyroid hormones
In general, increased levels of the thyroid hormones thyroxine (T4) and triiodothyronine (T3) increase the heart rate; excessive levels can trigger tachycardia. The impact of thyroid hormones is typically of a much longer duration than that of the catecholamines. Triiodothyronine, the physiologically active form, has been shown to directly enter cardiomyocytes and alter activity at the level of the genome. It also impacts the beta-adrenergic response similarly to epinephrine and norepinephrine.
Calcium
Calcium ion levels have a great impact on heart rate and myocardial contractility: increased calcium levels cause an increase in both. High levels of calcium ions result in hypercalcemia and excessive levels can induce cardiac arrest. Drugs known as calcium channel blockers slow HR by binding to these channels and blocking or slowing the inward movement of calcium ions.
Caffeine and nicotine
Caffeine and nicotine are both stimulants of the nervous system and of the cardiac centres causing an increased heart rate. Caffeine works by increasing the rates of depolarization at the SA node, whereas nicotine stimulates the activity of the sympathetic neurons that deliver impulses to the heart.
Effects of stress
Both surprise and stress induce a physiological response that elevates heart rate substantially. In a study conducted on 8 female and male student actors ages 18 to 25, their reaction to an unforeseen occurrence (the cause of stress) during a performance was observed in terms of heart rate. In the data collected, there was a noticeable trend between the location of actors (onstage and offstage) and their elevation in heart rate in response to stress; the actors present offstage reacted to the stressor immediately, demonstrated by their immediate elevation in heart rate the minute the unexpected event occurred, but the actors present onstage at the time of the stressor reacted in the following 5 minute period (demonstrated by their increasingly elevated heart rate). This trend regarding stress and heart rate is supported by previous studies; negative emotion/stimulus has a prolonged effect on heart rate in individuals who are directly impacted.
Among the actors present onstage, a reduced startle response has been associated with a passive defense, and a diminished initial heart rate response has been predicted to indicate a greater tendency toward dissociation. Current evidence suggests that heart rate variability can be used as an accurate measure of psychological stress and may be used for an objective measurement of psychological stress.
Factors decreasing heart rate
The heart rate can be slowed by altered sodium and potassium levels, hypoxia, acidosis, alkalosis, and hypothermia. The relationship between electrolytes and HR is complex, but maintaining electrolyte balance is critical to the normal wave of depolarization. Of the two ions, potassium has the greater clinical significance. Initially, both hyponatremia (low sodium levels) and hypernatremia (high sodium levels) may lead to tachycardia. Severe hypernatremia may lead to fibrillation, which may cause cardiac output to cease. Severe hyponatremia leads to both bradycardia and other arrhythmias. Hypokalemia (low potassium levels) also leads to arrhythmias, whereas hyperkalemia (high potassium levels) causes the heart to become weak and flaccid, and ultimately to fail.
Heart muscle relies exclusively on aerobic metabolism for energy. Severe myocardial infarction (commonly called a heart attack) can lead to a decreasing heart rate, since metabolic reactions fueling heart contraction are restricted.
Acidosis is a condition in which excess hydrogen ions are present, and the patient's blood expresses a low pH value. Alkalosis is a condition in which there are too few hydrogen ions, and the patient's blood has an elevated pH. Normal blood pH falls in the range of 7.35–7.45, so a number lower than this range represents acidosis and a higher number represents alkalosis. Enzymes, being the regulators or catalysts of virtually all biochemical reactions, are sensitive to pH and will change shape slightly with values outside their normal range. These variations in pH and accompanying slight physical changes to the active site on the enzyme decrease the rate of formation of the enzyme-substrate complex, subsequently decreasing the rate of many enzymatic reactions, which can have complex effects on HR. Severe changes in pH will lead to denaturation of the enzyme.
The last variable is body temperature. Elevated body temperature is called hyperthermia, and suppressed body temperature is called hypothermia. Slight hyperthermia results in increasing HR and strength of contraction. Hypothermia slows the rate and strength of heart contractions. This distinct slowing of the heart is one component of the larger diving reflex that diverts blood to essential organs while submerged. If sufficiently chilled, the heart will stop beating, a technique that may be employed during open heart surgery. In this case, the patient's blood is normally diverted to an artificial heart-lung machine to maintain the body's blood supply and gas exchange until the surgery is complete, and sinus rhythm can be restored. Excessive hyperthermia and hypothermia will both result in death, as the enzymes that drive the body's systems cease to function normally, beginning with the central nervous system.
Physiological control over heart rate
A study shows that bottlenose dolphins can learn – apparently via instrumental conditioning – to rapidly and selectively slow their heart rate while diving in order to conserve oxygen, depending on external signals. In humans, regulating heart rate by methods such as listening to music, meditation or a vagal maneuver takes longer and lowers the rate to a much smaller extent.
In different circumstances
Heart rate is not a stable value; it increases or decreases in response to the body's needs in order to maintain an equilibrium (the basal metabolic rate) between the requirement for and delivery of oxygen and nutrients. The normal SA node firing rate is affected by autonomic nervous system activity: sympathetic stimulation increases and parasympathetic stimulation decreases the firing rate.
Resting heart rate
The basal or resting heart rate (HRrest) is defined as the heart rate when a person is awake, in a neutrally temperate environment, and has not been subject to any recent exertion or stimulation, such as stress or surprise. The normal resting heart rate is based on the at-rest firing rate of the heart's sinoatrial node, where the faster pacemaker cells driving the self-generated rhythmic firing and responsible for the heart's autorhythmicity are located.
In one study 98% of cardiologists suggested that as a desirable target range, 50 to 90 beats per minute is more appropriate than 60 to 100. The available evidence indicates that the normal range for resting heart rate is 50–90 beats per minute (bpm). In a study of over 35,000 American men and women over age 40 during the 1999–2008 period, 71 bpm was the average for men, and 73 bpm was the average for women.
Resting heart rate is often correlated with mortality. In the Copenhagen City Heart Study a heart rate of 65 bpm rather than 80 bpm was associated with 4.6 years longer life expectancy in men and 3.6 years in women. Other studies have shown all-cause mortality is increased by 1.22 (hazard ratio) when heart rate exceeds 90 beats per minute. ECG of 46,129 individuals with low risk for cardiovascular disease revealed that 96% had resting heart rates ranging from 48 to 98 beats per minute. The mortality rate of patients with myocardial infarction increased from 15% to 41% if their admission heart rate was greater than 90 beats per minute. For endurance athletes at the elite level, it is not unusual to have a resting heart rate between 33 and 50 bpm.
Maximum heart rate
The maximum heart rate (HRmax) is the age-related highest number of beats per minute of the heart when reaching a point of exhaustion without severe problems through exercise stress.
In general it is loosely estimated as 220 minus one's age.
It generally decreases with age. Since HRmax varies by individual, the most accurate way of measuring any single person's HRmax is via a cardiac stress test. In this test, a person is subjected to controlled physiologic stress (generally by treadmill or bicycle ergometer) while being monitored by an electrocardiogram (ECG). The intensity of exercise is periodically increased until certain changes in heart function are detected on the ECG monitor, at which point the subject is directed to stop. Typical duration of the test ranges from ten to twenty minutes. Adults who are beginning a new exercise regimen are often advised to perform this test only in the presence of medical staff due to risks associated with high heart rates.
The theoretical maximum heart rate of a human is 300 bpm; however, there have been multiple cases where this theoretical upper limit has been exceeded. The fastest human ventricular conduction rate recorded to this day is a conducted tachyarrhythmia with ventricular rate of 600 beats per minute, which is comparable to the heart rate of a mouse.
For general purposes, a number of formulas are used to estimate HRmax. However, these predictive formulas have been criticized as inaccurate because they only produce generalized population-averages and may deviate significantly from the actual value. (See § Limitations.)
Haskell & Fox (1970)
Notwithstanding later research, the most widely cited formula for HRmax is still:
HRmax = 220 − age
Although attributed to various sources, it is widely thought to have been devised in 1970 by Dr. William Haskell and Dr. Samuel Fox. They did not develop this formula from original research, but rather by plotting data from approximately 11 references consisting of published research or unpublished scientific compilations. It gained widespread use through being used by Polar Electro in its heart rate monitors, which Dr. Haskell has "laughed about", as the formula "was never supposed to be an absolute guide to rule people's training."
While this formula is commonly used (and easy to remember and calculate), research has consistently found that it is subject to bias, particularly in older adults. Compared to the age-specific average HRmax, the Haskell and Fox formula overestimates HRmax in young adults, agrees with it at age 40, and underestimates HRmax in older adults. For example, in one study, the average HRmax at age 76 was about 10bpm higher than the Haskell and Fox equation. Consequently, the formula cannot be recommended for use in exercise physiology and related fields.
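To make the arithmetic concrete, the following is a minimal Python sketch of the Haskell and Fox estimate; the function name and the input check are illustrative assumptions, not part of the published formula.

def hr_max_haskell_fox(age_years: float) -> float:
    """Estimate maximum heart rate (bpm) using the classic 220 - age rule.

    This is only a population-average estimate; as noted in the Limitations
    section, an individual's HRmax may deviate from it by roughly 24 bpm.
    """
    if age_years <= 0:
        raise ValueError("age must be positive")
    return 220.0 - age_years

# Example: a 40-year-old is estimated at 180 bpm.
print(hr_max_haskell_fox(40))  # 180.0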
Other formulas
HRmax is strongly correlated to age, and most formulas are solely based on this. Studies have been mixed on the effect of gender, with some finding that gender is statistically significant, although small when considering overall equation error, while others finding negligible effect. The inclusion of physical activity status, maximal oxygen uptake, smoking, body mass index, body weight, or resting heart rate did not significantly improve accuracy. Nonlinear models are slightly more accurate predictors of average age-specific HRmax, particularly above 60 years of age, but are harder to apply, and provide statistically negligible improvement over linear models. The Wingate formula is the most recent, had the largest data set, and performed best on a fresh data set when compared with other formulas, although it had only a small amount of data for ages 60 and older so those estimates should be viewed with caution. In addition, most formulas are developed for adults and are not applicable to children and adolescents.
Limitations
Maximum heart rates vary significantly between individuals. Age explains only about half of HRmax variance. For a given age, the standard deviation of HRmax from the age-specific population mean is about 12bpm, and a 95% interval for the prediction error is about 24bpm. For example, Dr. Fritz Hagerman observed that the maximum heart rates of men in their 20s on Olympic rowing teams vary from 160 to 220. Such a variation would equate to an age range of -16 to 68 using the Wingate formula. The formulas are quite accurate at predicting the average heart rate of a group of similarly-aged individuals, but relatively poor for a given individual.
Robergs and Landwehr opine that for VO2 max, prediction errors in HRmax need to be less than ±3 bpm. No current formula meets this accuracy. For prescribing exercise training heart rate ranges, the errors in the more accurate formulas may be acceptable, but again it is likely that, for a significant fraction of the population, current equations used to estimate HRmax are not accurate enough. Froelicher and Myers describe maximum heart formulas as "largely useless". Measurement via a maximal test is preferable whenever possible, which can be as accurate as ±2bpm.
Heart rate reserve
Heart rate reserve (HRreserve) is the difference between a person's measured or predicted maximum heart rate and resting heart rate. Some methods of measurement of exercise intensity measure percentage of heart rate reserve. Additionally, as a person increases their cardiovascular fitness, their HRrest will drop, and the heart rate reserve will increase. Percentage of HRreserve is statistically indistinguishable from percentage of VO2 reserve.
HRreserve = HRmax − HRrest
This is often used to gauge exercise intensity (first used in 1957 by Karvonen).
Karvonen's study findings have been questioned, due to the following:
The study did not use VO2 data to develop the equation.
Only six subjects were used.
Karvonen incorrectly reported that the percentages of HRreserve and VO2 max correspond to each other, but newer evidence shows that it correlated much better with VO2 reserve as described above.
Target heart rate
For healthy people, the Target Heart Rate (THR) or Training Heart Rate Range (THRR) is a desired range of heart rate reached during aerobic exercise which enables one's heart and lungs to receive the most benefit from a workout. This theoretical range varies based mostly on age; however, a person's physical condition, sex, and previous training also are used in the calculation.
By percent, Fox–Haskell-based
The THR can be calculated as a range of 65–85% intensity, with intensity defined simply as percentage of HRmax. However, it is crucial to derive an accurate HRmax to ensure these calculations are meaningful.
Example for someone with a HRmax of 180 (age 40, estimating HRmax as 220 − age):
65% Intensity: (220 − (age = 40)) × 0.65 → 117 bpm
85% Intensity: (220 − (age = 40)) × 0.85 → 153 bpm
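As a minimal illustration of this arithmetic (not part of the original article; the function name is illustrative), a few lines of Python reproduce the percent-of-HRmax calculation:

```python
def thr_percent_of_hrmax(age, lower=0.65, upper=0.85):
    """Target heart rate range as a percentage of the Haskell & Fox HRmax estimate (220 - age).

    Note the Limitations section above: this population-average estimate can be
    far from an individual's true HRmax, so the range is only a rough guide.
    """
    hrmax = 220 - age
    return hrmax * lower, hrmax * upper

print(thr_percent_of_hrmax(40))  # (117.0, 153.0), matching the worked example above
```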
Karvonen method
The Karvonen method factors in resting heart rate (HRrest) to calculate target heart rate (THR), using a range of 50–85% intensity:
THR = ((HRmax − HRrest) × % intensity) + HRrest
Equivalently,
THR = (HRreserve × % intensity) + HRrest
Example for someone with a HRmax of 180 and a HRrest of 70 (and therefore a HRreserve of 110):
50% Intensity: ((180 − 70) × 0.50) + 70 = 125 bpm
85% Intensity: ((180 − 70) × 0.85) + 70 = 163 bpm
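A similarly small sketch (illustrative names, not from the original article) expresses the Karvonen calculation directly:

```python
def karvonen_thr(hr_max, hr_rest, intensity):
    """Karvonen method: target HR = (heart rate reserve x intensity) + resting HR."""
    return (hr_max - hr_rest) * intensity + hr_rest

# Worked example from the text: HRmax 180, HRrest 70, i.e. HRreserve 110.
print(karvonen_thr(180, 70, 0.50))  # 125.0 bpm
print(karvonen_thr(180, 70, 0.85))  # 163.5 bpm (quoted as 163 bpm above)
```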
Zoladz method
An alternative to the Karvonen method is the Zoladz method, which is used to test an athlete's capabilities at specific heart rates. These are not intended to be used as exercise zones, although they are often used as such. The Zoladz test zones are derived by subtracting values from HRmax:
THR = HRmax − Adjuster ± 5 bpm
Zone 1 Adjuster = 50 bpm
Zone 2 Adjuster = 40 bpm
Zone 3 Adjuster = 30 bpm
Zone 4 Adjuster = 20 bpm
Zone 5 Adjuster = 10 bpm
Example for someone with a HRmax of 180:
Zone 1 (easy exercise): 180 − 50 ± 5 → 125 − 135 bpm
Zone 4 (tough exercise): 180 − 20 ± 5 → 155 − 165 bpm
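The same zones can be tabulated in a few lines of Python (a sketch with illustrative names, not from the original article):

```python
ZOLADZ_ADJUSTERS = {1: 50, 2: 40, 3: 30, 4: 20, 5: 10}  # bpm subtracted from HRmax per zone

def zoladz_zone(hr_max, zone, band=5):
    """Zoladz zone as a (low, high) bpm range: HRmax - adjuster, plus/minus the 5 bpm band."""
    centre = hr_max - ZOLADZ_ADJUSTERS[zone]
    return centre - band, centre + band

print(zoladz_zone(180, 1))  # (125, 135), the "easy exercise" zone above
print(zoladz_zone(180, 4))  # (155, 165), the "tough exercise" zone above
```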
Heart rate recovery
Heart rate recovery (HRR) is the reduction in heart rate from the rate at peak exercise to the rate measured after a cool-down period of fixed duration. A greater reduction in heart rate during the reference period after exercise is associated with a higher level of cardiac fitness.
Heart rates assessed during a treadmill stress test that do not drop by more than 12 bpm one minute after stopping exercise (if there is a cool-down period after exercise) or by more than 18 bpm one minute after stopping exercise (if there is no cool-down period and the patient is placed supine as soon as possible) are associated with an increased risk of death. People with an abnormal HRR, defined as a decrease of 42 beats per minute or less at two minutes post-exercise, had a mortality rate 2.5 times greater than patients with a normal recovery. Another study reported a four-fold increase in mortality in subjects with an abnormal HRR defined as a reduction of ≤12 bpm one minute after the cessation of exercise. A study reported that a HRR of ≤22 bpm after two minutes "best identified high-risk patients". They also found that while HRR had significant prognostic value it had no diagnostic value.
Development
The human heart beats more than 2.8 billion times in an average lifetime.
The heartbeat of a human embryo begins at approximately 21 days after conception, or five weeks after the last normal menstrual period (LMP), which is the date normally used to date pregnancy in the medical community. The electrical depolarizations that trigger cardiac myocytes to contract arise spontaneously within the myocyte itself. The heartbeat is initiated in the pacemaker regions and spreads to the rest of the heart through a conduction pathway. Pacemaker cells develop in the primitive atrium and the sinus venosus to form the sinoatrial node and the atrioventricular node respectively. Conductive cells develop the bundle of His and carry the depolarization into the lower heart.
The human heart begins beating at a rate near the mother's, about 75–80 beats per minute (bpm). The embryonic heart rate then accelerates linearly for the first month of beating, peaking at 165–185 bpm during the early 7th week, (early 9th week after the LMP). This acceleration is approximately 3.3 bpm per day, or about 10 bpm every three days, an increase of 100 bpm in the first month.
After peaking at about 9.2 weeks after the LMP, it decelerates to about 150 bpm (+/-25 bpm) during the 15th week after the LMP. After the 15th week the deceleration slows reaching an average rate of about 145 (+/-25 bpm) bpm at term. The regression formula which describes this acceleration before the embryo reaches 25 mm in crown-rump length or 9.2 LMP weeks is:
Clinical significance
Manual measurement
Heart rate is measured by finding the pulse of the heart. This pulse rate can be found at any point on the body where the artery's pulsation is transmitted to the surface by pressing it with the index and middle fingers; often the artery is compressed against an underlying structure such as bone. The thumb should not be used for measuring another person's heart rate, as its strong pulse may interfere with the correct perception of the target pulse.
The radial artery is the easiest to use to check the heart rate. However, in emergency situations the most reliable arteries to measure heart rate are the carotid arteries. This is important mainly in patients with atrial fibrillation, in whom heart beats are irregular and stroke volume differs substantially from one beat to another. In beats following a shorter diastolic interval, the left ventricle does not fill properly, the stroke volume is lower, and the pulse wave is not strong enough to be detected by palpation on a distal artery such as the radial artery. It can be detected, however, by Doppler.
Possible points for measuring the heart rate are:
The ventral aspect of the wrist on the side of the thumb (radial artery).
The ulnar artery.
The inside of the elbow, or under the biceps muscle (brachial artery).
The groin (femoral artery).
Behind the medial malleolus on the feet (posterior tibial artery).
Middle of dorsum of the foot (dorsalis pedis).
Behind the knee (popliteal artery).
Over the abdomen (abdominal aorta).
The chest (apex of the heart), which can be felt with one's hand or fingers. It is also possible to auscultate the heart using a stethoscope.
In the neck, lateral of the larynx (carotid artery)
The temple (superficial temporal artery).
The lateral edge of the mandible (facial artery).
The side of the head near the ear (posterior auricular artery).
Electronic measurement
A more precise method of determining heart rate involves the use of an electrocardiograph, or ECG (also abbreviated EKG). An ECG generates a pattern based on electrical activity of the heart, which closely follows heart function. Continuous ECG monitoring is routinely done in many clinical settings, especially in critical care medicine. On the ECG, instantaneous heart rate is calculated using the R wave-to-R wave (RR) interval and multiplying/dividing in order to derive heart rate in heartbeats/min. Multiple methods exist:
HR = 1000 · 60/(RR interval in milliseconds)
HR = 60/(RR interval in seconds)
HR = 300/number of "large" squares between successive R waves.
HR = 1,500/number of "small" squares between successive R waves.
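A short sketch of these conversions in Python (illustrative function names; the square-counting rules assume the standard 25 mm/s ECG paper speed):

```python
def hr_from_rr_ms(rr_ms):
    """Instantaneous heart rate (beats/min) from an R-R interval in milliseconds."""
    return 60_000 / rr_ms

def hr_from_large_squares(n_large):
    """Heart rate from the number of large (0.2 s) squares between successive R waves."""
    return 300 / n_large

def hr_from_small_squares(n_small):
    """Heart rate from the number of small (0.04 s) squares between successive R waves."""
    return 1500 / n_small

print(hr_from_rr_ms(800))         # 75.0 bpm
print(hr_from_large_squares(4))   # 75.0 bpm (4 large squares = 0.8 s)
print(hr_from_small_squares(20))  # 75.0 bpm (20 small squares = 0.8 s)
```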
Heart rate monitors allow measurements to be taken continuously and can be used during exercise when manual measurement would be difficult or impossible (such as when the hands are being used). Various commercial heart rate monitors are also available. Some monitors, used during sport, consist of a chest strap with electrodes. The signal is transmitted to a wrist receiver for display.
Alternative methods of measurement include seismocardiography.
Optical measurements
Pulse oximetry of the finger and laser Doppler imaging of the eye fundus are often used in the clinics. Those techniques can assess the heart rate by measuring the delay between pulses.
Tachycardia
Tachycardia is a resting heart rate of more than 100 beats per minute. This number can vary, as smaller people and children have faster heart rates than average adults.
Physiological conditions where tachycardia occurs:
Pregnancy
Emotional conditions such as anxiety or stress.
Exercise
Pathological conditions where tachycardia occurs:
Sepsis
Fever
Anemia
Hypoxia
Hyperthyroidism
Hypersecretion of catecholamines
Cardiomyopathy
Valvular heart diseases
Acute Radiation Syndrome
Dehydration
Metabolic myopathies (At rest, tachycardia is commonly seen in fatty acid oxidation disorders. An inappropriate rapid heart rate response to exercise is seen in muscle glycogenoses and mitochondrial myopathies, where the tachycardia is faster than would be expected during exercise).
Bradycardia
Bradycardia was defined as a heart rate less than 60 beats per minute when textbooks asserted that the normal range for heart rates was 60–100 bpm. The normal range has since been revised in textbooks to 50–90 bpm for a human at total rest. Setting a lower threshold for bradycardia prevents misclassification of fit individuals as having a pathologic heart rate. The normal heart rate number can vary as children and adolescents tend to have faster heart rates than average adults. Bradycardia may be associated with medical conditions such as hypothyroidism, heart disease, or inflammatory disease. At rest, although tachycardia is more commonly seen in fatty acid oxidation disorders, more rarely acute bradycardia can occur.
Trained athletes tend to have slow resting heart rates, and resting bradycardia in athletes should not be considered abnormal if the individual has no symptoms associated with it. For example, Miguel Indurain, a Spanish cyclist and five time Tour de France winner, had a resting heart rate of 28 beats per minute, one of the lowest ever recorded in a healthy human. Daniel Green achieved the world record for the slowest heartbeat in a healthy human with a heart rate of just 26 bpm in 2014.
Arrhythmia
Arrhythmias are abnormalities of the heart rate and rhythm (sometimes felt as palpitations). They can be divided into two broad categories: fast and slow heart rates. Some cause few or minimal symptoms. Others produce more serious symptoms of lightheadedness, dizziness and fainting.
Hypertension
Elevated heart rate is a powerful predictor of morbidity and mortality in patients with hypertension. Atherosclerosis and dysautonomia are major contributors to the pathogenesis.
Correlation with cardiovascular mortality risk
A number of investigations indicate that faster resting heart rate has emerged as a new risk factor for mortality in homeothermic mammals, particularly cardiovascular mortality in human beings. High heart rate is associated with endothelial dysfunction and increased atheromatous plaque formation leading to atherosclerosis. Faster heart rate may accompany increased production of inflammation molecules and increased production of reactive oxygen species in cardiovascular system, in addition to increased mechanical stress to the heart. There is a correlation between increased resting rate and cardiovascular risk. This is not seen to be "using an allotment of heart beats" but rather an increased risk to the system from the increased rate.
An Australian-led international study of patients with cardiovascular disease has shown that heart beat rate is a key indicator for the risk of heart attack. The study, published in The Lancet (September 2008), studied 11,000 people across 33 countries who were being treated for heart problems. Those patients whose heart rate was above 70 beats per minute had a significantly higher incidence of heart attacks, hospital admissions and the need for surgery. A higher heart rate was associated with an increased risk of heart attack and with about a 46 percent increase in hospitalizations for non-fatal or fatal heart attack.
Other studies have shown that a high resting heart rate is associated with an increase in cardiovascular and all-cause mortality in the general population and in patients with chronic diseases. A faster resting heart rate is associated with shorter life expectancy and is considered a strong risk factor for heart disease and heart failure, independent of level of physical fitness. Specifically, a resting heart rate above 65 beats per minute has been shown to have a strong independent effect on premature mortality; every 10 beats per minute increase in resting heart rate has been shown to be associated with a 10–20% increase in risk of death. In one study, men with no evidence of heart disease and a resting heart rate of more than 90 beats per minute had a five times higher risk of sudden cardiac death. Similarly, another study found that men with resting heart rates of over 90 beats per minute had an almost two-fold increase in risk for cardiovascular disease mortality; in women it was associated with a three-fold increase. In patients having heart rates of 70 beats/minute or above, each additional beat/minute was associated with increased rate of cardiovascular death and heart failure hospitalization.
Given these data, heart rate should be considered in the assessment of cardiovascular risk, even in apparently healthy individuals. Heart rate has many advantages as a clinical parameter: It is inexpensive and quick to measure and is easily understandable. Although the accepted limits of heart rate are between 60 and 100 beats per minute, this was based for convenience on the scale of the squares on electrocardiogram paper; a better definition of normal sinus heart rate may be between 50 and 90 beats per minute.
Standard textbooks of physiology and medicine mention that heart rate (HR) is readily calculated from the ECG as follows: HR = 1000*60/RR interval in milliseconds, HR = 60/RR interval in seconds, or HR = 300/number of large squares between successive R waves. In each case, the authors are actually referring to instantaneous HR, which is the number of times the heart would beat if successive RR intervals were constant.
Lifestyle and pharmacological regimens may be beneficial to those with high resting heart rates. Exercise is one possible measure to take when an individual's heart rate is higher than 80 beats per minute. Diet has also been found to be beneficial in lowering resting heart rate: in studies of resting heart rate and risk of death and cardiac complications in patients with type 2 diabetes, legumes were found to lower resting heart rate. This is thought to occur because, in addition to the direct beneficial effects of legumes, they also displace animal proteins in the diet, which are higher in saturated fat and cholesterol. Another nutrient is omega-3 long chain polyunsaturated fatty acids (omega-3 fatty acid or LC-PUFA). In a meta-analysis with a total of 51 randomized controlled trials (RCTs) involving 3,000 participants, the supplement mildly but significantly reduced heart rate (-2.23 bpm; 95% CI: -3.07, -1.40 bpm). When docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA) were compared, a modest heart rate reduction was observed in trials that supplemented with DHA (-2.47 bpm; 95% CI: -3.47, -1.46 bpm), but not in those that received EPA.
A very slow heart rate (bradycardia) may be associated with heart block. It may also arise from autonomous nervous system impairment.
See also
Heart rate monitor
Cardiac cycle
Electrocardiography
Sinus rhythm
Second wind (heart rate is measured during 12 Minute Walk Test)
Bainbridge reflex
Notes
References
Bibliography
External links
Online Heart Beats Per Minute Calculator Tap along with your heart rate
An application (open-source) for contactless real time heart rate measurements by means of an ordinary web cam
Cardiovascular physiology
Medical signs
Mathematics in medicine
Temporal rates | Heart rate | [
"Physics",
"Mathematics"
] | 8,476 | [
"Temporal quantities",
"Physical quantities",
"Applied mathematics",
"Temporal rates",
"Mathematics in medicine"
] |
304,999 | https://en.wikipedia.org/wiki/Riccati%20equation | In mathematics, a Riccati equation in the narrowest sense is any first-order ordinary differential equation that is quadratic in the unknown function. In other words, it is an equation of the form
$$y'(x) = q_0(x) + q_1(x)\,y(x) + q_2(x)\,y^2(x),$$
where $q_0(x) \neq 0$ and $q_2(x) \neq 0$. If $q_0(x) = 0$ the equation reduces to a Bernoulli equation, while if $q_2(x) = 0$ the equation becomes a first order linear ordinary differential equation.
The equation is named after Jacopo Riccati (1676–1754).
More generally, the term Riccati equation is used to refer to matrix equations with an analogous quadratic term, which occur in both continuous-time and discrete-time linear-quadratic-Gaussian control. The steady-state (non-dynamic) version of these is referred to as the algebraic Riccati equation.
Conversion to a second order linear equation
The non-linear Riccati equation can always be converted to a second order linear ordinary differential equation (ODE):
If
$$y' = q_0(x) + q_1(x)\,y + q_2(x)\,y^2,$$
then, wherever $q_2$ is non-zero and differentiable, $v = y\,q_2$ satisfies a Riccati equation of the form
$$v' = v^2 + R(x)\,v + S(x),$$
where $S = q_2 q_0$ and $R = q_1 + \frac{q_2'}{q_2}$, because
$$v' = (y\,q_2)' = y'\,q_2 + y\,q_2' = (q_0 + q_1 y + q_2 y^2)\,q_2 + v\,\frac{q_2'}{q_2} = q_0 q_2 + \left(q_1 + \frac{q_2'}{q_2}\right) v + v^2.$$
Substituting $v = -u'/u$, it follows that $u$ satisfies the linear second-order ODE
$$u'' - R(x)\,u' + S(x)\,u = 0,$$
since
$$v' = -\left(\frac{u'}{u}\right)' = -\frac{u''}{u} + \left(\frac{u'}{u}\right)^2 = -\frac{u''}{u} + v^2,$$
so that
$$\frac{u''}{u} = v^2 - v' = -S - R\,v = -S + R\,\frac{u'}{u},$$
and hence
$$u'' - R\,u' + S\,u = 0.$$
Then substituting the two solutions of this linear second order equation into the transformation
$$y = \frac{-u'}{q_2\,u}$$
suffices to have global knowledge of the general solution of the Riccati equation by the formula:
$$y = \frac{-u'}{q_2\,u} = \frac{-\left(c_1 u_1' + c_2 u_2'\right)}{q_2\left(c_1 u_1 + c_2 u_2\right)}.$$
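As a numerical sanity check on this reduction (a sketch added here for illustration, assuming NumPy and SciPy are available; it is not part of the original article), one can integrate a sample Riccati equation directly and via the associated linear second-order ODE and confirm that the two routes agree:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sample coefficients: y' = q0 + q1*y + q2*y**2 with q0 = 1, q1 = 0, q2 = -1,
# i.e. y' = 1 - y**2, whose solution through y(0) = 0 is tanh(t).
q0, q1, q2 = 1.0, 0.0, -1.0

# Direct integration of the Riccati equation.
riccati = solve_ivp(lambda t, y: q0 + q1 * y + q2 * y ** 2,
                    (0.0, 3.0), [0.0], dense_output=True, rtol=1e-9, atol=1e-12)

# Reduction: with constant q2, R = q1 and S = q2*q0, and u'' - R*u' + S*u = 0,
# with y recovered as y = -u'/(q2*u). Initial data: u(0) = 1, u'(0) = -q2*y(0)*u(0) = 0.
R, S = q1, q2 * q0
linear = solve_ivp(lambda t, z: [z[1], R * z[1] - S * z[0]],
                   (0.0, 3.0), [1.0, 0.0], dense_output=True, rtol=1e-9, atol=1e-12)

ts = np.linspace(0.0, 3.0, 7)
y_direct = riccati.sol(ts)[0]
u, up = linear.sol(ts)
y_reduced = -up / (q2 * u)

print(np.max(np.abs(y_direct - y_reduced)))    # agreement to integration tolerance
print(np.max(np.abs(y_direct - np.tanh(ts))))  # both match the analytic solution tanh(t)
```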
Complex analysis
In complex analysis, the Riccati equation occurs as the first-order nonlinear ODE in the complex plane of the form
where and are polynomials in and locally analytic functions of , i.e., is a complex rational function. The only equation of this form that is of Painlevé type, is the Riccati equation
where are (possibly matrix) functions of .
Application to the Schwarzian equation
An important application of the Riccati equation is to the 3rd order Schwarzian differential equation
which occurs in the theory of conformal mapping and univalent functions. In this case the ODEs are in the complex domain and differentiation is with respect to a complex variable. (The Schwarzian derivative has the remarkable property that it is invariant under Möbius transformations, i.e. whenever is non-zero.) The function
satisfies the Riccati equation
By the above where is a solution of the linear ODE
Since integration gives
for some constant . On the other hand any other independent solution of the linear ODE
has constant non-zero Wronskian which can be taken to be after scaling.
Thus
so that the Schwarzian equation has solution
Obtaining solutions by quadrature
The correspondence between Riccati equations and second-order linear ODEs has other consequences. For example, if one solution of a 2nd order ODE is known, then it is known that another solution can be obtained by quadrature, i.e., a simple integration. The same holds true for the Riccati equation. In fact, if one particular solution $y_1$ can be found, the general solution is obtained as
$$y = y_1 + u.$$
Substituting
$$y = y_1 + u$$
in the Riccati equation yields
$$y_1' + u' = q_0 + q_1\,(y_1 + u) + q_2\,(y_1 + u)^2,$$
and since
$$y_1' = q_0 + q_1\,y_1 + q_2\,y_1^2,$$
it follows that
$$u' = q_1\,u + 2\,q_2\,y_1\,u + q_2\,u^2$$
or
$$u' - (q_1 + 2\,q_2\,y_1)\,u = q_2\,u^2,$$
which is a Bernoulli equation. The substitution that is needed to solve this Bernoulli equation is
$$z = \frac{1}{u}.$$
Substituting
$$y = y_1 + \frac{1}{z}$$
directly into the Riccati equation yields the linear equation
$$z' + (q_1 + 2\,q_2\,y_1)\,z = -q_2.$$
A set of solutions to the Riccati equation is then given by
$$y = y_1 + \frac{1}{z},$$
where $z$ is the general solution to the aforementioned linear equation.
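For concreteness, a short worked example (illustrative; not part of the original article): the equation $y' = y^2 - \tfrac{2}{x^2}$ has the obvious particular solution $y_1 = 1/x$. Here $q_0 = -2/x^2$, $q_1 = 0$ and $q_2 = 1$, so the linear equation for $z$ is $z' + \tfrac{2}{x}\,z = -1$; the integrating factor $x^2$ gives $(x^2 z)' = -x^2$, hence $z = C/x^2 - x/3$, and the general solution is
$$y = \frac{1}{x} + \frac{1}{z} = \frac{1}{x} + \frac{3x^{2}}{3C - x^{3}}.$$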
See also
Linear-quadratic regulator
Algebraic Riccati equation
Linear-quadratic-Gaussian control
References
Further reading
External links
Riccati Equation at EqWorld: The World of Mathematical Equations.
Riccati Differential Equation at Mathworld
MATLAB function for solving continuous-time algebraic Riccati equation.
SciPy has functions for solving the continuous algebraic Riccati equation and the discrete algebraic Riccati equation.
Eponymous equations of physics
Ordinary differential equations | Riccati equation | [
"Physics"
] | 755 | [
"Eponymous equations of physics",
"Equations of physics"
] |
305,100 | https://en.wikipedia.org/wiki/Telerobotics | Telerobotics is the area of robotics concerned with the control of semi-autonomous robots from a distance, chiefly using television, wireless networks (like Wi-Fi, Bluetooth and the Deep Space Network) or tethered connections. It is a combination of two major subfields, which are teleoperation and telepresence.
Teleoperation
Teleoperation indicates operation of a machine at a distance. It is similar in meaning to the phrase "remote control" but is usually encountered in research, academic and technical environments. It is most commonly associated with robotics and mobile robots but can be applied to a whole range of circumstances in which a device or machine is operated by a person from a distance.
Teleoperation is the most standard term, used both in research and technical communities, for referring to operation at a distance. This is opposed to "telepresence", which refers to the subset of telerobotic systems configured with an immersive interface such that the operator feels present in the remote environment, projecting their presence through the remote robot. One of the first telepresence systems that enabled operators to feel present in a remote environment through all of the primary senses (sight, sound, and touch) was the Virtual Fixtures system developed at US Air Force Research Laboratories in the early 1990s. The system enabled operators to perform dexterous tasks (inserting pegs into holes) remotely such that the operator would feel as if he or she was inserting the pegs when in fact it was a robot remotely performing the task.
A telemanipulator (or teleoperator) is a device that is controlled remotely by a human operator. In simple cases the controlling operator's command actions correspond directly to actions in the device controlled, as for example in a radio-controlled model aircraft or a tethered deep submergence vehicle. Where communications delays make direct control impractical (such as a remote planetary rover), or it is desired to reduce operator workload (as in a remotely controlled spy or attack aircraft), the device will not be controlled directly, instead being commanded to follow a specified path. At increasing levels of sophistication the device may operate somewhat independently in matters such as obstacle avoidance, also commonly employed in planetary rovers.
Devices designed to allow the operator to control a robot at a distance are sometimes called telecheric robotics.
Two major components of telerobotics and telepresence are the visual and control applications. A remote camera provides a visual representation of the view from the robot. Placing the robotic camera in a perspective that allows intuitive control is a recent technique that, although rooted in science fiction (Robert A. Heinlein's 1942 short story "Waldo"), was not fruitful until recently, as the speed, resolution and bandwidth needed to control the robot camera in a meaningful way have only lately become adequate. Using a head-mounted display, control of the camera can be facilitated by tracking the operator's head movements.
This only works if the user is comfortable with the latency of the system, the lag in response to movements, and the visual representation. Issues such as inadequate resolution, latency of the video image, lag in the mechanical and computer processing of movement and response, and optical distortion from the camera lens and head-mounted display lenses can cause 'simulator sickness', which is exacerbated by the lack of vestibular stimulation accompanying the visual representation of motion.
Mismatches between the user's motions and the system's response, such as registration errors, lag in movement response due to over-filtering, inadequate resolution for small movements, and slow speed, can contribute to these problems.
The same technology can control the robot, but then the eye–hand coordination issues become even more pervasive through the system, and user tension or frustration can make the system difficult to use.
The tendency to build robots has been to minimize the degrees of freedom because that reduces the control problems. Recent improvements in computers have shifted the emphasis to more degrees of freedom, allowing robotic devices that seem more intelligent and more human in their motions. This also allows more direct teleoperation, as the user can control the robot with their own motions.
Interfaces
A telerobotic interface can be as simple as a common MMK (monitor-mouse-keyboard) interface. While this is not immersive, it is inexpensive. Telerobotics driven by internet connections are often of this type. A valuable modification to MMK is a joystick, which provides a more intuitive navigation scheme for the planar robot movement.
Dedicated telepresence setups utilize a head-mounted display with either single or dual eye display, and an ergonomically matched interface with joystick and related button, slider, trigger controls.
Other interfaces merge fully immersive virtual reality interfaces and real-time video instead of computer-generated images. Another example would be to use an omnidirectional treadmill with an immersive display system so that the robot is driven by the person walking or running. Additional modifications may include merged data displays such as Infrared thermal imaging, real-time threat assessment, or device schematics.
Applications
Space
With the exception of the Apollo program, most space exploration has been conducted with telerobotic space probes. Most space-based astronomy, for example, has been conducted with telerobotic telescopes. The Russian Lunokhod-1 mission, for example, put a remotely driven rover on the Moon, which was driven in real time (with a 2.5-second lightspeed time delay) by human operators on the ground. Robotic planetary exploration programs use spacecraft that are programmed by humans at ground stations, essentially achieving a long-time-delay form of telerobotic operation. Recent noteworthy examples include the Mars Exploration Rovers (MER) and the Curiosity rover. In the case of the MER mission, the spacecraft and the rover operated on stored programs, with the rover drivers on the ground programming each day's operation. The International Space Station (ISS) uses a two-armed telemanipulator called Dextre. More recently, the humanoid robot Robonaut has been added to the space station for telerobotic experiments.
NASA has proposed use of highly capable telerobotic systems for future planetary exploration using human exploration from orbit. In a concept for Mars Exploration proposed by Landis, a precursor mission to Mars could be done in which the human vehicle brings a crew to Mars, but remains in orbit rather than landing on the surface, while a highly capable remote robot is operated in real time on the surface. Such a system would go beyond the simple long time delay robotics and move to a regime of virtual telepresence on the planet. One study of this concept, the Human Exploration using Real-time Robotic Operations (HERRO) concept, suggested that such a mission could be used to explore a wide variety of planetary destinations.
Telepresence and videoconferencing
The prevalence of high quality video conferencing using mobile devices, tablets and portable computers has enabled a drastic growth in telepresence robots to help give a better sense of remote physical presence for communication and collaboration in the office, home, school, etc. when one cannot be there in person. The robot avatar can move or look around at the command of the remote person (Jacob Ward, "I am a robot boss", Popular Science, 28 October 2013).
There have been two primary approaches that both utilize videoconferencing on a display.
Desktop telepresence robots typically mount a phone or tablet on a motorized desktop stand to enable the remote person to look around a remote environment by panning and tilting the display.
Drivable telepresence robots typically contain a display (integrated or separate phone or tablet) mounted on a roaming base. More modern roaming telepresence robots may include an ability to operate autonomously. The robots can map out the space and be able to avoid obstacles while driving themselves between rooms and their docking stations.
Traditional videoconferencing systems and telepresence rooms generally offer pan-tilt-zoom cameras with far end control. The ability for the remote user to turn the device's head and look around naturally during a meeting is often seen as the strongest feature of a telepresence robot. For this reason, developers have created the new category of desktop telepresence robots, which concentrate on this strongest feature to create a much lower-cost robot. Desktop telepresence robots, also called "head-and-neck robots", allow users to look around during a meeting and are small enough to be carried from location to location, eliminating the need for remote navigation.
Some telepresence robots are highly helpful for children with long-term illnesses who are unable to attend school regularly. The technology lets them stay connected to others, which can significantly help them overcome loneliness.
Marine applications
Marine remotely operated vehicles (ROVs) are widely used to work in water too deep or too dangerous for divers. They repair offshore oil platforms and attach cables to sunken ships to hoist them. They are usually attached by a tether to a control center on a surface ship. The wreck of the Titanic was explored by an ROV, as well as by a crew-operated vessel.
Telemedicine
Additionally, a lot of telerobotic research is being done in the field of medical devices, and minimally invasive surgical systems. With a robotic surgery system, a surgeon can work inside the body through tiny holes just big enough for the manipulator, with no need to open up the chest cavity to allow hands inside.
Emergency Response and law enforcement robots
NIST maintains a set of test standards used for Emergency Response and law enforcement telerobotic systems.
Other applications
Remote manipulators are used to handle radioactive materials.
Telerobotics has been used in installation art pieces; Telegarden is an example of a project where a robot was operated by users through the Web.
See also
Astrobotic Technology
Dragon Runner, a military robot built for urban combat
Lunokhod
Medical robot
Military robot
Remote control vehicle
Remote manipulator
Robonaut
Smart device
Spirit rover
Snowplow robot
UWA Telerobot
References
External links
Telerobotics and Telepistemology Bibliography compiled by Ken Goldberg for Leonardo/ISAST
"The Boss Is Robotic, and Rolling Up Behind You" article by John Markoff in The New York Times'' 4 September 2010
Robot control
Wireless robotics
Telepresence robots
Television | Telerobotics | [
"Engineering"
] | 2,170 | [
"Robotics engineering",
"Robot control"
] |
305,924 | https://en.wikipedia.org/wiki/Computational%20fluid%20dynamics | Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows. Computers are used to perform the calculations required to simulate the free-stream flow of the fluid, and the interaction of the fluid (liquids and gases) with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial validation of such software is typically performed using experimental apparatus such as wind tunnels. In addition, previously performed analytical or empirical analysis of a particular problem can be used for comparison. A final validation is often performed using full-scale testing, such as flight tests.
CFD is applied to a wide range of research and engineering problems in many fields of study and industries, including aerodynamics and aerospace analysis, hypersonics, weather simulation, natural science and environmental engineering, industrial system design and analysis, biological engineering, fluid flows and heat transfer, engine and combustion analysis, and visual effects for film and games.
Background and history
The fundamental basis of almost all CFD problems is the Navier–Stokes equations, which define many single-phase (gas or liquid, but not both) fluid flows. These equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows (not transonic or hypersonic) these equations can be linearized to yield the linearized potential equations.
Historically, methods were first developed to solve the linearized potential equations. Two-dimensional (2D) methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil were developed in the 1930s.
One of the earliest type of calculations resembling modern CFD are those by Lewis Fry Richardson, in the sense that these calculations used finite differences and divided the physical space in cells. Although they failed dramatically, these calculations, together with Richardson's book Weather Prediction by Numerical Process, set the basis for modern CFD and numerical meteorology. In fact, early CFD calculations during the 1940s using ENIAC used methods close to those in Richardson's 1922 book.
The computer power available paced development of three-dimensional methods. Probably the first work using computers to model fluid flow, as governed by the Navier–Stokes equations, was performed at Los Alamos National Lab, in the T3 group. This group was led by Francis H. Harlow, who is widely considered one of the pioneers of CFD. From 1957 to the late 1960s, this group developed a variety of numerical methods to simulate transient two-dimensional fluid flows, such as the particle-in-cell method, the fluid-in-cell method, the vorticity stream function method, and the marker-and-cell method. Fromm's vorticity-stream-function method for 2D, transient, incompressible flow was the first treatment of strongly contorting incompressible flows in the world.
The first paper with three-dimensional model was published by John Hess and A.M.O. Smith of Douglas Aircraft in 1967. This method discretized the surface of the geometry with panels, giving rise to this class of programs being called Panel Methods. Their method itself was simplified, in that it did not include lifting flows and hence was mainly applied to ship hulls and aircraft fuselages. The first lifting Panel Code (A230) was described in a paper written by Paul Rubbert and Gary Saaris of Boeing Aircraft in 1968. In time, more advanced three-dimensional Panel Codes were developed at Boeing (PANAIR, A502), Lockheed (Quadpan), Douglas (HESS), McDonnell Aircraft (MACAERO), NASA (PMARC) and Analytical Methods (WBAERO, USAERO and VSAERO). Some (PANAIR, HESS and MACAERO) were higher order codes, using higher order distributions of surface singularities, while others (Quadpan, PMARC, USAERO and VSAERO) used single singularities on each surface panel. The advantage of the lower order codes was that they ran much faster on the computers of the time. Today, VSAERO has grown to be a multi-order code and is the most widely used program of this class. It has been used in the development of many submarines, surface ships, automobiles, helicopters, aircraft, and more recently wind turbines. Its sister code, USAERO is an unsteady panel method that has also been used for modeling such things as high speed trains and racing yachts. The NASA PMARC code from an early version of VSAERO and a derivative of PMARC, named CMARC, is also commercially available.
In the two-dimensional realm, a number of Panel Codes have been developed for airfoil analysis and design. The codes typically have a boundary layer analysis included, so that viscous effects can be modeled. Richard Eppler developed the PROFILE code, partly with NASA funding, which became available in the early 1980s. This was soon followed by Mark Drela's XFOIL code. Both PROFILE and XFOIL incorporate two-dimensional panel codes, with coupled boundary layer codes for airfoil analysis work. PROFILE uses a conformal transformation method for inverse airfoil design, while XFOIL has both a conformal transformation and an inverse panel method for airfoil design.
An intermediate step between Panel Codes and Full Potential codes were codes that used the Transonic Small Disturbance equations. In particular, the three-dimensional WIBCO code, developed by Charlie Boppe of Grumman Aircraft in the early 1980s has seen heavy use.
Developers turned to Full Potential codes, as panel methods could not calculate the non-linear flow present at transonic speeds. The first description of a means of using the Full Potential equations was published by Earll Murman and Julian Cole of Boeing in 1970. Frances Bauer, Paul Garabedian and David Korn of the Courant Institute at New York University (NYU) wrote a series of two-dimensional Full Potential airfoil codes that were widely used, the most important being named Program H. A further growth of Program H was developed by Bob Melnik and his group at Grumman Aerospace as Grumfoil. Antony Jameson, originally at Grumman Aircraft and the Courant Institute of NYU, worked with David Caughey to develop the important three-dimensional Full Potential code FLO22 in 1975. Many Full Potential codes emerged after this, culminating in Boeing's Tranair (A633) code, which still sees heavy use.
The next step was the Euler equations, which promised to provide more accurate solutions of transonic flows. The methodology used by Jameson in his three-dimensional FLO57 code (1981) was used by others to produce such programs as Lockheed's TEAM program and IAI/Analytical Methods' MGAERO program. MGAERO is unique in being a structured cartesian mesh code, while most other such codes use structured body-fitted grids (with the exception of NASA's highly successful CART3D code, Lockheed's SPLITFLOW code and Georgia Tech's NASCART-GT). Antony Jameson also developed the three-dimensional AIRPLANE code which made use of unstructured tetrahedral grids.
In the two-dimensional realm, Mark Drela and Michael Giles, then graduate students at MIT, developed the ISES Euler program (actually a suite of programs) for airfoil design and analysis. This code first became available in 1986 and has been further developed to design, analyze and optimize single or multi-element airfoils, as the MSES program. MSES sees wide use throughout the world. A derivative of MSES, for the design and analysis of airfoils in a cascade, is MISES, developed by Harold Youngren while he was a graduate student at MIT.
The Navier–Stokes equations were the ultimate target of development. Two-dimensional codes, such as NASA Ames' ARC2D code first emerged. A number of three-dimensional codes were developed (ARC3D, OVERFLOW, CFL3D are three successful NASA contributions), leading to numerous commercial packages.
Recently CFD methods have gained traction for modeling the flow behavior of granular materials within various chemical processes in engineering. This approach has emerged as a cost-effective alternative, offering a nuanced understanding of complex flow phenomena while minimizing expenses associated with traditional experimental methods.
Hierarchy of fluid flow equations
CFD can be seen as a group of computational methodologies (discussed below) used to solve equations governing fluid flow. In the application of CFD, a critical step is to decide which set of physical assumptions and related equations need to be used for the problem at hand. To illustrate this step, the following summarizes the physical assumptions/simplifications taken in equations of a flow that is single-phase (see multiphase flow and two-phase flow), single-species (i.e., it consists of one chemical species), non-reacting, and (unless said otherwise) compressible. Thermal radiation is neglected, and body forces due to gravity are considered (unless said otherwise). In addition, for this type of flow, the next discussion highlights the hierarchy of flow equations solved with CFD. Note that some of the following equations could be derived in more than one way.
Conservation laws (CL): These are the most fundamental equations considered with CFD in the sense that, for example, all the following equations can be derived from them. For a single-phase, single-species, compressible flow one considers the conservation of mass, conservation of linear momentum, and conservation of energy.
Continuum conservation laws (CCL): Start with the CL. Assume that mass, momentum and energy are locally conserved: These quantities are conserved and cannot "teleport" from one place to another but can only move by a continuous flow (see continuity equation). Another interpretation is that one starts with the CL and assumes a continuum medium (see continuum mechanics). The resulting system of equations is unclosed since to solve it one needs further relationships/equations: (a) constitutive relationships for the viscous stress tensor; (b) constitutive relationships for the diffusive heat flux; (c) an equation of state (EOS), such as the ideal gas law; and, (d) a caloric equation of state relating temperature with quantities such as enthalpy or internal energy.
Compressible Navier-Stokes equations (C-NS): Start with the CCL. Assume a Newtonian viscous stress tensor (see Newtonian fluid) and a Fourier heat flux (see heat flux). The C-NS need to be augmented with an EOS and a caloric EOS to have a closed system of equations.
Incompressible Navier-Stokes equations (I-NS): Start with the C-NS. Assume that density is always and everywhere constant. Another way to obtain the I-NS is to assume that the Mach number is very small and that temperature differences in the fluid are very small as well. As a result, the mass-conservation and momentum-conservation equations are decoupled from the energy-conservation equation, so one only needs to solve for the first two equations.
Compressible Euler equations (EE): Start with the C-NS. Assume a frictionless flow with no diffusive heat flux.
Weakly compressible Navier-Stokes equations (WC-NS): Start with the C-NS. Assume that density variations depend only on temperature and not on pressure. For example, for an ideal gas, use $\rho = p_0/(R\,T)$, where $p_0$ is a conveniently-defined reference pressure that is always and everywhere constant, $\rho$ is density, $R$ is the specific gas constant, and $T$ is temperature. As a result, the WC-NS do not capture acoustic waves. It is also common in the WC-NS to neglect the pressure-work and viscous-heating terms in the energy-conservation equation. The WC-NS are also called the C-NS with the low-Mach-number approximation.
Boussinesq equations: Start with the C-NS. Assume that density variations are always and everywhere negligible except in the gravity term of the momentum-conservation equation (where density multiplies the gravitational acceleration). Also assume that various fluid properties such as viscosity, thermal conductivity, and heat capacity are always and everywhere constant. The Boussinesq equations are widely used in microscale meteorology.
Compressible Reynolds-averaged Navier–Stokes equations and compressible Favre-averaged Navier-Stokes equations (C-RANS and C-FANS): Start with the C-NS. Assume that any flow variable $f$, such as density, velocity and pressure, can be represented as $f = F + f''$, where $F$ is the ensemble-average of any flow variable, and $f''$ is a perturbation or fluctuation from this average. $f''$ is not necessarily small. If $F$ is a classic ensemble-average (see Reynolds decomposition) one obtains the Reynolds-averaged Navier–Stokes equations. And if $F$ is a density-weighted ensemble-average one obtains the Favre-averaged Navier-Stokes equations. As a result, and depending on the Reynolds number, the range of scales of motion is greatly reduced, something which leads to much faster solutions in comparison to solving the C-NS. However, information is lost, and the resulting system of equations requires the closure of various unclosed terms, notably the Reynolds stress.
Ideal flow or potential flow equations: Start with the EE. Assume zero fluid-particle rotation (zero vorticity) and zero flow expansion (zero divergence). The resulting flowfield is entirely determined by the geometrical boundaries. Ideal flows can be useful in modern CFD to initialize simulations.
Linearized compressible Euler equations (LEE): Start with the EE. Assume that any flow variable $f$, such as density, velocity and pressure, can be represented as $f = f_0 + f'$, where $f_0$ is the value of the flow variable at some reference or base state, and $f'$ is a perturbation or fluctuation from this state. Furthermore, assume that this perturbation is very small in comparison with some reference value. Finally, assume that $f_0$ satisfies "its own" equation, such as the EE. The LEE and its many variations are widely used in computational aeroacoustics.
Sound wave or acoustic wave equation: Start with the LEE. Neglect all gradients of and , and assume that the Mach number at the reference or base state is very small. The resulting equations for density, momentum and energy can be manipulated into a pressure equation, giving the well-known sound wave equation.
Shallow water equations (SW): Consider a flow near a wall where the wall-parallel length-scale of interest is much larger than the wall-normal length-scale of interest. Start with the EE. Assume that density is always and everywhere constant, neglect the velocity component perpendicular to the wall, and consider the velocity parallel to the wall to be spatially-constant.
Boundary layer equations (BL): Start with the C-NS (I-NS) for compressible (incompressible) boundary layers. Assume that there are thin regions next to walls where spatial gradients perpendicular to the wall are much larger than those parallel to the wall.
Bernoulli equation: Start with the EE. Assume that density variations depend only on pressure variations. See Bernoulli's Principle.
Steady Bernoulli equation: Start with the Bernoulli Equation and assume a steady flow. Or start with the EE and assume that the flow is steady and integrate the resulting equation along a streamline.
Stokes Flow or creeping flow equations: Start with the C-NS or I-NS. Neglect the inertia of the flow. Such an assumption can be justified when the Reynolds number is very low. As a result, the resulting set of equations is linear, something which simplifies greatly their solution.
Two-dimensional channel flow equation: Consider the flow between two infinite parallel plates. Start with the C-NS. Assume that the flow is steady, two-dimensional, and fully developed (i.e., the velocity profile does not change along the streamwise direction). Note that this widely-used fully-developed assumption can be inadequate in some instances, such as some compressible, microchannel flows, in which case it can be supplanted by a locally fully-developed assumption.
One-dimensional Euler equations or one-dimensional gas-dynamic equations (1D-EE): Start with the EE. Assume that all flow quantities depend only on one spatial dimension.
Fanno flow equation: Consider the flow inside a duct with constant area and adiabatic walls. Start with the 1D-EE. Assume a steady flow, no gravity effects, and introduce in the momentum-conservation equation an empirical term to recover the effect of wall friction (neglected in the EE). To close the Fanno flow equation, a model for this friction term is needed. Such a closure involves problem-dependent assumptions.
Rayleigh flow equation. Consider the flow inside a duct with constant area and either non-adiabatic walls without volumetric heat sources or adiabatic walls with volumetric heat sources. Start with the 1D-EE. Assume a steady flow, no gravity effects, and introduce in the energy-conservation equation an empirical term to recover the effect of wall heat transfer or the effect of the heat sources (neglected in the EE).
Methodology
In all of these approaches the same basic procedure is followed.
During preprocessing
The geometry and physical bounds of the problem can be defined using computer aided design (CAD). From there, data can be suitably processed (cleaned-up) and the fluid volume (or fluid domain) is extracted.
The volume occupied by the fluid is divided into discrete cells (the mesh). The mesh may be uniform or non-uniform, structured or unstructured, consisting of a combination of hexahedral, tetrahedral, prismatic, pyramidal or polyhedral elements.
The physical modeling is defined – for example, the equations of fluid motion + enthalpy + radiation + species conservation
Boundary conditions are defined. This involves specifying the fluid behaviour and properties at all bounding surfaces of the fluid domain. For transient problems, the initial conditions are also defined.
The simulation is started and the equations are solved iteratively as a steady-state or transient.
Finally a postprocessor is used for the analysis and visualization of the resulting solution.
Discretization methods
The stability of the selected discretisation is generally established numerically rather than analytically as with simple linear problems. Special care must also be taken to ensure that the discretisation handles discontinuous solutions gracefully. The Euler equations and Navier–Stokes equations both admit shocks and contact surfaces.
Some of the discretization methods being used are:
Finite volume method
The finite volume method (FVM) is a common approach used in CFD codes, as it has an advantage in memory usage and solution speed, especially for large problems, high Reynolds number turbulent flows, and source term dominated flows (like combustion).
In the finite volume method, the governing partial differential equations (typically the Navier-Stokes equations, the mass and energy conservation equations, and the turbulence equations) are recast in a conservative form, and then solved over discrete control volumes. This discretization guarantees the conservation of fluxes through a particular control volume. The finite volume equation yields governing equations in the form,
$$\frac{\partial}{\partial t} \iiint Q \, dV + \oiint F \, dA = 0,$$
where $Q$ is the vector of conserved variables, $F$ is the vector of fluxes (see Euler equations or Navier–Stokes equations), $V$ is the volume of the control volume element, and $A$ is the surface area of the control volume element.
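To make the control-volume idea concrete, here is a minimal Python sketch (an illustration added to this text, not part of the original article; assumes NumPy) that advances the 1D scalar conservation law $u_t + a\,u_x = 0$ with a first-order upwind flux, where each cell average changes only through the fluxes crossing its two faces:

```python
import numpy as np

def upwind_advection(u0, a=1.0, dx=0.01, dt=0.005, steps=100):
    """First-order finite-volume update for u_t + a*u_x = 0 (a > 0) on a periodic domain.

    Each cell average is updated by the net flux a*u through its two faces, which is
    the discrete statement of conservation underlying the finite volume method.
    """
    u = np.asarray(u0, dtype=float).copy()
    assert a * dt / dx <= 1.0, "CFL condition violated"
    for _ in range(steps):
        flux = a * u                                   # flux leaving each cell through its right face
        u += dt / dx * (np.roll(flux, 1) - flux)       # inflow from the left neighbour minus outflow
    return u

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u_init = np.exp(-200 * (x - 0.3) ** 2)                 # a smooth pulse
u_final = upwind_advection(u_init)
print(u_init.sum() * 0.01, u_final.sum() * 0.01)       # total "mass" is conserved to round-off
```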
Finite element method
The finite element method (FEM) is used in structural analysis of solids, but is also applicable to fluids. However, the FEM formulation requires special care to ensure a conservative solution. The FEM formulation has been adapted for use with fluid dynamics governing equations. Although FEM must be carefully formulated to be conservative, it is much more stable than the finite volume approach. FEM also provides more accurate solutions for smooth problems comparing to FVM. Another advantage of FEM is that it can handle complex geometries and boundary conditions. However, FEM can require more memory and has slower solution times than the FVM.
In this method, a weighted residual equation is formed:
$$R_i = \iiint W_i \, Q \, dV^{e},$$
where $R_i$ is the equation residual at an element vertex $i$, $Q$ is the conservation equation expressed on an element basis, $W_i$ is the weight factor, and $V^{e}$ is the volume of the element.
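The weighted-residual (Galerkin) idea can be illustrated on a simple model problem rather than a full flow solver. The sketch below (illustrative only, assuming NumPy) assembles linear elements for the 1D Poisson problem $-u'' = f$ with zero boundary values and converges to the exact solution:

```python
import numpy as np

def fem_poisson_1d(f, n_elements=20):
    """Linear-element Galerkin FEM for -u'' = f on (0, 1) with u(0) = u(1) = 0.

    Each hat function is used both to interpolate the solution and to weight the
    residual, giving one algebraic equation per interior node.
    """
    n_nodes = n_elements + 1
    x = np.linspace(0.0, 1.0, n_nodes)
    h = x[1] - x[0]
    K = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    for e in range(n_elements):
        i, j = e, e + 1
        ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])    # element stiffness matrix
        fe = f(0.5 * (x[i] + x[j])) * h / 2.0 * np.ones(2)       # midpoint-rule load vector
        K[np.ix_([i, j], [i, j])] += ke
        b[[i, j]] += fe
    # Homogeneous Dirichlet conditions: drop the boundary rows and columns.
    u = np.zeros(n_nodes)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])
    return x, u

x, u = fem_poisson_1d(lambda s: np.pi ** 2 * np.sin(np.pi * s))
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small; converges to the exact solution sin(pi*x)
```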
Finite difference method
The finite difference method (FDM) has historical importance and is simple to program. It is currently only used in few specialized codes, which handle complex geometry with high accuracy and efficiency by using embedded boundaries or overlapping grids (with the solution interpolated across each grid).
In differential form the governing system can be written as
$$\frac{\partial Q}{\partial t} + \frac{\partial F}{\partial x} + \frac{\partial G}{\partial y} + \frac{\partial H}{\partial z} = 0,$$
where $Q$ is the vector of conserved variables, and $F$, $G$, and $H$ are the fluxes in the $x$, $y$, and $z$ directions respectively.
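For illustration (not from the original article; assumes NumPy), an explicit finite-difference update for the 1D diffusion equation $u_t = \alpha\,u_{xx}$ shows the basic stencil idea:

```python
import numpy as np

def heat_explicit(u0, alpha=1.0, dx=0.02, dt=1e-4, steps=500):
    """Explicit finite-difference stepping for u_t = alpha * u_xx with fixed (Dirichlet) ends.

    The second derivative is the standard three-point central difference; stability of
    this explicit scheme requires alpha*dt/dx**2 <= 0.5.
    """
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this dt/dx combination"
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

x = np.linspace(0.0, 1.0, 51)
u0 = np.sin(np.pi * x)                      # initial profile, zero at both ends
u_T = heat_explicit(u0)
# Analytic solution decays as exp(-pi**2 * alpha * t); here t = steps*dt = 0.05
print(np.max(np.abs(u_T - np.sin(np.pi * x) * np.exp(-np.pi ** 2 * 0.05))))  # small error
```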
Spectral element method
Spectral element method is a finite element type method. It requires the mathematical problem (the partial differential equation) to be cast in a weak formulation. This is typically done by multiplying the differential equation by an arbitrary test function and integrating over the whole domain. Purely mathematically, the test functions are completely arbitrary - they belong to an infinite-dimensional function space. Clearly an infinite-dimensional function space cannot be represented on a discrete spectral element mesh; this is where the spectral element discretization begins. The most crucial thing is the choice of interpolating and testing functions. In a standard, low order FEM in 2D, for quadrilateral elements the most typical choice is the bilinear test or interpolating function of the form . In a spectral element method however, the interpolating and test functions are chosen to be polynomials of a very high order (typically e.g. of the 10th order in CFD applications). This guarantees the rapid convergence of the method. Furthermore, very efficient integration procedures must be used, since the number of integrations to be performed in numerical codes is big. Thus, high order Gauss integration quadratures are employed, since they achieve the highest accuracy with the smallest number of computations to be carried out.
At present there are some academic CFD codes based on the spectral element method, and more are currently under development, as new time-stepping schemes arise in the scientific community.
Lattice Boltzmann method
The lattice Boltzmann method (LBM) with its simplified kinetic picture on a lattice provides a computationally efficient description of hydrodynamics.
Unlike the traditional CFD methods, which solve the conservation equations of macroscopic properties (i.e., mass, momentum, and energy) numerically, LBM models the fluid consisting of fictive particles, and such particles perform consecutive propagation and collision processes over a discrete lattice mesh. In this method, one works with the discrete in space and time version of the kinetic evolution equation in the Boltzmann Bhatnagar-Gross-Krook (BGK) form.
Vortex method
The vortex method, also Lagrangian Vortex Particle Method, is a meshfree technique for the simulation of incompressible turbulent flows. In it, vorticity is discretized onto Lagrangian particles, these computational elements being called vortices, vortons, or vortex particles. Vortex methods were developed as a grid-free methodology that would not be limited by the fundamental smoothing effects associated with grid-based methods. To be practical, however, vortex methods require means for rapidly computing velocities from the vortex elements – in other words they require the solution to a particular form of the N-body problem (in which the motion of N objects is tied to their mutual influences). This breakthrough came in the 1980s with the development of the Barnes-Hut and fast multipole method (FMM) algorithms. These paved the way to practical computation of the velocities from the vortex elements.
Software based on the vortex method offers a new means for solving tough fluid dynamics problems with minimal user intervention. All that is required is specification of problem geometry and setting of boundary and initial conditions. Among the significant advantages of this modern technology:
It is practically grid-free, thus eliminating numerous iterations associated with RANS and LES.
All problems are treated identically. No modeling or calibration inputs are required.
Time-series simulations, which are crucial for correct analysis of acoustics, are possible.
The small scale and large scale are accurately simulated at the same time.
Boundary element method
In the boundary element method, the boundary occupied by the fluid is divided into a surface mesh.
High-resolution discretization schemes
High-resolution schemes are used where shocks or discontinuities are present. Capturing sharp changes in the solution requires the use of second or higher-order numerical schemes that do not introduce spurious oscillations. This usually necessitates the application of flux limiters to ensure that the solution is total variation diminishing.
Turbulence models
In computational modeling of turbulent flows, one common objective is to obtain a model that can predict quantities of interest, such as fluid velocity, for use in engineering designs of the system being modeled. For turbulent flows, the range of length scales and complexity of phenomena involved in turbulence make most modeling approaches prohibitively expensive; the resolution required to resolve all scales involved in turbulence is beyond what is computationally possible. The primary approach in such cases is to create numerical models to approximate unresolved phenomena. This section lists some commonly used computational models for turbulent flows.
Turbulence models can be classified based on computational expense, which corresponds to the range of scales that are modeled versus resolved (the more turbulent scales that are resolved, the finer the resolution of the simulation, and therefore the higher the computational cost). If a majority or all of the turbulent scales are not modeled, the computational cost is very low, but the tradeoff comes in the form of decreased accuracy.
In addition to the wide range of length and time scales and the associated computational cost, the governing equations of fluid dynamics contain a non-linear convection term and a non-linear and non-local pressure gradient term. These nonlinear equations must be solved numerically with the appropriate boundary and initial conditions.
Reynolds-averaged Navier–Stokes
Reynolds-averaged Navier–Stokes (RANS) equations are the oldest approach to turbulence modeling. An ensemble version of the governing equations is solved, which introduces new apparent stresses known as Reynolds stresses. This adds a second-order tensor of unknowns for which various models can provide different levels of closure. It is a common misconception that the RANS equations do not apply to flows with a time-varying mean flow because these equations are 'time-averaged'. In fact, statistically unsteady (or non-stationary) flows can equally be treated. This is sometimes referred to as URANS. There is nothing inherent in Reynolds averaging to preclude this, but the turbulence models used to close the equations are valid only as long as the time over which these changes in the mean occur is large compared to the time scales of the turbulent motion containing most of the energy.
RANS models can be divided into two broad approaches:
Boussinesq hypothesis This method involves using an algebraic equation for the Reynolds stresses which includes determining the turbulent viscosity, and depending on the level of sophistication of the model, solving transport equations for determining the turbulent kinetic energy and dissipation. Models include k-ε (Launder and Spalding), Mixing Length Model (Prandtl), and Zero Equation Model (Cebeci and Smith). The models available in this approach are often referred to by the number of transport equations associated with the method. For example, the Mixing Length model is a "Zero Equation" model because no transport equations are solved; the k-ε model is a "Two Equation" model because two transport equations (one for k and one for ε) are solved. A minimal eddy-viscosity sketch is given after this list.
Reynolds stress model (RSM) This approach attempts to actually solve transport equations for the Reynolds stresses. This means introduction of several transport equations for all the Reynolds stresses and hence this approach is much more costly in CPU effort.
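The following minimal sketch (referenced in the Boussinesq item above) shows how an eddy viscosity is obtained from either the k–ε or the mixing-length closure; the constant C_μ = 0.09 is the commonly quoted standard value and the example inputs are arbitrary.

```python
# Minimal sketch: Boussinesq eddy viscosity from two classic closures.
def eddy_viscosity_k_epsilon(rho, k, epsilon, c_mu=0.09):
    """Turbulent (eddy) viscosity mu_t = rho * C_mu * k^2 / epsilon."""
    return rho * c_mu * k**2 / epsilon

def eddy_viscosity_mixing_length(rho, l_m, dudy):
    """Prandtl mixing-length closure: mu_t = rho * l_m^2 * |dU/dy|."""
    return rho * l_m**2 * abs(dudy)

# example values (air-like density, modest turbulence levels)
print(eddy_viscosity_k_epsilon(rho=1.2, k=0.5, epsilon=10.0))
print(eddy_viscosity_mixing_length(rho=1.2, l_m=0.01, dudy=100.0))
```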
Large eddy simulation
Large eddy simulation (LES) is a technique in which the smallest scales of the flow are removed through a filtering operation, and their effect modeled using subgrid scale models. This allows the largest and most important scales of the turbulence to be resolved, while greatly reducing the computational cost incurred by the smallest scales. This method requires greater computational resources than RANS methods, but is far cheaper than DNS.
Detached eddy simulation
Detached eddy simulations (DES) is a modification of a RANS model in which the model switches to a subgrid scale formulation in regions fine enough for LES calculations. Regions near solid boundaries and where the turbulent length scale is less than the maximum grid dimension are assigned the RANS mode of solution. As the turbulent length scale exceeds the grid dimension, the regions are solved using the LES mode. Therefore, the grid resolution for DES is not as demanding as pure LES, thereby considerably cutting down the cost of the computation. Though DES was initially formulated for the Spalart-Allmaras model (Philippe R. Spalart et al., 1997), it can be implemented with other RANS models (Strelets, 2001), by appropriately modifying the length scale which is explicitly or implicitly involved in the RANS model. So while Spalart–Allmaras model based DES acts as LES with a wall model, DES based on other models (like two equation models) behave as a hybrid RANS-LES model. Grid generation is more complicated than for a simple RANS or LES case due to the RANS-LES switch. DES is a non-zonal approach and provides a single smooth velocity field across the RANS and the LES regions of the solutions.
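A minimal sketch of the length-scale switch described above follows (C_DES = 0.65 is the constant calibrated for the original Spalart–Allmaras-based DES and is treated here as an assumption).

```python
# Minimal sketch: the DES length scale that selects RANS or LES behaviour per cell.
def des_length_scale(wall_distance, grid_size, c_des=0.65):
    """Return the DES length scale; RANS mode applies where wall_distance is smaller."""
    return min(wall_distance, c_des * grid_size)

print(des_length_scale(wall_distance=0.001, grid_size=0.01))  # near wall: RANS mode
print(des_length_scale(wall_distance=0.1,   grid_size=0.01))  # far from wall: LES mode
```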
Direct numerical simulation
Direct numerical simulation (DNS) resolves the entire range of turbulent length scales. This marginalizes the effect of models, but is extremely expensive. The computational cost is proportional to Re³, where Re is the Reynolds number. DNS is intractable for flows with complex geometries or flow configurations.
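The following back-of-envelope sketch (assuming the standard Kolmogorov scaling arguments) illustrates the Re³ cost estimate quoted above.

```python
# Minimal sketch: grid points per direction scale like Re^(3/4), so a 3-D DNS
# needs ~Re^(9/4) points and, with the time-step count, total work of order Re^3.
def dns_cost_estimate(reynolds):
    points = reynolds ** (9 / 4)   # number of grid points in 3-D
    steps = reynolds ** (3 / 4)    # number of time steps for one flow-through
    return points, points * steps  # storage and total work ~ Re^3

for re in (1e3, 1e4, 1e5):
    pts, work = dns_cost_estimate(re)
    print(f"Re = {re:.0e}: ~{pts:.2e} points, ~{work:.2e} cell-updates")
```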
Coherent vortex simulation
The coherent vortex simulation approach decomposes the turbulent flow field into a coherent part, consisting of organized vortical motion, and the incoherent part, which is the random background flow. This decomposition is done using wavelet filtering. The approach has much in common with LES, since it uses decomposition and resolves only the filtered portion, but differs in that it does not use a linear, low-pass filter. Instead, the filtering operation is based on wavelets, and the filter can be adapted as the flow field evolves. Farge and Schneider tested the CVS method with two flow configurations and showed that the coherent portion of the flow exhibited the same energy spectrum as the total flow, and corresponded to coherent structures (vortex tubes), while the incoherent part of the flow was composed of homogeneous background noise, which exhibited no organized structures. Goldstein and Vasilyev applied the FDV model to large eddy simulation, but did not assume that the wavelet filter eliminated all coherent motions from the subfilter scales. By employing both LES and CVS filtering, they showed that the SFS dissipation was dominated by the SFS flow field's coherent portion.
PDF methods
Probability density function (PDF) methods for turbulence, first introduced by Lundgren, are based on tracking the one-point PDF of the velocity, f_V(v; x, t), which gives the probability of the velocity at point x being between v and v + dv. This approach is analogous to the kinetic theory of gases, in which the macroscopic properties of a gas are described by a large number of particles. PDF methods are unique in that they can be applied in the framework of a number of different turbulence models; the main differences occur in the form of the PDF transport equation. For example, in the context of large eddy simulation, the PDF becomes the filtered PDF. PDF methods can also be used to describe chemical reactions, and are particularly useful for simulating chemically reacting flows because the chemical source term is closed and does not require a model. The PDF is commonly tracked by using Lagrangian particle methods; when combined with large eddy simulation, this leads to a Langevin equation for subfilter particle evolution.
Vorticity confinement method
The vorticity confinement (VC) method is an Eulerian technique used in the simulation of turbulent wakes. It uses a solitary-wave like approach to produce a stable solution with no numerical spreading. VC can capture the small-scale features to within as few as 2 grid cells. Within these features, a nonlinear difference equation is solved as opposed to the finite difference equation. VC is similar to shock capturing methods, where conservation laws are satisfied, so that the essential integral quantities are accurately computed.
Linear eddy model
The Linear eddy model is a technique used to simulate the convective mixing that takes place in turbulent flow. Specifically, it provides a mathematical way to describe the interactions of a scalar variable within the vector flow field. It is primarily used in one-dimensional representations of turbulent flow, since it can be applied across a wide range of length scales and Reynolds numbers. This model is generally used as a building block for more complicated flow representations, as it provides high resolution predictions that hold across a large range of flow conditions.
Two-phase flow
The modeling of two-phase flow is still under development. Different methods have been proposed, including the Volume of fluid method, the level-set method and front tracking. These methods often involve a tradeoff between maintaining a sharp interface and conserving mass. This is crucial since the evaluation of the density, viscosity and surface tension is based on the values averaged over the interface.
Solution algorithms
Discretization in the space produces a system of ordinary differential equations for unsteady problems and algebraic equations for steady problems. Implicit or semi-implicit methods are generally used to integrate the ordinary differential equations, producing a system of (usually) nonlinear algebraic equations. Applying a Newton or Picard iteration produces a system of linear equations which is nonsymmetric in the presence of advection and indefinite in the presence of incompressibility. Such systems, particularly in 3D, are frequently too large for direct solvers, so iterative methods are used, either stationary methods such as successive overrelaxation or Krylov subspace methods. Krylov methods such as GMRES, typically used with preconditioning, operate by minimizing the residual over successive subspaces generated by the preconditioned operator.
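As an illustration of a preconditioned Krylov solve, the sketch below uses SciPy's GMRES with an incomplete-LU preconditioner on a small nonsymmetric convection–diffusion-like matrix (the matrix and its coefficients are arbitrary stand-ins, not a real CFD operator).

```python
# Minimal sketch: GMRES with an ILU preconditioner on a small nonsymmetric system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# 1-D convection-diffusion stencil: nonsymmetric because of the advection term
main = 2.0 * np.ones(n)
lower = -1.2 * np.ones(n - 1)   # upwind-weighted neighbour
upper = -0.8 * np.ones(n - 1)
A = sp.diags([lower, main, upper], offsets=[-1, 0, 1], format="csc")
b = np.ones(n)

ilu = spla.spilu(A)                                   # incomplete LU factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)     # preconditioner M ~ A^-1
x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else "not converged", np.linalg.norm(A @ x - b))
```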
Multigrid has the advantage of asymptotically optimal performance on many problems. Traditional solvers and preconditioners are effective at reducing high-frequency components of the residual, but low-frequency components typically require many iterations to reduce. By operating on multiple scales, multigrid reduces all components of the residual by similar factors, leading to a mesh-independent number of iterations.
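The following minimal sketch (a 1-D Poisson model problem, not a CFD solver) demonstrates the division of labour described above: a weighted Jacobi smoother damps the high-frequency error while a coarse-grid correction removes the low-frequency part.

```python
# Minimal sketch: one two-grid multigrid cycle for -u'' = f on a uniform grid.
import numpy as np

def jacobi(u, f, h, sweeps, omega=2/3):
    """Weighted Jacobi smoother for -u'' = f on interior points."""
    for _ in range(sweeps):
        u_new = u.copy()
        u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h**2 * f[1:-1])
        u = u_new
    return u

def two_grid_cycle(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                               # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2     # fine-grid residual
    n_c = (len(u) + 1) // 2                                     # coarse grid size
    rc = np.zeros(n_c)                                          # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    # exact coarse-grid solve of the error equation with a small dense system
    A = (2*np.eye(n_c - 2) - np.eye(n_c - 2, k=1) - np.eye(n_c - 2, k=-1)) / (2*h)**2
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)   # linear prolongation
    return jacobi(u + e, f, h, sweeps=3)                        # post-smoothing

n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)      # manufactured right-hand side; exact solution sin(pi x)
u = np.zeros(n)
for cycle in range(5):
    u = two_grid_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # shrinks toward the O(h^2) discretization error
```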
For indefinite systems, preconditioners such as incomplete LU factorization, additive Schwarz, and multigrid perform poorly or fail entirely, so the problem structure must be used for effective preconditioning. Methods commonly used in CFD are the SIMPLE and Uzawa algorithms which exhibit mesh-dependent convergence rates, but recent advances based on block LU factorization combined with multigrid for the resulting definite systems have led to preconditioners that deliver mesh-independent convergence rates.
Unsteady aerodynamics
CFD made a major breakthrough in the late 1970s with the introduction of LTRAN2, a 2-D code to model oscillating airfoils based on transonic small perturbation theory, developed by Ballhaus and associates. It uses a Murman-Cole switch algorithm for modeling the moving shock waves. Later it was extended to 3-D with use of a rotated difference scheme by AFWAL/Boeing, resulting in LTRAN3.
Biomedical engineering
CFD investigations are used to clarify the characteristics of aortic flow in detail beyond the capabilities of experimental measurements. To analyze these conditions, CAD models of the human vascular system are extracted employing modern imaging techniques such as MRI or computed tomography. A 3D model is reconstructed from this data and the fluid flow can be computed. Blood properties such as density and viscosity, and realistic boundary conditions (e.g. systemic pressure) have to be taken into consideration. This makes it possible to analyze and optimize the flow in the cardiovascular system for different applications.
CPU versus GPU
Traditionally, CFD simulations are performed on CPUs.
In a more recent trend, simulations are also performed on GPUs. These typically contain slower but far more numerous processors. For CFD algorithms that feature good parallel performance (i.e. good speed-up from adding more cores), this can greatly reduce simulation times. Fluid-implicit particle and lattice-Boltzmann methods are typical examples of codes that scale well on GPUs.
See also
Application of CFD in thermal power plants
Blade element theory
Boundary conditions in fluid dynamics
Cavitation modelling
Central differencing scheme
Computational magnetohydrodynamics
Discrete element method
Finite element method
Finite volume method for unsteady flow
Fluid animation
Immersed boundary method
Lattice Boltzmann methods
List of finite element software packages
Meshfree methods
Moving particle semi-implicit method
Multi-particle collision dynamics
Multidisciplinary design optimization
Numerical methods in fluid mechanics
Shape optimization
Smoothed-particle hydrodynamics
Stochastic Eulerian Lagrangian method
Turbulence modeling
Unified methods for computing incompressible and compressible flow
Visualization (graphics)
Wind tunnel
References
Notes
External links
Course: Computational Fluid Dynamics – Suman Chakraborty (Indian Institute of Technology Kharagpur)
Course: Numerical PDE Techniques for Scientists and Engineers, Open access Lectures and Codes for Numerical PDEs, including a modern view of Compressible CFD
Computational fields of study | Computational fluid dynamics | [
"Physics",
"Chemistry",
"Technology"
] | 7,824 | [
"Computational fields of study",
"Computational fluid dynamics",
"Computational physics",
"Computing and society",
"Fluid dynamics"
] |
305,927 | https://en.wikipedia.org/wiki/Vitrification | Vitrification (, via French ) is the full or partial transformation of a substance into a glass, that is to say, a non-crystalline or amorphous solid. Glasses differ from liquids structurally and glasses possess a higher degree of connectivity with the same Hausdorff dimensionality of bonds as crystals: dimH = 3. In the production of ceramics, vitrification is responsible for their impermeability to water.
Vitrification is usually achieved by heating materials until they liquidize, then cooling the liquid, often rapidly, so that it passes through the glass transition to form a glassy solid. Certain chemical reactions also result in glasses.
In terms of chemistry, vitrification is characteristic for amorphous materials or disordered systems and occurs when bonding between elementary particles (atoms, molecules, forming blocks) becomes higher than a certain threshold value. Thermal fluctuations break the bonds; therefore, the lower the temperature, the higher the degree of connectivity. Because of that, amorphous materials have a characteristic threshold temperature termed glass transition temperature (Tg): below Tg amorphous materials are glassy whereas above Tg they are molten.
The most common applications are in the making of pottery, glass, and some types of food, but there are many others, such as the vitrification of an antifreeze-like liquid in cryopreservation.
In a different sense of the word, the embedding of material inside a glassy matrix is also called vitrification. An important application is the vitrification of radioactive waste to obtain a substance that is thought to be safer and more stable for disposal.
One study suggests during the eruption of Mount Vesuvius in 79 AD, a victim's brain was vitrified by the extreme heat of the volcanic ash; however, this has been strenuously disputed.
Ceramics
Vitrification is the progressive partial fusion of a clay, or of a body, as a result of a firing process. As vitrification proceeds, the proportion of glassy bond increases and the apparent porosity of the fired product becomes progressively lower. Vitreous bodies have open porosity, and may be either opaque or translucent. In this context, "zero porosity" may be defined as less than 1% water absorption. However, various standard procedures define the conditions of water absorption. An example is by ASTM, who state "The term vitreous generally signifies less than 0.5% absorption, except for floor and wall tile and low-voltage electrical insulators, which are considered vitreous up to 3% water absorption."
Pottery can be made impermeable to water by glazing or by vitrification. Porcelain, bone china, and sanitaryware are examples of vitrified pottery, and are impermeable even without glaze. Stoneware may be vitrified or semi-vitrified; the latter type would not be impermeable without glaze.
Applications
When sucrose is cooled slowly it results in crystal sugar (or rock candy), but when cooled rapidly it can form syrupy cotton candy (candyfloss).
Vitrification can also occur in a liquid such as water, usually through very rapid cooling or the introduction of agents that suppress the formation of ice crystals. This is in contrast to ordinary freezing which results in ice crystal formation. Vitrification is used in cryo-electron microscopy to cool samples so quickly that they can be imaged with an electron microscope without damage. In 2017, the Nobel prize for chemistry was awarded for the development of this technology, which can be used to image objects such as proteins or virus particles.
Ordinary soda-lime glass, used in windows and drinking containers, is created by the addition of sodium carbonate and lime (calcium oxide) to silicon dioxide. Without these additives, silicon dioxide would require very high temperature to obtain a melt, and subsequently (with slow cooling) a glass.
Vitrification is used in disposal and long-term storage of nuclear waste or other hazardous wastes in a method called geomelting. Waste is mixed with glass-forming chemicals in a furnace to form molten glass that then solidifies in canisters, thereby immobilizing the waste. The final waste form resembles obsidian and is a non-leaching, durable material that effectively traps the waste inside. It is widely assumed that such waste can be stored for relatively long periods in this form without concern for air or groundwater contamination. Bulk vitrification uses electrodes to melt soil and wastes where they lie buried. The hardened waste may then be disinterred with less danger of widespread contamination. According to the Pacific Northwest National Labs, "Vitrification locks dangerous materials into a stable glass form that will last for thousands of years."
Vitrification in cryopreservation
Vitrification in cryopreservation is used to preserve, for example, human egg cells (oocytes) (in oocyte cryopreservation) and embryos (in embryo cryopreservation). It prevents ice crystal formation and is a very fast process: -23,000 °C/min.
Currently, vitrification techniques have only been applied to brains (neurovitrification) by Alcor and to the upper body by the Cryonics Institute, but research is in progress by both organizations to apply vitrification to the whole body.
Many woody plants living in polar regions naturally vitrify their cells to survive the cold. Some can survive immersion in liquid nitrogen and liquid helium. Vitrification can also be used to preserve endangered plant species and their seeds. For example, recalcitrant seeds are considered hard to preserve. Plant vitrification solution (PVS), one of application of vitrification, has successfully preserved Nymphaea caerulea seeds.
Additives used in cryobiology or produced naturally by organisms living in polar regions are called cryoprotectants.
See also
Cryogenics
Crystallization
Supercooling
Vitrified fort
Literature
Stefan Lovgren, "Corpses Frozen for Future Rebirth by Arizona Company", March 2005, National Geographic
References
Glass
Phase transitions
Cryobiology
it:Criopreservazione | Vitrification | [
"Physics",
"Chemistry",
"Biology"
] | 1,274 | [
"Physical phenomena",
"Phase transitions",
"Glass",
"Phases of matter",
"Critical phenomena",
"Cryobiology",
"Unsolved problems in physics",
"Homogeneous chemical mixtures",
"Biochemistry",
"Statistical mechanics",
"Amorphous solids",
"Matter"
] |
3,694,602 | https://en.wikipedia.org/wiki/Natural%20frequency | Natural frequency, measured in terms of eigenfrequency, is the rate at which an oscillatory system tends to oscillate in the absence of disturbance. A foundational example pertains to simple harmonic oscillators, such as an idealized spring with no energy loss wherein the system exhibits constant-amplitude oscillations with a constant frequency. The phenomenon of resonance occurs when a forced vibration matches a system's natural frequency.
Overview
Free vibrations of an elastic body, also called natural vibrations, occur at the natural frequency. Natural vibrations are different from forced vibrations, which happen at the frequency of an applied force (forced frequency). If the forced frequency is equal to the natural frequency, the vibrations' amplitude increases manyfold. This phenomenon is known as resonance, in which the system's response to the applied frequency is amplified. A system's normal mode is defined by the oscillation of a natural frequency in a sine waveform.
In analysis of systems, it is convenient to use the angular frequency rather than the frequency f, or the complex frequency domain parameter .
In a mass–spring system, with mass m and spring stiffness k, the natural angular frequency can be calculated as: ω₀ = √(k/m)
In an electrical network, ω is a natural angular frequency of a response function f(t) if the Laplace transform F(s) of f(t) includes the term K·exp(s₀t), where s₀ = σ + jω for a real σ, and K ≠ 0 is a constant. Natural frequencies depend on network topology and element values but not their input. It can be shown that the set of natural frequencies in a network can be obtained by calculating the poles of all impedance and admittance functions of the network. A pole of the network transfer function is associated with a natural angular frequency of the corresponding response variable; however there may exist some natural angular frequency that does not correspond to a pole of the network function. These happen at some special initial states.
In LC and RLC circuits, the natural angular frequency can be calculated as: ω₀ = 1/√(LC)
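A minimal numerical restatement of the two formulas above (the component values are arbitrary examples):

```python
# Minimal sketch: natural angular frequencies of a mass-spring system and an LC circuit.
import math

def mass_spring_omega(k, m):
    """Natural angular frequency of a mass-spring system, omega_0 = sqrt(k/m)."""
    return math.sqrt(k / m)

def lc_omega(L, C):
    """Natural angular frequency of an LC circuit, omega_0 = 1/sqrt(LC)."""
    return 1.0 / math.sqrt(L * C)

print(mass_spring_omega(k=200.0, m=0.5))        # rad/s
print(lc_omega(L=10e-3, C=100e-9))              # rad/s
print(lc_omega(10e-3, 100e-9) / (2 * math.pi))  # corresponding frequency in Hz
```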
See also
Fundamental frequency
References
Sources
Further reading
Waves
Oscillation | Natural frequency | [
"Physics"
] | 421 | [
"Physical phenomena",
"Classical mechanics stubs",
"Classical mechanics",
"Waves",
"Motion (physics)",
"Mechanics",
"Oscillation"
] |
3,696,813 | https://en.wikipedia.org/wiki/Classical-map%20hypernetted-chain%20method | The classical-map hypernetted-chain method (CHNC method) is a method used in many-body theoretical physics for interacting uniform electron liquids in two and three dimensions, and for non-ideal plasmas. The method extends the famous hypernetted-chain method (HNC) introduced by J.M.J. van Leeuwen et al. to quantum fluids as well. The classical HNC, together with the Percus–Yevick approximation, are the two pillars which bear the brunt of most calculations in the theory of interacting classical fluids. Also, HNC and PY have become important in providing basic reference schemes in the theory of fluids, and hence they are of great importance to the physics of many-particle systems.
The HNC and PY integral equations provide the pair distribution functions of the particles in a classical fluid, even for very high coupling strengths. The coupling strength is measured by the ratio of the potential energy to the kinetic energy. In a classical fluid, the kinetic energy is proportional to the temperature. In a quantum fluid, the situation is very complicated as one needs to deal with quantum operators, and matrix elements of such operators, which appear in various perturbation methods based on Feynman diagrams. The CHNC method provides an approximate "escape" from these difficulties, and applies to regimes beyond perturbation theory. In Robert B. Laughlin's famous Nobel Laureate work on the fractional quantum Hall effect, an HNC equation was used within a classical plasma analogy.
In the CHNC method, the pair-distributions of the interacting particles are calculated using a mapping which ensures that the quantum mechanically correct non-interacting pair distribution function is recovered when the Coulomb interactions are switched off. The value of the method lies in its ability to calculate the interacting pair distribution functions g(r) at zero and finite temperatures. Comparison of the calculated g(r) with results from Quantum Monte Carlo show remarkable agreement, even for very strongly correlated systems.
The interacting pair-distribution functions obtained from CHNC have been used to calculate the exchange-correlation energies, Landau parameters of Fermi liquids and other quantities of interest in many-body physics and density functional theory, as well as in the theory of hot plasmas.
See also
Fermi liquid
Many-body theory
Quantum fluid
Radial distribution function
References
Further reading
Theoretical physics | Classical-map hypernetted-chain method | [
"Physics"
] | 478 | [
"Theoretical physics"
] |
3,699,152 | https://en.wikipedia.org/wiki/Rotary%20phase%20converter | A rotary phase converter, abbreviated RPC, is an electrical machine that converts power from one polyphase system to another, converting through rotary motion. Typically, single-phase electric power is used to produce three-phase electric power locally to run three-phase loads in premises where only single-phase is available.
Operation
A basic three-phase induction motor will have three windings, each end connected to terminals typically numbered (arbitrarily) as L1, L2, and L3 and sometimes T1, T2, T3.
A three-phase induction motor can be run at two-thirds of its rated horsepower on single-phase power applied to a single winding, once spun up by some means. A three-phase motor running on a single phase cannot start itself because it lacks the other phases to create a rotation on its own, much like a crank that is at dead center.
A three-phase induction motor that is spinning under single-phase power applied to terminals L1 and L2 will generate an electric potential (voltage) across terminal L3 with respect to L1 and L2. L1 to L3 and L2 to L3 will be 120 degrees out of phase with the input voltage, thus creating three-phase power. However, without current injection, special idler windings, or other means of regulation, the voltage will sag when a load is applied.
Power-factor correction is a very important consideration when building or choosing an RPC. This is desirable because an RPC that has power-factor correction will consume less current from the single-phase service supplying power to the phase converter and its loads.
A major concern with three phase power is that each phase be at similar voltages. A discrepancy between phases is known as phase imbalance. As a general guideline, unbalanced three-phase power that exceeds 4% in voltage variation can damage the equipment that it is meant to operate.
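A minimal sketch of the imbalance check implied above, using the common "maximum deviation from the average" definition of voltage unbalance (the example voltages are invented):

```python
# Minimal sketch: percent voltage unbalance from three line-to-line voltages.
def voltage_unbalance_percent(v_ab, v_bc, v_ca):
    avg = (v_ab + v_bc + v_ca) / 3.0
    max_dev = max(abs(v_ab - avg), abs(v_bc - avg), abs(v_ca - avg))
    return 100.0 * max_dev / avg

print(voltage_unbalance_percent(230.0, 232.0, 225.0))  # ~1.7 %, within the guideline
print(voltage_unbalance_percent(230.0, 230.0, 208.0))  # ~6.6 %, potentially damaging
```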
History
At the beginning of the 20th century, there were two main principles of electric railway traction current systems:
DC system
16⅔ Hz single phase system
These systems used series-wound traction motors. All of them needed a separate supply system or converters to take power from the standard 50 Hz electric network.
Kandó synchronous phase converter
Kálmán Kandó recognized that the electric traction system must be supplied by single-phase 50 Hz power from the standard electric network, and it must be converted in the locomotive to three-phase power for traction motors.
He created an electric machine called a synchronous phase converter, which was a single-phase synchronous motor and a three-phase synchronous generator with common stator and rotor.
It had two independent windings:
The outer winding is a single-phase synchronous motor. The motor takes the power from the overhead line.
The inner winding is a three-phase (or variable-phase) synchronous generator, which provides the power for the three- (or more) phase traction motors.
Single-phase supply
The direct feed from a standard electric network makes the system less complicated than the earlier systems and makes possible simple recuperation.
The single-phase feed makes it possible to use a single overhead line. More overhead lines increase the costs, and restrict the maximum speed of the trains.
Speed control
The asynchronous traction motor can run at only a single speed (RPM), determined by the frequency of the feeding current and the loading torque.
The solution was to use multiple secondary windings on the phase converter, and motor windings with different numbers of magnetic poles.
Types
A rotary phase converter (RPC) may be built as a motor-generator set. These completely isolate the load from the single-phase supply and produce balanced three-phase output. However, due to weight, cost, and efficiency concerns, most RPCs are not built this way.
Instead, they are built out of a three-phase induction motor or generator, called an idler, on which two of the terminals (the idler inputs) are powered from the single-phase line. The rotating flux in the motor produces a voltage on the third terminal. A voltage is induced in the third terminal that is phase shifted from the voltage between the first two terminals. In a three-winding motor, two of the windings are acting as a motor, and the third winding is acting as a generator. Because two of the outputs are the same as the single phase input, their phase relationship is 180°. This leaves the synthesized phase to be +/-90° from the input terminals. This non-ideal phase relationship requires a slight power de-rating of motors driven by this type of phase converter. Also, since the third, synthesized phase is driven differently from the other two, its response to load changes may be different causing this phase to sag more under load. Since induction motors are sensitive to voltage imbalance, this is another factor in de-rating of motors driven by this type of phase converter. For example, a small 5% imbalance in phase voltage requires a much larger 24% reduction of motor rated power. Thus tuning a rotary phase converter circuit for equal phase voltages under maximum load may be quite important.
Power quality
A common measure of the quality of the power produced by an RPC or any phase converter is the voltage balance, which may be measured while the RPC is driving a balanced load such as a three-phase motor. Other quality measures include the harmonic content of the power produced and the power factor of the RPC motor combination as seen by the utility. Selection of the best phase converter for any application depends on the sensitivity of the load to these factors. Three-phase induction motors are very sensitive to voltage imbalances.
The quality of three-phase power generated by such a phase converter depends upon a number of factors including:
Power capacity of the phase converter (idler horsepower rating).
Power level demands of the equipment being supplied. For instance, "hard starting" loads such as heavily loaded machinery or well pumps may have higher requirements than other loads rated at the same horsepower.
Power quality demands of the equipment being supplied (CNC equipment may have more stringent power quality requirements than a welding machine)
Use of techniques to balance the voltage between the three legs.
Quality improvement
RPC manufacturers use a variety of techniques to deal with these problems. Some of the techniques include,
The insertion of capacitors between the terminals to balance the power at a particular load.
The use of idlers with higher power ratings than the loads.
The construction of special idler motors with more windings on the third terminal to boost the voltage and compensate for the sag caused by the load.
The use of electronics to switch in capacitors, during start up or otherwise, based on the load.
The use of filters.
Uses
General
Demand exists for phase converters due to the use of three-phase motors. With increasing power output, three-phase motors have preferable characteristics to single-phase motors; the latter not being available in sizes over and, though available, rarely seen larger than . (Three-phase motors have higher efficiency, reduced complexity, with regards to starting, and three-phase power is significantly available where they are used.)
Electric railways
Rotary phase converters are used to produce a single phase for the single overhead conductor in electric railways. Five European countries (Germany, Austria, Switzerland, Norway, and Sweden), where electricity is three-phase AC at 50 Hz, have standardised on single-phase AC at 15 kV 16⅔ Hz for railway electrification; phase converters are, therefore, used to change both the number of phases and the frequency. In the Soviet Union, they were used on AC locomotives to convert single phase, 50 Hz to 3-phase for driving induction motors for traction motor cooling blowers, etc.
Alternatives to rotary converters
Alternatives exist to rotary phase converters for operation of three-phase equipment on a single-phase power supply.
Static phase converters
These may be an alternative where the issue at hand is starting a motor, rather than polyphase power itself. The static phase converter is used to start a three-phase motor. The motor then runs on a single phase with a synthesised third pole. However, this makes the power balance, and thus motor efficiency, extremely poor, requiring de-rating the motor (typically to 60% or less). Overheating, and quite often destruction of the motor, will result from failing to do so. (Many manufacturers and dealers specifically state that using a static converter will void any warranty.) An oversized static converter can remove the need to de-rate the motor, but at an increased cost.
Inverter drives (VFDs)
The popularity of the Variable-frequency drive (VFD) has increased in the last decade, especially in the home-shop market. This is because of their relative low cost and ability to generate three-phase output from single phase input. A VFD converts AC power to DC and then converts it back to AC through a transistor bridge, a technology that is somewhat analogous to that of a switch-mode power supply. As the VFD generates its AC output from the DC bus, it is possible to power a three-phase motor from a single-phase source. Nevertheless, commercial-grade VFDs are produced that require three-phase input, as there are some efficiency gains to be had with such an arrangement.
A typical VFD functions by rapidly switching transistors on and off to "chop" the voltage on the DC bus through what is known as pulse-width modulation (PWM). Proper use of PWM will result in an AC output whose voltage and frequency can be varied over a fairly wide range. As an induction motor's rotational speed is proportional to input frequency, a change in the VFD's output frequency will cause the motor to change speed. Voltage is also changed in a way that results in the motor producing a relatively constant torque over the useful speed range.
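The following minimal sketch assumes a simple constant volts-per-hertz law with a low-speed voltage boost, which is one common way a VFD keeps torque roughly constant below base speed; the rated values and boost are arbitrary assumptions, not the behaviour of any specific drive.

```python
# Minimal sketch: constant V/f output law with field weakening above base speed.
def vfd_output_voltage(f_out, v_rated=400.0, f_rated=50.0, v_boost=10.0):
    """Output voltage for a commanded frequency f_out (Hz)."""
    if f_out >= f_rated:
        return v_rated            # above base speed: constant voltage, field weakening
    return v_boost + (v_rated - v_boost) * f_out / f_rated

for f in (5, 25, 50, 60):
    print(f, "Hz ->", round(vfd_output_voltage(f), 1), "V")
```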
The output of a quality VFD is an approximation of a sine wave, with some high frequency harmonic content. Harmonic content will elevate motor temperature and may produce some whistling or whining noise that could be objectionable. The effects of unwanted harmonics can be mitigated by the use of reactive output filtering, which is incorporated into better quality VFDs. Reactive filtration impedes the high frequency harmonic content but has little effect on the fundamental frequency that determines motor speed. The result is an output to the motor that is closer to an ideal sine wave.
In the past, VFDs that have a capacity greater than were costly, thus making the rotary phase converter (RPC) an attractive alternative. However, modern VFDs have dropped considerably in cost, making them more affordable than comparable RPCs. Also working in the VFD's favor is its more compact size relative to its electrical capacity. A plus is many VFDs can produce a "soft start" effect (in which power is gradually applied to the motor), which reduces the amount of current that must be delivered at machine start-up.
Use of a VFD may result in motor damage if the motor is not rated for such an application. This is primarily because most induction motors are forced-air cooled by a fan or blower driven by the motor itself. Operating such a motor at a lower-than-normal speed will substantially reduce the cooling airflow, increasing the likelihood of overheating and winding damage or failure, especially while operating at full load. A manufacturer may void the warranty on a motor powered by a VFD unless the motor is "inverter-rated." As VFDs are the most popular method of powering motors in new commercial installations, most three-phase motors sold today are, in fact, inverter-rated.
See also
Frequency converter
Kálmán Kandó
Rotary converter
Three-phase electric power
References
Further reading
Sitkei Gyula: A magyar elektrotechnika nagy alakjai. (Energetikai Kiadó Kht. 2005)
External links
Electrical engineering
Hungarian inventions
AC power | Rotary phase converter | [
"Engineering"
] | 2,514 | [
"Electrical engineering"
] |
3,699,463 | https://en.wikipedia.org/wiki/SOS%20box | SOS box is the operator to which the LexA repressor binds to repress the transcription of SOS-induced proteins. SOS boxes are found near the promoter of various genes.
LexA binds to an SOS box in the absence of DNA damage. In the presence of DNA damage the binding of LexA is inactivated by the RecA activator. SOS boxes differ in DNA sequences and binding affinity towards LexA from organism to organism. Furthermore, SOS boxes may be present in a dual fashion, which indicates that more than one SOS box can be within the same promoter.
Examples
See Nucleic acid nomenclature for an explanation of non-GATC nucleotide letters.
See also
SOS response
References
DNA repair | SOS box | [
"Biology"
] | 151 | [
"Molecular genetics",
"DNA repair",
"Cellular processes"
] |
28,675,708 | https://en.wikipedia.org/wiki/Quartz%20crystal%20microbalance%20with%20dissipation%20monitoring | Within surface science, a quartz crystal microbalance with dissipation monitoring (QCM-D) is a type of quartz crystal microbalance (QCM) based on the ring-down technique. It is used in interfacial acoustic sensing. Its most common application is the determination of a film thickness in a liquid environment (such as the thickness of an adsorbed protein layer). It can be used to investigate further properties of the sample, most notably the layer's softness.
Method
Ring-down as a method to interrogate acoustic resonators was established in 1954. In the context of the QCM, it was described by Hirao et al. and Rodahl et al. The active component of a QCM is a thin quartz crystal disk sandwiched between a pair of electrodes. The application of an AC voltage across the electrodes causes the crystal to oscillate at its acoustic resonance frequency. When the AC voltage is turned off, the oscillation decays exponentially ("rings down"). This decay is recorded and the resonance frequency (f) and the energy dissipation factor (D) are extracted. D is defined as the loss of energy per oscillation period divided by the total energy stored in the system. D is equal to the resonance bandwidth divided by the resonance frequency. Other QCM instruments determine the bandwidth from the conductance spectra. Being a QCM, the QCM-D works in real-time, does not need labeling, and is surface-sensitive. Current QCM-D equipment enables measurement of more than 200 data points per second.
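The following minimal sketch (synthetic data with invented parameter values, not an instrument protocol; real sensors operate at MHz frequencies) illustrates how a ring-down record can be fitted to extract the resonance frequency f, and how the decay time τ converts to the dissipation factor via D = 1/(πfτ).

```python
# Minimal sketch: fit an exponentially decaying sinusoid to a ring-down record.
import numpy as np
from scipy.optimize import curve_fit

def ring_down(t, a, tau, f, phi):
    return a * np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phi)

# synthetic ring-down (frequency and decay time chosen only for illustration)
t = np.linspace(0.0, 10e-3, 20_000)
y = ring_down(t, 1.0, 2e-3, 15e3, 0.3) + 0.01 * np.random.randn(t.size)

popt, _ = curve_fit(ring_down, t, y, p0=[1.0, 1e-3, 15e3, 0.0])
a_fit, tau_fit, f_fit, phi_fit = popt
print("f =", f_fit, "Hz")
print("D =", 1.0 / (np.pi * f_fit * tau_fit))   # dimensionless dissipation factor
```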
Changes in the resonance frequency (Δf) are primarily related to mass uptake or release at the sensor surface. When employed as a mass sensor, the instrument has a sensitivity of about 0.5 ng/cm² according to the manufacturer. Changes in the dissipation factor (ΔD) are primarily related to the viscoelasticity (softness). The softness, in turn, often is related to structural changes of the film adhering at the sensor surface.
Mass sensor
When operated as a mass sensor, the QCM-D is often used to study molecular adsorption/desorption and binding kinetics to various types of surfaces. In contrast to optical techniques such as surface plasmon resonance (SPR) spectroscopy, ellipsometry, or dual polarisation interferometry, the QCM determines the mass of the adsorbed film including trapped solvent. Comparison of the "acoustic thickness" as determined with the QCM and the "optical thickness" as determined by any of the optical techniques therefore allows one to estimate the degree of swelling of the film in the ambient liquid. The difference between the dry and wet mass measured by QCM-D and MP-SPR is more significant in highly hydrated layers.
Since the softness of the sample is affected by a large variety of parameters, the QCM-D is useful for studying molecular interactions with surfaces as well as interactions between molecules. The QCM-D is commonly used in the fields of biomaterials, cell adhesion, drug discovery, materials science, and biophysics. Other typical applications are characterizing viscoelastic films, conformational changes of deposited macromolecules, build-up of polyelectrolyte multilayers, and degradation or corrosion of films and coatings.
References
Physical quantities | Quartz crystal microbalance with dissipation monitoring | [
"Physics",
"Mathematics"
] | 702 | [
"Physical phenomena",
"Quantity",
"Physical quantities",
"Physical properties"
] |
28,680,558 | https://en.wikipedia.org/wiki/Square-integrable%20function | In mathematics, a square-integrable function, also called a quadratically integrable function or L² function or square-summable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. Thus, square-integrability on the real line is defined as follows: a function f is square integrable if ∫_{−∞}^{+∞} |f(x)|² dx < ∞.
One may also speak of quadratic integrability over bounded intervals such as [a, b] for a ≤ b.
An equivalent definition is to say that the square of the function itself (rather than of its absolute value) is Lebesgue integrable. For this to be true, the integrals of the positive and negative portions of the real part must both be finite, as well as those for the imaginary part.
The vector space of (equivalence classes of) square integrable functions (with respect to Lebesgue measure) forms the L^p space with p = 2. Among the L^p spaces, the class of square integrable functions is unique in being compatible with an inner product, which allows notions like angle and orthogonality to be defined. Along with this inner product, the square integrable functions form a Hilbert space, since all of the L^p spaces are complete under their respective p-norms.
Often the term is used not to refer to a specific function, but to equivalence classes of functions that are equal almost everywhere.
Properties
The square integrable functions (in the sense mentioned in which a "function" actually means an equivalence class of functions that are equal almost everywhere) form an inner product space with inner product given by
⟨f, g⟩ = ∫_A f(x) · g(x)* dx
where
f and g are square integrable functions,
g(x)* is the complex conjugate of g(x),
A is the set over which one integrates—in the first definition (given in the introduction above), A is the whole real line; in the second, A is a bounded interval [a, b].
Since |f(x)|² = f(x) · f(x)*, square integrability is the same as saying ⟨f, f⟩ < ∞.
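A minimal numerical illustration (using SciPy quadrature; the example functions are arbitrary, real-valued choices) of checking ⟨f, f⟩ < ∞ and evaluating the inner product:

```python
# Minimal sketch: numerical L^2 inner products for simple real-valued functions.
import numpy as np
from scipy.integrate import quad

def inner_product(f, g, a=-np.inf, b=np.inf):
    """<f, g> = integral of f(x) * conjugate(g(x)) over [a, b] (real-valued case)."""
    value, _ = quad(lambda x: f(x) * g(x), a, b)
    return value

gaussian = lambda x: np.exp(-x**2)
print(inner_product(gaussian, gaussian))        # sqrt(pi/2): finite, so f is square integrable
print(inner_product(np.sin, np.cos, 0, np.pi))  # 0: the two functions are orthogonal on [0, pi]
```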
It can be shown that square integrable functions form a complete metric space under the metric induced by the inner product defined above.
A complete metric space is also called a Cauchy space, because sequences in such metric spaces converge if and only if they are Cauchy.
A space that is complete under the metric induced by a norm is a Banach space.
Therefore, the space of square integrable functions is a Banach space, under the metric induced by the norm, which in turn is induced by the inner product.
As we have the additional property of the inner product, this is specifically a Hilbert space, because the space is complete under the metric induced by the inner product.
This inner product space is conventionally denoted by (L₂, ⟨·, ·⟩₂) and many times abbreviated as L₂.
Note that L₂ denotes the set of square integrable functions, but no selection of metric, norm or inner product is specified by this notation.
The set, together with the specific inner product ⟨·, ·⟩₂, specify the inner product space.
The space of square integrable functions is the L^p space in which p = 2.
Examples
The function defined on is in for but not for
The function defined on is square-integrable.
Bounded functions, defined on are square-integrable. These functions are also in for any value of
Non-examples
The function defined on where the value at is arbitrary. Furthermore, this function is not in for any value of in
See also
Inner product space
References
Functional analysis
Mathematical analysis
Lp spaces | Square-integrable function | [
"Mathematics"
] | 659 | [
"Mathematical analysis",
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Mathematical relations"
] |
28,686,098 | https://en.wikipedia.org/wiki/Landweber%20exact%20functor%20theorem | In mathematics, the Landweber exact functor theorem, named after Peter Landweber, is a theorem in algebraic topology. It is known that a complex orientation of a homology theory leads to a formal group law. The Landweber exact functor theorem (or LEFT for short) can be seen as a method to reverse this process: it constructs a homology theory out of a formal group law.
Statement
The coefficient ring of complex cobordism is MU_*(point) = MU_* ≅ Z[x₁, x₂, …], where the degree of x_i is 2i. This is isomorphic to the graded Lazard ring. This means that giving a formal group law F (of degree −2) over a graded ring R_* is equivalent to giving a graded ring morphism MU_* → R_*. Multiplication by an integer n > 0 is defined inductively as a power series, by
[n+1]^F(x) = F(x, [n]^F(x)) and [1]^F(x) = x.
Let now F be a formal group law over a graded ring R_*. Define for a topological space X
E_*(X) = MU_*(X) ⊗_{MU_*} R_*.
Here R_* gets its MU_*-algebra structure via F. The question is: is E a homology theory? It is obviously a homotopy invariant functor, which fulfills excision. The problem is that tensoring in general does not preserve exact sequences. One could demand that R_* be flat over MU_*, but that would be too strong in practice. Peter Landweber found another criterion:
Theorem (Landweber exact functor theorem)
For every prime p, there are elements v₁, v₂, … in MU_* such that we have the following: Suppose that M_* is a graded MU_*-module and the sequence (p, v₁, v₂, …, v_n) is regular for M_*, for every p and n. Then
E_*(X) = MU_*(X) ⊗_{MU_*} M_*
is a homology theory on CW-complexes.
In particular, every formal group law F over a ring R yields a module over MU_*, since we get via F a ring morphism MU_* → R.
Remarks
There is also a version for Brown–Peterson cohomology BP. The spectrum BP is a direct summand of MU_(p) with coefficients Z_(p)[v₁, v₂, …]. The statement of the LEFT stays true if one fixes a prime p and substitutes BP for MU.
The classical proof of the LEFT uses the Landweber–Morava invariant ideal theorem: the only prime ideals of BP_* which are invariant under the coaction of BP_*BP are the I_n = (p, v₁, …, v_{n−1}). This allows one to check flatness only against the BP_*/I_n (see Landweber, 1976).
The LEFT can be strengthened as follows: let C₁ be the (homotopy) category of Landweber exact MU_*-modules and C₂ the category of MU-module spectra M such that π_*M is Landweber exact. Then the functor π_* : C₂ → C₁ is an equivalence of categories. The inverse functor (given by the LEFT) takes Landweber exact MU_*-algebras to (homotopy) MU-algebra spectra (see Hovey, Strickland, 1999, Thm 2.7).
Examples
The archetypical and first known (non-trivial) example is complex K-theory K. Complex K-theory is complex oriented and has as formal group law the multiplicative formal group law. The corresponding morphism MU_* → K_* is also known as the Todd genus. We have then an isomorphism
K_*(X) ≅ MU_*(X) ⊗_{MU_*} K_*,
called the Conner–Floyd isomorphism.
While complex K-theory was constructed before by geometric means, many homology theories were first constructed via the Landweber exact functor theorem. This includes elliptic homology, the Johnson–Wilson theories E(n) and the Lubin–Tate spectra E_n.
While homology with rational coefficients is Landweber exact, homology with integer coefficients is not Landweber exact. Furthermore, Morava K-theory K(n) is not Landweber exact.
Modern reformulation
A module M over is the same as a quasi-coherent sheaf over , where L is the Lazard ring. If , then M has the extra datum of a coaction. A coaction on the ring level corresponds to that is an equivariant sheaf with respect to an action of an affine group scheme G. It is a theorem of Quillen that and assigns to every ring R the group of power series
.
It acts on the set of formal group laws via
.
These are just the coordinate changes of formal group laws. Therefore, one can identify the stack quotient with the stack of (1-dimensional) formal groups and defines a quasi-coherent sheaf over this stack. Now it is quite easy to see that it suffices that M defines a quasi-coherent sheaf which is flat over in order that is a homology theory. The Landweber exactness theorem can then be interpreted as a flatness criterion for (see Lurie 2010).
Refinements to -ring spectra
While the LEFT is known to produce (homotopy) ring spectra out of , it is a much more delicate question to understand when these spectra are actually -ring spectra. As of 2010, the best progress was made by Jacob Lurie. If X is an algebraic stack and a flat map of stacks, the discussion above shows that we get a presheaf of (homotopy) ring spectra on X. If this map factors over (the stack of 1-dimensional p-divisible groups of height n) and the map is etale, then this presheaf can be refined to a sheaf of -ring spectra (see Goerss). This theorem is important for the construction of topological modular forms.
See also
Chromatic homotopy theory
References
.
Algebraic Topology | Landweber exact functor theorem | [
"Mathematics"
] | 1,055 | [
"Theorems in algebraic topology",
"Theorems in topology"
] |
28,687,092 | https://en.wikipedia.org/wiki/Cipargamin | Cipargamin (NITD609, KAE609) is an experimental synthetic antimalarial drug belonging to the spiroindolone class. The compound was developed at the Novartis Institute for Tropical Diseases in Singapore, through a collaboration with the Genomics Institute of the Novartis Research Foundation (GNF), the Biomedical Primate Research Centre and the Swiss Tropical Institute.
Cipargamin is a synthetic antimalarial molecule belonging to the spiroindolone class, awarded MMV Project of the Year 2009. It is structurally related to GNF 493, a compound first identified as a potent inhibitor of Plasmodium falciparum growth in a high throughput phenotypic screen of natural products conducted at the Genomics Institute of the Novartis Research Foundation in San Diego, California in 2006.
Cipargamin was discovered by screening the Novartis library of 12,000 natural products and synthetic compounds to find compounds active against Plasmodium falciparum. The first screen turned up 275 compounds and the list was narrowed to 17 potential candidates. The current spiroindolone was optimized to address its metabolic liabilities leading to improved stability and exposure levels in animals. As a result, cipargamin is one of only a handful of molecules capable of completely curing mice infected with Plasmodium berghei (a model of blood-stage malaria). Given its good physicochemical properties, promising pharmacokinetic and efficacy profile, the molecule was recently approved as a preclinical candidate and is now entering GLP toxicology studies with the aim of entering Phase I studies in humans in late 2010. If its safety and tolerability are acceptable, cipargamin would be the first antimalarial not belonging to either the artemisinin or peroxide class to go into a proof-of-concept study in malaria. If cipargamin behaves similarly in people to the way it works in mice, it may be possible to develop it into a drug that could be taken just once - far easier than current standard treatments in which malaria drugs are taken between one and four times a day for up to seven days. Cipargamin also has properties which could enable it to be manufactured in pill form and in large quantities. Further animal studies have been performed and researchers have begun human-stage trials.
References
Antimalarial agents
Tryptamines
Chloroarenes
Spiro compounds
Drugs developed by Novartis
Fluoroarenes | Cipargamin | [
"Chemistry"
] | 526 | [
"Organic compounds",
"Spiro compounds"
] |
28,687,269 | https://en.wikipedia.org/wiki/Coherent%20perfect%20absorber | A coherent perfect absorber (CPA), or anti-laser, is a device which absorbs coherent waves, such as coherent light waves, and converts them into some form of internal energy, e.g. heat or electrical energy. It is the time-reversed counterpart of a laser. Coherent perfect absorption allows control of waves with waves (light with light) without a nonlinear medium. The concept was first published in the July 26, 2010, issue of Physical Review Letters, by a team at Yale University led by theorist A. Douglas Stone and experimental physicist Hui W. Cao. In the September 9, 2010, issue of Physical Review A, Stefano Longhi of Polytechnic University of Milan showed how to combine a laser and an anti-laser in a single device. In February 2011 the team at Yale built the first working anti-laser. It is a two-channel CPA device which absorbs two beams from the same laser, but only when the beams have the correct phases and amplitudes. The initial device absorbed 99.4 percent of all incoming light, but the team behind the invention believe it will be possible to achieve 99.999 percent.
Originally implemented as a Fabry-Pérot cavity that is many wavelengths thick, the optical CPA operates at specific optical frequencies. In January 2012, a thin-film CPA was proposed that exploits the achromatic dispersion of metal-like materials, offering unparalleled bandwidth and a thin profile. Shortly after, CPA was observed in various thin film materials, including photonic metamaterial,
multi-layer graphene, single and multiple layers of chromium, as well as microwave metamaterial.
Anti-laser principle and demonstration
In the initial design, identical laser beams are fired onto opposite sides of a cavity consisting of a silicon wafer, a light-absorbing material that acts as a "loss medium". While light incident on one side would be partially transmitted and reflected, simultaneous illumination of both sides can result in destructive interference of all transmitted and reflected waves. Such complete suppression of transmission and reflection traps the optical energy within the loss medium until it is fully absorbed. The photons bounce back and forth until they are absorbed and transformed into heat. In contrast, a normal laser uses a gain medium which amplifies light instead of absorbing it.
Coherent perfect absorption and transmission in thin films
If the absorbing medium is thin in comparison to the wavelength, then constructive interference of mutually coherent waves incident on opposite sides of the absorber will enhance the level of absorption, while destructive interference will suppress it. For an ideal coherent absorber thin film, absorption can be enhanced to 100% and suppressed down to 0%, where absorption can be tuned between these extremes by adjusting the phase difference between the incident waves. Necessary conditions for coherent perfect absorption include that the film, when illuminated from one side only, will act as a (lossy) beam splitter, transmitting and reflecting equal fractions of the incident power. Necessary conditions for coherent perfect transmission include that, for illumination from one side, 25% of the incident power is transmitted and reflected each.
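The following minimal sketch assumes an idealized thin film with amplitude coefficients r = −1/2 and t = +1/2, which satisfies the equal-transmission-and-reflection condition described above; it is an illustration of the interference argument only, not a model of any specific experiment.

```python
# Minimal sketch: joint absorption of two coherent beams on an idealized thin film.
import numpy as np

r, t = -0.5, 0.5          # assumed reflection / transmission amplitudes of the film

def joint_absorption(phase):
    """Fraction of total power absorbed for two equal beams with a phase offset."""
    a, b = 1.0, np.exp(1j * phase)          # the two counter-propagating inputs
    out1 = r * a + t * b                    # output on side 1
    out2 = t * a + r * b                    # output on side 2
    power_in = abs(a)**2 + abs(b)**2
    power_out = abs(out1)**2 + abs(out2)**2
    return 1.0 - power_out / power_in

for phi in (0.0, np.pi / 2, np.pi):
    print(f"phase offset {phi:.2f} rad -> absorption {joint_absorption(phi):.2f}")
# in-phase beams are fully absorbed; anti-phase beams pass with no absorption
```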
Coherent perfect absorption in thin films is ultrafast, absorption of ~10 femtosecond light pulses has been demonstrated, implying that it can offer around 100 THz bandwidth. The demonstration of CPA of single photons indicates that the effect is compatible with arbitrarily low intensities and has led to opportunities for quantum technologies.
While absorption of electromagnetic waves is usually considered, the concept is also applicable to other waves (such as sound waves) and other phenomena. Indeed, as constructive and destructive interference of waves on a thin material enhance and suppress the wave-matter interaction, any effect of the medium on the wave may be controlled in this way, including polarization effects associated with chirality and anisotropy, as well as refraction and nonlinear optical phenomena.
Applications
Coherent perfect absorbers can be used to build absorptive interferometers, which could be useful in detectors, transducers, and optical switches. Another potential application is in radiology, where the principle of the CPA might be used to precisely target electromagnetic radiation inside human tissues for therapeutic or imaging purposes.
Integration of thin coherent perfect absorbers in waveguides has led to proof-of-principle demonstrations of fast and low-energy all-optical signal processing and cryptography, while integration of CPA with imaging systems has enabled demonstrations of all-optical focusing, pattern recognition & image processing, and massively parallel all-optical signal processing. In principle, such applications can offer extremely high bandwidth and low energy consumption.
References
Photonics
Quantum optics | Coherent perfect absorber | [
"Physics"
] | 944 | [
"Quantum optics",
"Quantum mechanics"
] |
28,687,367 | https://en.wikipedia.org/wiki/Real-time%20MRI | Real-time magnetic resonance imaging (RT-MRI) refers to the continuous monitoring of moving objects in real time. Traditionally, real-time MRI was possible only with low image quality or low temporal resolution. An iterative reconstruction algorithm removed these limitations. Radial FLASH MRI (real-time) yields a temporal resolution of 20 to 30 milliseconds for images with an in-plane resolution of 1.5 to 2.0 mm. Real-time MRI adds information about diseases of the joints and the heart. In many cases MRI examinations become easier and more comfortable for patients, especially for patients who cannot hold their breath or who have arrhythmia.
Balanced steady-state free precession (bSSFP) imaging gives better image contrast between the blood pool and myocardium than FLASH MRI, at the cost of severe banding artifact when B0 inhomogeneity is strong.
History
1977/1978 - Raymond Damadian built the first MRI scanner and achieved the first MRI scan of a healthy human body (1977) with the intent of diagnosing cancer. Additionally, Peter Mansfield develops the echo-planar technique, producing images in seconds and becoming the basis for fast MRIs.
1983 - Introduction of the k-space by D B Twieg
1987 - First real-time MRI of the heart is developed
1997 - Parallel imaging with an RF coil array is introduced by D K Sodickson
1999 - SENSE image reconstruction is introduced by K P Pruessmann
2002 - GRAPPA image reconstruction is introduced by Mark Griswold
Physical basis
Overview
In general, real time MRI relies on gradient echo sequences, efficient k-space sampling, and fast reconstruction methods to speed up the image acquisition process. Gradient echo sequences present shorter echo times since only one RF pulse is required for each sequence. Modern fast-switching gradient coils also require increasing the slew rate, allowing for faster changes in gradient echo sequences and decreasing the repetition time.
k-space sampling
Efficient k-space sampling also decreases data collection time. Rectilinear scanning has become the standard k-space sampling method for MRI. However, the process takes a relatively long time as it samples the entire k-space equally. Because of this delay, other sampling methods are used to capture real-time motion. Single-shot echo planar imaging (EPI) is one extremely fast sampling method in which all of the data for the MR image is collected from one RF pulse; however, EPI is still a Cartesian sampling method that, like the rectilinear scan, samples the entire k-space equally. Radial and spiral sampling are also used to sample the k-space efficiently, with spiral sampling likewise requiring only a single RF pulse to cover the entire k-space. Both radial and spiral sampling are more efficient than the Cartesian methods because they oversample low frequencies, which allows for general motion capture and better real-time image reconstruction. Thus, radial or spiral sampling of the k-space are now the preferred methods for real-time MRI reconstruction.
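As a concrete illustration of non-Cartesian sampling, the short sketch below generates a golden-angle radial trajectory, a spoke ordering often used for real-time radial acquisitions because any window of consecutive spokes covers k-space fairly uniformly; the spoke and readout counts are arbitrary example values, not parameters from the methods discussed here.
import numpy as np
# Golden-angle radial trajectory: each spoke is rotated by ~111.25 degrees
# (pi divided by the golden ratio) relative to the previous spoke.
n_spokes, n_readout = 13, 129
golden_angle = np.pi / ((1 + np.sqrt(5)) / 2)       # ~1.9416 rad ~ 111.25 deg
radius = np.linspace(-0.5, 0.5, n_readout)          # normalised k-space radius
angles = np.arange(n_spokes) * golden_angle
kx = np.outer(np.cos(angles), radius)               # shape (spokes, samples)
ky = np.outer(np.sin(angles), radius)
# every spoke passes through the centre of k-space, oversampling low frequencies
print(kx.shape, ky.shape, kx[:, n_readout // 2].max())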
Parallel imaging
Parallel imaging involves the addition of multiple coils surrounding the target with each coil acquiring a fraction of the total image. Because modern GPUs have parallel processing capabilities, they can reconstruct each portion of the image simultaneously. Therefore, the more coils used, the faster the acquisition of the MR images.
Gradient-echo sequences
FLASH MRI
While early applications were based on echo planar imaging, which found an important application in real-time functional MRI (rt-fMRI), recent progress is based on iterative reconstruction and FLASH MRI. The real-time imaging method proposed by Uecker and colleagues combines radial FLASH MRI, which offers rapid and continuous data acquisition, motion robustness, and tolerance to undersampling, with an iterative image reconstruction method based on the formulation of image reconstruction as a nonlinear inverse problem.
By integrating the data from multiple receive coils (i.e. parallel MRI) and exploiting the redundancy in the time series of images with the use of regularization and filtering, this approach enhances the possible degree of data undersampling by one order of magnitude, so that high-quality images may be obtained out of as little as 5 to 10% of the data required for a normal image reconstruction.
Because of the very short echo times (e.g., 1 to 2 milliseconds), the method does not suffer from off-resonance effects, so that the images neither exhibit susceptibility artifacts nor rely on fat suppression. While spoiled FLASH sequences offer spin density or T1 contrast, versions with refocused or fully balanced gradients provide access to T2/T1 contrast. The choice of the gradient-echo time (e.g., in-phase vs opposed-phase conditions) further alters the representation of water and fat signals in the images and will allow for separate water/fat movies.
Balanced steady state free precession
Another GRE sequence commonly used in RT-MRI is balanced steady state free precession (bSSFP), as mentioned above with balanced gradients. Steady state free precession involves a repetition time (TR) that is shorter than T2. This prevents the magnetic signal from decaying completely before the next RF pulse is applied, which then establishes a steady state signal over time. The short TR also makes bSSFP ideal for RT-MRI.
The equation for the on-resonance steady-state MR signal in bSSFP is given as:
$S = M_0 \sin\alpha \, \dfrac{1 - E_1}{1 - (E_1 - E_2)\cos\alpha - E_1 E_2}$
where $M_0$ is the initial magnetization, $\alpha$ is the flip angle, and $E_{1,2} = e^{-TR/T_{1,2}}$.
Thus, the MR signal is governed by the ratio T2/T1; for short TR the peak signal is approximately $\tfrac{1}{2}M_0\sqrt{T_2/T_1}$. Materials with similar T1 and T2, such as fluids and fat, present high T2/T1 contrast and can have signal intensity up to $M_0/2$.
The bSSFP signal is also greater than the FLASH signal, by a factor of roughly $\sqrt{T_2/(2\,TR)}$ when TR is much shorter than T1 and T2.
Due to this strong fluid/tissue contrast, RT-MRI with bSSFP lends itself to cardiac imaging and visualizing blood flow.
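A short numerical sketch of the expression above, evaluating the bSSFP signal versus flip angle for a fluid-like and a muscle-like tissue; the relaxation times and repetition time are rough illustrative values, not parameters from the studies cited here.
import numpy as np
# On-resonance bSSFP steady-state signal as a function of flip angle.
def bssfp_signal(alpha_deg, T1, T2, TR=0.004, M0=1.0):
    E1, E2 = np.exp(-TR / T1), np.exp(-TR / T2)
    a = np.deg2rad(alpha_deg)
    return M0 * np.sin(a) * (1 - E1) / (1 - (E1 - E2) * np.cos(a) - E1 * E2)
alphas = np.arange(1, 91)
blood = bssfp_signal(alphas, T1=1.6, T2=0.25)     # fluid-like: high T2/T1
muscle = bssfp_signal(alphas, T1=1.0, T2=0.035)   # tissue-like: low T2/T1
# the peaks approach 0.5*M0*sqrt(T2/T1), giving strong fluid/tissue contrast
print("peak blood signal :", round(float(blood.max()), 3))
print("peak muscle signal:", round(float(muscle.max()), 3))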
Image reconstruction
SENSitivity Encoding (SENSE)
Certain image reconstruction algorithms used alongside parallel imaging address the potential issues that can arise from undersampling the k-space. SENSitivity Encoding (SENSE) is a method that reconstructs the partial k-space data from each coil and combines the partial images into the final scan in the spatial domain. Coil sensitivities must first be acquired either before the actual imaging or during the imaging process. During the rest of imaging, the k-space is undersampled to skip every other line, resulting in a one-half field of view.
As a two-point example, pixels on the original aliased images can be "unfolded" through the following equations to give the final scan:
$I_1 = S_{1A}\,P_A + S_{1B}\,P_B$
$I_2 = S_{2A}\,P_A + S_{2B}\,P_B$
for two points, $A$ and $B$, in the final image that alias onto the same pixel. $I_1$ and $I_2$ denote the image signal for the aliased image measured by coils 1 and 2. $S_{1A}$ and $S_{1B}$ are the sensitivity values for coil 1 at points $A$ and $B$, respectively, and $S_{2A}$ and $S_{2B}$ are the sensitivity values for coil 2 at points $A$ and $B$, respectively. Solving this 2×2 linear system recovers the unaliased pixel values $P_A$ and $P_B$.
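A minimal sketch of this unfolding step for an acceleration factor of 2 and two coils; the object and coil-sensitivity profiles below are synthetic, made-up values chosen only to keep the example self-contained.
import numpy as np
# Synthetic 1D example of SENSE unfolding with acceleration factor R = 2.
n = 8
obj = np.linspace(1.0, 2.0, n)            # "true" image along the phase-encode axis
s1 = np.linspace(1.0, 0.2, n)             # sensitivity profile of coil 1
s2 = np.linspace(0.2, 1.0, n)             # sensitivity profile of coil 2
# Skipping every other k-space line halves the field of view, so pixel x
# aliases with pixel x + n/2 in each coil image.
half = n // 2
I1 = s1[:half] * obj[:half] + s1[half:] * obj[half:]
I2 = s2[:half] * obj[:half] + s2[half:] * obj[half:]
# Unfold each aliased pixel by solving the 2x2 system from the equations above.
recon = np.zeros(n)
for x in range(half):
    S = np.array([[s1[x], s1[x + half]],
                  [s2[x], s2[x + half]]])
    pA, pB = np.linalg.solve(S, np.array([I1[x], I2[x]]))
    recon[x], recon[x + half] = pA, pB
print(np.allclose(recon, obj))            # True: the aliasing is fully removed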
GeneRalized Autocalibrating Partial Parallel Acquisition (GRAPPA)
Another reconstruction algorithm used is GeneRalized Autocalibrating Partial Parallel Acquisition (GRAPPA). GRAPPA fills in the undersampled k-space data in the k-space domain before reconstructing the final image. Lines through the center of the k-space are fully sampled, typically alongside the actual image, to give the autocalibration signal (ACS) region. Weighing factors are calculated using the ACS, and these factors reflect the coil-specific distortions that each coil applies on the full field-of-view frequency domain. Then, the filled-in k-space data undergoes the inverse Fourier transform to construct the partial, non-aliased images. These images are then simply combined directly in the spatial domain.
If the k-space data is non-Cartesian, reconstruction is computationally more difficult, since the fast Fourier transform (FFT) requires Cartesian values. Typically, k-space data must be resampled into Cartesian coordinates before applying the FFT. GRAPPA can address these issues by obtaining large quantities of calibration data; however, the fastest reconstructions will generally require Cartesian data.
Signal-to-noise ratio
Lastly, within parallel image reconstruction there is another factor to consider, which is the signal to noise ratio (SNR). The SNR for parallel imaging can be calculated using the following equation:
$\mathrm{SNR}_{\mathrm{parallel}} = \dfrac{\mathrm{SNR}_{\mathrm{full}}}{g\sqrt{R}}$
where $R$ is the acceleration factor and $g$ is the spatially dependent geometry factor (proportional to the number of coils used or the interactions between coils). Therefore, the more coils used, the faster the imaging process and the more inter-coil interactions; hence, the lower the SNR.
Applications
Cardiac MRI
Although applications of real-time MRI cover a broad spectrum ranging from non-medical studies of turbulent flow to the noninvasive monitoring of interventional (surgical) procedures, the most important application making use of the new capabilities is cardiovascular imaging. Previous cardiac MR (CMR) used cine techniques to capture the periodic motion of the heart. However, this is not feasible for patients with arrhythmia, where the cardiac cycle is unpredictable. With the new method it is possible to obtain movies of the beating heart in real time with up to 50 frames per second during free breathing and without the need for a synchronization to the electrocardiogram. A study performed by Laubrock et al. demonstrated that RT-MRI produced higher quality images with a higher SNR than cine CMR with a bSSFP sequence and radial k-space sampling. RT-MRI also removes the need for breath-holding while imaging, leading to a more comfortable experience for the patient as well.
Musculoskeletal MRI
Apart from cardiac MRI other real-time applications deal with functional studies of joint kinetics (e.g., temporomandibular joint, knee and the wrist) or address the coordinated dynamics of the articulators such as lips, tongue, soft palate and vocal folds during speaking (articulatory phonetics) or swallowing. Musculoskeletal imaging in particular benefits from real-time observation. Researchers at the NYU Grossman School of Medicine developed a RT-MRI glove for imaging movement of the hand. The glove uses high impedance coils to prevent the generation of eddy currents from rapidly changing magnetic fields and bSSFP for rapid imaging times. High-impedance coils remove the need for specific coil conformations and active gradient shielding.
MRI-guided invasive procedures
Applications in interventional MRI, which refers to the monitoring of minimally invasive surgical procedures, are possible by interactively changing parameters such as image position and orientation. This application is particularly helpful when a 3D image of the tissue is needed during surgery. It requires an in-room display for the physician to use during the procedure as well as the use of MRI-safe surgical tools. These include ceramic, plastic, or titanium, which is a paramagnetic metal. By using bSSFP and parallel imaging with multiple coils, frame rates of 5-10 frames per second have been accomplished, allowing for the visualization of cardiac procedures.
An MRI linear accelerator (LINAC) can image tumors and organs on-table. MRI-guided radiotherapy (MRgRT) is limited due to high costs and the personnel costs needed for each treatment and because clinical benefit compared to conventional radiotherapy platforms has not been established.
Future directions
Parallel imaging
Parallel imaging coils are available for torso and cardiac imaging, but they are not yet standardized for other body parts. Dynamic coil setups for speech and musculoskeletal imaging are key areas for current research.
Machine learning
Image reconstruction in RT-MRI benefits from machine learning (ML) or deep learning (DL). A nonlinear kernel, or mapping function, can be developed from the ACS to fill in k-space data and generate the final image. This process as a whole significantly accelerates the MRI process. Image segmentation or identification of lesions can be achieved through machine learning. In deep learning, with a convolutional neural network, the mapping function can be specified by the network. ML and DL improve image resolution as well as imaging speed.
High-performance, low-field scanners
High-performance, low-field MRI scanners are also an area of development. These scanners operate at relatively low magnetic field strengths, such as 0.35 T or 0.55 T. Many RT-MRI acquisition sequences, such as bSSFP, experience significant off-resonance effects. Off-resonance effects increase linearly with B0 field strength, so minimizing B0 also minimizes these effects that can lead to artifacts and image distortion. This allows for longer TRs, which then opens the door for a wider range of k-space sampling methods and sequence designs. Finally, lower strength MRI scanners will reduce the dangers associated with heating of metallic implants and decrease the cost of MRI.
References
External links
Related information of the Max Planck Society
Real-time MRI of horn playing (Sarah Willis)
Magnetic resonance imaging
Medical monitoring
Articles containing video clips | Real-time MRI | [
"Chemistry"
] | 2,689 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
2,732,301 | https://en.wikipedia.org/wiki/Boolean-valued%20model | In mathematical logic, a Boolean-valued model is a generalization of the ordinary Tarskian notion of structure from model theory. In a Boolean-valued model, the truth values of propositions are not limited to "true" and "false", but instead take values in some fixed complete Boolean algebra.
Boolean-valued models were introduced by Dana Scott, Robert M. Solovay, and Petr Vopěnka in the 1960s in order to help understand Paul Cohen's method of forcing. They are also related to Heyting algebra semantics in intuitionistic logic.
Definition
Fix a complete Boolean algebra B and a first-order language L; the signature of L will consist of a collection of constant symbols, function symbols, and relation symbols.
A Boolean-valued model for the language L consists of a universe M, which is a set of elements (or names), together with interpretations for the symbols. Specifically, the model must assign to each constant symbol of L an element of M, and to each n-ary function symbol f of L and each n-tuple of elements of M, the model must assign an element of M to the term f(a0,...,an-1).
Interpretation of the atomic formulas of L is more complicated. To each pair a and b of elements of M, the model must assign a truth value to the expression ; this truth value is taken from the Boolean algebra B. Similarly, for each n-ary relation symbol R of L and each n-tuple of elements of M, the model must assign an element of B to be the truth value .
Interpretation of other formulas and sentences
The truth values of the atomic formulas can be used to reconstruct the truth values of more complicated formulas, using the structure of the Boolean algebra. For propositional connectives, this is easy; one simply applies the corresponding Boolean operators to the truth values of the subformulae. For example, if φ(x) and ψ(y,z) are formulas with one and two free variables, respectively, and if a, b, c are elements of the model's universe to be substituted for x, y, and z, then the truth value of
$\varphi(a) \wedge \psi(b,c)$
is simply
$\|\varphi(a)\| \wedge \|\psi(b,c)\|$
The completeness of the Boolean algebra is required to define truth values for quantified formulas. If φ(x) is a formula with free variable x (and possibly other free variables that are suppressed), then
$\|\exists x\, \varphi(x)\| = \sup_{a \in M} \|\varphi(a)\|$
where the right-hand side is to be understood as the supremum in B of the set of all truth values ||φ(a)|| as a ranges over M.
The truth value of a formula is an element of the complete Boolean algebra B.
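A minimal computational sketch of these semantics (not part of the original article): truth values are drawn from the finite Boolean algebra B given by the powerset of {0, 1}, the universe has two names, and the atomic truth values are arbitrary choices made only to exhibit a formula whose value is neither 0 nor 1.
FULL = frozenset({0, 1})            # the top element 1 of B
EMPTY = frozenset()                 # the bottom element 0 of B
M = ["a", "b"]                      # universe of names
phi = {"a": frozenset({0}), "b": frozenset({1})}   # an arbitrary B-valued predicate
def exists(pred):                   # ||∃x phi(x)|| = supremum over M (union in B)
    value = EMPTY
    for m in M:
        value = value | pred[m]
    return value
def forall(pred):                   # ||∀x phi(x)|| = infimum over M (intersection in B)
    value = FULL
    for m in M:
        value = value & pred[m]
    return value
print(exists(phi) == FULL)          # True : "∃x phi(x)" has truth value 1
print(forall(phi) == EMPTY)         # True : "∀x phi(x)" has truth value 0
print(phi["a"])                     # frozenset({0}): a properly intermediate truth value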
Boolean-valued models of set theory
Given a complete Boolean algebra B there is a Boolean-valued model denoted by VB, which is the Boolean-valued analogue of the von Neumann universe V. (Strictly speaking, VB is a proper class, so we need to reinterpret what it means to be a model appropriately.) Informally, the elements of VB are "Boolean-valued sets". Given an ordinary set A, every set either is or is not a member; but given a Boolean-valued set, every set has a certain, fixed membership degree in A.
The elements of the Boolean-valued set, in turn, are also Boolean-valued sets, whose elements are also Boolean-valued sets, and so on. In order to obtain a non-circular definition of Boolean-valued set, they are defined inductively in a hierarchy similar to the cumulative hierarchy. For each ordinal α of V, the set VBα is defined as follows.
VB0 is the empty set.
VBα+1 is the set of all functions from VBα to B. (Such a function represents a subset of VBα; if f is such a function, then for any x in VBα, the value f(x) is the membership degree of x in the set.)
If α is a limit ordinal, VBα is the union of VBβ for β < α.
The class VB is defined to be the union of all sets VBα.
It is also possible to relativize this entire construction to some transitive model M of ZF (or sometimes a fragment thereof). The Boolean-valued model MB is obtained by applying the above construction inside M. The restriction to transitive models is not serious, as the Mostowski collapsing theorem implies that every "reasonable" (well-founded, extensional) model is isomorphic to a transitive one. (If the model M is not transitive things get messier, as M′s interpretation of what it means to be a "function" or an "ordinal" may differ from the "external" interpretation.)
Once the elements of VB have been defined as above, it is necessary to define B-valued relations of equality and membership on VB. Here a B-valued relation on VB is a function from VB × VB to B. To avoid confusion with the usual equality and membership, these are denoted by $\|x = y\|$ and $\|x \in y\|$ for x and y in VB. They are defined as follows:
$\|x \in y\|$ is defined to be $\sum_{t \in \mathrm{Dom}(y)} \|x = t\| \wedge y(t)$ ("x is in y if it is equal to something in y").
$\|x = y\|$ is defined to be $\|x \subseteq y\| \wedge \|y \subseteq x\|$ ("x equals y if x and y are both subsets of each other"), where
$\|x \subseteq y\|$ is defined to be $\prod_{t \in \mathrm{Dom}(x)} x(t) \Rightarrow \|t \in y\|$ ("x is a subset of y if all elements of x are in y")
The symbols Σ and Π denote the least upper bound and greatest lower bound operations, respectively, in the complete Boolean algebra B. At first sight the definitions above appear to be circular: $\|x \in y\|$ depends on $\|x = y\|$, which depends on $\|x \subseteq y\|$, which depends on $\|x \in y\|$. However, a close examination shows that the definition of $\|x \in y\|$ only depends on $\|x' \in y'\|$ for elements of smaller rank, so $\|x \in y\|$ and $\|x = y\|$ are well defined functions from VB×VB to B.
It can be shown that the B-valued relations and on VB make VB into a Boolean-valued model of set theory. Each sentence of first-order set theory with no free variables has a truth value in B; it must be shown that the axioms for equality and all the axioms of ZF set theory (written without free variables) have truth value 1 (the largest element of B). This proof is straightforward, but it is long because there are many different axioms that need to be checked.
Relationship to forcing
Set theorists use a technique called forcing to obtain independence results and to construct models of set theory for other purposes. The method was originally developed by Paul Cohen but has been greatly extended since then. In one form, forcing "adds to the universe" a generic subset of a poset, the poset being designed to impose interesting properties on the newly added object. The wrinkle is that (for interesting posets) it can be proved that there simply is no such generic subset of the poset. There are three usual ways of dealing with this:
syntactic forcing A forcing relation is defined between elements p of the poset and formulas φ of the forcing language. This relation is defined syntactically and has no semantics; that is, no model is ever produced. Rather, starting with the assumption that ZFC (or some other axiomatization of set theory) proves the independent statement, one shows that ZFC must also be able to prove a contradiction. However, the forcing is "over V"; that is, it is not necessary to start with a countable transitive model. See Kunen (1980) for an exposition of this method.
countable transitive models One starts with a countable transitive model M of as much of set theory as is needed for the desired purpose, and that contains the poset. Then there do exist filters on the poset that are generic over M; that is, that meet all dense open subsets of the poset that happen also to be elements of M.
fictional generic objects Commonly, set theorists will simply pretend that the poset has a subset that is generic over all of V. This generic object, in nontrivial cases, cannot be an element of V, and therefore "does not really exist". (Of course, it is a point of philosophical contention whether any sets "really exist", but that is outside the scope of the current discussion.) With a little practice this method is useful and reliable, but it can be philosophically unsatisfying.
Boolean-valued models and syntactic forcing
Boolean-valued models can be used to give semantics to syntactic forcing; the price paid is that the semantics is not 2-valued ("true or false"), but assigns truth values from some complete Boolean algebra. Given a forcing poset P, there is a corresponding complete Boolean algebra B, often obtained as the collection of regular open subsets of P, where the topology on P is defined by declaring all lower sets open (and all upper sets closed). (Other approaches to constructing B are discussed below.)
Now the order on B (after removing the zero element) can replace P for forcing purposes, and the forcing relation can be interpreted semantically by saying that, for p an element of B and φ a formula of the forcing language,
$p \Vdash \varphi$ means $p \leq \|\varphi\|$,
where ||φ|| is the truth value of φ in VB.
This approach succeeds in assigning a semantics to forcing over V without resorting to fictional generic objects. The disadvantages are that the semantics is not 2-valued, and that the combinatorics of B are often more complicated than those of the underlying poset P.
Boolean-valued models and generic objects over countable transitive models
One interpretation of forcing starts with a countable transitive model M of ZF set theory, a partially ordered set P, and a "generic" subset G of P, and constructs a new model of ZF set theory from these objects. (The conditions that the model be countable and transitive simplify some technical problems, but are not essential.) Cohen's construction can be carried out using Boolean-valued models as follows.
Construct a complete Boolean algebra B as the complete Boolean algebra "generated by" the poset P.
Construct an ultrafilter U on B (or equivalently a homomorphism from B to the Boolean algebra {true, false}) from the generic subset G of P.
Use the homomorphism from B to {true, false} to turn the Boolean-valued model MB of the section above into an ordinary model of ZF.
We now explain these steps in more detail.
For any poset P there is a complete Boolean algebra B and a map e from P to B+ (the non-zero elements of B) such that the image is dense, e(p)≤e(q) whenever p≤q, and e(p)e(q)=0 whenever p and q are incompatible. This Boolean algebra is unique up to isomorphism. It can be constructed as the algebra of regular open sets in the topological space of P (with underlying set P, and a base given by the sets Up of elements q with q≤p).
The map from the poset P to the complete Boolean algebra B is not injective in general. The map is injective if and only if P has the following property: if every r≤p is compatible with q, then p≤q.
The ultrafilter U on B is defined to be the set of elements b of B that are greater than some element of (the image of) G. Given an ultrafilter U on a Boolean algebra, we get a homomorphism to {true, false}
by mapping U to true and its complement to false. Conversely, given such a homomorphism, the inverse image of true is an ultrafilter, so ultrafilters are essentially the same as homomorphisms to {true, false}. (Algebraists might prefer to use maximal ideals instead of ultrafilters: the complement of an ultrafilter is a maximal ideal, and conversely the complement of a maximal ideal is an ultrafilter.)
If g is a homomorphism from a Boolean algebra B to a Boolean algebra C and MB is any
B-valued model of ZF (or of any other theory for that matter) we can turn MB into a C -valued model by applying the homomorphism g to the value of all formulas. In particular if C is {true, false} we get a {true, false}-valued model. This is almost the same as an ordinary model: in fact we get an ordinary model on the set of equivalence classes under || = || of a {true, false}-valued model. So we get an ordinary model of ZF set theory by starting from M, a Boolean algebra B, and an ultrafilter U on B.
(The model of ZF constructed like this is not transitive. In practice one applies the Mostowski collapsing theorem to turn this into a transitive model.)
We have seen that forcing can be done using Boolean-valued models, by constructing a Boolean algebra with ultrafilter from a poset with a generic subset. It is also possible to go back the other way: given a Boolean algebra B, we can form a poset P of all the nonzero elements of B, and a generic ultrafilter on B restricts to a generic set on P. So the techniques of forcing and Boolean-valued models are essentially equivalent.
Notes
References
Bell, J. L. (1985) Boolean-Valued Models and Independence Proofs in Set Theory, Oxford.
Contains an account of Boolean-valued models and applications to Riesz spaces, Banach spaces and algebras.
Contains an account of forcing and Boolean-valued models written for mathematicians who are not set theorists.
Model theory
Boolean algebra
Forcing (mathematics) | Boolean-valued model | [
"Mathematics"
] | 2,888 | [
"Boolean algebra",
"Forcing (mathematics)",
"Mathematical logic",
"Fields of abstract algebra",
"Model theory"
] |
2,732,435 | https://en.wikipedia.org/wiki/Davis%E2%80%93Putnam%20algorithm | In logic and computer science, the Davis–Putnam algorithm was developed by Martin Davis and Hilary Putnam for checking the validity of a first-order logic formula using a resolution-based decision procedure for propositional logic. Since the set of valid first-order formulas is recursively enumerable but not recursive, there exists no general algorithm to solve this problem. Therefore, the Davis–Putnam algorithm only terminates on valid formulas. Today, the term "Davis–Putnam algorithm" is often used synonymously with the resolution-based propositional decision procedure (Davis–Putnam procedure) that is actually only one of the steps of the original algorithm.
Overview
The procedure is based on Herbrand's theorem, which implies that an unsatisfiable formula has an unsatisfiable ground instance, and on the fact that a formula is valid if and only if its negation is unsatisfiable. Taken together, these facts imply that to prove the validity of φ it is enough to prove that a ground instance of ¬φ is unsatisfiable. If φ is not valid, then the search for an unsatisfiable ground instance will not terminate.
The procedure for checking validity of a formula φ roughly consists of these three parts:
put the formula ¬φ in prenex form and eliminate quantifiers
generate all propositional ground instances, one by one
check if each instance is satisfiable.
If some instance is unsatisfiable, then return that φ is valid. Else continue checking.
The last part is a SAT solver based on resolution, with an eager use of unit propagation and pure literal elimination (elimination of clauses with variables that occur only positively or only negatively in the formula).
Input: A set of clauses Φ.
Output: A Truth Value: true if Φ can be satisfied, false otherwise.
function DP-SAT(Φ)
repeat
// unit propagation:
while Φ contains a unit clause {l} do
for every clause c in Φ that contains l do
Φ ← remove-from-formula(c, Φ);
for every clause c in Φ that contains ¬l do
Φ ← remove-from-formula(c, Φ);
Φ ← add-to-formula(c \ {¬l}, Φ);
// eliminate clauses not in normal form:
for every clause c in Φ that contains both a literal l and its negation ¬l do
Φ ← remove-from-formula(c, Φ);
// pure literal elimination:
while there is a literal l all of which occurrences in Φ have the same polarity do
for every clause c in Φ that contains l do
Φ ← remove-from-formula(c, Φ);
// stopping conditions:
if Φ is empty then
return true;
if Φ contains an empty clause then
return false;
// Davis-Putnam procedure:
pick a literal l that occurs with both polarities in Φ
for every clause c in Φ containing l and every clause n in Φ containing its negation ¬l do
// resolve c with n:
r ← (c \ {l}) ∪ (n \ {¬l});
Φ ← add-to-formula(r, Φ);
for every clause c that contains l or ¬l do
Φ ← remove-from-formula(c, Φ);
At each step of the SAT solver, the intermediate formula generated is equisatisfiable with, but possibly not equivalent to, the original formula. The resolution step leads to a worst-case exponential blow-up in the size of the formula.
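The pseudocode above translates fairly directly into a runnable sketch. The Python version below is an illustrative implementation rather than reference code; clauses are represented as frozensets of signed integers (a representation chosen here for brevity), and the steps mirror the pseudocode.
def dp_sat(clauses):
    clauses = {frozenset(c) for c in clauses}
    while True:
        # discard tautological clauses (clauses containing both l and -l)
        clauses = {c for c in clauses if not any(-l in c for l in c)}
        # unit propagation
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is not None:
            (l,) = unit
            clauses = {c - {-l} for c in clauses if l not in c}
            continue
        # pure literal elimination
        literals = {l for c in clauses for l in c}
        pure = next((l for l in literals if -l not in literals), None)
        if pure is not None:
            clauses = {c for c in clauses if pure not in c}
            continue
        # stopping conditions
        if not clauses:
            return True
        if frozenset() in clauses:
            return False
        # Davis-Putnam resolution step on a variable occurring with both polarities
        l = next(l for l in literals if -l in literals and l > 0)
        pos = [c for c in clauses if l in c]
        neg = [c for c in clauses if -l in c]
        resolvents = {(c - {l}) | (n - {-l}) for c in pos for n in neg}
        clauses = {c for c in clauses if l not in c and -l not in c} | resolvents
print(dp_sat([{1, 2}, {-1, 2}, {-2}]))   # False: (p or q), (not p or q), (not q)
print(dp_sat([{1, 2}, {-1, 2}]))         # True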
The Davis–Putnam–Logemann–Loveland algorithm is a 1962 refinement of the propositional satisfiability step of the Davis–Putnam procedure which requires only a linear amount of memory in the worst case. It replaces the resolution step with the splitting rule: a backtracking algorithm that chooses a literal l, and then recursively checks whether a simplified formula with l assigned true is satisfiable or whether a simplified formula with l assigned false is. It still forms the basis for today's (as of 2015) most efficient complete SAT solvers.
See also
Herbrandization
References
Boolean algebra
Constraint programming
Automated theorem proving | Davis–Putnam algorithm | [
"Mathematics"
] | 874 | [
"Boolean algebra",
"Automated theorem proving",
"Mathematical logic",
"Computational mathematics",
"Fields of abstract algebra"
] |
2,732,992 | https://en.wikipedia.org/wiki/S%C3%B8rensen%20formol%20titration | The Sørensen formol titration (SFT), invented by S. P. L. Sørensen in 1907, is a titration of an amino acid with potassium hydroxide in the presence of formaldehyde. It is used in the determination of protein content in samples.
If instead of an amino acid an ammonium salt is used, the reaction product with formaldehyde is hexamethylenetetramine:
4 NH4Cl + 6 CH2O → (CH2)6N4 + 4 HCl + 6 H2O
The liberated hydrochloric acid is then titrated with the base and the amount of ammonium salt used can be determined.
With an amino acid the formaldehyde reacts with the amino group to form a methylene amino (R-N=CH2) group. The remaining acidic carboxylic acid group can then again be titrated with base.
In winemaking
Formol titration is one of the methods used in winemaking to measure yeast assimilable nitrogen needed by wine yeast in order to successfully complete fermentation.
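As a rough illustration of how such a titration is turned into a nitrogen figure, the sketch below converts a burette reading into yeast assimilable nitrogen, assuming one mole of titratable protons per mole of assimilable nitrogen; the sample volume, titrant strength and endpoint volume are made-up example numbers, not values from any winemaking protocol cited here.
# Yeast assimilable nitrogen from a formol titration (illustrative numbers only).
sample_volume_ml = 25.0        # juice aliquot titrated
naoh_molarity = 0.10           # mol/L NaOH used after formaldehyde addition
naoh_used_ml = 2.8             # volume needed to reach the endpoint pH again
mol_n = naoh_molarity * naoh_used_ml / 1000.0              # mol titratable N (1:1 assumed)
yan_mg_per_l = mol_n * 14007.0 / (sample_volume_ml / 1000.0)
print(f"YAN ~ {yan_mg_per_l:.0f} mg N/L")                  # ~157 mg N/L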
Accuracy in formol titration
There have been some inaccuracies of the SFT caused by differences in the basicity of the nitrogen in different amino acids, which were explained by S. L. Jodidi. For instance, proline (an amino acid), histidine, and lysine yield values that are too low compared to theory. Unlike alpha, monobasic (containing one amino group per molecule) amino acids, these amino (or imino) acids' nitrogens have inconstant basicity, which results in partial reaction with formaldehyde.
In the case of tyrosine, the actual results are too high due to the negative hydroxyl group (-OH), which acts as a base. This explanation is supported by the fact that phenylalanine can be accurately titrated.
References
Biochemistry methods
Titration | Sørensen formol titration | [
"Chemistry",
"Biology"
] | 383 | [
"Biochemistry methods",
"Titration",
"Instrumental analysis",
"Analytical chemistry stubs",
"Biochemistry"
] |
2,733,186 | https://en.wikipedia.org/wiki/Prelog%20strain | In organic chemistry, transannular strain (also called Prelog strain after chemist Vladimir Prelog) is the unfavorable interactions of ring substituents on non-adjacent carbons. These interactions, called transannular interactions, arise from a lack of space in the interior of the ring, which forces substituents into conflict with one another. In medium-sized cycloalkanes, which have between 8 and 11 carbons constituting the ring, transannular strain can be a major source of the overall strain, especially in some conformations, to which there is also contribution from large-angle strain and Pitzer strain. In larger rings, transannular strain drops off until the ring is sufficiently large that it can adopt conformations devoid of any negative interactions.
Transannular strain can also be demonstrated in other cyclo-organic molecules, such as lactones, lactams, ethers, cycloalkenes, and cycloalkynes. These compounds are not without significance, since they are particularly useful in the study of transannular strain. Furthermore, transannular interactions are not relegated to only conflicts between hydrogen atoms, but can also arise from larger, more complicated substituents interacting across a ring.
Thermodynamics
By definition, strain implies discomfiture, so it should follow that molecules with large amounts of transannular strain should have higher energies than those without. Cyclohexane, for the most part, is without strain and is therefore quite stable and low in energy. Rings smaller than cyclohexane, like cyclopropane and cyclobutane, have significant tension caused by small-angle strain, but there is no transannular strain. While there is no small-angle strain present in medium-sized rings, there does exist something called large-angle strain. Some angle and torsional strain is used by rings with more than nine members to relieve some of the distress caused by transannular strain.
The relative energies of cycloalkanes increase as the size of the ring increases, with a peak at cyclononane (with nine members in its ring). Beyond this point, the flexibility of the rings increases with increasing size; this allows for conformations that can significantly mitigate transannular interactions.
Kinetics
Rates of reaction can be affected by the size of rings. Essentially each reaction should be studied on a case-by-case basis, but some general trends have been seen. Molecular mechanics calculations of strain energy differences ΔSI between an sp2 and an sp3 state in cycloalkanes show linear correlations with rates (as log k) of many reactions involving the transition between sp2 and sp3 states, such as ketone reduction, alcohol oxidation or nucleophilic substitution; the contribution of transannular strain to these correlations is below 3%.
Rings with transannular strain have faster SN1, SN2, and free radical reactions compared to most smaller and normal sized rings. Five membered rings show an exception to this trend. On the other hand, some nucleophilic addition reactions involving addition to a carbonyl group in general show the opposite trend. Smaller and normal rings, with five membered rings being the anomaly, have faster reaction rates while those with transannular strain are slower.
One specific example is a study of rates for an SN1 reaction. Rings of various sizes, ranging from four to seventeen members, were used to compare the relative rates and better understand the effect of transannular strain on this reaction. The solvolysis reaction in acetic acid involved the formation of a carbocation as the chloride ion leaves the cyclic molecule. This study fits the general trend seen above that rings with transannular strain show increased reaction rates compared to smaller rings in SN1 reactions.
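The log k–ΔSI correlation mentioned above can be made concrete with a back-of-the-envelope transition-state-theory estimate; the 3 kcal/mol of strain relief used below is an arbitrary illustrative number, not a value reported in the studies discussed here.
import math
# Transition-state-theory estimate of a relative rate from a strain-energy change.
R = 1.987e-3            # kcal/(mol*K)
T = 298.15              # K
delta_strain = -3.0     # kcal/mol of strain relieved going from sp3 to sp2 (example value)
k_rel = math.exp(-delta_strain / (R * T))
print(f"relative rate ~ {k_rel:.0f}")   # ~160-fold acceleration for 3 kcal/mol of relief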
Examples of transannular strain
Influence on regioselectivity
The regioselectivity of water elimination is highly influenced by ring size. When water is eliminated from cyclic tertiary alcohols by an E1 route, three major products are formed. The semicyclic isomer (so-called because the double bond is shared by a ring atom and an exocyclic atom) and the (E) endocyclic isomer are expected to predominate; the (Z) endocyclic isomer is not expected to be formed until the ring size is large enough to accommodate the awkward angles of the trans configuration. The exact population of each product relative to the others differs considerably depending upon the size of the ring involved. As the ring size increases, the semicyclic isomer decreases rapidly and the (E) endocyclic isomer increases, but after a certain point, the semicyclic isomer begins to increase again. This can be attributed to transannular strain; this strain is significantly reduced in the (E) endocyclic isomer because it has one less substituent in the ring than the semicyclic isomer.
Influence on medium-sized ring synthesis
One of the effects of transannular strain is the difficulty of synthesizing medium-sized rings. Illuminati et al. have studied the kinetics of intramolecular ring closing using the simple nucleophilic substitution reaction of ortho-bromoalkoxyphenoxides. Specifically, they studied the ring closing of 5 to 10 carbon cyclic ethers. They found that as the number of carbons increased, so did the enthalpy of activation for the reaction. This indicates that strain within the cyclic transition states is higher if there are more carbons in the ring. Since transannular strain is the largest source of strain in rings this size, the larger enthalpies of activation result in much slower cyclizations due to transannular interactions in the cyclic ethers.
Influence of bridges on transannular strain
Transannular strain can be eliminated by the simple addition of a carbon bridge. E,Z,E,Z,Z-[10]-annulene is quite unstable; while it has the requisite number of π-electrons to be aromatic, they are for the most part isolated. Ultimately, the molecule itself is very difficult to observe. However, by the simple addition of a methylene bridge between the 1 and 6 positions, a stable, flat, aromatic molecule can be made and observed.
References
External links
Prelog strain definition: Link
Stereochemistry | Prelog strain | [
"Physics",
"Chemistry"
] | 1,349 | [
"Spacetime",
"Stereochemistry",
"Space",
"nan"
] |
2,733,362 | https://en.wikipedia.org/wiki/Aqueous%20two-phase%20system | Aqueous biphasic systems (ABS) or aqueous two-phase systems (ATPS) are clean alternatives for traditional organic-water solvent extraction systems.
ABS are formed when either two polymers, one polymer and one kosmotropic salt, or two salts (one chaotropic salt and the other a kosmotropic salt) are mixed at appropriate concentrations or at a particular temperature. The two phases are mostly composed of water and non volatile components, thus eliminating volatile organic compounds. They have been used for many years in biotechnological applications as non-denaturing and benign separation media. Recently, it has been found that ATPS can be used for separations of metal ions like mercury and cobalt, carbon nanotubes, environmental remediation, metallurgical applications and as a reaction media.
Introduction
In 1896, Beijerinck first noted an 'incompatibility' in solutions of agar, a water-soluble polymer, with soluble starch or gelatine. Upon mixing, they separated into two immiscible phases.
Subsequent investigation led to the determination of many other aqueous biphasic systems, of which the polyethylene glycol (PEG) - dextran system is the most extensively studied. Other systems that form aqueous biphases are: PEG - sodium carbonate or PEG and phosphates, citrates or sulfates. Aqueous biphasic systems are used during downstream processing mainly in biotechnological and chemical industries.
The two phases
It is a common observation that when oil and water are poured into the same container, they separate into two phases or layers, because they are immiscible. In general, aqueous (or water-based) solutions, being polar, are immiscible with non-polar organic solvents (cooking oil, chloroform, toluene, hexane etc.) and form a two-phase system. However, in an ABS, both immiscible components are water-based.
The formation of the distinct phases is affected by the pH, temperature and ionic strength of the two components, and separation occurs when the amount of a polymer present exceeds a certain limiting concentration (which is determined by the above factors).
PEG–dextran system
The "upper phase" is formed by the more hydrophobic polyethylene glycol (PEG), which is of lower density than the "lower phase," consisting of the more hydrophilic and denser dextran solution.
Although PEG is inherently denser than water, it occupies the upper layer. This is believed to be due to its solvent 'ordering' properties, which excludes excess water, creating a low density water environment. The degree of polymerization of PEG also affects the phase separation and the partitioning of molecules during extraction.
Advantages
ABS is an excellent method to employ for the extraction of proteins/enzymes and other labile biomolecules from crude cell extracts or other mixtures. Most often, this technique is employed in enzyme technology during industrial or laboratory production of enzymes.
They provide mild conditions that do not harm or denature unstable/labile biomolecules
The interfacial stress (at the interface between the two layers) is far lower (400-fold less) than water-organic solvent systems used for solvent extraction, causing less damage to the molecule to be extracted
The polymer layer stabilizes the extracted protein molecules, favouring a higher concentration of the desired protein in one of the layers, resulting in an effective extraction
Specialised systems may be developed (by varying factors such as temperature, degree of polymerisation, presence of certain ions etc. ) to favour the enrichment of a specific compound, or class of compounds, into one of the two phases. They are sometimes used simultaneously with ion-exchange resins for better extraction
Separation of the phases and the partitioning of the compounds occurs rapidly. This allows the extraction of the desired molecule before endogenous proteases can degrade them.
These systems are amenable to scale-ups, from laboratory-sized set-ups to those that can handle the requirements of industrial production. They may be employed in continuous protein-extraction processes.
Specificity may be further increased by tagging ligands specific to the desired enzyme, onto the polymer. This results in a preferential binding of the enzyme to the polymer, increasing the effectiveness of the extraction.
One major disadvantage, however, is the cost of materials involved, namely high-purity dextrans employed for the purpose. However, other low-cost alternatives such as less refined dextrans, hydroxypropyl starch derivatives and high-salt solutions are also available.
Thermodynamic Modeling
Besides experimental study, it is important to have a good thermodynamic model to describe and predict liquid-liquid equilibrium conditions for engineering and design purposes. Phase equilibrium (tie-line) data are usually used to obtain global and reliable parameters for such thermodynamic models. As polymer, electrolyte and water are all present in polymer/salt systems, all of the different types of interactions should be taken into account. Up to now, several models have been used, such as NRTL, Chen-NRTL, Wilson, UNIQUAC, NRTL-NRF and UNIFAC-NRF. It has been shown that, in all cases, the mentioned models were successful in reproducing tie-line data of polymer/salt aqueous two-phase systems. In most of the previous works, excess Gibbs energy functions have been used for modeling.
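Alongside activity-coefficient models, the binodal curve of a polymer/salt system is often summarized with simple empirical correlations. The sketch below fits one such form, a Merchuk-type equation Y = A·exp(B·√X − C·X³) (an assumption introduced here, not a correlation discussed in the text), to synthetic phase-composition points invented purely for illustration.
import numpy as np
from scipy.optimize import curve_fit
def merchuk(x, a, b, c):
    # empirical binodal correlation: Y (wt% PEG) as a function of X (wt% salt)
    return a * np.exp(b * np.sqrt(x) - c * x**3)
salt_wt = np.array([5.0, 7.0, 9.0, 11.0, 13.0, 15.0])     # wt% salt (synthetic points)
peg_wt = np.array([27.6, 21.9, 16.9, 12.5, 8.6, 5.4])     # wt% PEG (synthetic points)
popt, _ = curve_fit(merchuk, salt_wt, peg_wt, p0=(60.0, -0.3, 1e-4))
print("fitted A, B, C:", np.round(popt, 5))
print("predicted PEG wt% at 10 wt% salt:", round(float(merchuk(10.0, *popt)), 1))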
References
Bibliography
External links
London South Bank University on ABS
Unit operations
Separation processes | Aqueous two-phase system | [
"Chemistry"
] | 1,161 | [
"Chemical process engineering",
"Unit operations",
"nan",
"Separation processes"
] |
2,733,891 | https://en.wikipedia.org/wiki/Translational%20medicine | Translational medicine (often called translational science, of which it is a form) develops the clinical practice applications of the basic science aspects of the biomedical sciences; that is, it translates basic science to applied science in medical practice. It is defined by the European Society for Translational Medicine as "an interdisciplinary branch of the biomedical field supported by three main pillars: benchside, bedside, and community". The goal of translational medicine is to combine disciplines, resources, expertise, and techniques within these pillars to promote enhancements in prevention, diagnosis, and therapies. Accordingly, translational medicine is a highly interdisciplinary field, the primary goal of which is to coalesce assets of various natures within the individual pillars in order to improve the global healthcare system significantly.
History
Translational medicine is a rapidly growing discipline in biomedical research and aims to expedite the discovery of new diagnostic tools and treatments by using a multi-disciplinary, highly collaborative, "bench-to-bedside" approach. Within public health, translational medicine is focused on ensuring that proven strategies for disease treatment and prevention are actually implemented within the community. One prevalent description of translational medicine, first introduced by the Institute of Medicine's Clinical Research Roundtable, highlights two roadblocks (i.e., distinct areas in need of improvement): the first translational block (T1) prevents basic research findings from being tested in a clinical setting; the second translational block (T2) prevents proven interventions from becoming standard practice.
The National Institutes of Health has made a major push to fund translational medicine, especially within biomedical research, with a focus on cross-functional collaborations (e.g., between researchers and clinicians); leveraging new technology and data analysis tools; and increasing the speed at which new treatments reach patients. In December 2011, The National Center for Advancing Translational Science was established within the National Institutes of Health to "transform the translational science process so that new treatments and cures for disease can be delivered to patients faster." The Clinical and Translational Science Awards, established in 2006, supports 60 centers across the country that provide "academic homes for translational sciences and supporting research resources needed by local and national research communities." According to an article published in 2007 in Science Career Magazine, in 2007 to 2013 the European Commission targeted a majority of its €6 billion budget for health research to further translational medicine.
Training and education
In recent years, a number of educational programs have emerged to provide professional training in the skills necessary for successfully translating research into improved clinical outcomes. These programs go by various names (including Master of Translational Medicine and Master of Science in Bioinnovation). Many such programs emerge from bioengineering departments, often in collaboration with clinical departments.
Master and PhD programs
The University of Edinburgh has been running an MSc in Translational Medicine program since 2007. It is a 3-year online distance learning programme aimed at the working health professional.
Aalborg University Denmark has been running a master's degree in translational medicine since 2009.
A master's degree program in translational medicine was started at the University of Helsinki in 2010.
In 2010, UC Berkeley and UC San Francisco used a founding grant from Andy Grove to launch a joint program that became the Master of Translational Medicine (MTM). The program links the Bioengineering department at Berkeley with the Bioengineering and Therapeutic Science department at UCSF to give students a one-year experience in fostering medical innovation.
Cedars-Sinai Medical Center in Los Angeles, California was accredited in 2012 for a doctoral program in Biomedical Science and Translational Medicine. The PhD program focuses on biomedical and clinical research that relate directly to developing new therapies for patients.
Since 2013, the Official Master in Translational Medicine-MSc from the University of Barcelona offers the opportunity to gain an excellent training through theoretical and practical courses. Furthermore, this master is linked to the doctoral programme “Medicine and Translational Research”, with quality mention from the National Agency for the Quality Evaluation and Accreditation (ANECA).
In Fall 2015, the City College of New York established a master in translational medicine program. A partnership between The Grove School of Engineering and the Sophie Davis School for Biomedical Education/CUNY School of Medicine, this program provides scientists, engineers, and pre-med students with training in product design, intellectual property, regulatory affairs, and medical ethics over 3 semesters.
The University of Toronto's Master of Health Science (MHSc) in Translational Research in Health Sciences is a two-year, course-based program designed for interprofessional students from diverse backgrounds (such as medicine, life sciences, social sciences, engineering, design, and communications) who want to learn creative problem-solving skills, strategies, and competencies to translate (scientific) knowledge into innovations that improve medicine, health, and care. The program has international speakers and contributors, including Dr Iseult Roche from the UK.
Translational medicine is key to the future of international health, facilitating public health and promoting the active implementation of health policy to establish care.
University of Liverpool, King's College London, Imperial College London, University College London, St George's, University of London, Oxford and Cambridge Universities run post-graduate courses in Translational Medicine too.
The George Washington University's School of Medicine & Health Sciences offers a PhD program in Translational Health Sciences.
The University of Manchester, Newcastle University and Queen's University Belfast also offer research-focussed Masters of Research (MRes) courses in Translational Medicine.
Queen's University's School of Graduate Studies offers both an MSc and PhD program in Translational Medicine.
University of Windsor's Faculty of Graduate Studies offers a one year MSc program in Translational Health Science.
Tulane University has a PhD program in Bio-Innovation to foster design and implementation of innovative biomedical technologies.
Temple University's College of Public Health offers a Master of Science in Clinical Research and Translational Medicine. The program is jointly offered with the Lewis Katz School of Medicine.
Mahidol University, at the Faculty of Medicine Ramathibodi Hospital, has offered Master and PhD programmes in Translational Medicine since 2012; it was the first university in Thailand and in Southeast Asia to do so. Most of the programme lecturers are physicians and clinicians who contribute to many fields of study such as cancer, allergy and immunology, haematology, paediatrics, rheumatology, etc. Here, students are directly exposed to patients, bridging the gap between basic science and clinical application.
University of Würzburg started a Master programme in Translational Medicine in 2018. It is aimed at medical students in the third or fourth year pursuing a career as a Clinician Scientist.
St. George's University of London offers a 1-year Translational Medicine master's programme since 2018, including pathways leading to a Master of Research (MRes), Master of Science (MSc), Postgraduate Certificate (PGCert) or Postgraduate Diploma (PGDip). Both master's degrees include a research component that makes up 33% of the MSc pathway and 58% of the MRes pathway. Taught modules are designed to cover major areas of modern translational science including drug development, genomics and development of skills related to research and data analysis.
Perdana University's Graduate School of Medicine in Kuala Lumpur, Malaysia offers a Master of Science (MSc) in Translational Medicine.
Taipei Medical University's College of Medical Science and Technology in Taipei, Taiwan offers an MSc and PhD program in Translational Science.
Diplomas and courses
Academy of Translational Medicine Professionals offers a regular professional certification course "Understanding Translational Medicine Tools and Techniques".
James Lind Institute has been conducting a Postgraduate Diploma in Translational Medicine since early 2013. The program has been supported by the Universiti Sains Malaysia.
The University of Southern California School of Pharmacy offers a course in translational medicine.
International organizations
European Society for Translational Medicine
The European Society for Translational Medicine is a global non-profit and neutral healthcare organization whose principal objective is to enhance world-wide healthcare by using translational medicine approaches, resources and expertise.
The society facilitates cooperation and interaction among clinicians, scientists, academia, industry, governments, funding and regulatory agencies, investors and policy makers in order to develop and deliver high-quality translational medicine programs and initiatives with overall aim to enhance the healthcare of global population. The society's goal is to enhance research and development of novel and affordable diagnostic tools and treatments for the clinical disorders affecting global population.
The society provides an annual platform in the form of global congresses where global key opinion leaders, scientists from bench side, public health professionals, clinicians from bedside and industry professionals gather and take part in the panel discussions and scientific sessions on latest updates and developments in translational medicine field including biomarkers, omics sciences, cellular and molecular biology, data mining & management, precision medicine & companion diagnostics, disease modelling, vaccines and community healthcare.
In response to the COVID-19 coronavirus pandemic, the European Society for Translational Medicine announced a global virtual congress on COVID-19 (EUSTM-2020). The Virtual Congress focused on the key principles of Translational Medicine: Bench side, Bed side and Public Health. The congress aims to address current challenges, highlight novel solutions and identify critical hurdles as they apply to COVID-19.
Academy of Translational Medicine Professionals
Academy of Translational Medicine Professionals is working to advance the ongoing knowledge and skills for clinicians and scientific professionals at all levels. Academy's high quality, standard and ethical training and educational programs ensure that all clinical and scientific professionals achieve excellence in their respective fields. Programs are accredited by the European Society for Translational Medicine.
Fellowship program
Academy of Translational Medicine Professionals offers fellowship program which is open to highly experienced professionals who have a record of significant achievements in benchside, bedside or community health fields.
See also
Implementation research
7th Annual Congress of the European Society for Translational Medicine on COVID-19 (EUSTM-2020)
BMC Journal of Translational Medicine (Journal)
American Journal of Translational Research (journal)
Clinical and Translational Science (journal)
Science Translational Medicine (journal)
6th Annual European Congress on Clinical & Translational Sciences (Event)
3rd Annual European Clinical Conference (Event)
Bina Shaheen Siddiqui
References
External links
European Society for Translational Medicine
American Journal of Translational Research
Institute for Translational Medicine and Therapeutics
Translational Medicine:Tools And Techniques (a user guide for institutional implementation)
sv:Translationell forskning | Translational medicine | [
"Biology"
] | 2,164 | [
"Translational medicine"
] |
2,734,556 | https://en.wikipedia.org/wiki/Optical%20molasses | Optical molasses (OM) is a laser cooling technique that can cool neutral atoms to as low as a few microkelvins, depending on the atomic species. An optical molasses consists of 3 pairs of counter-propagating orthogonally polarized laser beams intersecting in the region where the atoms are present. The main difference between an optical molasses and a magneto-optical trap (MOT) is the absence of magnetic field in the former. Unlike a MOT, an OM provides only cooling and no trapping.
History
When laser cooling was proposed in 1975, a theoretical limit on the lowest possible temperature was predicted. Known as the Doppler limit, $T_D = \hbar\Gamma/2k_B$, this was given by the lowest possible temperature attainable considering the cooling of two-level atoms by Doppler cooling and the heating of atoms due to momentum diffusion from the scattering of laser photons. Here $\Gamma$ is the natural line-width of the atomic transition, $\hbar$ is the reduced Planck constant, and $k_B$ is the Boltzmann constant.
The first experimental realization of optical molasses was achieved in 1985 by Chu et al. at AT&T Bell Laboratories.
The authors measured laser cooling of neutral sodium atoms down to the theoretical Doppler cooling limit by observing the fluorescence of a hot atomic beam. By temporarily switching off the laser beams for a fixed time interval, the authors firstly measured the average kinetic energy of the atoms by a time-of-flight technique. The fraction of atoms that left the region while it was in the dark was measured by comparing the brightness of the fluorescence before and after the turnoff. Then velocity distribution and temperature were measured by estimating the dependence of this fraction on the light-off time. The kinetic temperature they obtained was not very different from the Doppler cooling limit in the two-level approximation. The size of the optical molasses region was a limiting factor.
Experiments at the National Institute of Standards and Technology in Gaithersburg found the temperature of cooled atoms to be well below the theoretical limit. In 1988, Lett et al. directed sodium atoms through an optical molasses and found the temperatures to be as low as ~40 μK, 6 times lower than the expected 240 μK Doppler cooling limit. Other unexpected properties found in other experiments included significant unexpected insensitivity to laser alignment of the counter-propagating beams.
These unexpected observations led to the development of more sophisticated models of laser cooling that took into account the Zeeman and hyperfine sublevels of the atomic structure. The dynamics of optical pumping between these sublevels allow the cooling of atoms below the Doppler limit.
Theory
The best explanation of the phenomenon of optical molasses is based on the principle of polarization gradient cooling. For one-dimensional optical molasses: Suppose two laser beams approach an atom from opposite directions. Counterpropagating beams of circularly polarized light cause a standing wave, where the light polarization is linear but the direction rotates along the direction of the beams at a very fast rate. Atoms moving in the spatially varying linear polarization have a higher probability density of being in a state that is more susceptible to absorption of light from the beam coming head-on, rather than the beam from behind. This results in a velocity-dependent damping force
$$\vec{F} = \frac{8\hbar k^2 \delta\,(I/I_{\mathrm{sat}})}{\gamma\left(1 + I/I_{\mathrm{sat}} + (2\delta/\gamma)^2\right)^2}\,\vec{v},$$
where $k$ is the laser wavenumber and $I$ the laser intensity.
The variable $\hbar$ is the reduced Planck constant, $I_{\mathrm{sat}}$ is the saturation intensity, $\delta$ is the laser detuning, and $\gamma$ is the linewidth of the atom-cooling transition. For sodium, the cooling (cycling) transition is the $3S_{1/2}\,(F{=}2) \rightarrow 3P_{3/2}\,(F{=}3)$ transition, driven by laser light at 589 nm.
The optical molasses can reduce the atom temperature to the recoil limit, which is set by the energy of the photon emitted in the decay from the J′ to J state, where the J state is the ground-state angular momentum, and the J′ state is the excited-state angular momentum. This temperature is given by
$$k_B T_{\mathrm{recoil}} = \frac{\hbar^2 k^2}{m},$$
though practically the limit is a few times this value because of the extreme sensitivity to external magnetic fields in this cooling scheme. Atoms typically reach temperatures on the order of microkelvins, as compared to the Doppler limit $T_D = \hbar\gamma/2k_B$.
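As a rough numerical illustration of these limits, the sketch below evaluates the Doppler and recoil temperatures for the 589 nm sodium cooling transition. The linewidth and atomic mass used are standard textbook values assumed here, not taken from the text above.

```python
from math import pi

# Physical constants (SI units)
hbar = 1.054571817e-34    # reduced Planck constant, J*s
kB = 1.380649e-23         # Boltzmann constant, J/K
amu = 1.66053906660e-27   # atomic mass unit, kg

# Sodium cooling transition (assumed textbook values)
gamma = 2 * pi * 9.79e6   # natural linewidth, rad/s
lam = 589e-9              # cooling-light wavelength, m (from the text)
m = 23 * amu              # mass of sodium-23, kg
k = 2 * pi / lam          # photon wavenumber, 1/m

T_doppler = hbar * gamma / (2 * kB)     # Doppler limit
T_recoil = hbar**2 * k**2 / (m * kB)    # recoil limit

print(f"Doppler limit: {T_doppler * 1e6:.0f} microkelvin")   # ~ 240 uK
print(f"Recoil limit:  {T_recoil * 1e6:.1f} microkelvin")    # ~ 2.4 uK
```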
The one-dimensional optical molasses can be extended to three dimensions with six counter-propagating laser beams. The total force is the sum from each beam. For example, a study using cesium atoms achieved temperatures as low as ~3 μK, approximately 40 times below the Doppler limit and only slightly above the recoil temperature limit of Cs. The temperatures obtained vary with the configuration of the laser polarization and are all higher than the theoretical estimate. Thus the extension has been proven to be effective, despite a few caveats. In 3D experiments, the transverse nature of light leads to the limitation that there will always be polarization gradients. The atoms also see different gradients along different directions, and they may change dramatically during the atom's diffusive movement in the molasses. The trajectories are not straight either, but severely affected by the cooling process. Quantum treatments are needed due to these limitations.
Relation to magneto-optical trap
An optical molasses slows down the atoms but does not provide any trapping force to confine them spatially. A magneto-optical trap employs a 3-dimensional optical molasses along with a spatially varying magnetic field to slow down and confine the atoms.
See also
Doppler cooling
Gray molasses
Magneto-optical trap
Polarization gradient cooling
Sub-Doppler cooling
References
Atomic, molecular, and optical physics | Optical molasses | [
"Physics",
"Chemistry"
] | 1,129 | [
"Atomic",
" molecular",
" and optical physics"
] |
2,736,308 | https://en.wikipedia.org/wiki/Emergency%20power%20system | An emergency power system is an independent source of electrical power that supports important electrical systems on loss of normal power supply. A standby power system may include a standby generator, batteries and other apparatus. Emergency power systems are installed to protect life and property from the consequences of loss of primary electric power supply. It is a type of continual power system.
They find uses in a wide variety of settings from homes to hospitals, scientific laboratories, data centers, telecommunication equipment and ships. Emergency power systems can rely on generators, deep-cycle batteries, flywheel energy storage or fuel cells.
History
Emergency power systems were used as early as World War II on naval ships. In combat, a ship may lose the function of its boilers, which power the steam turbines for the ship's generator. In such a case, one or more diesel engines are used to drive back-up generators. Early transfer switches relied on manual operation; two switches would be placed horizontally, in line, with the "on" positions facing each other, and a rod placed in between. In order to operate the switch, one source must be turned off, the rod moved to the other side, and the other source turned on.
Operation in buildings
Mains power can be lost due to downed lines, malfunctions at a sub-station, inclement weather, planned blackouts or, in extreme cases, a grid-wide failure. In modern buildings, most emergency power systems have been and are still based on generators. Usually, these generators are diesel-engine driven, although smaller buildings may use a gasoline-engine-driven generator.
Some larger buildings have gas turbines, but these can take from 5 up to 30 minutes to produce power.
Lately, more use is being made of deep cycle batteries and other technologies such as flywheel energy storage or fuel cells. These latter systems do not produce polluting gases, thereby allowing the placement to be done within the building. Also, as a second advantage, they do not require a separate shed to be built for fuel storage.
With regular generators, an automatic transfer switch is used to connect emergency power. One side is connected to both the normal power feed and the emergency power feed; and the other side is connected to the load designated as emergency. If no electricity comes in on the normal side, the transfer switch uses a solenoid to throw a triple pole, double throw switch. This switches the feed from normal to emergency power. The loss of normal power also triggers a battery operated starter system to start the generator, similar to using a car battery to start an engine. Once the transfer switch is switched and the generator starts, the building's emergency power comes back on (after going off when normal power was lost).
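The switching sequence described above (sense loss of mains, crank the generator from its starting battery, then transfer the emergency loads) can be sketched as a simple control loop. The function names and polling intervals below are hypothetical placeholders rather than any real transfer-switch controller interface.

```python
import time

def ats_control_loop(mains_ok, start_generator, generator_ready, set_feed):
    """Illustrative automatic-transfer-switch sequence (not a real controller).

    The four callables are hypothetical hardware interfaces:
      mains_ok()        -> True while normal utility power is present
      start_generator() -> cranks the engine from the starting battery
      generator_ready() -> True once generator output is stable
      set_feed(source)  -> throws the transfer switch to 'normal' or 'emergency'
    """
    set_feed("normal")
    while True:
        if not mains_ok():
            start_generator()                 # battery-operated starter kicks in
            while not generator_ready():
                time.sleep(1)                 # wait for the set to come up to speed
            set_feed("emergency")             # solenoid throws the transfer switch
            while not mains_ok():
                time.sleep(5)                 # run on generator until mains returns
            set_feed("normal")                # transfer back to the utility feed
        time.sleep(1)
```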
Unlike emergency lights, emergency lighting is not a type of light fixture; it is a pattern of the building's normal lights that provides a path of lights to allow for safe exit, or lights up service areas such as mechanical rooms and electric rooms. Exit signs, fire alarm systems (that are not on back up batteries) and the electric motor pumps for the fire sprinklers are almost always on emergency power. Other equipment on emergency power may include smoke isolation dampers, smoke evacuation fans, elevators, handicap doors and outlets in service areas. Hospitals use emergency power outlets to power life support systems and monitoring equipment. Some buildings may even use emergency power as part of normal operations, such as a theater using it to power show equipment in accordance with the principle of "the show must go on".
Operation in aviation
The use of emergency power systems in aviation can be either in the aircraft or on the ground.
In commercial and military aircraft it is critical to maintain power to essential systems during an emergency. This can be done via ram air turbines or battery emergency power supplies, which enable pilots to maintain radio contact and continue to navigate using an MFD, GPS, VOR receiver or directional gyro for more than an hour.
Localizer, glideslope, and other instrument landing aids (such as microwave transmitters) are both high power consumers and mission-critical, and cannot be reliably operated from a battery supply, even for short periods. Hence, when absolute reliability is required (such as when Category 3 operations are in force at the airport) it is usual to run the system from a diesel generator with automatic switchover to the mains supply should the generator fail. This avoids any interruption to transmission while a generator is brought up to operating speed.
This is opposed to the typical view of emergency power systems, where the backup generators are seen as secondary to the mains electrical supply.
Electronic device protection
Computers, communication networks, and other modern electronic devices need not only power, but also a steady flow of it to continue to operate. If the source voltage drops significantly or drops out completely, these devices will fail, even if the power loss is only for a fraction of a second. Because of this, even a generator back-up does not provide protection because of the start-up time involved.
To achieve more comprehensive loss protection, extra equipment such as surge protectors, inverters, or sometimes a complete uninterruptible power supply (UPS) is used. UPS systems can be local (to one device or one power outlet) or may extend building-wide. A local UPS is a small box that fits under a desk or a telecom rack and powers a small number of devices. A building-wide UPS may take any of several different forms, depending on the application. It directly feeds a system of outlets designated as UPS feed and can power a large number of devices.
Since telephone exchanges use DC, the building's battery room is generally wired directly to the consuming equipment and floats continuously on the output of the rectifiers that normally supply DC rectified from utility power. When utility power fails, the battery carries the load without needing to switch. With this simple though somewhat expensive system, some exchanges have never lost power for a moment since the 1920s.
Structure and operation in utility stations
In recent years, large units of a utility power station are usually designed on a unit system basis in which the required devices, including the boiler, the turbine generator unit, and its power (step up) and unit (auxiliary) transformer are solidly connected as one unit. A less common set-up consists of two units grouped together with one common station auxiliary. As each turbine generator unit has its own attached unit auxiliary transformer, it is connected to the circuit automatically. For starting the unit, the auxiliaries are supplied with power by another unit (auxiliary) transformer or station auxiliary transformer. The period of switching from the first unit transformer to the next unit is designed for automatic, instantaneous operation in times when the emergency power system needs to kick in. It is imperative that the power to unit auxiliaries not fail during a station shutdown (an occurrence known as black-out when all regular units temporarily fail). Instead, during shutdowns the grid is expected to remain operational. When problems occur, it is usually due to reverse power relays and frequency-operated relays on grid lines due to severe grid disturbances. Under these circumstances, the emergency station supply must kick in to avoid damage to any equipment and to prevent hazardous situations such as the release of hydrogen gas from generators to the local environment.
Controlling the emergency power system
For a 208 VAC emergency supply system, a central battery system with automatic controls, located in the power station building, is used to avoid long electric supply wires. This central battery system consists of lead-acid battery cell units to make up a 12 or 24 VDC system as well as stand-by cells, each with its own battery charging unit. Also needed are a voltage sensing unit capable of receiving 208 VAC and an automatic system that is able to signal to and activate the emergency supply circuit in case of failure of 208 VAC station supply.
References
External links
How Emergency Power Systems Work
Difference between car battery and deep-cycle batteries
Electric power
Fault tolerance
nl:Noodstroomvoeding#Noodstroomaggregaat | Emergency power system | [
"Physics",
"Engineering"
] | 1,638 | [
"Physical quantities",
"Reliability engineering",
"Fault tolerance",
"Power (physics)",
"Electric power",
"Electrical engineering"
] |
2,736,555 | https://en.wikipedia.org/wiki/Worm-like%20chain | The worm-like chain (WLC) model in polymer physics is used to describe the behavior of polymers that are semi-flexible: fairly stiff with successive segments pointing in roughly the same direction, and with persistence length within a few orders of magnitude of the polymer length. The WLC model is the continuous version of the Kratky–Porod model.
Model elements
The WLC model envisions a continuously flexible isotropic rod. This is in contrast to the freely-jointed chain model, which is only flexible between discrete freely hinged segments. The model is particularly suited for describing stiffer polymers, with successive segments displaying a sort of cooperativity: nearby segments are roughly aligned. At room temperature, the polymer adopts a smoothly curved conformation; at $T = 0$ K, the polymer adopts a rigid rod conformation.
For a polymer of maximum length $l$, parametrize the path of the polymer as $s \in (0, l)$. Let $\hat{t}(s)$ be the unit tangent vector to the chain at point $s$, and $\vec{r}(s)$ be the position vector along the chain, as shown to the right. Then:
$$\hat{t}(s) \equiv \frac{\partial \vec{r}(s)}{\partial s}$$
and the end-to-end distance $\vec{R} = \int_0^{l} \hat{t}(s)\, ds$.
The energy associated with the bending of the polymer can be written as:
$$E = \frac{1}{2} k_B T P \int_0^{l} \left(\frac{\partial \hat{t}(s)}{\partial s}\right)^2 ds,$$
where $P$ is the polymer's characteristic persistence length, $k_B$ is the Boltzmann constant, and $T$ is the absolute temperature. At finite temperatures, the end-to-end distance of the polymer will be significantly shorter than the maximum length $l$. This is caused by thermal fluctuations, which result in a coiled, random configuration of the undisturbed polymer.
The polymer's orientation correlation function can then be solved for, and it follows an exponential decay with decay constant 1/P:
$$\langle \hat{t}(s) \cdot \hat{t}(0) \rangle = e^{-s/P}.$$
A useful value is the mean square end-to-end distance of the polymer:
$$\langle R^2 \rangle = 2 P l - 2 P^2 \left(1 - e^{-l/P}\right).$$
Note that in the limit of $l \gg P$, then $\langle R^2 \rangle \approx 2 P l$. This can be used to show that a Kuhn segment is equal to twice the persistence length of a worm-like chain. In the limit of $l \ll P$, then $\langle R^2 \rangle \approx l^2$, and the polymer displays rigid rod behavior. The figure to the right shows the crossover from flexible to stiff behavior as the persistence length increases.
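A brief numerical check of the two limits of the mean square end-to-end distance, assuming the expression above; lengths are in arbitrary units.

```python
import numpy as np

def mean_square_end_to_end(l, P):
    """<R^2> for a worm-like chain with contour length l and persistence length P."""
    return 2.0 * P * l - 2.0 * P**2 * (1.0 - np.exp(-l / P))

# Flexible limit (l >> P): <R^2> approaches 2*P*l (random-coil behaviour)
print(mean_square_end_to_end(1000.0, 1.0))    # ~1998, close to 2*P*l = 2000

# Stiff limit (l << P): <R^2> approaches l**2 (rigid-rod behaviour)
print(mean_square_end_to_end(1.0, 1000.0))    # ~1.0, close to l**2 = 1
```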
Biological relevance
Experimental data from the stretching of Lambda phage DNA is shown to the right, with force measurements determined by analysis of Brownian fluctuations of a bead attached to the DNA. A persistence length of 51.35 nm and a contour length of 1560.9 nm were used for the model, which is depicted by the solid line.
Other biologically important polymers that can be effectively modeled as worm-like chains include:
double-stranded DNA (persistence length 40-50 nm) and RNA (persistence length 64 nm)
single-stranded DNA (persistence length 4 nm)
unstructured RNA (persistence length 2 nm)
unstructured proteins (persistence length 0.6-0.7 nm)
microtubules (persistence length 0.52 cm)
filamentous bacteriophage
Stretching worm-like chain polymers
Upon stretching, the accessible spectrum of thermal fluctuations reduces, which causes an entropic force acting against the external elongation.
This entropic force can be estimated from considering the total energy of the polymer:
$$E = \frac{1}{2} k_B T P \int_0^{L_0} \left(\frac{\partial \hat{t}(s)}{\partial s}\right)^2 ds - x F.$$
Here, the contour length is represented by $L_0$, the persistence length by $P$, the extension is represented by $x$, and the external force is represented by $F$.
Laboratory tools such as atomic force microscopy (AFM) and optical tweezers have been used to characterize the force-dependent stretching behavior of biological polymers. An interpolation formula that approximates the force-extension behavior with about 15% relative error is:
$$\frac{F P}{k_B T} = \frac{1}{4}\left(1 - \frac{x}{L_0}\right)^{-2} - \frac{1}{4} + \frac{x}{L_0}.$$
A more accurate approximation for the force-extension behavior, with about 0.01% relative error, is:
$$\frac{F P}{k_B T} = \frac{1}{4}\left(1 - \frac{x}{L_0}\right)^{-2} - \frac{1}{4} + \frac{x}{L_0} + \sum_{i=2}^{7} \alpha_i \left(\frac{x}{L_0}\right)^{i},$$
with the numerical coefficients $\alpha_2, \dots, \alpha_7$ as given by Bouchiat et al. (see Further reading).
A simple and accurate approximation for the force-extension behavior with about 1% relative error is:
$$\frac{F P}{k_B T} = \frac{1}{4}\left(1 - \frac{x}{L_0}\right)^{-2} - \frac{1}{4} + \frac{x}{L_0} - 0.8\left(\frac{x}{L_0}\right)^{2.15}.$$
An approximation for the extension-force behavior (extension as a function of force) with about 1% relative error has also been reported.
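As an illustration of the simplest of these expressions, the sketch below evaluates the 15%-accuracy interpolation formula (the Marko-Siggia form) using the lambda-phage DNA parameters quoted above. The room-temperature value of k_BT of about 4.11 pN·nm is an assumption added here.

```python
import numpy as np

def wlc_force(x, L0, P, kBT=4.11):
    """Marko-Siggia interpolation formula (about 15% relative error).

    x   : extension (same length unit as L0 and P, e.g. nm)
    L0  : contour length
    P   : persistence length
    kBT : thermal energy; ~4.11 pN*nm at room temperature (assumed value)
    Returns the stretching force (pN when lengths are in nm).
    """
    r = np.asarray(x, dtype=float) / L0
    return (kBT / P) * (1.0 / (4.0 * (1.0 - r) ** 2) - 0.25 + r)

# Lambda-phage DNA parameters quoted above: P = 51.35 nm, L0 = 1560.9 nm
P, L0 = 51.35, 1560.9
for x in (500.0, 1000.0, 1400.0):
    print(f"x = {x:6.1f} nm  ->  F = {wlc_force(x, L0, P):5.2f} pN")
```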
Extensible worm-like chain model
The elastic response from extension cannot be neglected: polymers elongate due to external forces. This enthalpic compliance is accounted for by the material parameter $K_0$ (the stretch modulus), and the system yields the following Hamiltonian for significantly extended polymers:
,
This expression contains both the entropic term, which describes changes in the polymer conformation, and the enthalpic term, which describes the stretching of the polymer due to the external force.
Several approximations for the force-extension behavior have been put forward, depending on the applied external force. These approximations are made for stretching DNA in physiological conditions (near neutral pH, ionic strength approximately 100 mM, room temperature), with stretch modulus around 1000 pN.
For the low-force regime (F < about 10 pN), the following interpolation formula was derived:
$$\frac{F P}{k_B T} = \frac{1}{4}\left(1 - \frac{x}{L_0} + \frac{F}{K_0}\right)^{-2} - \frac{1}{4} + \frac{x}{L_0} - \frac{F}{K_0}.$$
For the higher-force regime, where the polymer is significantly extended, the following approximation is valid:
$$x = L_0\left(1 - \frac{1}{2}\sqrt{\frac{k_B T}{F P}} + \frac{F}{K_0}\right).$$
As for the case without extension, a more accurate formula was derived:
$$\frac{F P}{k_B T} = \frac{1}{4}\left(1 - \ell\right)^{-2} - \frac{1}{4} + \ell + \sum_{i=2}^{7} \alpha_i\, \ell^{\,i},$$
with $\ell = \frac{x}{L_0} - \frac{F}{K_0}$. The coefficients $\alpha_i$ are the same as in the above described formula for the WLC model without elasticity.
Accurate and simple interpolation formulas for the force-extension and extension-force behaviors of the extensible worm-like chain model have also been proposed.
See also
Ideal chain
Polymer
Polymer physics
References
Further reading
C. Bouchiat et al., "Estimating the Persistence Length of a Worm-Like Chain Molecule from Force-Extension Measurements", Biophysical Journal, January 1999, p. 409-413, Vol. 76, No. 1
Polymer physics
Polymers
Biophysics | Worm-like chain | [
"Physics",
"Chemistry",
"Materials_science",
"Biology"
] | 1,100 | [
"Polymer physics",
"Applied and interdisciplinary physics",
"Biophysics",
"Polymer chemistry",
"Polymers"
] |
21,170,509 | https://en.wikipedia.org/wiki/Eos%20%28protein%29 | EosFP is a photoactivatable green to red fluorescent protein. Its green fluorescence (516 nm) switches to red (581 nm) upon UV irradiation of ~390 nm (violet/blue light) due to a photo-induced modification resulting from a break in the peptide backbone near the chromophore. Eos was first discovered as a tetrameric protein in the stony coral Lobophyllia hemprichii. Like other fluorescent proteins, Eos allows for applications such as the tracking of fusion proteins, multicolour labelling and tracking of cell movement. Several variants of Eos have been engineered for use in specific study systems including mEos2, mEos4 and CaMPARI.
History
EosFP was first discovered in 2005 during a large scale screen for PAFPs (photoactivatable fluorescent proteins) within the stony coral Lobophyllia hemprichii. It has since been successfully cloned in Escherichia coli and fusion constructs have been developed for use in human cells. Eos was named after the Greek goddess of dawn.
Unlike the tetrameric fluorescent proteins derived from anthozoan coral, which can interfere with normal cellular function due to interactions between protein subunits, EosFP has been broken up into dimeric and monomeric variants through the introduction of single point mutations. These variants have been successful in the tracking of cellular components without disturbing function in the host cell and maintain the same photophysical properties as wild-type Eos.
Since their discovery, monomeric Eos probes (mEos) have been shown to localize in the cytosol, plasma membrane, endosomes, prevacuolar vesicles, vacuoles, the endoplasmic reticulum, golgi bodies, peroxisomes, mitochondria, invaginations, filamentous actin and cortical microtubules. mEos fusion proteins allow for differential colour labelling in single cells, or groups of cells in developing organs. They can also be used for the understanding of spatial/ temporal interactions between organelles and vesicles. The two fluorescent forms of mEosFP (green and red) are compatible with CFP, GFP, YFP and RFP for multicolour labelling.
Function
EosFP emits a strong green fluorescence (516 nm) that changes irreversibly to red (581 nm) when irradiated with UV-light of 390 nm. This modification occurs due to a break in the peptide backbone next to the chromophore. This mechanism allows for localized tagging of the protein and makes EosFP an appropriate tool for tracking protein movement within living cells. Formation of the red chromophore involves cleaving the peptide backbone but includes almost no other changes in the protein structure.
According to single-molecule fluorescence spectroscopy, EosFP is tetrameric, and exhibits strong Förster resonance coupling among the individual fluorophores. Like other fluorescent proteins, Eos can be used to report diverse signals in cells, tissues and organs without disturbing complex biological machinery. While the use of fluorescent proteins was once limited to the green fluorescent protein (GFP), in recent years many other fluorescent proteins have been cloned. Unlike GFPs, which are derived from the luminescent jellyfish Aequorea victoria, fluorescent proteins derived from anthozoa, including Eos, emit fluorescence in the red spectral range. The novel property of photoinduced green-to-red conversion in Eos is useful because it allows for localized tracking of proteins in living cells. EosFP is unique because it has a large separation in the wavelengths it can emit which allows for easy identification of peak colours. All green-to-red photoinducible fluorescent proteins, including Eos, contain a chromophoric unit derived from the tripeptide His-Tyr-Gly. This green-to-red conversion is completed by light rather than chemical oxidation such as in other FPs.
Structure and Absorbance Properties
Primary structure
EosFP consists of 226 amino acids. It has a molecular mass of 25.8 kDa and its pI is 6.9. Eos has 84% identical residues to Kaede, a fluorescent protein that originated in a different scleractinian coral Trachyphyllia geoffroyi, but can also be irreversibly converted from a green to red emitting form using UV light. Excluding residues Phe-61 and His-62, the chromophore environment and chromophore itself are unaffected by photochemical modification. Wild-type EosFP has a tetrameric arrangement of subunits where each subunit has the same β-can structure as GFP. This structure includes an 11-stranded barrel and, down the central axis, the fluorophore-containing helix.
Structure of Green EosFP
In its anionic form, the green chromophore has an absorption maxima at 506 nm and an emission maxima at 516 nm. It is formed autocatalytically from amino acids His-62, Tyr-63 and Gly-64. Immediately surrounding the chromophore there is a cluster of charged or polar amino acids as well as structural water molecules. Above the plane of the chromophore, there is a network of hydrogen bond interactions between Glu-144, His-194, Glu-212 and Gln-38. Arg-66 and Arg-91 participate in hydrogen bonding with the carbonyl oxygen of green Eos's imidazolinone moiety. The His-62 side chain lies in an unpolar environment. Conversion from the green to red form depends on the presence of a histidine in the first position of the tripeptide HYG that forms the chromophore. When this histidine residue is substituted with M, S, T or L, Eos only emits bright green light and no longer acts as a photoconvertible fluorescent protein.
Structure of Red EosFP
The red chromophore, which is generated by cleavage of the peptide backbone, has an absorption maxima at 571 nm and an emission maxima at 581 nm, in its anionic form. The break in the peptide backbone that leads to this chromophore is between His-62 Nα and Cα. The observed red fluorescence occurs due to an extension of the chromophore's π-conjugation where the His-62 imidazole ring connects to the imidazolinone. The hydrogen bond patterns of the red and green chromophores are almost identical.
Photochemical conversion
Photochemical conversion occurs due to interactions between the chromophoric unit and residues in its vicinity. Glu-212 functions as a base that removes a proton from His-62 aiding in the cleavage of the His-62-Nα-Cα bond. Replacing Glu-212 with glutamine prevents photoconversion. At low pH, the yield of Eos involved in photoconversion is greatly increased as the fraction of molecules in the protonated form increases. The action spectrum for photoconversion is closely related to the action spectrum for Eos's protonated form. These observations suggest that the neutral form of the green chromophore, including a protonated Tyr-63 side chain, is the gateway structure for photoconversion. Proton ejection from the Tyr-63 phenyl side chain is an important event in the conversion mechanism where a proton is transferred from the His-62 imidazole, which is hydrogen-bonded to the Phe-61 carbonyl. The extra proton causes His-62 to donate a proton to the Phe-61 carbonyl forming a leaving group out of the peptide bond between His and Phe in the elimination reaction. The His-62 side chain is protonated during photoexcitation and assists the reaction by donating a proton to the Phe-61 carbonyl in the leaving group. After the backbone is cleaved, the hydrogen bond between His-62 and Phe-61 is reformed. When His-62 is replaced with other amino acids, EosFP loses its ability to photoconvert, providing evidence that His-62 is a necessary component of the photoconversion mechanism. The internal charge distribution of the green chromophore is altered during photo excitation to assist in the elimination reaction.
Spectroscopy
Both the fluorescence excitation and emission spectra of wild-type EosFP are shifted ~65 nm toward the red end of the spectrum upon photoconversion. This spectral change is caused by an extension of the chromophore accompanied by a break in the peptide backbone between Phe-61 and His-62 in an irreversible mechanism. The presence of a crisp isosbestic point at 432 nm also suggests an interconversion between two species. An absorption peak at 280 nm is visible due to aromatic amino acids which transfer their excitation energy to the green chromophore. The quantum yield of the green-emitting form of Eos is 0.7. In the red shifted species, there are pronounced vibronic sidebands separate from the main peak at 533 nm and 629 nm in the excitation spectrum and emission spectrum respectively. There is another peak in the red excitation spectrum at 502 nm likely due to FRET excitation of the red fluorophore. The quantum yield of the red-emitting form is 0.55.
EosFPs variants show almost no difference in spectroscopic properties, therefore, it is likely that the structural modifications which arise from separation of interfaces have little to no effect on the structure of the fluorophore-binding site.
Applications
Tracking of Fusion Proteins
Many different fusion proteins have been created using EosFP and its engineered variants. These fusion proteins allow for the tracking of proteins within living cells while retaining complex biological functions like protein-protein interactions and protein-DNA interactions. Eos fusion constructs include those with recombination signal-binding protein (RBP) and cytokeratin. Studies have shown that it is favourable to attach the protein of interest to the N-terminal side of the EosFP label. These fusion constructs have been used to visualize nuclear translocation with androgen receptors, dynamics of the cytoskeleton with actin and vinculin and intranuclear protein movement with RBP.
Multicolour Labelling
Since EosFP can be used in fusion constructs while maintaining functionality of the protein of interest, it is a popular choice for multi-colour labelling studies. In a dual-colour labelling experiment to map the stages of mitosis, HEK293 cells were first stably transfected with tubulin-binding protein cDNA fused to EGFP for visualization of the spindle apparatus. Then, transient transfection of recombination signal-binding protein (RBP) fused to d2EosFP was used to visualize the beginning of mitosis. Photoconversion was completed by fluorescent microscopy and highlighted the separation between two sets of chromosomes during anaphase, telophase and cytokinesis.
Tracking of Cell Movement in Developmental Biology
EosFP has been used to track cell movements during embryonic development of Xenopus laevis. At the two-cell/ early gastrula stage, capped mRNA coding for a dimeric EosFP (d2EosFP) was injected into cells and locally photoconverted using fluorescence microscopy. These fluorescent embryos demonstrated the dynamics of cell movement during neurulation. EosFP was found in part of the notochord which shows the possibility of EosFP to be used in fate-mapping experiments.
Engineered variants
mEos4
Many new monomeric versions of EosFP have been developed that offer advantages over wild type EosFP. Developed by a team at the Janelia Farm Research Campus at Howard Hughes Medical Institute, mEos4 has higher photostability and longer imaging abilities than EosFP. It is also highly resistant to chemical fixatives such as PFA, gluteraldehyde and OsO4 which are used to preserve samples. mEos4 is effective at higher temperatures than EosFP, phot-converts at an increased rate and has a higher emission amplitude in both green and red fluorescent states. Applications for the mEos4 protein include photoactivation localization microscopy (PALM), correlative light/ electron microscopy (CLEM), protein activity indication and activity integration (post-hoc imaging for protein activity over time).
mEos2
mEosFP is another monomeric Eos variant that folds effectively at 37 degrees Celsius. Where tdEos (tandem dimer) cannot fuse to targets such as histones, tubulin, intermediate filaments and gap junctions, and mEos (monomeric) which can only be used successfully at 30 degrees Celsius, mEos2 is an engineered variant that can fold effectively at 37 degrees Celsius and successfully label targets intolerant to fusion from other fluorescent protein dimers . mEos2 shows almost identical spectral properties, brightness, pKa, photoconversion, contrast and maturation properties to WT Eos. The localization precision of mEos2 is twice as great as other monomeric fluorescent proteins.
CaMPARI
Also at the Janelia Research Campus, a new fluorescent molecule known as CaMPARI (calcium-modulated photoactivatable ratiometric integrator) was developed using EosFP. The permanent green to red conversion signal was coupled with a calcium-sensitive protein, calmodulin, so that color change in the fusion construct depended on the release of calcium accompanied by neural activity. CaMPARI is able to permanently mark neurons that are active at any given time and can also be targeted to synapses. This visualization is possible across a wide amount of brain tissue as opposed to the limited view available with using a microscope. It also allows for the visualization of neural activity during complicated behaviors as the organism under study is allowed to move freely, rather than under a microscope. It also allows for the observation of neurons during specific behavior periods. CaMPARI has, thus far, been used to label active neural circuits in mice, zebrafish and fruit flies.
References
Fluorescent proteins
Bioluminescence
Protein methods | Eos (protein) | [
"Chemistry",
"Biology"
] | 2,992 | [
"Biochemistry methods",
"Luminescence",
"Protein methods",
"Protein biochemistry",
"Fluorescent proteins",
"Biochemistry",
"Bioluminescence"
] |
21,171,254 | https://en.wikipedia.org/wiki/Type-2%20fuzzy%20sets%20and%20systems | Type-2 fuzzy sets and systems generalize standard Type-1 fuzzy sets and systems so that more uncertainty can be handled. From the beginning of fuzzy sets, criticism was made about the fact that the membership function of a type-1 fuzzy set has no uncertainty associated with it, something that seems to contradict the word fuzzy, since that word has the connotation of much uncertainty. So, what does one do when there is uncertainty about the value of the membership function? The answer to this question was provided in 1975 by the inventor of fuzzy sets, Lotfi A. Zadeh, when he proposed more sophisticated kinds of fuzzy sets, the first of which he called a "type-2 fuzzy set". A type-2 fuzzy set lets us incorporate uncertainty about the membership function into fuzzy set theory, and is a way to address the above criticism of type-1 fuzzy sets head-on. And, if there is no uncertainty, then a type-2 fuzzy set reduces to a type-1 fuzzy set, which is analogous to probability reducing to determinism when unpredictability vanishes.
Type-1 fuzzy systems work with a fixed membership function, while in type-2 fuzzy systems the membership function is fluctuating. A fuzzy set determines how input values are converted into fuzzy variables.
Overview
In order to symbolically distinguish between a type-1 fuzzy set and a type-2 fuzzy set, a tilde symbol is put over the symbol for the fuzzy set; so, A denotes a type-1 fuzzy set, whereas à denotes the comparable type-2 fuzzy set. Such a set is called a "general type-2 fuzzy set" (to distinguish it from the special interval type-2 fuzzy set).
Zadeh didn't stop with type-2 fuzzy sets, because in that 1975 paper he also generalized all of this to type-n fuzzy sets. The present article focuses only on type-2 fuzzy sets because they are the next step in the logical progression from type-1 to type-n fuzzy sets, where n = 1, 2, ... . Although some researchers are beginning to explore higher than type-2 fuzzy sets, as of early 2009, this work is in its infancy.
The membership function of a general type-2 fuzzy set, Ã, is three-dimensional (Fig. 1), where the third dimension is the value of the membership function at each point on its two-dimensional domain that is called its "footprint of uncertainty"(FOU).
For an interval type-2 fuzzy set that third-dimension value is the same (e.g., 1) everywhere, which means that no new information is contained in the third dimension of an interval type-2 fuzzy set. So, for such a set, the third dimension is ignored, and only the FOU is used to describe it. It is for this reason that an interval type-2 fuzzy set is sometimes called a first-order uncertainty fuzzy set model, whereas a general type-2 fuzzy set (with its useful third-dimension) is sometimes referred to as a second-order uncertainty fuzzy set model.
The FOU represents the blurring of a type-1 membership function, and is completely described by its two bounding functions (Fig. 2), a lower membership function (LMF) and an upper membership function (UMF), both of which are type-1 fuzzy sets! Consequently, it is possible to use type-1 fuzzy set mathematics to characterize and work with interval type-2 fuzzy sets. This means that engineers and scientists who already know type-1 fuzzy sets will not have to invest a lot of time learning about general type-2 fuzzy set mathematics in order to understand and use interval type-2 fuzzy sets.
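To make this concrete, the following minimal sketch builds the lower and upper membership functions of an interval type-2 fuzzy set by blurring the width of a Gaussian type-1 membership function; all numerical choices are illustrative only.

```python
import numpy as np

def gaussian(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def interval_type2_fou(x, mean, sigma_lower, sigma_upper):
    """Lower and upper membership functions (LMF, UMF) of an interval type-2 set
    obtained by blurring the width of a Gaussian type-1 membership function."""
    lmf = gaussian(x, mean, sigma_lower)   # lower membership function
    umf = gaussian(x, mean, sigma_upper)   # upper membership function
    return lmf, umf

x = np.linspace(0.0, 10.0, 101)
lmf, umf = interval_type2_fou(x, mean=5.0, sigma_lower=0.8, sigma_upper=1.5)

# Each input x now has an interval of membership grades [lmf(x), umf(x)]
# rather than a single number; the band between the two curves is the FOU.
assert np.all(lmf <= umf)
```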
Work on type-2 fuzzy sets languished during the 1980s and early-to-mid 1990s, although a small number of articles were published about them. People were still trying to figure out what to do with type-1 fuzzy sets, so even though Zadeh proposed type-2 fuzzy sets in 1975, the time was not right for researchers to drop what they were doing with type-1 fuzzy sets to focus on type-2 fuzzy sets. This changed in the latter part of the 1990s as a result of the work of Jerry Mendel and his students on type-2 fuzzy sets and systems. Since then, more and more researchers around the world have been writing articles about type-2 fuzzy sets and systems.
Interval type-2 fuzzy sets
Interval type-2 fuzzy sets have received the most attention because the mathematics that is needed for such sets—primarily Interval arithmetic—is much simpler than the mathematics that is needed for general type-2 fuzzy sets. So, the literature about interval type-2 fuzzy sets is large, whereas the literature about general type-2 fuzzy sets is much smaller. Both kinds of fuzzy sets are being actively researched by an ever-growing number of researchers around the world and have resulted in successful employment in a variety of domains such as robot control.
Formally, the following have already been worked out for interval type-2 fuzzy sets:
Fuzzy set operations: union, intersection and complement
Centroid (a very widely used operation by practitioners of such sets, and also an important uncertainty measure for them)
Other uncertainty measures (fuzziness, cardinality, variance and skewness) and uncertainty bounds
Similarity
Subsethood
Embedded fuzzy sets
Fuzzy set ranking
Fuzzy rule ranking and selection
Type-reduction methods
Firing intervals for an interval type-2 fuzzy logic system
Fuzzy weighted average
Linguistic weighted average
Synthesizing an FOU from data that are collected from a group of subjects
Interval type-2 fuzzy logic systems
Type-2 fuzzy sets are finding very wide applicability in rule-based fuzzy logic systems (FLSs) because they let uncertainties be modeled by them whereas such uncertainties cannot be modeled by type-1 fuzzy sets. A block diagram of a type-2 FLS is depicted in Fig. 3. This kind of FLS is used in fuzzy logic control, fuzzy logic signal processing, rule-based classification, etc., and is sometimes referred to as a function approximation application of fuzzy sets, because the FLS is designed to minimize an error function.
The following discussions, about the four components in Fig. 3 rule-based FLS, are given for an interval type-2 FLS, because to-date they are the most popular kind of type-2 FLS; however, most of the discussions are also applicable for a general type-2 FLS.
Rules, that are either provided by subject experts or are extracted from numerical data, are expressed as a collection of IF-THEN statements, e.g.,
IF temperature is moderate and pressure is high, then rotate the valve a bit to the right.
Fuzzy sets are associated with the terms that appear in the antecedents (IF-part) or consequents (THEN-part) of rules, and with the inputs to and the outputs of the FLS. Membership functions are used to describe these fuzzy sets, and in a type-1 FLS they are all type-1 fuzzy sets, whereas in an interval type-2 FLS at least one membership function is an interval type-2 fuzzy set.
An interval type-2 FLS lets any one or all of the following kinds of uncertainties be quantified:
Words that are used in antecedents and consequents of rules—because words can mean different things to different people.
Uncertain consequents—because when rules are obtained from a group of experts, consequents will often be different for the same rule, i.e. the experts will not necessarily be in agreement.
Membership function parameters—because when those parameters are optimized using uncertain (noisy) training data, the parameters become uncertain.
Noisy measurements—because very often it is such measurements that activate the FLS.
In Fig. 3, measured (crisp) inputs are first transformed into fuzzy sets in the Fuzzifier block because it is fuzzy sets and not numbers that activate the rules which are described in terms of fuzzy sets and not numbers. Three kinds of fuzzifiers are possible in an interval type-2 FLS. When measurements are:
Perfect, they are modeled as a crisp set;
Noisy, but the noise is stationary, they are modeled as a type-1 fuzzy set; and,
Noisy, but the noise is non-stationary, they are modeled as an interval type-2 fuzzy set (this latter kind of fuzzification cannot be done in a type-1 FLS).
In Fig. 3, after measurements are fuzzified, the resulting input fuzzy sets are mapped into fuzzy output sets by the Inference block. This is accomplished by first quantifying each rule using fuzzy set theory, and by then using the mathematics of fuzzy sets to establish the output of each rule, with the help of an inference mechanism. If there are M rules then the fuzzy input sets to the Inference block will activate only a subset of those rules, where the subset contains at least one rule and usually way fewer than M rules. The inference is done one rule at a time. So, at the output of the Inference block, there will be one or more fired-rule fuzzy output sets.
In most engineering applications of an FLS, a number (and not a fuzzy set) is needed as its final output, e.g., the consequent of the rule given above is "Rotate the valve a bit to the right." No automatic valve will know what this means because "a bit to the right" is a linguistic expression, and a valve must be turned by numerical values, i.e. by a certain number of degrees. Consequently, the fired-rule output fuzzy sets have to be converted into a number, and this is done in the Fig. 3 Output Processing block.
In a type-1 FLS, output processing, called "defuzzification", maps a type-1 fuzzy set into a number. There are many ways for doing this, e.g., compute the union of the fired-rule output fuzzy sets (the result is another type-1 fuzzy set) and then compute the center of gravity of the membership function for that set; compute a weighted average of the centers of gravity of each of the fired rule consequent membership functions; etc.
Things are somewhat more complicated for an interval type-2 FLS, because to go from an interval type-2 fuzzy set to a number (usually) requires two steps (Fig. 3). The first step, called "type-reduction", is where an interval type-2 fuzzy set is reduced to an interval-valued type-1 fuzzy set. There are as many type-reduction methods as there are type-1 defuzzification methods. An algorithm developed by Karnik and Mendel now known as the "KM algorithm" is used for type-reduction. Although this algorithm is iterative, it is very fast.
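A compact sketch of the Karnik-Mendel iteration for one centroid endpoint of an interval type-2 fuzzy set, sampled at discrete points, is given below. The variable names, convergence tolerance and iteration cap are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def km_endpoint(x, lower, upper, right=True, tol=1e-9, max_iter=100):
    """Karnik-Mendel iteration for one endpoint of the centroid of an
    interval type-2 fuzzy set sampled at ascending points x."""
    x = np.asarray(x, dtype=float)
    lo = np.asarray(lower, dtype=float)
    up = np.asarray(upper, dtype=float)

    theta = (lo + up) / 2.0                        # initial weights
    c = np.dot(x, theta) / np.sum(theta)
    for _ in range(max_iter):
        k = np.searchsorted(x, c)                  # switch point
        theta = np.empty_like(x)
        if right:                                  # maximise the weighted average
            theta[:k], theta[k:] = lo[:k], up[k:]
        else:                                      # minimise the weighted average
            theta[:k], theta[k:] = up[:k], lo[k:]
        c_new = np.dot(x, theta) / np.sum(theta)
        if abs(c_new - c) < tol:
            return c_new
        c = c_new
    return c

def km_centroid(x, lower, upper):
    """Type-reduced interval [c_l, c_r] and its defuzzified midpoint."""
    c_l = km_endpoint(x, lower, upper, right=False)
    c_r = km_endpoint(x, lower, upper, right=True)
    return c_l, c_r, 0.5 * (c_l + c_r)
```

The defuzzified output is then simply the midpoint of the returned interval, as described in the next paragraph.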
The second step of Output Processing, which occurs after type-reduction, is still called "defuzzification". Because a type-reduced set of an interval type-2 fuzzy set is always a finite interval of numbers, the defuzzified value is just the average of the two end-points of this interval.
It is clear from Fig. 3 that there can be two outputs to an interval type-2 FLS—crisp numerical values and the type-reduced set. The latter provides a measure of the uncertainties that have flowed through the interval type-2 FLS, due to the (possibly) uncertain input measurements that have activated rules whose antecedents or consequents or both are uncertain. Just as standard deviation is widely used in probability and statistics to provide a measure of unpredictable uncertainty about a mean value, the type-reduced set can provide a measure of uncertainty about the crisp output of an interval type-2 FLS.
Computing with words
Another application for fuzzy sets has also been inspired by Zadeh — "Computing with Words". Different acronyms have been used for "computing with words," e.g., CW and CWW. According to Zadeh:
CWW is a methodology in which the objects of computation are words and propositions drawn from a natural language. [It is] inspired by the remarkable human capability to perform a wide variety of physical and mental tasks without any measurements and any computations.
Of course, he did not mean that computers would actually compute using words—single words or phrases—rather than numbers. He meant that computers would be activated by words, which would be converted into a mathematical representation using fuzzy sets and that these fuzzy sets would be mapped by a CWW engine into some other fuzzy set after which the latter would be converted back into a word. A natural question to ask is: Which kind of fuzzy set—type-1 or type-2—should be used as a model for a word? Mendel has argued, on the basis of Karl Popper's concept of "falsificationism", that using a type-1 fuzzy set as a model for a word is scientifically incorrect. An interval type-2 fuzzy set should be used as a (first-order uncertainty) model for a word. Much research is underway about CWW.
Applications
Type-2 fuzzy sets were applied in the following areas:
Image processing
Video processing and computer vision
Failure Mode And Effect Analysis
Function approximation and estimation
Control systems
Software
Freeware MATLAB implementations, which cover general and interval type-2 fuzzy sets and systems, as well as type-1 fuzzy systems, are available at: http://sipi.usc.edu/~mendel/software.
Software supporting discrete interval type-2 fuzzy logic systems is available at:
DIT2FLS Toolbox - http://dit2fls.com/projects/dit2fls-toolbox/
DIT2FLS Library Package - http://dit2fls.com/projects/dit2fls-library-package/
Java libraries including source code for type-1, interval- and general type-2 fuzzy systems are available at: http://juzzy.wagnerweb.net/.
Python library for type 1 and type 2 fuzzy sets is available at: https://github.com/carmelgafa/type2fuzzy
Python library for interval type 2 fuzzy sets and systems is available at: https://github.com/Haghrah/PyIT2FLS
An open source Matlab/Simulink Toolbox for Interval Type-2 Fuzzy Logic Systems is available at: http://web.itu.edu.tr/kumbasart/type2fuzzy.htm
See also
Computational intelligence
Expert system
Fuzzy control system
Fuzzy logic
Fuzzy set
Granular computing
Perceptual Computing
Rough set
Soft set
Vagueness
Random-fuzzy variable
References
External links
There are two IEEE Expert Now multi-media modules that can be accessed from the IEEE at:
"Introduction to Type-2 Fuzzy Sets and Systems" by Jerry Mendel, sponsored by the IEEE Computational Intelligence Society
"Type-2 Fuzzy Logic Controllers: Towards a New Approach for Handling Uncertainties in Real World Environments" by Hani Hagras, sponsored by the IEEE Computational Intelligence Society
Fuzzy logic
Logic in computer science | Type-2 fuzzy sets and systems | [
"Mathematics"
] | 3,154 | [
"Mathematical logic",
"Logic in computer science"
] |
21,173,946 | https://en.wikipedia.org/wiki/Woodbourne%20Forest%20and%20Wildlife%20Preserve | The Woodbourne Forest and Wildlife Preserve is a protected area that is managed by The Nature Conservancy. It covers in northeastern Pennsylvania in the United States.
It is located just south of Montrose, Pennsylvania.
History and notable features
This nature preserve contains old fields, meadows, creeks, bogs, and forests that are home to a wide variety of animals. These include more than 180 species of birds, such as pileated woodpeckers, great horned owls and winter wrens.
The preserve's wetlands harbor frogs, snakes and nine species of salamander, including the spring salamander, northern two-lined salamander and four-toed salamander.
The preserve's forests, which are part of the Allegheny Highlands forests ecoregion, contain of old growth northern hardwood forest with eastern hemlock, sweet birch, sugar maple, northern red oak, white ash, and American beech trees.
Visitor activities include hiking, snowshoeing, cross-country skiing, birdwatching, and photography.
References
Nature reserves in Pennsylvania
Old-growth forests
Protected areas of Susquehanna County, Pennsylvania | Woodbourne Forest and Wildlife Preserve | [
"Biology"
] | 225 | [
"Old-growth forests",
"Ecosystems"
] |
21,175,118 | https://en.wikipedia.org/wiki/BS%20857 | BS 857:1967 is a currently in-use British Standard specification for flat or curved safety glasses (toughened or laminated) for use in land vehicles, including road vehicles and railway vehicles. The standard specifies the mechanical, safety, impact, and optical requirements as well as sampling and test methods.
Other vehicle safety glass standards
UNECE Reg. 43 is a UNECE standard for safety glass used in road vehicles.
References
Glass engineering and science
Vehicle safety technologies
Car windows
00857 | BS 857 | [
"Materials_science",
"Engineering"
] | 99 | [
"Glass engineering and science",
"Materials science"
] |
21,175,240 | https://en.wikipedia.org/wiki/Zinc%20hydride | Zinc hydride is an inorganic compound with the chemical formula . It is a white, odourless solid which slowly decomposes into its elements at room temperature; despite this it is the most stable of the binary first row transition metal hydrides. A variety of coordination compounds containing Zn–H bonds are used as reducing agents, but itself has no common applications.
Discovery and synthesis
Zinc(II) hydride was first synthesized in 1947 by Hermann Schlesinger, via a reaction between dimethylzinc and lithium aluminium hydride; a process which was somewhat hazardous due to the pyrophoric nature of dimethylzinc.
Later methods were predominantly salt metathesis reactions between zinc halides and alkali metal hydrides, which are significantly safer. Examples include reactions of the type:
ZnX2 + 2 MH → ZnH2 + 2 MX (X = halide; M = alkali metal)
Small quantities of gaseous zinc(II) hydride have also been produced by laser ablation of zinc under a hydrogen atmosphere and other high energy techniques. These methods have been used to assess its gas phase properties.
Chemical properties
Structure
New evidence suggests that in zinc(II) hydride, the atoms form a one-dimensional network (polymer), connected by covalent bonds. Other lower metal hydrides polymerise in a similar fashion (c.f. aluminium hydride). Solid zinc(II) hydride is the irreversible autopolymerisation product of the molecular form, and the molecular form cannot be isolated in concentration. Solubilising zinc(II) hydride in non-aqueous solvents involves adducts with molecular zinc(II) hydride, such as in liquid hydrogen.
Stability
Zinc(II) hydride slowly decomposes to metallic zinc and hydrogen gas at room temperature, with decomposition becoming rapid if it is heated above 90°C.
It is readily oxidised and is sensitive to both air and moisture, being hydrolysed slowly by water but violently by aqueous acids, which indicates possible passivation via the formation of a surface layer of ZnO. Despite this, older samples may be pyrophoric. Zinc hydride can therefore be considered metastable at best; however, it is still the most stable of all the binary first row transition metal hydrides (c.f. titanium(IV) hydride).
Molecular form
Molecular zinc(II) hydride, ZnH2, has been identified as a volatile product of the acidified reduction of zinc ions with sodium borohydride. This reaction is similar to the acidified reduction with lithium aluminium hydride, however a greater fraction of the generated zinc(II) hydride is in the molecular form. This can be attributed to a slower reaction rate, which prevents a polymerising concentration of ZnH2 building up over the progression of the reaction. This follows earlier experiments in direct synthesis from the elements. The reaction of excited zinc atoms with molecular hydrogen in the gas phase was studied by Breckenridge et al. using laser pump-probe techniques. Owing to its relative thermal stability, molecular zinc(II) hydride is included in the short list of molecular metal hydrides, which have been successfully identified in the gas phase (that is, not limited to matrix isolation).
The average Zn–H bond energy was recently calculated to be 51.24 kcal/mol, while the H–H bond energy is 103.3 kcal/mol. Therefore, the overall reaction is nearly ergoneutral.
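The near-ergoneutrality can be checked directly from the quoted bond energies; for the formation reaction Zn + H2 → ZnH2 (one H–H bond broken, two Zn–H bonds formed):

```latex
\Delta E \approx 103.3\ \mathrm{kcal/mol} - 2 \times 51.24\ \mathrm{kcal/mol} \approx +0.8\ \mathrm{kcal/mol}
```

i.e. within about 1 kcal/mol of thermoneutral.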
Molecular zinc hydride in the gas phase was found to be linear with a Zn–H bond length of 153.5 pm.
The molecule has a singlet ground state, ¹Σg⁺.
Quantum chemical calculations predict the molecular form to exist in a doubly hydrogen-bridged, dimeric ground state, with little or no formational energy barrier. The dimer can be called di-μ-hydrido-bis(hydridozinc), per IUPAC additive nomenclature.
References
Hydride
Metal hydrides
Reducing agents | Zinc hydride | [
"Chemistry"
] | 833 | [
"Inorganic compounds",
"Metal hydrides",
"Redox",
"Reducing agents"
] |
21,175,462 | https://en.wikipedia.org/wiki/Open-source%20robotics | Open-source robotics is a branch of robotics where robots are developed with open-source hardware and free and open-source software, publicly sharing blueprints, schematics, and source code. It is thus closely related to the open design movement, the maker movement and open science.
Requirements
Open source robotics means that information about the hardware is easily discerned, so that others can easily rebuild it. In turn, this requires design to use only easily available standard subcomponents and tools, and for the build process to be documented in detail including a bill of materials and detailed ('Ikea style') step-by-step building and testing instructions. (A CAD file alone is not sufficient, as it does not show the steps for performing or testing the build). These requirements are standard to open source hardware in general, and are formalised by various licences and certifications, especially those defined by the peer-reviewed journals HardwareX and Journal of Open Hardware.
Licensing requirements for software are the same as for any open source software. But in addition, for software components to be of practical use in real robot systems, they need to be compatible with other software, usually as defined by some robotics middleware community standard.
Hardware systems
Applications to date include:
Robot arms, e.g. PARA or Thor
Wheeled mobile robots, e.g. OpenScout
Four-legged robots such as the Open Dynamic Robot Initiative
UAV quadcopters such as Agilicious
Humanoid robots, e.g. iCub
Self-driving cars, e.g. OpenPodcar (→ Personal rapid transit)
Robot fish, e.g. OpenFish
Laboratory robotics such as chemical liquid handling
Vertical farming
Swarm robots, e.g. HeRoSwarm
Domestic tasks: vacuum cleaning, floor washing and grass mowing
Robot sports including robot combat and autonomous racing
Education
Hardware subcomponents
Most open source hardware definitions allow non-open subcomponents to be used in modular design, as long as they are easily available. However many designs try to push openness down into as many subcomponents as possible, with the aim of ultimately reaching fully open designs.
Open hardware manual-drive vehicles and their subcomponents, such as from Open Source Ecology, are often used as starting points and extended with automation systems.
Open subcomponents can include open-source computing hardware as subcomponents, such as Arduino and RISC-V, as well as open source motors and drivers such as the Open Source Motor Controller and ODrive.
Open source robots are often used together with, so are designed to interface to, the open source robotics middleware Robot Operating System and various open source simulators such as Gazebo, running on the open source Linux operating system.
Software subcomponents
Middleware
Robotics middleware is software which links multiple other software components together. In robotics, this specifically means real-time communication systems with standardized message-passing protocols. The predominant open source middleware is ROS 2, the second version of the Robot Operating System. Other alternatives include ROS 1, YARP (used in the iCub), URBI, and Orca.
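For orientation, a minimal ROS 2 node written against the rclpy client library is sketched below; the node name, topic name and message content are arbitrary examples and not tied to any particular robot.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Talker(Node):
    """Minimal ROS 2 node: publishes a String message once per second."""

    def __init__(self):
        super().__init__('talker')
        self.publisher_ = self.create_publisher(String, 'chatter', 10)
        self.timer = self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        msg.data = 'hello from an open-source robot'
        self.publisher_.publish(msg)

def main():
    rclpy.init()
    node = Talker()
    rclpy.spin(node)      # hand control to the ROS 2 executor
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

An equivalent node can be written in C++ against rclcpp; the middleware handles message transport between such nodes regardless of language.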
Driver software
Most robot sensors and actuators require software drivers. There is little standardization of open source software at this level, because each hardware device is different. Creating open drivers for closed hardware is difficult as it requires both low level programming and reverse engineering.
Simulation software
Open source robotics simulators include Gazebo, MuJoCo and Webots. Open source 3D game engines such as Godot are also sometimes used as simulators, when equipped with suitable middleware interfaces.
Automation software
At the level of AI, many standard algorithms have open source software implementations, mostly in ROS2. Major components include:
Machine vision systems such as the YOLO object detector.
3D photogrammetry
SLAM such as gmapping
Mobile robot planning such as move base
Arm inverse kinematics such as moveIt
Community
The first signs of the increasing popularity of building and sharing robot designs appeared in the DIY community. What began with small competitions for remotely operated vehicles (e.g. Robot combat) soon developed into the building of autonomous telepresence robots such as Sparky, and then of true robots (able to make decisions themselves) such as the Open Automaton Project. Several commercial companies now also produce kits for making simple robots.
The community has adopted open source hardware licenses, certifications, and peer-reviewed publications, which check that source has been made correctly and permanently available under community definitions, and which validate that this has been done. These processes have become critically important because many historical projects claimed to be open source but later reneged on that promise due to commercialisation or other pressures.
As with other forms of open source hardware, the community continues to debate precise criteria for 'ease of build'. A common standard is that designs should be buildable by a technical university student, in a few days, using typical fablab tools, but definitions of all of these subterms can also be debated.
Compared to other forms of open source hardware, open source robotics typically includes a large software element, so involves software as well as hardware engineers. Open source concepts are more established in open source software than hardware, so robotics is a field in which those concepts can be shared and transferred from software to hardware.
While the community in open source robotics is multi-faceted with a wide range of backgrounds, a sizable sub-community uses the ROS middleware and meets annually at the ROSCon conference to discuss development of ROS itself and automation components built on it.
References
Robotics | Open-source robotics | [
"Engineering"
] | 1,145 | [
"Robotics",
"Automation"
] |
21,177,379 | https://en.wikipedia.org/wiki/CHARISSA | CHARISSA (derived from 'CHARged particle Instrumentation for a Solid State Array') is a nuclear structure research collaboration originally conceived, initiated and partially built by Dr. William Rae of the University of Oxford (retired) and now run by the School of Physics and Astronomy at the University of Birmingham, UK. The other members of the collaboration are the University of Surrey with occasional contributions from LPC CAEN and Ruđer Bošković Institute, Zagreb. The collaboration is funded by the Science and Technology Facilities Council (STFC).
Experiments
The CHARISSA collaboration carries out experiments at many of the world's leading research centres. Due to the nature of the research, experiments must be undertaken with the use of a particle accelerator and complex detection systems. The group probes the structure of atomic nuclei.
Accelerators
Experiments are currently being carried out utilising the following facilities:
14UD Tandem Van de Graaff, Australian National University (ANU), Canberra, Australia.
Cyclotrons, Grand Accélérateur National d'Ions Lourds (GANIL), Caen, France.
MP Tandem Van de Graaff, Institut de Physique Nucléaire (IPN), Orsay, France.
MP Tandem Van de Graaff, Technische Universität München (TUM)/Ludwig-Maximilians-Universität München (LMU), Munich, Germany.
Tandem-Linac, Florida State University (FSU), Tallahassee, USA.
Tandem Van de Graaff, Oak Ridge National Laboratory (ORNL), Oak Ridge, USA.
UNILAC/Synchrotron, Gesellschaft für Schwerionenforschung (GSI), Darmstadt, Germany.
Previous experiments have taken place using the following accelerators at their respective facilities:
VIVITRON, Institut de Recherches Subatomiques (IReS), Strasbourg, France.
Cyclotron, Hahn-Meitner-Institut (HMI), Berlin, Germany.
UCLouvain CycLoNe, Cyclotron Research Center (CRC), Louvain-la-Neuve, Belgium.
External links
Charissa homepage
HMI homepage
See also
Nuclear physics
Particle accelerator
Nuclear research institutes
Science and Technology Facilities Council
University of Birmingham | CHARISSA | [
"Physics",
"Engineering"
] | 466 | [
"Nuclear research institutes",
"Nuclear and atomic physics stubs",
"Nuclear organizations",
"Nuclear physics"
] |
5,010,838 | https://en.wikipedia.org/wiki/Sine%20and%20cosine | In mathematics, sine and cosine are trigonometric functions of an angle. The sine and cosine of an acute angle are defined in the context of a right triangle: for the specified angle, its sine is the ratio of the length of the side that is opposite that angle to the length of the longest side of the triangle (the hypotenuse), and the cosine is the ratio of the length of the adjacent leg to that of the hypotenuse. For an angle , the sine and cosine functions are denoted as and .
The definitions of sine and cosine have been extended to any real value in terms of the lengths of certain line segments in a unit circle. More modern definitions express the sine and cosine as infinite series, or as the solutions of certain differential equations, allowing their extension to arbitrary positive and negative values and even to complex numbers.
The sine and cosine functions are commonly used to model periodic phenomena such as sound and light waves, the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year. They can be traced to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period.
Elementary descriptions
Right-angled triangle definition
To define the sine and cosine of an acute angle , start with a right triangle that contains an angle of measure ; in the accompanying figure, angle in a right triangle is the angle of interest. The three sides of the triangle are named as follows:
The opposite side is the side opposite to the angle of interest; in this case, it is .
The hypotenuse is the side opposite the right angle; in this case, it is . The hypotenuse is always the longest side of a right-angled triangle.
The adjacent side is the remaining side; in this case, it is . It forms a side of (and is adjacent to) both the angle of interest and the right angle.
Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse, and the cosine of the angle is equal to the length of the adjacent side divided by the length of the hypotenuse:
The other trigonometric functions of the angle can be defined similarly; for example, the tangent is the ratio between the opposite and adjacent sides, or equivalently the ratio between the sine and cosine functions. The reciprocal of sine is cosecant, which gives the ratio of the hypotenuse length to the length of the opposite side. Similarly, the reciprocal of cosine is secant, which gives the ratio of the hypotenuse length to that of the adjacent side. The cotangent function is the ratio between the adjacent and opposite sides, the reciprocal of the tangent function. These functions can be formulated as:
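In standard notation, the ratio definitions referred to in the last two paragraphs are:

$$\sin\theta = \frac{\text{opposite}}{\text{hypotenuse}}, \qquad
\cos\theta = \frac{\text{adjacent}}{\text{hypotenuse}}, \qquad
\tan\theta = \frac{\text{opposite}}{\text{adjacent}} = \frac{\sin\theta}{\cos\theta},$$

$$\csc\theta = \frac{1}{\sin\theta}, \qquad
\sec\theta = \frac{1}{\cos\theta}, \qquad
\cot\theta = \frac{1}{\tan\theta} = \frac{\cos\theta}{\sin\theta}.$$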
Special angle measures
As stated, the values sin θ and cos θ appear to depend on the choice of a right triangle containing an angle of measure θ. However, this is not the case: all such triangles are similar, and so the ratios are the same for each of them. For example, each leg of the 45-45-90 right triangle is 1 unit, and its hypotenuse is √2; therefore, sin 45° = cos 45° = 1/√2. The following table shows the special value of each input for both sine and cosine with the domain between 0° and 90°. The input in this table is given in various unit systems such as degrees, radians, and so on. The angles other than those five can be obtained by using a calculator.
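The table itself is not reproduced here; the standard values it refers to are:

 angle θ   0°      30°     45°     60°     90°
 radians   0       π/6     π/4     π/3     π/2
 sin θ     0       1/2     √2/2    √3/2    1
 cos θ     1       √3/2    √2/2    1/2     0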
Laws
The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. Given a triangle with sides a, b, and c, and angles A, B, and C opposite those sides, the law states,
This is equivalent to the equality of the first three expressions below:
where R is the triangle's circumradius.
The law of cosines is useful for computing the length of an unknown side if two other sides and an angle are known. The law states,
In the case where C is a right angle, from which cos C = 0, the resulting equation becomes the Pythagorean theorem.
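In the notation introduced above, the two laws read:

$$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = 2R \qquad \text{(law of sines)},$$

$$c^{2} = a^{2} + b^{2} - 2ab\cos C \qquad \text{(law of cosines)}.$$

Setting C = 90° makes the cosine term vanish and leaves c² = a² + b².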
Vector definition
The cross product and dot product are operations on two vectors in Euclidean vector space. The sine and cosine functions can be defined in terms of the cross product and dot product. If a and b are (three-dimensional) vectors, and θ is the angle between a and b, then sine and cosine can be defined as:
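For nonzero vectors, the standard form of these definitions is:

$$\cos\theta = \frac{\mathbf{a}\cdot\mathbf{b}}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert},
\qquad
\sin\theta = \frac{\lVert\mathbf{a}\times\mathbf{b}\rVert}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert},
\qquad 0 \le \theta \le \pi.$$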
Analytic descriptions
Unit circle definition
The sine and cosine functions may also be defined in a more general way by using the unit circle, a circle of radius one centered at the origin, formulated as the equation x² + y² = 1 in the Cartesian coordinate system. Let a line through the origin intersect the unit circle, making an angle of θ with the positive half of the x-axis. The x- and y-coordinates of this point of intersection are equal to cos θ and sin θ, respectively; that is,
This definition is consistent with the right-angled triangle definition of sine and cosine when 0° < θ < 90° because the length of the hypotenuse of the unit circle is always 1; mathematically speaking, the sine of the angle equals the opposite side of the triangle, which is simply the y-coordinate. A similar argument can be made for the cosine function, which equals the x-coordinate, even under the new definition using the unit circle.
Graph of a function and its elementary properties
Using the unit circle definition has the advantage of making it easy to draw the graphs of the sine and cosine functions. This can be done by rotating a point counterclockwise along the circumference of the circle, depending on the input θ. In a sine function, if the input is π/2, the point has been rotated counterclockwise to lie exactly on the y-axis; if the input is π, the point is halfway around the circle; if the input is 2π, the point has returned to its origin. As a result, both the sine and cosine functions have a range between −1 and 1.
Extending the angle to any real domain, the point rotates counterclockwise continuously. This can be done similarly for the cosine function as well, although the point's x-coordinate is tracked instead. In other words, both sine and cosine functions are periodic, meaning that adding a full turn to the angle leaves the functions' values unchanged. Mathematically,
A function f is said to be odd if f(−x) = −f(x), and is said to be even if f(−x) = f(x). The sine function is odd, whereas the cosine function is even. The two functions are similar, differing only by a shift of π/2. This means,
Zero is the only real fixed point of the sine function; in other words, the only intersection of the sine function and the identity function is at x = 0. The only real fixed point of the cosine function is called the Dottie number. The Dottie number is the unique real root of the equation cos x = x. The decimal expansion of the Dottie number is approximately 0.739085.
Continuity and differentiation
The sine and cosine functions are infinitely differentiable. The derivative of sine is cosine, and the derivative of cosine is negative sine:
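In standard notation, the derivatives referred to here are:

$$\frac{d}{dx}\sin x = \cos x, \qquad \frac{d}{dx}\cos x = -\sin x.$$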
Continuing the process to higher-order derivatives results in the same functions repeating; the fourth derivative of the sine is the sine itself. These derivatives can be applied in the first derivative test, according to which the monotonicity of a function can be determined by whether the function's first derivative is greater than or less than zero. They can also be applied in the second derivative test, according to which the concavity of a function can be determined by whether the function's second derivative is greater than or less than zero. The following table shows that both the sine and cosine functions have concavity and monotonicity in certain intervals, where the positive sign (+) denotes that the graph is increasing (going upward) and the negative sign (−) denotes that it is decreasing (going downward). This information can be represented in a Cartesian coordinate system divided into four quadrants.
Both sine and cosine functions can be defined by using differential equations. The pair of is the solution to the two-dimensional system of differential equations and with the initial conditions and . One could interpret the unit circle in the above definitions as defining the phase space trajectory of the differential equation with the given initial conditions. It can be interpreted as a phase space trajectory of the system of differential equations and starting from the initial conditions and .
Integral and the usage in mensuration
The area under these curves can be obtained by using the integral over a bounded interval. Their antiderivatives are ∫ sin x dx = −cos x + C and ∫ cos x dx = sin x + C, where C denotes the constant of integration. These antiderivatives may be applied to compute the mensuration properties of both sine and cosine functions' curves within a given interval. For example, the arc length of the sine curve over a given interval is
where is the incomplete elliptic integral of the second kind with modulus . It cannot be expressed using elementary functions. In the case of a full period, its arc length is
where is the gamma function and is the lemniscate constant.
Inverse functions
The inverse function of sine is arcsine or inverse sine, denoted as "arcsin", "asin", or sin⁻¹. The inverse function of cosine is arccosine, denoted as "arccos", "acos", or cos⁻¹. As sine and cosine are not injective, their inverses are not exact inverse functions, but partial inverse functions. For example, sin(0) = 0, but also sin(π) = 0, sin(2π) = 0, and so on. It follows that the arcsine function is multivalued: arcsin(0) could be taken to be 0, but also π, 2π, and so on. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x in the domain, the expression arcsin(x) will evaluate only to a single value, called its principal value. The standard range of principal values for arcsin is from −π/2 to π/2, and the standard range for arccos is from 0 to π.
With these principal ranges, the inverse functions of sine and cosine are defined on the interval [−1, 1], and the general solutions of sin θ = x and cos θ = x are obtained from the principal values by adding multiples of the period, where k denotes an arbitrary integer. By definition, both functions satisfy the corresponding composition identities.
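In standard notation, these relations read:

$$\sin\theta = x \iff \theta = (-1)^{k}\arcsin x + \pi k,
\qquad
\cos\theta = x \iff \theta = \pm\arccos x + 2\pi k,
\qquad k \in \mathbb{Z},$$

$$\sin(\arcsin x) = x \ \text{ and } \ \cos(\arccos x) = x \quad \text{for } -1 \le x \le 1,$$

$$\arcsin(\sin\theta) = \theta \ \text{ only for } -\tfrac{\pi}{2} \le \theta \le \tfrac{\pi}{2},
\qquad
\arccos(\cos\theta) = \theta \ \text{ only for } 0 \le \theta \le \pi.$$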
Other identities
According to the Pythagorean theorem, the squared hypotenuse is the sum of the two squared legs of a right triangle. Dividing both sides of the formula by the squared hypotenuse results in the Pythagorean trigonometric identity: the sum of a squared sine and a squared cosine equals 1:
Sine and cosine satisfy the following double-angle formulas:
The cosine double angle formula implies that sin² θ and cos² θ are, themselves, shifted and scaled sine waves. Specifically,
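In standard notation, the identities referred to in the last three paragraphs are:

$$\sin^{2}\theta + \cos^{2}\theta = 1,$$

$$\sin 2\theta = 2\sin\theta\cos\theta, \qquad
\cos 2\theta = \cos^{2}\theta - \sin^{2}\theta = 2\cos^{2}\theta - 1 = 1 - 2\sin^{2}\theta,$$

$$\sin^{2}\theta = \frac{1 - \cos 2\theta}{2}, \qquad
\cos^{2}\theta = \frac{1 + \cos 2\theta}{2}.$$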
The graph shows both sine and sine squared functions, with the sine in blue and the sine squared in red. Both graphs have the same shape but with different ranges of values and different periods. Sine squared has only positive values, but twice the number of periods.
Series and polynomials
Both sine and cosine functions can be defined by using a Taylor series, a power series involving the higher-order derivatives. As mentioned under continuity and differentiation, the derivative of sine is cosine and the derivative of cosine is the negative of sine. This means the successive derivatives of sin x are cos x, −sin x, −cos x, sin x, continuing to repeat those four functions. The nth derivative, evaluated at the point 0:
where the superscript represents repeated differentiation. This implies the following Taylor series expansion at x = 0. One can then use the theory of Taylor series to show that the following identities hold for all real numbers x, where x is the angle in radians. More generally, they hold for all complex numbers:
Taking the derivative of each term gives the Taylor series for cosine:
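The two series themselves are the standard expansions:

$$\sin x = x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \frac{x^{7}}{7!} + \cdots
= \sum_{n=0}^{\infty} \frac{(-1)^{n}}{(2n+1)!}\, x^{2n+1},$$

$$\cos x = 1 - \frac{x^{2}}{2!} + \frac{x^{4}}{4!} - \frac{x^{6}}{6!} + \cdots
= \sum_{n=0}^{\infty} \frac{(-1)^{n}}{(2n)!}\, x^{2n}.$$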
Both sine and cosine functions with multiple angles may appear in a linear combination, resulting in a polynomial. Such a polynomial is known as a trigonometric polynomial. Ample applications of trigonometric polynomials may be found in interpolation and in the extension of periodic functions known as the Fourier series. Given a set of coefficients, the trigonometric polynomial of a given degree is defined as:
The trigonometric series can be defined analogously to the trigonometric polynomial, as its infinite version. Given an infinite sequence of coefficients, the trigonometric series can be defined as:
In the case of a Fourier series with a given integrable function f, the coefficients of the trigonometric series are:
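In one common convention for a 2π-periodic integrable function f (other normalizations exist), the Fourier series and its coefficients are:

$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\bigl(a_n\cos nx + b_n\sin nx\bigr),$$

$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx,
\qquad
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\,dx.$$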
Complex numbers relationship
Complex exponential function definitions
Both sine and cosine can be extended further via complex numbers, a set of numbers composed of both real and imaginary parts. For a real number x, the definitions of both sine and cosine functions can be extended to the complex plane in terms of the exponential function as follows:
Alternatively, both functions can be defined in terms of Euler's formula:
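The two missing displays, the exponential definitions and Euler's formula, read in standard form:

$$\sin x = \frac{e^{ix} - e^{-ix}}{2i}, \qquad
\cos x = \frac{e^{ix} + e^{-ix}}{2}, \qquad
e^{ix} = \cos x + i\sin x.$$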
When plotted on the complex plane, the function e^{ix} for real values of x traces out the unit circle in the complex plane. The sine and cosine functions may be identified as the imaginary and real parts of this function: sin x = Im(e^{ix}) and cos x = Re(e^{ix}).
When z = x + iy for real values x and y, where i is the imaginary unit, both sine and cosine functions can be expressed in terms of real sines, cosines, and hyperbolic functions (the explicit expressions are given under complex arguments below).
Polar coordinates
Sine and cosine are used to connect the real and imaginary parts of a complex number with its polar coordinates :
and the real and imaginary parts are
where and represent the magnitude and angle of the complex number .
For any real number , Euler's formula in terms of polar coordinates is stated as .
Complex arguments
Applying the series definition of the sine and cosine to a complex argument, z, gives:
where sinh and cosh are the hyperbolic sine and cosine. These are entire functions.
It is also sometimes useful to express the complex sine and cosine functions in terms of the real and imaginary parts of its argument:
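These expressions follow from the angle-addition formulas:

$$\sin(x + iy) = \sin x\cosh y + i\cos x\sinh y,
\qquad
\cos(x + iy) = \cos x\cosh y - i\sin x\sinh y.$$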
Partial fraction and product expansions of complex sine
Using the partial fraction expansion technique in complex analysis, one can find that the infinite series
both converge and are equal to . Similarly, one can show that
Using product expansion technique, one can derive
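The series and product referred to here are the classical formulas (with the doubly infinite sums understood as symmetric limits):

$$\frac{\pi}{\sin \pi z} = \sum_{n=-\infty}^{\infty} \frac{(-1)^{n}}{z - n},
\qquad
\frac{\pi^{2}}{\sin^{2} \pi z} = \sum_{n=-\infty}^{\infty} \frac{1}{(z - n)^{2}},$$

$$\sin \pi z = \pi z \prod_{n=1}^{\infty}\left(1 - \frac{z^{2}}{n^{2}}\right).$$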
Usage of complex sine
sin(z) is found in the functional equation for the Gamma function,
which in turn is found in the functional equation for the Riemann zeta-function,
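The two functional equations are the reflection formula for the Gamma function and Riemann's functional equation:

$$\Gamma(z)\,\Gamma(1 - z) = \frac{\pi}{\sin \pi z},$$

$$\zeta(s) = 2^{s}\,\pi^{s-1}\,\sin\!\left(\frac{\pi s}{2}\right)\Gamma(1 - s)\,\zeta(1 - s).$$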
As a holomorphic function, sin z is a 2D solution of Laplace's equation:
The complex sine function is also related to the level curves of pendulums.
Complex graphs
Background
Etymology
The word sine is derived, indirectly, from the Sanskrit word 'bow-string' or more specifically its synonym (both adopted from Ancient Greek 'string'), due to visual similarity between the arc of a circle with its corresponding chord and a bow with its string (see jyā, koti-jyā and utkrama-jyā). This was transliterated in Arabic as , which is meaningless in that language and written as (). Since Arabic is written without short vowels, was interpreted as the homograph (جيب), which means 'bosom', 'pocket', or 'fold'. When the Arabic texts of Al-Battani and al-Khwārizmī were translated into Medieval Latin in the 12th century by Gerard of Cremona, he used the Latin equivalent sinus (which also means 'bay' or 'fold', and more specifically 'the hanging fold of a toga over the breast'). Gerard was probably not the first scholar to use this translation; Robert of Chester appears to have preceded him and there is evidence of even earlier usage. The English form sine was introduced in the 1590s.
The word cosine derives from an abbreviation of the Latin 'sine of the complementary angle' as cosinus in Edmund Gunter's Canon triangulorum (1620), which also includes a similar definition of cotangens.
History
While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE).
The sine and cosine functions can be traced to the and functions used in Indian astronomy during the Gupta period (Aryabhatiya and Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin.
All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines, used in solving triangles. With the exception of the sine (which was adopted from Indian mathematics), the other five modern trigonometric functions were discovered by Arabic mathematicians, including the cosine, tangent, cotangent, secant and cosecant. Al-Khwārizmī (c. 780–850) produced tables of sines, cosines and tangents. Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°.
The first published use of the abbreviations sin, cos, and tan is by the 16th-century French mathematician Albert Girard; these were further promulgated by Euler (see below). The Opus palatinum de triangulis of Georg Joachim Rheticus, a student of Copernicus, was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596.
In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x. Roger Cotes computed the derivative of sine in his Harmonia Mensurarum (1722). Leonhard Euler's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting "Euler's formula", as well as the near-modern abbreviations sin., cos., tang., cot., sec., and cosec.
Software implementations
There is no standard algorithm for calculating sine and cosine. IEEE 754, the most widely used standard for the specification of reliable floating-point computation, does not address calculating trigonometric functions such as sine. The reason is that no efficient algorithm is known for computing sine and cosine with a specified accuracy, especially for large inputs.
Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or range of input values accepted. This can lead to different results for different algorithms, especially for special circumstances such as very large inputs, e.g. sin(10).
A common programming optimization, used especially in 3D graphics, is to pre-calculate a table of sine values, for example one value per degree, then for values in-between pick the closest pre-calculated value, or linearly interpolate between the 2 closest values to approximate it. This allows results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method may offer no advantage.
The CORDIC algorithm is commonly used in scientific calculators.
The sine and cosine functions, along with other trigonometric functions, are widely available across programming languages and platforms. In computing, they are typically abbreviated to sin and cos.
Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387.
In programming languages, sin and cos are typically either a built-in function or found within the language's standard math library. For example, the C standard library defines sine functions within math.h: sin(double), sinf(float), and sinl(long double). The parameter of each is a floating point value, specifying the angle in radians. Each function returns the same data type as it accepts. Many other trigonometric functions are also defined in math.h, such as for cosine, arc sine, and hyperbolic sine (sinh). Similarly, Python defines math.sin(x) and math.cos(x) within the built-in math module. Complex sine and cosine functions are also available within the cmath module, e.g. cmath.sin(z). CPython's math functions call the C math library, and use a double-precision floating-point format.
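For illustration, a short Python session using these modules (the printed values are approximate):

```python
import math
import cmath

# Real-valued sine and cosine from the math module; angles are in radians.
angle = math.radians(60)        # convert 60 degrees to radians
print(math.sin(angle))          # ~0.8660254  (sqrt(3)/2)
print(math.cos(angle))          # ~0.5

# Inverse and hyperbolic variants live in the same module.
print(math.asin(0.5))           # ~0.5235988  (pi/6)
print(math.sinh(1.0))           # ~1.1752012

# Complex sine from the cmath module.
z = 1 + 2j
print(cmath.sin(z))             # equals sin(1)*cosh(2) + 1j*cos(1)*sinh(2)
```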
Turns based implementations
Some software libraries provide implementations of sine and cosine using the input angle in half-turns, a half-turn being an angle of 180 degrees or π radians. Representing angles in turns or half-turns has accuracy advantages and efficiency advantages in some cases. In MATLAB, OpenCL, R, Julia, CUDA, and ARM, these functions are called sinpi and cospi. For example, sinpi(x) evaluates to sin(πx), where x is expressed in half-turns; consequently the final input to the sine function, πx, can be interpreted in radians.
The accuracy advantage stems from the ability to represent key angles such as a full turn, half-turn, and quarter-turn losslessly in binary floating-point or fixed-point. In contrast, representing 2π, π, and π/2 in binary floating-point or binary scaled fixed-point always involves a loss of accuracy, since irrational numbers cannot be represented with finitely many binary digits.
Turns also have an accuracy advantage and an efficiency advantage for computing modulo one period. Computing modulo 1 turn or modulo 2 half-turns can be done losslessly and efficiently in both floating-point and fixed-point. For example, computing modulo 1 or modulo 2 for a binary-point-scaled fixed-point value requires only a bit shift or bitwise AND operation. In contrast, computing modulo 2π involves inaccuracies in representing 2π.
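A minimal sketch of this idea in Python is shown below; it illustrates only the argument-reduction advantage and is not how the libraries named above actually implement sinpi and cospi, which add further symmetry reductions and tuned polynomial kernels.

```python
import math

def sinpi(x: float) -> float:
    """Approximate sin(pi * x), with x given in half-turns."""
    # Reduction modulo one period (2 half-turns) is exact in binary
    # floating point, so pi only multiplies the already-reduced argument.
    r = math.fmod(x, 2.0)
    return math.sin(math.pi * r)

def cospi(x: float) -> float:
    """Approximate cos(pi * x), with x given in half-turns."""
    r = math.fmod(x, 2.0)
    return math.cos(math.pi * r)

print(sinpi(0.5))    # 1.0   (a quarter turn)
print(sinpi(1e6))    # 0.0   (an exact whole number of turns, no cancellation)
```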
For applications involving angle sensors, the sensor typically provides angle measurements in a form directly compatible with turns or half-turns. For example, an angle sensor may count from 0 to 4096 over one complete revolution. If half-turns are used as the unit for angle, then the value provided by the sensor directly and losslessly maps to a fixed-point data type with 11 bits to the right of the binary point. In contrast, if radians are used as the unit for storing the angle, then the inaccuracies and cost of multiplying the raw sensor integer by an approximation to would be incurred.
See also
Āryabhaṭa's sine table
Bhaskara I's sine approximation formula
Discrete sine transform
Dixon elliptic functions
Euler's formula
Generalized trigonometry
Hyperbolic function
Lemniscate elliptic functions
Law of sines
List of periodic functions
List of trigonometric identities
Madhava series
Madhava's sine table
Optical sine theorem
Polar sine—a generalization to vertex angles
Proofs of trigonometric identities
Sinc function
Sine and cosine transforms
Sine integral
Sine quadrant
Sine wave
Sine–Gordon equation
Sinusoidal model
SOH-CAH-TOA
Trigonometric functions
Trigonometric integral
References
Footnotes
Citations
Works cited
External links
Angle
Trigonometric functions
no:Trigonometriske funksjoner#Sinus, cosinus og tangens | Sine and cosine | [
"Physics"
] | 4,896 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Wikipedia categories named after physical quantities",
"Angle"
] |
5,011,249 | https://en.wikipedia.org/wiki/Gelsolin | Gelsolin is an actin-binding protein that is a key regulator of actin filament assembly and disassembly. Gelsolin is one of the most potent members of the actin-severing gelsolin/villin superfamily, as it severs with nearly 100% efficiency.
Cellular gelsolin, found within the cytosol and mitochondria, has a closely related secreted form, plasma gelsolin, that contains an additional 24 AA N-terminal extension. Plasma gelsolin's ability to sever actin filaments helps the body recover from disease and injury that leak cellular actin into the blood. Additionally, it plays important roles in host innate immunity, activating macrophages and localizing inflammation.
Structure
Gelsolin is an 82-kD protein with six homologous subdomains, referred to as S1-S6. Each subdomain is composed of a five-stranded β-sheet, flanked by two α-helices, one positioned perpendicular with respect to the strands and one positioned parallel. The β-sheets of the three N-terminal subdomains (S1-S3) join to form an extended β-sheet, as do the β-sheets of the C-terminal subdomains (S4-S6).
Regulation
Among the lipid-binding actin regulatory proteins, gelsolin (like cofilin) preferentially binds polyphosphoinositide (PPI). The binding sequences in gelsolin closely resemble the motifs in the other PPI-binding proteins.
Gelsolin's activity is stimulated by calcium ions (Ca2+). Although the protein retains its overall structural integrity in both activated and deactivated states, the S6 helical tail moves like a latch depending on the concentration of calcium ions. The C-terminal end detects the calcium concentration within the cell. When there is no Ca2+ present, the tail of S6 shields the actin-binding sites on one of S2's helices. When a calcium ion attaches to the S6 tail, however, it straightens, exposing the S2 actin-binding sites. The N-terminal is directly involved in the severing of actin. S2 and S3 bind to the actin before the binding of S1 severs actin-actin bonds and caps the barbed end.
Gelsolin can be inhibited by a local rise in the concentration of phosphatidylinositol (4,5)-bisphosphate (PIP2), a PPI. This is a two-step process. First, PIP2 binds to S2 and S3, inhibiting gelsolin from binding the side of actin filaments. Then, PIP2 binds to gelsolin's S1, preventing gelsolin from severing actin, although PIP2 does not bind directly to gelsolin's actin-binding site.
Gelsolin's severing of actin, in contrast to the severing of microtubules by katanin, does not require any extra energy input.
Cellular function
As an important actin regulator, gelsolin plays a role in podosome formation (along with Arp3, cortactin, and Rho GTPases).
Gelsolin also inhibits apoptosis by stabilizing the mitochondria. Prior to cell death, mitochondria normally lose membrane potential and become more permeable. Gelsolin can impede the release of cytochrome C, obstructing the signal amplification that would have led to apoptosis.
Actin can be cross-linked into a gel by actin cross-linking proteins. Gelsolin can turn this gel into a sol, hence the name gelsolin.
Animal studies
Research in mice suggests that gelsolin, like other actin-severing proteins, is not expressed to a significant degree until after the early embryonic stage—approximately 2 weeks in murine embryos. In adult specimens, however, gelsolin is particularly important in motile cells, such as blood platelets. Mice with null gelsolin-coding genes undergo normal embryonic development, but the deformation of their blood platelets reduced their motility, resulting in a slower response to wound healing.
An insufficiency of gelsolin in mice has also been shown to cause increased permeability of the vascular pulmonary barrier, suggesting that gelsolin is important in the response to lung injury.
Related proteins
Sequence comparisons indicate an evolutionary relationship between gelsolin, villin, fragmin, and severin. Six large repeating segments occur in gelsolin and villin, and 3 similar segments in severin and fragmin. The multiple repeats are related in structure (but barely in sequence) to the ADF-H domain, forming a superfamily (). The family appears to have evolved from an ancestral sequence of 120 to 130 amino acid residues.
Asgard archaea encode many functional gelsolins.
Interactions
Gelsolin is a cytoplasmic, calcium-regulated, actin-modulating protein that binds to the barbed ends of actin filaments, preventing monomer exchange (end-blocking or capping). It can promote nucleation (the assembly of monomers into filaments), as well as sever existing filaments. In addition, this protein binds with high affinity to fibronectin. Plasma gelsolin and cytoplasmic gelsolin are derived from a single gene by alternate initiation sites and differential splicing.
Gelsolin has been shown to interact with:
Amyloid precursor protein,
Androgen receptor,
PTK2B, and
VDAC1.
See also
Plasma gelsolin
Cortactin
Villin
Supervillin
Finnish type amyloidosis
References
External links
http://www.bioaegistherapeutics.com
Proteins | Gelsolin | [
"Chemistry"
] | 1,252 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
5,011,779 | https://en.wikipedia.org/wiki/Channelling%20%28physics%29 | In condensed-matter physics, channelling (or channeling) is the process that constrains the path of a charged particle in a crystalline solid.
Many physical phenomena can occur when a charged particle is incident upon a solid target, e.g., elastic scattering, inelastic energy-loss processes, secondary-electron emission, electromagnetic radiation, nuclear reactions, etc. All of these processes have cross sections which depend on the impact parameters involved in collisions with individual target atoms. When the target material is homogeneous and isotropic, the impact-parameter distribution is independent of the orientation of the momentum of the particle and interaction processes are also orientation-independent. When the target material is monocrystalline, the yields of physical processes are very strongly dependent on the orientation of the momentum of the particle relative to the crystalline axes or planes. Or in other words, the stopping power of the particle is much lower in certain directions than others. This effect is commonly called the "channelling" effect. It is related to other orientation-dependent effects, such as particle diffraction. These relationships will be discussed in detail later.
History
The channelling effect was first discovered in pioneering binary collision approximation computer simulations in 1963, in order to explain exponential tails in experimentally observed ion range distributions that did not conform to standard theories of ion penetration. The simulated prediction was confirmed experimentally the following year by measurements of ion penetration depths in single-crystalline tungsten. The first transmission experiments of ions channelling through crystals were performed by a group at Oak Ridge National Laboratory, showing that the ion distribution is determined by the crystal rainbow channelling effect.
Mechanism
From a simple, classical standpoint, one may qualitatively understand the channelling effect as follows: If the direction of a charged particle incident upon the surface of a monocrystal lies close to a major crystal direction (Fig. 1), the particle with high probability will only do small-angle scattering as it passes through the several layers of atoms in the crystal and hence remain in the same crystal 'channel'. If it is not in a major crystal direction or plane ("random direction", Fig. 2), it is much more likely to undergo large-angle scattering and hence its final mean penetration depth is likely to be shorter. If the direction of the particle's momentum is close to the crystalline plane, but it is not close to major crystalline axes, this phenomenon is called "planar channelling".
Channelling usually leads to deeper penetration of the ions in the material, an effect that has been observed experimentally and in computer simulations, see Figures 3-5.
Negatively charged particles like antiprotons and electrons are attracted towards the positively charged nuclei of the plane, and after passing the center of the plane, they will be attracted again, so negatively charged particles tend to follow the direction of one crystalline plane.
Because the crystalline plane has a high density of atomic electrons and nuclei, the channelled particles eventually suffer high-angle Rutherford scattering or energy losses in collisions with electrons and leave the channel. This is called the "dechannelling" process.
Positively charged particles like protons and positrons are instead repelled from the nuclei of the plane, and after entering the space between two neighboring planes, they will be repelled from the second plane. So positively charged particles tend to follow the direction between two neighboring crystalline planes, but at the largest possible distance from each of them. Therefore, the positively charged particles have a smaller probability of interacting with the nuclei and electrons of the planes (smaller "dechannelling" effect) and travel longer distances.
The same phenomena occur when the direction of momentum of the charged particles lies close to a major crystalline, high-symmetry axis. This phenomenon is called "axial channelling". Generally, the effect of axial channeling is higher than planar channeling due to a deeper potential formed in axial conditions.
At low energies the channelling effects in crystals are not present, because small-angle scattering at low energies requires large impact parameters, which become bigger than interplanar distances. Particle diffraction dominates here. At high energies the quantum effects and diffraction are less effective and the channelling effect is present.
Applications
There are several particularly interesting applications of the channelling effects.
Channelling effects can be used as tools to investigate the properties of the crystal lattice and of its perturbations (like doping) in the bulk region that is not accessible to X-rays.
The channelling method may be utilized to detect the geometrical location of interstitials. This is an important variation of the Rutherford backscattering ion beam analysis technique, commonly called Rutherford backscattering/channelling (RBS-C).
Channelling may even be used for superfocusing of an ion beam, to be employed in sub-atomic microscopy.
At higher energies (tens of GeV), the applications include the channelling radiation for enhanced production of high energy gamma rays, and the use of bent crystals for extraction of particles from the halo of the circulating beam in a particle accelerator.
Classical channelling theory
The classical treatment of the channelling phenomenon supposes that the ion-nucleus interactions are not correlated phenomena. The first analytical classical treatment is due to Jens Lindhard in 1965, who proposed a description that still remains the reference one. He proposed a model based on the effects of a continuous repulsive potential generated by rows or planes of atomic nuclei arranged regularly in a crystal. The continuous potential is the average, along a row or over an atomic plane, of the individual Coulomb potentials of the charged nuclei as screened by the electron cloud.
The proposed potential (named Lindhard potential) is:
where r represents the distance from the nucleus, C² is a constant usually taken equal to 3, and a is the Thomas-Fermi screening radius:
where a₀ is the Bohr radius (0.53 Å, the radius of the smallest orbit of the Bohr atom). Typical values for the screening radius are between 0.1 and 0.2 Å.
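The omitted formulas can be reconstructed from Lindhard's continuum theory; a commonly quoted form (in Gaussian units, with Z₁ and Z₂ the atomic numbers of the projectile and of the lattice atoms; conventions for the screening radius vary slightly between authors) is:

$$U(r) = \frac{Z_1 Z_2 e^{2}}{d}\,\ln\!\left(1 + \frac{C^{2} a^{2}}{r^{2}}\right),
\qquad C^{2} \approx 3,$$

$$a = \frac{0.8853\, a_0}{\left(Z_1^{2/3} + Z_2^{2/3}\right)^{1/2}}.$$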
Considering the case of axial channelling, if d is the distance between two successive atoms of an atomic row, the mean of the potential along this row is equal to:
equal to the distance between atomic lines. The obtained potential is a continuous potential generated by a string of atoms with an atomic number and a mean distance d between nuclei.
The energy of the channeled ions, having an atomic number Z₁, can be written as:
where e are respectively the parallel and perpendicular components of the momentum of the projectile with respect to the considered direction of the string of atoms. The potential is the minimum potential of the channel, taking into account the superposition of the potentials generated by the various atomic lines inside the crystal.
It therefore follows that the components of the momentum are:
where is the angle between the direction of motion of an ion and the considered crystallographic axial direction.
Neglecting energy-loss processes, the transverse energy is conserved during the channeled ion's motion, and the energy conservation can be formulated as follows:
The equation is also known as the expression of the conservation of transverse energy. The small-angle approximation (replacing the sine of the angle by the angle itself) is feasible, since we consider a good alignment between the ion and the crystallographic axis.
The channelling condition can now be considered the condition for which an ion is channeled if its transverse energy is not sufficient to overcome the height of the potential barrier created by the strings of ordered nuclei. It is therefore useful to define the "critical energy" as that transverse energy under which an ion is channeled, while if it exceeds it, an ion will be de-channeled.
Typical values are a few tens of eV, since the critical distance is similar to the screening radius, i.e. 0.1-0.2 Å. Therefore, all ions with transverse energy lower than the critical energy will be channeled.
In the case of perfect ion-axis alignment, all ions with impact parameter smaller than the critical distance will be de-channeled.
where the quantity in question is the area occupied by each row of atoms having an average spacing d in a material with a density N (expressed as atoms/cm³). Therefore, this is an estimate of the smallest fraction of de-channeled ions that can be obtained from a material perfectly aligned with the ion beam. For a single crystal of silicon oriented along <110>, the calculated value is in good agreement with the experimental values.
Further considerations can be made by considering the thermal vibration motion of the nuclei: for this discussion, see the reference.
The critical angle can be defined as the angle below which an incident ion will be channeled; conversely, if it enters at a larger angle, its transverse energy will allow it to escape from the periodic potential.
Using the Lindhard potential and assuming the amplitude of thermal vibration as the minimum approach distance.
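In the high-energy limit, Lindhard's characteristic angle for axial channelling is commonly quoted (in Gaussian units) as:

$$\psi_1 = \sqrt{\frac{2 Z_1 Z_2 e^{2}}{E\, d}},$$

where E is the kinetic energy of the projectile; at lower energies a modified expression involving the screening radius and the thermal vibration amplitude is used instead.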
Typical critical angle values (at room temperature) are 0.71° for silicon <110>, 0.89° for germanium <100>, and 2.17° for tungsten <100>.
Similar consideration can be made for planar channelling. In this case, the average of the atomic potentials will cause the ions to be confined between charge planes that correspond to a continuous planar potential .
where the parameters are the average number of atoms per unit area in the plane, the spacing between crystallographic planes, and the distance y from the plane. Planar channelling has critical angles that are a factor of 2-4 smaller than the axial analogues and a minimum de-channelled fraction which is greater than for axial channelling, with values around 10-20%, compared with more than 99% of the beam remaining channelled in the axial case. A complete discussion of planar channelling can be found in the references.
General literature
J.W. Mayer and E. Rimini, Ion Beam Handbook for Material Analysis, (1977) Academic Press, New York
L.C. Feldman, J.W. Mayer and S.T.Picraux, Material Analysis by Ion Channelling, (1982) Academic Press, New York
R. Hovden, H. L. Xin, D. A. Muller, Phys. Rev. B 86, 195415 (2012)
G. R. Anstis, D. Q. Cai, and D. J. H. Cockayne, Ultramicroscopy 94, 309 (2003).
D. Van Dyck and J. H. Chen, Solid State Communications 109, 501 (1999).
S. Hillyard and J. Silcox, Ultramicroscopy 58, 6 (1995).
S. J. Pennycook and D. E. Jesson, Physical Review Letters 64, 938 (1990).
M. V. Berry and Ozoriode.Am, Journal of Physics a-Mathematical and General 6, 1451 (1973).
M. V. Berry, Journal of Physics Part C Solid State Physics 4, 697 (1971).
A. Howie, Philosophical Magazine 14, 223 (1966).
P. B. Hirsch, A. Howie, R. B. Nicholson, D. W. Pashley, and M. Whelan, Electron microscopy of thin crystals (Butterworths London, 1965).
J. U. Andersen, Notes on Channeling, http://phys.au.dk/en/publications/lecture-notes/ (2014)
See also
Emission channeling
Electron channeling pattern
References
External links
CERN NA43 Experiment that investigated interactions of high energy particles with crystals
Note and reports on crystal extraction
The future looks bright for particle channelling on CERN Courier
Experimental particle physics | Channelling (physics) | [
"Physics"
] | 2,359 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
5,012,233 | https://en.wikipedia.org/wiki/Nazarov%20cyclization%20reaction | The Nazarov cyclization reaction (often referred to as simply the Nazarov cyclization) is a chemical reaction used in organic chemistry for the synthesis of cyclopentenones. The reaction is typically divided into classical and modern variants, depending on the reagents and substrates employed. It was originally discovered by Ivan Nikolaevich Nazarov (1906–1957) in 1941 while studying the rearrangements of allyl vinyl ketones.
As originally described, the Nazarov cyclization involves the activation of a divinyl ketone using a stoichiometric Lewis acid or protic acid promoter. The key step of the reaction mechanism involves a cationic 4π-electrocyclic ring closure which forms the cyclopentenone product (See Mechanism below). As the reaction has been developed, variants involving substrates other than divinyl ketones and promoters other than Lewis acids have been subsumed under the name Nazarov cyclization provided that they follow a similar mechanistic pathway.
The success of the Nazarov cyclization as a tool in organic synthesis stems from the utility and ubiquity of cyclopentenones as both motifs in natural products (including jasmone, the aflatoxins, and a subclass of prostaglandins) and as useful synthetic intermediates for total synthesis. The reaction has been used in several total syntheses and several reviews have been published.
Mechanism
The mechanism of the classical Nazarov cyclization reaction was first demonstrated experimentally by Charles Shoppee to be an intramolecular electrocyclization and is outlined below. Activation of the ketone by the acid catalyst generates a pentadienyl cation, which undergoes a thermally allowed 4π conrotatory electrocyclization as dictated by the Woodward-Hoffman rules. This generates an oxyallyl cation which undergoes an elimination reaction to lose a β-hydrogen. Subsequent tautomerization of the enolate produces the cyclopentenone product.
As noted above, variants that deviate from this template are known; what designates a Nazarov cyclization in particular is the generation of the pentadienyl cation followed by electrocyclic ring closure to an oxyallyl cation. In order to achieve this transformation, the molecule must be in the s-trans/s-trans conformation, placing the vinyl groups in an appropriate orientation. The propensity of the system to enter this conformation dramatically influences reaction rate, with α-substituted substrates having an increased population of the requisite conformer due to allylic strain. Coordination of an electron donating α-substituent by the catalyst can likewise increase the reaction rate by enforcing this conformation.
Similarly, β-substitution directed inward restricts the s-trans conformation so severely that E-Z isomerization has been shown to occur in advance of cyclization on a wide range of substrates, yielding the trans cyclopentenone regardless of initial configuration. In this way, the Nazarov cyclization is a rare example of a stereoselective pericyclic reaction, whereas most electrocyclizations are stereospecific. The example below uses triethylsilane to trap the oxyallyl cation so that no elimination occurs. (See Interrupted cyclizations below)
Along this same vein, allenyl vinyl ketones of the type studied extensively by Marcus Tius of the University of Hawaii show dramatic rate acceleration due to the removal of β-hydrogens, obviating a large amount of steric strain in the s-cis conformer.
Classical Nazarov cyclizations
Though cyclizations following the general template above had been observed prior to Nazarov's involvement, it was his study of the rearrangements of allyl vinyl ketones that marked the first major examination of this process. Nazarov correctly reasoned that the allylic olefin isomerized in situ to form a divinyl ketone before ring closure to the cyclopentenone product. The reaction shown below involves an alkyne oxymercuration reaction to generate the requisite ketone.
Research involving the reaction was relatively quiet in subsequent years, until in the mid-1980s when several syntheses employing the Nazarov cyclization were published. Shown below are key steps in the syntheses of Trichodiene and Nor-Sterepolide, the latter of which is thought to proceed via an unusual alkyne-allene isomerization that generates the divinyl ketone.
Shortcomings
The classical version of the Nazarov cyclization suffers from several drawbacks which modern variants attempt to circumvent. The first two are not evident from the mechanism alone, but are indicative of the barriers to cyclization; the last three stem from selectivity issues relating to elimination and protonation of the intermediate.
Strong Lewis or protic acids are typically required for the reaction (e.g. TiCl4, BF3, MeSO3H). These promoters are not compatible with sensitive functional groups, limiting the substrate scope.
Despite the mechanistic possibility for catalysis, multiple equivalents of the promoter are often required in order to effect the reaction. This limits the atom economy of the reaction.
The elimination step is not regioselective; if multiple β-hydrogens are available for elimination, various products are often observed as mixtures. This is highly undesirable from an efficiency standpoint as arduous separation is typically required.
Elimination destroys a potential stereocenter, decreasing the potential usefulness of the reaction.
Protonation of the enolate is sometimes not stereoselective, meaning that products can be formed as mixtures of epimers.
Modern variants
The shortcomings noted above limit the usefulness of the Nazarov cyclization reaction in its canonical form. However, modifications to the reaction focused on remedying its issues continue to be an active area of academic research. In particular, the research has focused on a few key areas: rendering the reaction catalytic in the promoter, effecting the reaction with more mild promoters to improve functional group tolerance, directing the regioselectivity of the elimination step, and improving the overall stereoselectivity. These have been successful to varying degrees.
Additionally, modifications focused on altering the progress of the reaction, either by generating the pentadienyl cation in an unorthodox fashion or by having the oxyallyl cation "intercepted" in various ways. Furthermore, enantioselective variants of various kinds have been developed. The sheer volume of literature on the subject prevents a comprehensive examination of this field; key examples are given below.
Silicon-directed cyclization
The earliest efforts to improve the selectivity of the Nazarov cyclization took advantage of the β-silicon effect in order to direct the regioselectivity of the elimination step. This chemistry was developed most extensively by Professor Scott Denmark of the University of Illinois, Urbana-Champaign in the mid-1980s and utilizes stoichiometric amounts of iron trichloride to promote the reaction. With bicyclic products, the cis isomer was selected for to varying degrees.
The silicon-directed Nazarov cyclization reaction was subsequently employed in the synthesis of the natural product Silphinene, shown below. The cyclization takes place before elimination of the benzyl alcohol moiety, so that the resulting stereochemistry of the newly formed ring arises from approach of the silyl alkene anti to the ether.
Polarization
Drawing on the substituent effects compiled over various trials of the reaction, Professor Alison Frontier of the University of Rochester developed a paradigm for "polarized" Nazarov cyclizations in which electron donating and electron withdrawing groups are used to improve the overall selectivity of the reaction. Creation of an effective vinyl nucleophile and vinyl electrophile in the substrate allows catalytic activation with copper triflate and regioselective elimination. In addition, the electron withdrawing group increases the acidity of the α-proton, allowing selective formation of the trans-α-epimer via equilibration.
It is often possible to achieve catalytic activation using a donating or withdrawing group alone, although the efficiency of the reaction (yield, reaction time, etc.) is typically lower.
Alternative cation generation
By extension, any pentadienyl cation regardless of its origin is capable of undergoing a Nazarov cyclization. There have been a large number of examples published where the requisite cation is arrived at by a variety of rearrangements. One such example involves the silver catalyzed cationic ring opening of allylic dichloro cylopropanes. The silver salt facilitates loss of chloride via precipitation of insoluble silver chloride.
In the total synthesis of rocaglamide, epoxidation of a vinyl alkoxyallenyl stannane likewise generates a pentadienyl cation via ring opening of the resultant epoxide.
Interrupted cyclization
Once the cyclization has occurred, an oxyallyl cation is formed. As discussed extensively above, the typical course for this intermediate is elimination followed by enolate tautomerization. However, these two steps can be interrupted by various nucleophiles and electrophiles, respectively. Oxyallyl cation trapping has been developed extensively by Fredrick G. West of the University of Alberta and his review covers the field. The oxyallyl cation can be trapped with heteroatom and carbon nucleophiles and can also undergo cationic cycloadditions with various tethered partners. Shown below is a cascade reaction in which successive cation trapping generates a pentacyclic core in one step with complete diastereoselectivity.
Enolate trapping with various electrophiles is decidedly less common. In one study, the Nazarov cyclization is paired with a Michael reaction using an iridium catalyst to initiate nucleophilic conjugate addition of the enolate to β-nitrostyrene. In this tandem reaction the iridium catalyst is required for both conversions: it acts as the Lewis acid in the Nazarov cyclization and in the next step the nitro group of nitrostyrene first coordinates to iridium in a ligand exchange with the carbonyl ester oxygen atom before the actual Michael addition takes place to the opposite face of the R-group.
Enantioselective variants
The development of an enantioselective Nazarov cyclization is a desirable addition to the repertoire of Nazarov cyclization reactions. To that end, several variations have been developed utilizing chiral auxiliaries and chiral catalysts. Diastereoselective cyclizations are also known, in which extant stereocenters direct the cyclization. Almost all of the attempts are based on the idea of torquoselectivity; selecting one direction for the vinyl groups to "rotate" in turn sets the stereochemistry as shown below.
Silicon-directed Nazarov cyclizations can exhibit induced diastereoselectivity in this way. In the example below, the silyl-group acts to direct the cyclization by preventing the distant alkene from rotating "towards" it via unfavorable steric interaction. In this way the silicon acts as a traceless auxiliary. (The starting material is not enantiopure but the retention of enantiomeric excess suggests that the auxiliary directs the cyclization.)
Tius's allenyl substrates can exhibit axial to tetrahedral chirality transfer if enantiopure allenes are used. The example below generates a chiral diosphenpol in 64% yield and 95% enantiomeric excess.
Tius has additionally developed a camphor-based auxiliary for achiral allenes that was employed in the first asymmetric synthesis of roseophilin. The key step employs an unusual mixture of hexafluoro-2-propanol and trifluoroethanol as solvent.
The first chiral Lewis acid promoted asymmetric Nazarov cyclization was reported by Varinder Aggarwal and utilized copper (II) bisoxazoline ligand complexes with up to 98% ee. The enantiomeric excess was unaffected by use of 50 mol% of the copper complex but the yield was significantly decreased.
Related Reactions
Extensions of the Nazarov cyclization are generally also subsumed under the same name. For example, an α-β, γ-δ unsaturated ketone can undergo a similar cationic conrotatory cyclization that is typically referred to as an iso-Nazarov cyclization reaction. Other such extensions have been given similar names, including homo-Nazarov cyclizations and vinylogous Nazarov cyclizations.
Retro-Nazarov reaction
Because they overstabilize the pentadienyl cation, β-electron donating substituents often severely impede Nazarov cyclization. Building from this, several electrocyclic ring openings of β-alkoxy cyclopentanes have been reported. These are typically referred to as retro-Nazarov cyclization reactions.
Imino-Nazarov reaction
Nitrogen analogues of the Nazarov cyclization reaction (known as imino-Nazarov cyclization reactions) have few instances; there is one example of a generalized imino-Nazarov cyclization reported (shown below), and several iso-imino-Nazarov reactions in the literature. Even these tend to suffer from poor stereoselectivity, poor yields, or narrow scope. The difficulty stems from the relative over-stabilization of the pentadienyl cation by electron donation, impeding cyclization.
See also
Pauson–Khand reaction
Electrocyclization
Cyclopentenone
Merrilactone A
References
Rearrangement reactions
Name reactions | Nazarov cyclization reaction | [
"Chemistry"
] | 2,925 | [
"Name reactions",
"Ring forming reactions",
"Rearrangement reactions",
"Organic reactions"
] |
5,014,146 | https://en.wikipedia.org/wiki/Beta%20barrel | In protein structures, a beta barrel (β barrel) is a beta sheet (β sheet) composed of tandem repeats that twists and coils to form a closed toroidal structure in which the first strand is hydrogen-bonded to the last strand. Beta-strands in many beta-barrels are arranged in an antiparallel fashion. Beta barrel structures are named for their resemblance to the barrels used to contain liquids. Most of them are water-soluble proteins and frequently bind hydrophobic ligands in the barrel center, as in lipocalins. Others span cell membranes and are commonly found in porins. Porin-like barrel structures are encoded by as many as 2–3% of the genes in Gram-negative bacteria. It has been shown that more than 600 proteins with various functions, such as oxidase, dismutase, and amylase, contain the beta barrel structure.
In many cases, the strands contain alternating polar and non-polar (hydrophilic and hydrophobic) amino acids, so that the hydrophobic residues are oriented into the interior of the barrel to form a hydrophobic core and the polar residues are oriented toward the outside of the barrel on the solvent-exposed surface. Porins and other membrane proteins containing beta barrels reverse this pattern, with hydrophobic residues oriented toward the exterior where they contact the surrounding lipids, and hydrophilic residues oriented toward the aqueous interior pore.
All beta-barrels can be classified in terms of two integer parameters: the number of strands in the beta-sheet, n, and the "shear number", S, a measure of the stagger of the strands in the beta-sheet. These two parameters (n and S) are related to the inclination angle of the beta strands relative to the axis of the barrel.
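These two parameters also fix the approximate strand tilt and barrel radius. The sketch below uses the geometric relations commonly quoted for idealised barrels (strand tilt θ with tan θ = S·a/(n·b), and a corresponding expression for the mean radius); the residue spacings a and b and the example (n, S) pair are assumed illustrative values, not data taken from this article.

```python
import math

# Assumed typical spacings for an idealised beta-barrel (illustrative only).
A_RISE = 3.3     # angstroms per residue along a strand
B_SPACING = 4.4  # angstroms between hydrogen-bonded strands

def strand_tilt_deg(n, S, a=A_RISE, b=B_SPACING):
    """Tilt of the strands relative to the barrel axis: tan(theta) = S*a/(n*b)."""
    return math.degrees(math.atan2(S * a, n * b))

def barrel_radius(n, S, a=A_RISE, b=B_SPACING):
    """Mean barrel radius (angstroms) for an idealised barrel."""
    return math.sqrt((S * a) ** 2 + (n * b) ** 2) / (2 * n * math.sin(math.pi / n))

# Hypothetical example: an (n, S) = (8, 8) barrel.
print(round(strand_tilt_deg(8, 8), 1), round(barrel_radius(8, 8), 1))
```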
Types
Up-and-down
Up-and-down barrels are the simplest barrel topology and consist of a series of beta strands, each of which is hydrogen-bonded to the strands immediately before and after it in the primary sequence.
Jelly roll
The jelly roll fold or barrel, also known as the Swiss roll, typically comprises eight beta strands arranged in two four-stranded sheets. Adjacent strands along the sequence alternate between the two sheets, such that they are "wrapped" in three dimensions to form a barrel shape.
Examples
Porins
Sixteen- or eighteen-stranded up-and-down beta barrel structures occur in porins, which function as transporters for ions and small molecules that cannot diffuse across a cellular membrane. Such structures appear in the outer membranes of gram-negative bacteria, chloroplasts, and mitochondria. The central pore of the protein, sometimes known as the eyelet, is lined with charged residues arranged so that the positive and negative charges appear on opposite sides of the pore. A long loop between two beta strands partially occludes the central channel; the exact size and conformation of the loop helps in discriminating between molecules passing through the transporter.
Preprotein translocases
Beta barrels also function within endosymbiont derived organelles such as mitochondria and chloroplasts to transport proteins. Within the mitochondrion two complexes exist with beta barrels serving as the pore forming subunit, Tom40 of the Translocase of the outer membrane, and Sam50 of the Sorting and assembly machinery. The chloroplast also has functionally similar beta barrel containing complexes, the best characterised of which is Toc75 of the TOC complex (Translocon at the outer envelope membrane of chloroplasts).
Lipocalins
Lipocalins are typically eight-stranded up-and-down beta barrel proteins that are secreted into the extracellular environment. A distinctive feature is their ability to bind and transport small hydrophobic molecules in the barrel calyx. Examples of the family include retinol binding proteins (RBPs) and major urinary proteins (Mups). RBP binds and transports retinol (vitamin A), while Mups bind a number of small, organic pheromones, including 2-sec-butyl-4,5-dihydrothiazole (abbreviated as SBT or DHT), 6-hydroxy-6-methyl-3-heptanone (HMH) and 2,3 dihydro-exo-brevicomin (DHB).
Shear number
A piece of paper can be formed into a cylinder by bringing opposite sides together. The two edges come together to form a line. Shear can be created by sliding the two edges parallel to that line. Likewise, a beta barrel can be formed by bringing the edges of a beta sheet together to form a cylinder. If those edges are displaced, shear is created.
A similar definition is found in geology, where shear refers to a displacement within rock parallel to the fracture surface. In physics, the displacement itself has units of length, and dividing it by the separation of the sheared planes gives the dimensionless shear strain. For the shear number in barrels, the displacement is measured in units of amino acid residues.
The determination of shear number requires the assumption that each amino acid in one strand of a beta sheet is adjacent to just one amino acid in the neighboring strand (this assumption may not hold if, for example, a beta bulge is present). To illustrate, S will be calculated for green fluorescent protein. This protein was chosen because the beta barrel contains both parallel and antiparallel strands. To determine which amino acid residues are adjacent in the beta strands, the location of hydrogen bonds is determined.
The inter-strand hydrogen bonds can be summarised in a table. Each column contains the residues in one strand (strand 1 is repeated in the last column). The arrows indicate the hydrogen bonds that were identified in the figures. The relative direction of each strand is indicated by the "+" and "-" at the bottom of the table. Except for strands 1 and 6, all strands are antiparallel. The parallel interaction between strands 1 and 6 accounts for the different appearance of the hydrogen bonding pattern. (Some arrows are missing because not all of the hydrogen bonds expected were identified. Non-standard amino acids are indicated with "?") The side chains that point to the outside of the barrel are in bold.
If no shear were present in this barrel, then residue 12 V, say, in strand 1 should end up in the last strand at the same level as it started at. However, because of shear, 12 V is not at the same level: it is 14 residues higher than it started at, so its shear number, S, is 14.
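To make this bookkeeping concrete, the following minimal sketch simply sums the residue offsets (register shifts) accumulated when walking the hydrogen-bond adjacencies once around the barrel back to strand 1; the individual shift values below are invented placeholders rather than the actual GFP registers, chosen only so that they total 14 as in the example above.

```python
def shear_number(register_shifts):
    """Shear number S: the net residue offset accumulated by following the
    hydrogen-bond register once around the barrel back to the first strand."""
    return sum(register_shifts)

# Eleven strand-to-strand shifts (placeholder values, not real GFP data).
example_shifts = [2, 0, 2, 0, 2, 0, 2, 2, 0, 2, 2]
print(shear_number(example_shifts))  # -> 14
```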
See also
OMPdb (2011)
References
Further reading
External links
Explanation of all-beta topologies: "orthogonal beta-sandwiches" are beta-barrels (as defined in this article); "aligned" beta-sandwiches" correspond to beta-sandwich folds in SCOP classification.
all-beta folds in SCOP database (folds 54 to 100 are water-soluble beta-barrels).
CATH database - folds and homologous superfamilies within the beta-barrel architecture.
General classification and images of protein structures from Jane Richardson lab
Images and examples of transmembrane beta-barrels
Stockholm Bioinformatics Center review of transmembrane proteins
The Lipocalin Website
The OMPdb database for beta-barrel proteins
Protein structure
Protein folds
Protein tandem repeats
Protein domains | Beta barrel | [
"Chemistry",
"Biology"
] | 1,509 | [
"Protein tandem repeats",
"Protein classification",
"Protein domains",
"Structural biology",
"Protein structure"
] |
5,017,330 | https://en.wikipedia.org/wiki/Rushbrooke%20inequality | In statistical mechanics, the Rushbrooke inequality relates the critical exponents of a magnetic system which exhibits a first-order phase transition in the thermodynamic limit for non-zero temperature T.
Since the Helmholtz free energy is extensive, the normalization to free energy per site is given as
f(T, H) = limN→∞ F(T, H)/N.
The magnetization M per site in the thermodynamic limit, depending on the external magnetic field H and temperature T, is given by
M(T, H) = limN→∞ (1/N) Σi ⟨Si⟩,
where Si is the spin at the i-th site, and the magnetic susceptibility and specific heat at constant temperature and field are given by, respectively,
χT(T, H) = (∂M/∂H)T
and
cH = −T (∂²f/∂T²)H.
Additionally, cM denotes the specific heat at constant magnetization M.
Definitions
The critical exponents α, β and γ are defined in terms of the behaviour of the order parameters and response functions near the critical point as follows:
M(t, 0) ≃ (−t)^β for t → 0⁻,
cH(t, 0) ≃ |t|^(−α) for t → 0,
χT(t, 0) ≃ |t|^(−γ) for t → 0,
where
t ≡ (T − Tc)/Tc
measures the temperature relative to the critical point.
Derivation
Using the magnetic analogue of the Maxwell relations for the response functions, the relation
χT (cH − cM) = T [(∂M/∂T)H]²
follows, and with thermodynamic stability requiring that cM ≥ 0, one has
cH ≥ (T/χT) [(∂M/∂T)H]²,
which, under the conditions H = 0, t → 0⁻ and the definition of the critical exponents, gives
(−t)^(−α) ≥ constant · (−t)^γ (−t)^(2(β − 1)).
Since −t → 0⁺, the exponents must satisfy −α ≤ γ + 2β − 2, which gives the Rushbrooke inequality
α + 2β + γ ≥ 2.
Remarkably, in experiment and in exactly solved models, the inequality actually holds as an equality.
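As a quick numerical illustration of that remark, the snippet below evaluates α + 2β + γ for a few standard exponent sets; the 3D Ising values are approximate literature estimates included only for illustration, not as authoritative figures.

```python
# Check alpha + 2*beta + gamma against the Rushbrooke bound of 2.
exponent_sets = {
    "mean field":         (0.0,   0.5,    1.0),
    "2D Ising (exact)":   (0.0,   0.125,  1.75),
    "3D Ising (approx.)": (0.110, 0.3265, 1.237),
}

for name, (alpha, beta, gamma) in exponent_sets.items():
    print(f"{name:20s} alpha + 2*beta + gamma = {alpha + 2 * beta + gamma:.3f}")
# All three combinations come out at (or numerically very close to) 2,
# i.e. the inequality is saturated as an equality.
```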
Critical phenomena
Statistical mechanics | Rushbrooke inequality | [
"Physics",
"Materials_science",
"Mathematics"
] | 239 | [
"Physical phenomena",
"Critical phenomena",
"Condensed matter physics",
"Statistical mechanics",
"Dynamical systems"
] |
27,050,779 | https://en.wikipedia.org/wiki/Heisler%20chart | In thermal engineering, Heisler charts are a graphical analysis tool for the evaluation of heat transfer in transient, one-dimensional conduction. They are a set of two charts per included geometry introduced in 1947 by M. P. Heisler, which were supplemented by a third chart per geometry in 1961 by H. Gröber. Heisler charts allow the evaluation of the central temperature for transient heat conduction through an infinitely long plane wall of thickness 2L, an infinitely long cylinder of radius ro, and a sphere of radius ro. Each aforementioned geometry can be analyzed by three charts which show the midplane temperature, temperature distribution, and heat transfer.
Although Heisler–Gröber charts are a faster and simpler alternative to the exact solutions of these problems, there are some limitations. First, the body must be at uniform temperature initially. Second, the Fourier number of the analyzed object should be greater than 0.2. Additionally, the temperature of the surroundings and the convective heat transfer coefficient must remain constant and uniform. Also, there must be no heat generation from the body itself.
Infinitely long plane wall
These first Heisler–Gröber charts were based upon the first term of the exact Fourier series solution for an infinite plane wall:
(T(x, t) − T∞)/(Ti − T∞) = [4 sin λ / (2λ + sin 2λ)] exp(−λ²αt/L²) cos(λx/L),
where Ti is the initial uniform temperature of the slab, T∞ is the constant environmental temperature imposed at the boundary, x is the location in the plane wall, λ is the first root of λ tan λ = Bi, and α is thermal diffusivity. The position x = 0 represents the center of the slab.
The first chart for the plane wall is plotted using three different variables. Plotted along the vertical axis of the chart is the dimensionless temperature at the midplane, θo = (To − T∞)/(Ti − T∞). Plotted along the horizontal axis is the Fourier number, Fo = αt/L². The curves within the graph are a selection of values for the inverse of the Biot number, where Bi = hL/k. k is the thermal conductivity of the material and h is the heat transfer coefficient.
The second chart is used to determine the variation of temperature within the plane wall at other locations in the x-direction, at the same time, for different Biot numbers. The vertical axis is the ratio of a given temperature to that at the centerline, (T − T∞)/(To − T∞), where the x/L curve indicates the position at which T is taken. The horizontal axis is the value of Bi⁻¹.
The third chart in each set was supplemented by Gröber in 1961, and this particular one shows the dimensionless heat transferred from the wall as a function of a dimensionless time variable. The vertical axis is a plot of Q/Qo, the ratio of actual heat transfer to the amount of total possible heat transfer before T = T∞. On the horizontal axis is the plot of (Bi²)(Fo), a dimensionless time variable.
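For readers who prefer a numerical route to the same information, the sketch below evaluates the one-term approximation encoded by the first two plane-wall charts: the dimensionless midplane temperature and the off-center factor cos(λx/L), given Bi and Fo > 0.2. The bisection root finder and the example numbers are implementation choices made for this sketch, not part of the charts themselves.

```python
import math

def first_eigenvalue(bi):
    """First root of lam * tan(lam) = Bi, which lies in (0, pi/2)."""
    lo, hi = 1e-9, math.pi / 2 - 1e-9
    f = lambda lam: lam * math.tan(lam) - bi
    for _ in range(200):          # plain bisection; ample precision here
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def theta(bi, fo, x_over_l=0.0):
    """One-term dimensionless temperature (T - Tinf)/(Ti - Tinf) at x/L."""
    lam = first_eigenvalue(bi)
    c1 = 4.0 * math.sin(lam) / (2.0 * lam + math.sin(2.0 * lam))
    return c1 * math.exp(-lam * lam * fo) * math.cos(lam * x_over_l)

# One point of the first chart (midplane) and of the second (surface ratio):
print(theta(bi=1.0, fo=0.5))                                        # about 0.77
print(theta(bi=1.0, fo=0.5, x_over_l=1.0) / theta(bi=1.0, fo=0.5))  # cos(lam)
```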
Infinitely long cylinder
For the infinitely long cylinder, the Heisler chart is based on the first term of an exact series solution involving Bessel functions.
Each chart plots similar curves to the previous examples, and on each axis is plotted a similar variable.
Sphere (of radius ro)
The Heisler chart for a sphere is based on the first term in the exact Fourier series solution:
(T(r, t) − T∞)/(Ti − T∞) = [4(sin λ − λ cos λ) / (2λ − sin 2λ)] exp(−λ²αt/ro²) [sin(λr/ro) / (λr/ro)],
where λ is the first root of 1 − λ cot λ = Bi, with Bi = hro/k.
These charts can be used similar to the first two sets and are plots of similar variables.
See also
Convective heat transfer
Heat transfer coefficient
Biot number
Fourier number
Heat conduction
References
Heat transfer
Mechanical engineering | Heisler chart | [
"Physics",
"Chemistry",
"Engineering"
] | 699 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Applied and interdisciplinary physics",
"Thermodynamics",
"Mechanical engineering"
] |
27,058,073 | https://en.wikipedia.org/wiki/Lax%E2%80%93Wendroff%20theorem | In computational mathematics, the Lax–Wendroff theorem, named after Peter Lax and Burton Wendroff, states that if a conservative numerical scheme for a hyperbolic system of conservation laws converges, then it converges towards a weak solution.
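To illustrate what "conservative numerical scheme" means in this statement, the sketch below advances the inviscid Burgers equation u_t + (u²/2)_x = 0 with a Lax–Friedrichs numerical flux written in flux-difference (conservative) form; the grid, time step, and initial condition are arbitrary choices made for the demonstration, not part of the theorem.

```python
import numpy as np

def lax_friedrichs_step(u, dx, dt):
    """One conservative update u_i^{n+1} = u_i^n - dt/dx * (F_{i+1/2} - F_{i-1/2})."""
    f = 0.5 * u**2                            # physical flux f(u) = u^2 / 2
    up, fp = np.roll(u, -1), np.roll(f, -1)   # periodic right neighbours
    flux = 0.5 * (f + fp) - 0.5 * (dx / dt) * (up - u)   # F at i+1/2 interfaces
    return u - dt / dx * (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2 * np.pi * x)
dx = x[1] - x[0]
dt = 0.4 * dx                                 # CFL-limited step for |u| <= 1
for _ in range(100):
    u = lax_friedrichs_step(u, dx, dt)

# The discrete "mass" sum(u) * dx is conserved to round-off; by the
# Lax-Wendroff theorem, if such a scheme converges under refinement it
# converges to a weak solution (e.g. shocks move at the correct speed).
print(np.sum(u) * dx)
```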
See also
Lax–Wendroff method
Godunov's scheme
References
Randall J. LeVeque, Numerical methods for conservation laws, Birkhäuser, 1992
Numerical differential equations
Computational fluid dynamics
Theorems in analysis | Lax–Wendroff theorem | [
"Physics",
"Chemistry",
"Mathematics"
] | 95 | [
"Theorems in mathematical analysis",
"Mathematical theorems",
"Mathematical analysis",
"Computational fluid dynamics",
"Applied mathematics",
"Computational physics",
"Applied mathematics stubs",
"Mathematical problems",
"Fluid dynamics"
] |
24,144,104 | https://en.wikipedia.org/wiki/C20H32O | {{DISPLAYTITLE:C20H32O}}
The molecular formula C20H32O (molar mass: 288.46 g/mol, exact mass: 288.2453 u) may refer to:
Bolenol, also known as ethylnorandrostenol
Desoxymethyltestosterone
Ethylestrenol, an anabolic steroid
Molecular formulas | C20H32O | [
"Physics",
"Chemistry"
] | 85 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,144,116 | https://en.wikipedia.org/wiki/C22H28O3 | {{DISPLAYTITLE:C22H28O3}}
The molecular formula C22H28O3 (molar mass: 340.45 g/mol, exact mass: 340.203845 u) may refer to:
Canrenone, an aldosterone antagonist
Norethisterone acetate
Molecular formulas | C22H28O3 | [
"Physics",
"Chemistry"
] | 69 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,144,235 | https://en.wikipedia.org/wiki/C19H28O5S | {{DISPLAYTITLE:C19H28O5S}}
The molecular formula C19H28O5S (molar mass: 368.488 g/mol) may refer to:
Dehydroepiandrosterone sulfate, or DHEA-S
Testosterone sulfate
Molecular formulas | C19H28O5S | [
"Physics",
"Chemistry"
] | 65 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,144,656 | https://en.wikipedia.org/wiki/C21H30N2O | {{DISPLAYTITLE:C21H30N2O}}
The molecular formula C21H30N2O (molar mass: 326.484 g/mol, exact mass: 326.2358 u) may refer to:
Bunaftine
FT-104
Hydroxystenozole, also known as 17α-methylandrost-4-eno[3,2-c]pyrazol-17β-ol
Molecular formulas | C21H30N2O | [
"Physics",
"Chemistry"
] | 99 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,145,055 | https://en.wikipedia.org/wiki/C21H36O2 | {{DISPLAYTITLE:C21H36O2}}
The molecular formula C21H36O2 may refer to:
Allopregnanediol, or 5α-pregnane-3α,20α-diol, a steroid
Adipostatin A, an alkylresorcinol
Pregnanediol, a steroid
Molecular formulas | C21H36O2 | [
"Physics",
"Chemistry"
] | 79 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,145,111 | https://en.wikipedia.org/wiki/C22H31NO2 | {{DISPLAYTITLE:C22H31NO2}}
The molecular formula C22H31NO2 (molar mass: 341.495 g/mol, exact mass: 341.2355 u) may refer to:
Desfesoterodine
Pregnenolone 16α-carbonitrile
Molecular formulas | C22H31NO2 | [
"Physics",
"Chemistry"
] | 70 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,145,125 | https://en.wikipedia.org/wiki/C22H30O2 | {{DISPLAYTITLE:C22H30O2}}
The molecular formula C22H30O2 (molar mass: 326.48 g/mol, exact mass: 326.2246 u) may refer to:
Anordiol, or anordriol
Promegestone
Molecular formulas | C22H30O2 | [
"Physics",
"Chemistry"
] | 65 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,145,339 | https://en.wikipedia.org/wiki/C21H32O5 | {{DISPLAYTITLE:C21H32O5}}
The molecular formula C21H32O5 may refer to:
5α-Dihydrocortisol, a metabolite of cortisol that is formed by 5α-reductase
Tetrahydrocortisone, a steroid and an inactive metabolite of cortisone | C21H32O5 | [
"Chemistry"
] | 79 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
24,145,421 | https://en.wikipedia.org/wiki/C24H40O4 | {{DISPLAYTITLE:C24H40O4}}
The molecular formula C24H40O4 (molar mass: 392.57 g/mol, exact mass: 392.29266) may refer to:
Chenodeoxycholic acid, a bile acid
Deoxycholic acid, a bile acid
Hyodeoxycholic acid, a bile acid
Ursodiol, a bile acid | C24H40O4 | [
"Chemistry"
] | 90 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
24,145,600 | https://en.wikipedia.org/wiki/C23H34O5 | The molecular formula C23H34O5 (molar mass: 390.51 g/mol, exact mass: 390.2406 u) may refer to:
Digoxigenin (DIG)
Mevastatin, or compactin
Treprostinil
Molecular formulas | C23H34O5 | [
"Physics",
"Chemistry"
] | 59 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,145,929 | https://en.wikipedia.org/wiki/C27H40O4 | The molecular formula C27H40O4 (molar mass: 428.60 g/mol, exact mass: 428.2927 u) may refer to:
AM-938
Hydroxyprogesterone caproate (OHPC)
Testosterone hexahydrobenzylcarbonate
Molecular formulas | C27H40O4 | [
"Physics",
"Chemistry"
] | 69 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,146,023 | https://en.wikipedia.org/wiki/C21H25ClO5 | {{DISPLAYTITLE:C21H25ClO5}}
The molecular formula C21H25ClO5 (molar mass: 392.87 g/mol, exact mass: 392.1391 u) may refer to:
Cloprednol
Chloroprednisone
Molecular formulas | C21H25ClO5 | [
"Physics",
"Chemistry"
] | 68 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,146,087 | https://en.wikipedia.org/wiki/C22H29FO4 | {{DISPLAYTITLE:C22H29FO4}}
The molecular formula C22H29FO4 (molar mass: 376.46 g/mol) may refer to:
Desoximetasone
Doxibetasol
Fluocortolone
Fluorometholone, also known as 6α-methyl-9α-fluoro-11β,17α-dihydroxypregna-1,4-diene-3,20-dione
Molecular formulas | C22H29FO4 | [
"Physics",
"Chemistry"
] | 106 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,146,102 | https://en.wikipedia.org/wiki/C26H32F2O7 | {{DISPLAYTITLE:C26H32F2O7}}
The molecular formula C26H32F2O7 (molar mass: 494.52 g/mol, exact mass: 494.21161) may refer to:
Diflorasone diacetate
Fluocinonide
Molecular formulas | C26H32F2O7 | [
"Physics",
"Chemistry"
] | 72 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,146,112 | https://en.wikipedia.org/wiki/C22H28F2O5 | {{DISPLAYTITLE:C22H28F2O5}}
The molecular formula C22H28F2O5 may refer to:
Diflorasone, a synthetic glucocorticoid corticosteroid
Flumetasone, a corticosteroid for topical use
Molecular formulas | C22H28F2O5 | [
"Physics",
"Chemistry"
] | 68 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,146,126 | https://en.wikipedia.org/wiki/C22H27FO5 | {{DISPLAYTITLE:C22H27FO5}}
The molecular formula C22H27FO5 (molar mass: 390.45 g/mol) may refer to:
Fluocortin, a corticosteroid
Fluprednidene, a synthetic glucocorticoid corticosteroid
Molecular formulas | C22H27FO5 | [
"Physics",
"Chemistry"
] | 75 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,146,353 | https://en.wikipedia.org/wiki/C25H34O6 | {{DISPLAYTITLE:C25H34O6}}
The molecular formula C25H34O6 (molar mass: 430.53 g/mol, exact mass: 430.2355 u) may refer to:
Budesonide (BUD)
Dexbudesonide
Ingenol mebutate
YK-11
Molecular formulas | C25H34O6 | [
"Physics",
"Chemistry"
] | 74 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |