| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
37,281,723 | https://en.wikipedia.org/wiki/Triethoxysilane | Triethoxysilane is an organosilicon compound with the formula HSi(OC₂H₅)₃. It is a colourless liquid used in precious metal-catalysed hydrosilylation reactions. The resulting triethoxysilyl groups are often valued for attachment to silica surfaces. Compared to most compounds with Si–H bonds, triethoxysilane exhibits relatively low reactivity. Like most silyl ethers, triethoxysilane is susceptible to hydrolysis.
As a reducing agent, triethoxysilane can be used, for example, in the reduction of amides, the reduction of carbonyl compounds with cobalt(II) chloride as a catalyst, Cu-catalysed reductive hydroxymethylation of styrenes, and Rh-catalysed hydrodediazoniation.
Applications
Compounds based on triethoxysilane might be used in fluoride varnish.
References
Silyl ethers
Ethoxy compounds | Triethoxysilane | Chemistry | 208 |
4,526 | https://en.wikipedia.org/wiki/Brick | A brick is a type of construction material used to build walls, pavements and other elements in masonry construction. Properly, the term brick denotes a unit primarily composed of clay, but is now also used informally to denote units made of other materials or other chemically cured construction blocks. Bricks can be joined using mortar, adhesives or by interlocking. Bricks are usually produced at brickworks in numerous classes, types, materials, and sizes which vary with region, and are produced in bulk quantities.
Block is a similar term referring to a rectangular building unit composed of clay or concrete, but is usually larger than a brick. Lightweight bricks (also called lightweight blocks) are made from expanded clay aggregate.
Fired bricks are one of the longest-lasting and strongest building materials, sometimes referred to as artificial stone, and have been used since . Air-dried bricks, also known as mudbricks, have a history older than fired bricks, and have an additional ingredient of a mechanical binder such as straw.
Bricks are laid in courses and numerous patterns known as bonds, collectively known as brickwork, and may be laid in various kinds of mortar to hold the bricks together to make a durable structure.
History
Middle East and South Asia
The earliest bricks were dried mudbricks, meaning that they were formed from clay-bearing earth or mud and dried (usually in the sun) until they were strong enough for use. The oldest discovered bricks, originally made from shaped mud and dating before 7500 BC, were found at Tell Aswad, in the upper Tigris region and in southeast Anatolia close to Diyarbakir.
Mudbrick construction was used at Çatalhöyük, from c. 7,400 BC.
Mudbrick structures dating to c. 7,200 BC have been located in Jericho, in the Jordan Valley. These structures were made of some of the first bricks, which measured 400 × 150 × 100 mm.
Fired brick was discovered in Mesopotamia between 5000 and 4500 BC. The standard brick sizes in Mesopotamia followed a general rule: the width of the dried or burned brick would be twice its thickness, and its length would be double its width.
In South Asia, air-dried mudbrick structures were constructed at Mehrgarh between 7000 and 3300 BC, and later in the ancient Indus Valley cities of Mohenjo-daro and Harappa. Ceramic, or fired, brick was used as early as 3000 BC in early Indus Valley cities like Kalibangan.
In the middle of the third millennium BC, there was a rise in monumental baked brick architecture in Indus cities. Examples included the Great Bath at Mohenjo-daro, the fire altars of Kalibangan, and the granary of Harappa. Brick sizes were uniform throughout the Indus Valley region, conforming to a 1:2:4 ratio of thickness, width, and length. As the Indus civilization began its decline at the start of the second millennium BC, Harappans migrated east, spreading their knowledge of brickmaking technology. This led to the rise of cities like Pataliputra, Kausambi, and Ujjain, where there was an enormous demand for kiln-made bricks.
By 604 BC, bricks were the construction materials for architectural wonders such as the Hanging Gardens of Babylon, where glazed fired bricks were put into practice.
China
The earliest fired bricks appeared in Neolithic China around 4400 BC at Chengtoushan, a walled settlement of the Daxi culture. These bricks were made of red clay, fired on all sides to above 600 °C, and used as flooring for houses. By the Qujialing period (3300 BC), fired bricks were being used to pave roads and as building foundations at Chengtoushan.
According to Lukas Nickel, the use of ceramic pieces for protecting and decorating floors and walls at various cultural sites dates back to 3000–2000 BC, and perhaps even earlier, but these elements should rather be qualified as tiles. For a long time builders relied on wood, mud and rammed earth, while fired brick and mudbrick played no structural role in architecture. Proper brick construction, for erecting walls and vaults, finally emerged in the third century BC, when baked bricks of regular shape began to be employed for vaulting underground tombs. Hollow brick tomb chambers rose in popularity as builders adapted to a lack of readily available wood or stone. The oldest extant brick building above ground is possibly the Songyue Pagoda, dated to 523 AD.
By the end of the third century BC in China, both hollow and small bricks were available for use in building walls and ceilings. Fired bricks were first mass-produced during the construction of the tomb of China's first Emperor, Qin Shi Huangdi. The floors of the three pits of the terracotta army were paved with an estimated 230,000 bricks, the majority measuring 28 × 14 × 7 cm, following a 4:2:1 ratio. The use of fired bricks in Chinese city walls first appeared in the Eastern Han dynasty (25–220 AD). Up until the Middle Ages, buildings in Central Asia were typically built with unbaked bricks; only from the ninth century CE onward were buildings constructed entirely of fired bricks.
The carpenter's manual Yingzao Fashi, published in 1103 during the Song dynasty, described the brick-making process and glazing techniques then in use. Using the 17th-century encyclopaedic text Tiangong Kaiwu, the historian Timothy Brook outlined the brick production process of Ming dynasty China.
Europe
Early civilisations around the Mediterranean, including the Ancient Greeks and Romans, adopted the use of fired bricks. By the early first century CE, standardised fired bricks were being heavily produced in Rome. The Roman legions operated mobile kilns, and built large brick structures throughout the Roman Empire, stamping the bricks with the seal of the legion. The Romans used brick for walls, arches, forts, aqueducts, etc. Notable mentions of Roman brick structures are the Herculaneum gate of Pompeii and the baths of Caracalla.
During the Early Middle Ages the use of bricks in construction became popular in Northern Europe, after being introduced there from Northwestern Italy. An independent style of brick architecture, known as brick Gothic (similar to Gothic architecture) flourished in places that lacked indigenous sources of rocks. Examples of this architectural style can be found in modern-day Denmark, Germany, Poland, and Kaliningrad (former East Prussia).
This style evolved into the Brick Renaissance as the stylistic changes associated with the Italian Renaissance spread to northern Europe, leading to the adoption of Renaissance elements into brick building. Identifiable attributes included a low-pitched hipped or flat roof, symmetrical facade, round arch entrances and windows, columns and pilasters, and more.
A clear distinction between the two styles only developed at the transition to Baroque architecture. In Lübeck, for example, Brick Renaissance is clearly recognisable in buildings equipped with terracotta reliefs by the artist Statius von Düren, who was also active at Schwerin (Schwerin Castle) and Wismar (Fürstenhof).
Long-distance bulk transport of bricks and other construction equipment remained prohibitively expensive until the development of modern transportation infrastructure, with the construction of canals, roads, and railways.
Industrial era
Production of bricks increased massively with the onset of the Industrial Revolution and the rise in factory building in England. For reasons of speed and economy, bricks were increasingly preferred as building material to stone, even in areas where the stone was readily available. It was at this time in London that bright red brick was chosen for construction to make the buildings more visible in the heavy fog and to help prevent traffic accidents.
The transition from the traditional method of production known as hand-moulding to a mechanised form of mass-production slowly took place during the first half of the nineteenth century. The first brick-making machine was patented by Richard A. Ver Valen of Haverstraw, New York, in 1852. The Bradley & Craven Ltd 'Stiff-Plastic Brickmaking Machine' was patented in 1853. Bradley & Craven went on to be a dominant manufacturer of brickmaking machinery. Henry Clayton, employed at the Atlas Works in Middlesex, England, in 1855, patented a brick-making machine that was capable of producing up to 25,000 bricks daily with minimal supervision. His mechanical apparatus soon achieved widespread attention after it was adopted for use by the South Eastern Railway Company for brick-making at their factory near Folkestone.
At the end of the 19th century, the Hudson River region of New York State would become the world's largest brick manufacturing region, with 130 brickyards lining the shores of the Hudson River from Mechanicville to Haverstraw and employing 8,000 people. At its peak, about 1 billion bricks were produced a year, with many being sent to New York City for use in its construction industry.
The demand for high office building construction at the turn of the 20th century led to a much greater use of cast and wrought iron, and later, steel and concrete. The use of brick for skyscraper construction severely limited the size of the building – the Monadnock Building, built in 1896 in Chicago, required exceptionally thick walls to maintain the structural integrity of its 17 storeys.
Following pioneering work in the 1950s at the Swiss Federal Institute of Technology and the Building Research Establishment in Watford, UK, the use of improved masonry for the construction of tall structures up to 18 storeys high was made viable. However, the use of brick has largely remained restricted to small to medium-sized buildings, as steel and concrete remain superior materials for high-rise construction.
Methods of manufacture
The four basic types of brick are un-fired bricks, fired bricks, chemically set bricks, and compressed earth blocks. Each type is manufactured differently for different purposes.
Mudbrick
Unfired bricks, also known as mudbrick, are made from a mixture of silt, clay, sand and other earth materials like gravel and stone, combined with tempers and binding agents such as chopped straw, grasses, tree bark, or dung. Since these bricks are made up of natural materials and only require heat from the Sun to bake, mudbricks have a relatively low embodied energy and carbon footprint.
The ingredients are first harvested and combined, with clay content ranging from 30% to 70%. The mixture is broken up with hoes or adzes and stirred with water to form a homogeneous blend. Next, the tempers and binding agents are added in a ratio of roughly one part straw to five parts earth, to reduce weight and to reinforce the brick by helping reduce shrinkage. Additional clay may be added to reduce the need for straw, which lessens the likelihood of insects degrading the organic material of the bricks and subsequently weakening the structure. These ingredients are thoroughly mixed together by hand or by treading and are then left to ferment for about a day.
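As a rough illustration of the proportions just described, the sketch below estimates material quantities for a batch of mudbricks using the one-part-straw-to-five-parts-earth ratio (treated here as a volume ratio); the mould size and the 10% allowance for mixing losses are assumed example values, not figures from the text.

```python
# Rough material estimate for a batch of mudbricks, using the
# "one part straw to five parts earth" ratio described above
# (treated as a volume ratio). Mould size and the 10% allowance
# for mixing losses are assumed values for illustration only.

def mudbrick_batch(num_bricks, mould_m=(0.40, 0.15, 0.10),
                   straw_to_earth=1 / 5, allowance=0.10):
    """Return (earth_m3, straw_m3) needed for num_bricks."""
    length, width, height = mould_m
    brick_volume = length * width * height                 # m^3 per brick
    total_volume = num_bricks * brick_volume * (1 + allowance)
    # Split the total volume into earth and straw according to the ratio.
    earth = total_volume / (1 + straw_to_earth)
    straw = total_volume - earth
    return earth, straw

earth, straw = mudbrick_batch(500)
print(f"500 bricks: ~{earth:.1f} m^3 earth, ~{straw:.1f} m^3 straw")
```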
The mix is then kneaded with water and moulded into rectangular prisms of the desired size. The bricks are lined up and left to dry in the sun for three days on each side. After these six days, the bricks continue drying until required for use. Longer drying times are generally preferred; on average, eight to nine days elapse from the initial stages to use in a structure. Unfired bricks could be made in the spring months and left to dry over the summer for use in the autumn. Mudbricks are commonly employed in arid environments, which allow for adequate air drying.
Fired brick
Fired bricks are baked in a kiln which makes them durable. Modern, fired, clay bricks are formed in one of three processes – soft mud, dry press, or extruded. Depending on the country, either the extruded or soft mud method is the most common, since they are the most economical.
Clay and shale are the raw ingredients in the recipe for a fired brick. They are the product of thousands of years of decomposition and erosion of rocks, such as pegmatite and granite, leading to a material that has properties of being highly chemically stable and inert. Within the clays and shales are the materials of aluminosilicate (pure clay), free silica (quartz), and decomposed rock.
One proposed optimal mix is:
Silica (sand) – 50% to 60% by weight
Alumina (clay) – 20% to 30% by weight
Lime – 2 to 5% by weight
Iron oxide – ≤ 7% by weight
Magnesia – less than 1% by weight
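As a compact restatement of the proposed ranges above, the sketch below encodes them and checks a candidate composition against each range; the example mix is hypothetical and this is only an illustration of the listed proportions, not a manufacturing specification.

```python
# Proposed composition ranges for fired-brick raw material, as listed above
# (percentages by weight). The example mix below is hypothetical.

RANGES = {
    "silica (sand)":  (50, 60),
    "alumina (clay)": (20, 30),
    "lime":           (2, 5),
    "iron oxide":     (0, 7),   # "no more than 7%"
    "magnesia":       (0, 1),   # "less than 1%"
}

def check_mix(mix):
    """Report whether each constituent of `mix` (% by weight) lies in its proposed range."""
    for name, (lo, hi) in RANGES.items():
        value = mix.get(name, 0.0)
        status = "ok" if lo <= value <= hi else "out of range"
        print(f"{name:15s} {value:5.1f}%  ({lo}-{hi}%)  {status}")
    print(f"total: {sum(mix.values()):.1f}% (remainder: other minor constituents)")

check_mix({"silica (sand)": 55, "alumina (clay)": 25,
           "lime": 4, "iron oxide": 5, "magnesia": 0.8})
```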
Shaping methods
Three main methods are used for shaping the raw materials into bricks to be fired:
Moulded bricks – These bricks start with raw clay, preferably in a mix with 25–30% sand to reduce shrinkage. The clay is first ground and mixed with water to the desired consistency. The clay is then pressed into steel moulds with a hydraulic press. The shaped clay is then fired at to achieve strength.
Dry-pressed bricks – The dry-press method is similar to the soft-mud moulded method, but starts with a much thicker clay mix, so it forms more accurate, sharper-edged bricks. The greater force in pressing and the longer firing time make this method more expensive.
Extruded bricks – For extruded bricks the clay is mixed with 10–15% water (stiff extrusion) or 20–25% water (soft extrusion) in a pugmill. This mixture is forced through a die to create a long cable of material of the desired width and depth. This mass is then cut into bricks of the desired length by a wall of wires. Most structural bricks are made by this method as it produces hard, dense bricks, and suitable dies can produce perforations as well. The introduction of such holes reduces the volume of clay needed, and hence the cost. Hollow bricks are lighter and easier to handle, and have different thermal properties from solid bricks. The cut bricks are hardened by drying for 20 to 40 hours at before being fired. The heat for drying is often waste heat from the kiln.
Kilns
In many modern brickworks, bricks are usually fired in a continuously fired tunnel kiln, in which the bricks are fired as they move slowly through the kiln on conveyors, rails, or kiln cars, which achieves a more consistent brick product. The bricks often have lime, ash, and organic matter added, which accelerates the burning process.
The other major kiln type is the Bull's Trench Kiln (BTK), based on a design developed by British engineer W. Bull in the late 19th century.
An oval or circular trench is dug, wide, deep, and in circumference. A tall exhaust chimney is constructed in the centre. Half or more of the trench is filled with "green" (unfired) bricks which are stacked in an open lattice pattern to allow airflow. The lattice is capped with a roofing layer of finished brick.
In operation, new green bricks, along with roofing bricks, are stacked at one end of the brick pile. Historically, a stack of unfired bricks covered for protection from the weather was called a "hack". Cooled finished bricks are removed from the other end for transport to their destinations. In the middle, the brick workers create a firing zone by dropping fuel (coal, wood, oil, debris, etc.) through access holes in the roof above the trench. The constant supply of fuel may be grown on local woodlots.
The advantage of the BTK design is a much greater energy efficiency compared with clamp or scove kilns. Sheet metal or boards are used to route the airflow through the brick lattice so that fresh air flows first through the recently burned bricks, heating the air, then through the active burning zone. The air continues through the green brick zone (pre-heating and drying the bricks), and finally out the chimney, where the rising gases create suction that pulls air through the system. The reuse of heated air yields savings in fuel cost.
As with the rail process, the BTK process is continuous. A half-dozen labourers working around the clock can fire approximately 15,000–25,000 bricks a day. Unlike the rail process, in the BTK process the bricks do not move. Instead, the locations at which the bricks are loaded, fired, and unloaded gradually rotate through the trench.
Influences on colour
The colour of fired clay bricks is influenced by the chemical and mineral content of the raw materials, the firing temperature, and the atmosphere in the kiln. For example, pink bricks are the result of a high iron content, while white or yellow bricks have a higher lime content. Most bricks burn to various red hues; as the temperature is increased the colour moves through dark red, purple, and then to brown or grey at around . The names of bricks may reflect their origin and colour, such as London stock brick and Cambridgeshire White. Brick tinting may be performed to change the colour of bricks to blend in areas of brickwork with the surrounding masonry.
An impervious and ornamental surface may be laid on brick either by salt glazing, in which salt is added during the burning process, or by the use of a slip, which is a glaze material into which the bricks are dipped. Subsequent reheating in the kiln fuses the slip into a glazed surface integral with the brick base.
Chemically set bricks
Chemically set bricks are not fired but may have the curing process accelerated by the application of heat and pressure in an autoclave.
Calcium-silicate bricks
Calcium-silicate bricks are also called sandlime or flintlime bricks, depending on their ingredients. Rather than being made with clay they are made with lime binding the silicate material. The raw materials for calcium-silicate bricks include lime mixed in a proportion of about 1 to 10 with sand, quartz, crushed flint, or crushed siliceous rock together with mineral colourants. The materials are mixed and left until the lime is completely hydrated; the mixture is then pressed into moulds and cured in an autoclave for three to fourteen hours to speed the chemical hardening. The finished bricks are very accurate and uniform, although the sharp arrises need careful handling to avoid damage to brick and bricklayer. The bricks can be made in a variety of colours; white, black, buff, and grey-blues are common, and pastel shades can be achieved. This type of brick is common in Sweden as well as Russia and other post-Soviet countries, especially in houses built or renovated in the 1970s. A version known as fly ash bricks, manufactured using fly ash, lime, and gypsum (the FaL-G process), is common in South Asia. Calcium-silicate bricks are also manufactured in Canada and the United States, and meet the criteria set forth in ASTM C73 – 10 Standard Specification for Calcium Silicate Brick (Sand-Lime Brick).
Concrete bricks
Bricks formed from concrete are usually termed blocks or concrete masonry units, and are typically pale grey. They are made from a dry, small aggregate concrete which is formed in steel moulds by vibration and compaction in either an "egglayer" or static machine. The finished blocks are cured, rather than fired, using low-pressure steam. Concrete bricks and blocks are manufactured in a wide range of shapes, sizes and face treatments – a number of which simulate the appearance of clay bricks.
Concrete bricks are available in many colours and as an engineering brick made with sulfate-resisting Portland cement or equivalent. When made with an adequate amount of cement they are suitable for harsh environments such as wet conditions and retaining walls. They are made to standards BS 6073, EN 771-3 or ASTM C55. Concrete bricks contract or shrink, so they need movement joints every 5 to 6 metres; they are similar to other bricks of comparable density in thermal resistance, sound resistance and fire resistance.
Compressed earth blocks
Compressed earth blocks are made mostly from slightly moistened local soils compressed with a mechanical hydraulic press or manual lever press. A small amount of a cement binder may be added, resulting in a stabilised compressed earth block.
Types
There are thousands of types of bricks that are named for their use, size, forming method, origin, quality, texture, and/or materials.
Categorized by manufacture method:
Extruded – made by being forced through an opening in a steel die, with a very consistent size and shape.
Wire-cut – cut to size after extrusion with a tensioned wire which may leave drag marks
Moulded – shaped in moulds rather than being extruded
Machine-moulded – clay is forced into moulds using pressure
Handmade – clay is forced into moulds by a person
Dry-pressed – similar to soft mud method, but starts with a much thicker clay mix and is compressed with great force.
Categorized by use:
Common or building – A brick not intended to be visible, used for internal structure
Face – A brick used on exterior surfaces to present a clean appearance
Hollow – not solid, the holes are less than 25% of the brick volume
Perforated – holes greater than 25% of the brick volume
Keyed – indentations in at least one face and end to be used with rendering and plastering
Paving – brick intended to be in ground contact as a walkway or roadway
Thin – brick with normal height and length but thin width to be used as a veneer
Specialized use bricks:
Chemically resistant – bricks made with resistance to chemical reactions
Acid brick – acid resistant bricks
Engineering – a type of hard, dense, brick used where strength, low water porosity or acid (flue gas) resistance are needed. Further classified as type A and type B based on their compressive strength
Accrington – a type of engineering brick from England
Fire or refractory – highly heat-resistant bricks
Clinker – a vitrified brick
Ceramic glazed – fire bricks with a decorative glazing
Bricks named for place of origin:
Chicago common brick – a soft brick made near Chicago, Illinois, with a range of colors such as buff yellow, salmon pink, or deep red
Cream City brick – a light yellow brick made in Milwaukee, Wisconsin
Dutch brick – a hard light coloured brick originally from the Netherlands
Fareham red brick – a type of construction brick
London stock brick – type of handmade brick which was used for the majority of building work in London and South East England until the growth in the use of machine-made bricks
Nanak Shahi bricks – a type of decorative brick in India
Roman brick – a long, flat brick typically used by the Romans
Staffordshire blue brick – a type of construction brick from England
Optimal dimensions, characteristics, and strength
For efficient handling and laying, bricks must be small enough and light enough to be picked up by the bricklayer using one hand (leaving the other hand free for the trowel). Bricks are usually laid flat, and as a result, the effective limit on the width of a brick is set by the distance which can conveniently be spanned between the thumb and fingers of one hand, normally about . In most cases, the length of a brick is twice its width plus the width of a mortar joint, about or slightly more. This allows bricks to be laid bonded in a structure which increases stability and strength (for an example, see the illustration of bricks laid in English bond, at the head of this article). The wall is built using alternating courses of stretchers, bricks laid longways, and headers, bricks laid crossways. The headers tie the wall together over its width. In fact, this wall is built in a variation of English bond called English cross bond where the successive layers of stretchers are displaced horizontally from each other by half a brick length. In true English bond, the perpendicular lines of the stretcher courses are in line with each other.
A bigger brick makes for a thicker (and thus more insulating) wall. Historically, this meant that bigger bricks were necessary in colder climates (see for instance the slightly larger size of the Russian brick in table below), while a smaller brick was adequate, and more economical, in warmer regions. A notable illustration of this correlation is the Green Gate in Gdansk; built in 1571 of imported Dutch brick, too small for the colder climate of Gdansk, it was notorious for being a chilly and drafty residence. Nowadays this is no longer an issue, as modern walls typically incorporate specialised insulation materials.
The correct brick for a job can be selected from a choice of colour, surface texture, density, weight, absorption, and pore structure, thermal characteristics, thermal and moisture movement, and fire resistance.
In England, the length and width of the common brick remained fairly constant from 1625, when the size was regulated by statute at 9 x x 3 inches (but see brick tax), but the depth has varied from about or smaller in earlier times to about more recently. In the United Kingdom, the usual size of a modern brick (from 1965) is 215 × 102.5 × 65 mm, which, with a nominal 10 mm mortar joint, forms a unit size of 225 × 112.5 × 75 mm, for a ratio of 6:3:2.
In the United States, modern standard bricks are specified for various uses. The most commonly used is the modular brick, which has actual dimensions of 7⅝ × 3⅝ × 2¼ inches (194 × 92 × 57 mm). With the standard ⅜ inch mortar joint, this gives nominal dimensions of 8 × 4 × 2⅔ inches, which eases the calculation of the number of bricks in a given wall. The 2:1 length-to-width ratio of modular bricks means that when they turn corners, a ½ running bond is formed without needing to cut the brick down or fill the gap with a cut brick; and the 2⅔ inch nominal height means that a soldier course matches the height of three modular running courses, or one standard CMU course.
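To illustrate how the nominal modular dimensions simplify estimating brick quantities, here is a small sketch that counts the bricks in a single-wythe running-bond wall face; the wall dimensions and the 5% waste allowance are assumed example values, not figures from the text.

```python
# Estimate the number of modular bricks in a single-wythe wall face.
# Nominal face dimensions (brick plus mortar joint): 8 in long x 2-2/3 in high,
# so 6.75 bricks cover one square foot of wall face.
# Wall size and waste factor below are example assumptions.

NOMINAL_LENGTH_IN = 8.0
NOMINAL_HEIGHT_IN = 8.0 / 3.0          # 2-2/3 inches (three courses per 8 inches)

def bricks_for_wall(wall_length_ft, wall_height_ft, waste=0.05):
    """Count modular bricks for a wall face, plus a waste allowance."""
    face_area_sq_in = NOMINAL_LENGTH_IN * NOMINAL_HEIGHT_IN       # ~21.3 sq in per brick
    wall_area_sq_in = (wall_length_ft * 12) * (wall_height_ft * 12)
    return int(round(wall_area_sq_in / face_area_sq_in * (1 + waste)))

# A 20 ft x 8 ft wall face: 160 sq ft x 6.75 bricks/sq ft = 1,080 bricks, ~1,134 with waste.
print(bricks_for_wall(20, 8))
```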
Some brickmakers create innovative sizes and shapes for bricks used for plastering (and therefore not visible on the inside of the building) where their inherent mechanical properties are more important than their visual ones. These bricks are usually slightly larger, but not as large as blocks and offer the following advantages:
A slightly larger brick requires less mortar and handling (fewer bricks), which reduces cost
Their ribbed exterior aids plastering
More complex interior cavities allow improved insulation, while maintaining strength.
Blocks have a much greater range of sizes. Standard co-ordinating sizes in length and height (in mm) include 400×200, 450×150, 450×200, 450×225, 450×300, 600×150, 600×200, and 600×225; depths (work size, mm) include 60, 75, 90, 100, 115, 140, 150, 190, 200, 225, and 250. They are usable across this range as they are lighter than clay bricks. The density of solid clay bricks is around 2000 kg/m³: this is reduced by frogging, hollow bricks, and so on, but aerated autoclaved concrete, even as a solid brick, can have densities in the range of 450–850 kg/m³.
Bricks may also be classified as solid (less than 25% perforations by volume, although the brick may be "frogged," having indentations on one of the longer faces), perforated (containing a pattern of small holes through the brick, removing no more than 25% of the volume), cellular (containing a pattern of holes removing more than 20% of the volume, but closed on one face), or hollow (containing a pattern of large holes removing more than 25% of the brick's volume). Blocks may be solid, cellular or hollow.
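One way to read the classification just described is as a simple decision rule on the void percentage and hole pattern. The sketch below encodes the thresholds exactly as stated above; actual standards define these boundaries (and the overlaps between them) more precisely, so this is only an illustration.

```python
# Toy classifier for the brick categories described above, keyed on the
# percentage of volume removed by holes and on the hole pattern.
# Thresholds follow the text; real standards define them more precisely.

def classify_brick(void_percent, through_holes=False,
                   large_holes=False, closed_on_one_face=False):
    if closed_on_one_face and void_percent > 20:
        return "cellular"      # holes removing >20% of volume, closed on one face
    if through_holes and large_holes and void_percent > 25:
        return "hollow"        # pattern of large holes, >25% of the brick's volume
    if through_holes and void_percent <= 25:
        return "perforated"    # pattern of small holes, no more than 25%
    if void_percent < 25:
        return "solid"         # may still be "frogged" (indented on one face)
    return "unclassified"

print(classify_brick(3))                                          # solid (e.g. frogged)
print(classify_brick(15, through_holes=True))                     # perforated
print(classify_brick(30, through_holes=True, large_holes=True))   # hollow
print(classify_brick(25, closed_on_one_face=True))                # cellular
```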
The term "frog" can refer to the indentation or the implement used to make it. Modern brickmakers usually use plastic frogs but in the past they were made of wood.
The compressive strength of bricks produced in the United States ranges from about , varying according to the use to which the brick are to be put. In England clay bricks can have strengths of up to 100 MPa, although a common house brick is likely to show a range of 20–40 MPa.
Uses
Bricks are a versatile building material, used in a wide variety of applications, including:
Structural walls, exterior and interior walls
Bearing and non-bearing soundproof partitions
The fireproofing of structural-steel members in the form of firewalls, party walls, enclosures and fire towers
Foundations for stucco
Chimneys and fireplaces
Porches and terraces
Outdoor steps, brick walks and paved floors
Swimming pools
In the United States, bricks have been used for both buildings and pavement. Examples of brick use in buildings can be seen in colonial era buildings and other notable structures around the country. Bricks have been used in paving roads and sidewalks especially during the late 19th century and early 20th century. The introduction of asphalt and concrete reduced the use of brick for paving, but they are still sometimes installed as a method of traffic calming or as a decorative surface in pedestrian precincts. For example, in the early 1900s, most of the streets in the city of Grand Rapids, Michigan, were paved with bricks. Today, there are only about 20 blocks of brick-paved streets remaining (totalling less than 0.5 percent of all the streets in the city limits). Much like in Grand Rapids, municipalities across the United States began replacing brick streets with inexpensive asphalt concrete by the mid-20th century.
In Northwest Europe, bricks have been used in construction for centuries. Until recently, almost all houses were built almost entirely from bricks. Although many houses are now built using a mixture of concrete blocks and other materials, many houses are skinned with a layer of bricks on the outside for aesthetic appeal.
Bricks in the metallurgy and glass industries are often used for lining furnaces, in particular refractory bricks such as silica, magnesia, chamotte and neutral (chromomagnesite) refractory bricks. This type of brick must have good thermal shock resistance, refractoriness under load, high melting point, and satisfactory porosity. There is a large refractory brick industry, especially in the United Kingdom, Japan, the United States, Belgium and the Netherlands.
Engineering bricks are used where strength, low water porosity or acid (flue gas) resistance are needed.
In the UK a red brick university is one founded in the late 19th or early 20th century. The term is used to refer to such institutions collectively to distinguish them from the older Oxbridge institutions, and refers to the use of bricks, as opposed to stone, in their buildings.
Colombian architect Rogelio Salmona was noted for his extensive use of red bricks in his buildings and for using natural shapes like spirals, radial geometry and curves in his designs.
Limitations
Starting in the 20th century, the use of brickwork declined in some areas due to concerns about earthquakes. Earthquakes such as the San Francisco earthquake of 1906 and the 1933 Long Beach earthquake revealed the weaknesses of unreinforced brick masonry in earthquake-prone areas. During seismic events, the mortar cracks and crumbles, so that the bricks are no longer held together. Brick masonry with steel reinforcement, which helps hold the masonry together during earthquakes, has been used to replace unreinforced bricks in many buildings. Retrofitting older unreinforced masonry structures has been mandated in many jurisdictions. However, similar to steel corrosion in reinforced concrete, rebar rusting will compromise the structural integrity of reinforced brick and ultimately limit the expected lifetime, so there is a trade-off between earthquake safety and longevity to a certain extent.
Accessibility
The United States Access Board does not specify which materials a sidewalk must be made of in order to be ADA compliant, but states that sidewalks must not have surface variances of greater than one inch. Due to the accessibility challenges of bricks, the Federal Highway Administration recommends against the use of bricks as well as cobblestones in its accessibility guide for sidewalks and crosswalks. The Brick Industry Association maintains standards for making brick more accessible for disabled people, with proper and regular maintenance being necessary to keep brick accessible.
Some US jurisdictions, such as San Francisco, have taken steps to remove brick sidewalks from certain areas such as Market Street in order to improve accessibility.
Gallery
See also
References
Further reading
Hudson, Kenneth (1972) Building Materials; chap. 3: Bricks and tiles. London: Longman; pp. 28–42
External links
Brick in 20th-Century Architecture
Brick Industry Association United States
Brick Development Association UK
Think Brick Australia
International Brick Collectors Association
Building materials
Masonry
Soil-based building materials | Brick | Physics,Engineering | 6,676 |
9,345,461 | https://en.wikipedia.org/wiki/Refectory | A refectory (also frater, frater house, fratery) is a dining room, especially in monasteries, boarding schools and academic institutions. One of the places the term is most often used today is in graduate seminaries. The name derives from the Latin reficere "to remake or restore," via Late Latin refectorium, which means "a place one goes to be restored" (cf. "restaurant").
Refectories and monastic culture
Communal meals are the times when all monks of an institution are together. Diet and eating habits differ somewhat by monastic order, and more widely by schedule. The Benedictine rule is illustrative.
The Rule of St Benedict orders two meals. Dinner is provided year-round; supper is also served from late spring to early fall, except for Wednesdays and Fridays. The diet originally consisted of simple fare: two dishes, with fruit as a third course if available. The food was simple, with the meat of mammals forbidden to all but the sick. Moderation in all aspects of diet is the spirit of Benedict's law. Meals are eaten in silence, facilitated sometimes by hand signals. A single monk might read aloud from the Scriptures or writings of the saints during the meals.
Size, structure, and placement
Refectories vary in size and dimension, based primarily on wealth and size of the monastery, as well as when the room was built. They share certain design features. Monks eat at long benches; important officials sit at raised benches at one end of the hall. A lavabo, or large basin for hand-washing, usually stands outside the refectory.
Tradition also fixes other factors. In England, the refectory is generally built on an undercroft (perhaps in an allusion to the upper room where the Last Supper reportedly took place) on the side of the cloister opposite the church. Benedictine models are traditionally laid out on an east–west axis, while Cistercian models lie north–south.
Norman refectories could be as large as long by wide (such as the abbey at Norwich). Even relatively early refectories might have windows, but these became larger and more elaborate in the high medieval period. The refectory at Cluny Abbey was lit through thirty-six large glazed windows. The twelfth-century abbey at Mont Saint-Michel had six windows, five feet wide by twenty feet high.
Eastern Orthodox
In Eastern Orthodox monasteries, the trapeza (Greek: τράπεζα, refectory) is considered a sacred place, and in some cases is even constructed as a full church with an altar and iconostasis. Some services are intended to be performed specifically in the trapeza. There is always at least one icon with a lampada (oil lamp) kept burning in front of it. The service of the Lifting of the Panagia is performed at the end of meals. During Bright Week, this service is replaced with the Lifting of the Artos. In some monasteries, the Ceremony of Forgiveness at the beginning of Great Lent is performed in the trapeza. All food served in the trapeza should be blessed, and for that purpose, holy water is often kept in the kitchen.
Modern usage
As well as continued use of the historic monastic meaning, the word refectory is often used in a modern context to refer to a café or cafeteria that is open to the public—including non-worshipers such as tourists—attached to a cathedral or abbey. This usage is particularly prevalent in Church of England buildings, which use the takings to supplement their income.
Many universities in the UK also call their student cafeteria or dining facilities the refectory. The term is rare at American colleges, although Brown University calls its main dining hall the Sharpe Refectory, and the main dining hall at Rhodes College is known as the Catherine Burrow Refectory. In August 2019, Villanova University chose the name 'The Refectory' for its "sophisticated-yet-casual restaurant service" (open to students and the public), purposefully acknowledging the history of the refectory name, which connotes "a dining room for communal meals at academic institutions and monasteries".
See also
Refectory table
References
Sources
Adams, Henry, Mont Saint-Michel and Chartres. New York: Penguin, 1986.
Fernie, E. C. The Architecture of Norman England. Oxford: Oxford University Press, 2000.
Harvey, Barbara. Living and Dying in England, 1100-1450. Oxford: Clarendon Press, 1995.
Singman, Jeffrey. Daily Life in Medieval Europe. Westport, CT: Greenwood Press, 1999.
Webb, Geoffrey. Architecture in Britain: the Middle Ages. Baltimore: Penguin, 1956.
External links
Refectory in Russian Orthodox Convent, Jerusalem
Church architecture
Restaurants by type
Rooms
Eastern Orthodox liturgy
Sacral architecture | Refectory | Engineering | 985 |
23,754,861 | https://en.wikipedia.org/wiki/Semantic%20computing | Semantic computing is a field of computing that combines elements of semantic analysis, natural language processing, data mining, knowledge graphs, and related fields.
Semantic computing addresses three core problems:
Understanding the (possibly naturally-expressed) intentions (semantics) of users and expressing them in a machine-processable format
Understanding the meanings (semantics) of computational content (of various sorts, including, but not limited to, text, video, audio, process, network, software and hardware) and expressing them in a machine-processable format
Mapping the semantics of users to the semantics of content for the purposes of content retrieval, management, creation, etc.
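As a deliberately minimal illustration of the third problem (mapping user semantics to content semantics for retrieval), the sketch below ranks documents by simple word overlap with a query. Real semantic computing systems would rely on the natural language processing, semantic analysis and knowledge-graph techniques mentioned above rather than bare token overlap; the document collection and names here are made up for the example.

```python
# A deliberately naive illustration of mapping a user's intention to content:
# documents are ranked by word overlap (Jaccard similarity) with the query.
# Real semantic computing replaces this with NLP, knowledge graphs, etc.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

documents = {
    "doc1": "tutorial on building knowledge graphs for search",
    "doc2": "recipe for sourdough bread",
    "doc3": "semantic search with natural language processing",
}

def retrieve(query, docs):
    """Rank documents by similarity to the query (highest first)."""
    q = tokens(query)
    scored = [(name, jaccard(q, tokens(text))) for name, text in docs.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

for name, score in retrieve("semantic search using knowledge graphs", documents):
    print(name, round(score, 2))
```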
The IEEE has held an International Conference on Semantic Computing since 2007. A conference on Knowledge Graphs and Semantic Computing has been held since 2015.
See also
Computational semantics
Semantic audio
Semantic compression
Semantic technology
Semantic network
References
External links
IEEE International Conference on Semantic Computing
IEEE International School on Semantic Computing
International Journal of Semantic Computing
Semantic Computing Research Group
Semantic Link Network
Semantic Web | Semantic computing | Technology | 197 |
328,763 | https://en.wikipedia.org/wiki/Emblem | An emblem is an abstract or representational pictorial image that represents a concept, like a moral truth, or an allegory, or a person, like a monarch or saint.
Emblems vs. symbols
Although the words emblem and symbol are often used interchangeably, an emblem is a pattern that is used to represent an idea or an individual. An emblem develops in concrete, visual terms some abstraction: a deity, a tribe or nation, or a virtue or vice.
An emblem may be worn or otherwise used as an identifying badge or patch. For example, in America, police officers' badges refer to their personal metal emblem whereas their woven emblems on uniforms identify members of a particular unit. A real or metal cockle shell, the emblem of James the Great, sewn onto the hat or clothes, identified a medieval pilgrim to his shrine at Santiago de Compostela. In the Middle Ages, many saints were given emblems, which served to identify them in paintings and other images: St. Catherine of Alexandria had a wheel, or a sword, St. Anthony the Abbot, a pig and a small bell. These are also called attributes, especially when shown carried by or close to the saint in art. Monarchs and other grand persons increasingly adopted personal devices or emblems that were distinct from their family heraldry. The most famous include Louis XIV of France's sun, the salamander of Francis I of France, the boar of Richard III of England and the armillary sphere of Manuel I of Portugal. In the fifteenth and sixteenth century, there was a fashion, started in Italy, for making large medals with a portrait head on the obverse and the emblem on the reverse; these would be given to friends and as diplomatic gifts. Pisanello produced many of the earliest and finest of these.
A symbol, on the other hand, substitutes one thing for another, in a more concrete fashion:
The Christian cross is a symbol of the crucifixion of Jesus; it is an emblem of sacrifice.
The Red Cross is one of three symbols representing the International Red Cross. A red cross on a white background is the emblem of humanitarian spirit.
The crescent shape is a symbol of the moon; it is an emblem of Islam.
The skull and crossbones is a symbol identifying a poison. The skull is an emblem of the transitory nature of human life.
Other terminology
A totem is specifically an animal emblem that expresses the spirit of a clan. Emblems in heraldry are known as charges. The lion passant serves as the emblem of England, the lion rampant as the emblem of Scotland.
An icon consists of an image (originally a religious image) that has become standardized by convention. A logo is an impersonal, secular icon, usually of a corporate entity.
Emblems in history
Since the 15th century, the terms emblem (emblema; from Greek ἔμβλημα, meaning "embossed ornament") and emblematura have belonged to the termini technici of architecture. They denote an iconic painted, drawn, or sculptural representation of a concept affixed to houses and belong—like the inscriptions—to the architectural ornaments (ornamenta). Since the publication of De re aedificatoria (1452) by Leon Battista Alberti (1404–1472), patterned after De architectura by the Roman architect and engineer Vitruvius, emblemata have been related to Egyptian hieroglyphics and considered to be the lost universal language. Emblems therefore belong to the Renaissance knowledge of antiquity, which comprises not only Greek and Roman antiquity but also Egyptian antiquity, as attested by the numerous obelisks erected in 16th- and 17th-century Rome.
Evidence of the use of emblems in pre-Columbian America has also been found, such as those used in Mayan city states, kingdoms, and even empires such as the Aztec or Inca. The use of these in the American context does not differ much from that in other regions of the world, even serving as the equivalent of the coats of arms of their respective territorial entities.
The 1531 publication in Augsburg of the first emblem book, the Emblemata of the Italian jurist Andrea Alciato launched a fascination with emblems that lasted two centuries and touched most of the countries of western Europe. "Emblem" in this sense refers to a didactic or moralizing combination of picture and text intended to draw the reader into a self-reflective examination of their own life. Complicated associations of emblems could transmit information to the culturally-informed viewer, a characteristic of the 16th-century artistic movement called Mannerism.
A popular collection of emblems, which ran to many editions, was presented by Francis Quarles in 1635. Each of the emblems consisted of a paraphrase from a passage of Scripture, expressed in ornate and metaphorical language, followed by passages from the Christian Fathers, and concluding with an epigram of four lines. These were accompanied by an emblem that presented the symbols displayed in the accompanying passage.
Emblems in speech
Emblems are certain gestures which have a specific meaning attached to them. These meanings are usually associated with the culture they are established in. Using emblems creates a way for humans to communicate with one another in a non-verbal way. An individual waving their hand at a friend, for example, would communicate "hello" without having to verbally say anything.
Emblems vs. sign language
Although sign language uses hand gestures to communicate words in a non-verbal way, it should not be confused with emblems. Sign language contains linguistic properties, similar to those used in verbal languages, and is used to communicate entire conversations. Linguistic properties include verbs, nouns, pronouns, adverbs, adjectives, and so on. In contrast with sign language, emblems are a non-linguistic form of communication. Emblems are single gestures which are meant to convey a short non-verbal message to another individual.
Emblems in culture
Emblems are associated with the culture they are established in and are subjective to that culture. For example, the sign made by forming a circle with the thumb and forefinger is used in America to communicate "OK" in a non-verbal way, in Japan to mean "money", and in some southern European countries to mean something sexual. Furthermore, the thumbs up sign in America means "good job", but in some parts of the Middle East the thumbs up sign is highly offensive.
See also
Coat of arms
Crest
Emblem book
Logo
Meme
Mission patch
National emblem
Saint symbology
Seal (emblem)
Symbol
Badge
Icon
References
Further reading
Emblematica Online. University of Illinois at Urbana Champaign Libraries. 1,388 facsimiles of emblem books.
Moseley, Charles, A Century of Emblems: An Introduction to the Renaissance Emblem (Aldershot: Scolar Press, 1989)
Notes
External links
Camerarius, Joachim (1605) Symbolorum & emblematum - digital facsimile of book of emblems, from the website of the Linda Hall Library | Emblem | Mathematics | 1,417 |
192,904 | https://en.wikipedia.org/wiki/Ultimate%20fate%20of%20the%20universe | The ultimate fate of the universe is a topic in physical cosmology, whose theoretical restrictions allow possible scenarios for the evolution and ultimate fate of the universe to be described and evaluated. Based on available observational evidence, deciding the fate and evolution of the universe has become a valid cosmological question, beyond the mostly untestable constraints of mythological or theological beliefs. Several possible futures have been predicted by different scientific hypotheses, including that the universe might exist for either a finite or an infinite duration, along with explanations of the manner and circumstances of its beginning.
Observations made by Edwin Hubble during the 1930s–1950s found that galaxies appeared to be moving away from each other, leading to the currently accepted Big Bang theory. This suggests that the universe began very dense about 13.787 billion years ago, and it has expanded and (on average) become less dense ever since. Confirmation of the Big Bang mostly depends on knowing the rate of expansion, average density of matter, and the physical properties of the mass–energy in the universe.
There is a strong consensus among cosmologists that the shape of the universe is considered "flat" (parallel lines stay parallel) and will continue to expand forever.
Factors that need to be considered in determining the universe's origin and ultimate fate include the average motions of galaxies, the shape and structure of the universe, and the amount of dark matter and dark energy that the universe contains.
Emerging scientific basis
Theory
The theoretical scientific exploration of the ultimate fate of the universe became possible with Albert Einstein's 1915 theory of general relativity. General relativity can be employed to describe the universe on the largest possible scale. There are several possible solutions to the equations of general relativity, and each solution implies a possible ultimate fate of the universe.
Alexander Friedmann proposed several solutions in 1922, as did Georges Lemaître in 1927. In some of these solutions, the universe has been expanding from an initial singularity which was, essentially, the Big Bang.
Observation
In 1929, Edwin Hubble published his conclusion, based on his observations of Cepheid variable stars in distant galaxies, that the universe was expanding. From then on, the beginning of the universe and its possible end have been the subjects of serious scientific investigation.
Big Bang and Steady State theories
In 1927, Georges Lemaître set out a theory that has since come to be called the Big Bang theory of the origin of the universe. In 1948, Fred Hoyle set out his opposing Steady State theory in which the universe continually expanded but remained statistically unchanged as new matter is constantly created. These two theories were active contenders until the 1965 discovery, by Arno Allan Penzias and Robert Woodrow Wilson, of the cosmic microwave background radiation, a fact that is a straightforward prediction of the Big Bang theory, and one that the original Steady State theory could not account for. As a result, the Big Bang theory quickly became the most widely held view of the origin of the universe.
Cosmological constant
Einstein and his contemporaries believed in a static universe. When Einstein found that his general relativity equations could easily be solved in such a way as to allow the universe to be expanding at the present and contracting in the far future, he added to those equations what he called a cosmological constant — essentially a constant energy density, unaffected by any expansion or contraction — whose role was to offset the effect of gravity on the universe as a whole in such a way that the universe would remain static. However, after Hubble announced his conclusion that the universe was expanding, Einstein would write that his cosmological constant was "the greatest blunder of my life."
Density parameter
An important parameter in fate of the universe theory is the density parameter, omega (Ω), defined as the average matter density of the universe divided by a critical value of that density. This selects one of three possible geometries depending on whether Ω is equal to, less than, or greater than 1. These are called, respectively, the flat, open and closed universes. These three adjectives refer to the overall geometry of the universe, and not to the local curving of spacetime caused by smaller clumps of mass (for example, galaxies and stars). If the primary content of the universe is inert matter, as in the dust models popular for much of the 20th century, there is a particular fate corresponding to each geometry. Hence cosmologists aimed to determine the fate of the universe by measuring Ω, or equivalently the rate at which the expansion was decelerating.
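For reference, the standard textbook definitions behind this description (no additional result is implied beyond what is stated above):

```latex
\Omega \equiv \frac{\rho}{\rho_{c}}, \qquad \rho_{c} = \frac{3H^{2}}{8\pi G}
```

where ρ is the average matter density, H is the Hubble parameter and G is the gravitational constant; Ω = 1, Ω < 1 and Ω > 1 correspond to the flat, open and closed geometries respectively.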
Repulsive force
Starting in 1998, observations of supernovas in distant galaxies have been interpreted as consistent with a universe whose expansion is accelerating. Subsequent cosmological theorizing has been designed so as to allow for this possible acceleration, nearly always by invoking dark energy, which in its simplest form is just a positive cosmological constant. In general, dark energy is a catch-all term for any hypothesized field with negative pressure, usually with a density that changes as the universe expands. Some cosmologists are studying whether dark energy which varies in time (due to a portion of it being caused by a scalar field in the early universe) can solve the crisis in cosmology. Upcoming galaxy surveys from the Euclid, Nancy Grace Roman and James Webb space telescopes (and data from next-generation ground-based telescopes) are expected to further develop our understanding of dark energy (specifically whether it is best understood as a constant energy intrinsic to space, as a time varying quantum field or as something else entirely).
Role of the shape of the universe
The current scientific consensus of most cosmologists is that the ultimate fate of the universe depends on its overall shape, how much dark energy it contains and on the equation of state which determines how the dark energy density responds to the expansion of the universe. Recent observations conclude, from 7.5 billion years after the Big Bang, that the expansion rate of the universe has probably been increasing, commensurate with the Open Universe theory. However, measurements made by the Wilkinson Microwave Anisotropy Probe suggest that the universe is either flat or very close to flat.
Closed universe
If Ω > 1, the geometry of space is closed like the surface of a sphere. The sum of the angles of a triangle exceeds 180 degrees and there are no parallel lines; all lines eventually meet. The geometry of the universe is, at least on a very large scale, elliptic.
In a closed universe, gravity eventually stops the expansion of the universe, after which it starts to contract until all matter in the universe collapses to a point, a final singularity termed the "Big Crunch", the opposite of the Big Bang. If, however, the universe contains dark energy, then the resulting repulsive force may be sufficient to cause the expansion of the universe to continue forever—even if Ω > 1. This is the case in the currently accepted Lambda-CDM model, where dark energy is found through observations to account for roughly 68% of the total energy content of the universe. According to the Lambda-CDM model, the universe would need to have an average matter density roughly seventeen times greater than its measured value today in order for the effects of dark energy to be overcome and the universe to eventually collapse. This is in spite of the fact that, according to the Lambda-CDM model, any increase in matter density would result in a closed geometry (Ω > 1).
Open universe
If Ω < 1, the geometry of space is open, i.e., negatively curved like the surface of a saddle. The angles of a triangle sum to less than 180 degrees, and lines that do not meet are never equidistant; they have a point of least distance and otherwise grow apart. The geometry of such a universe is hyperbolic.
Even without dark energy, a negatively curved universe expands forever, with gravity negligibly slowing the rate of expansion. With dark energy, the expansion not only continues but accelerates. The ultimate fate of an open universe with dark energy is either universal heat death or a "Big Rip" where the acceleration caused by dark energy eventually becomes so strong that it completely overwhelms the effects of the gravitational, electromagnetic and strong binding forces. Conversely, a negative cosmological constant, which would correspond to a negative energy density and positive pressure, would cause even an open universe to re-collapse to a big crunch.
Flat universe
If the average density of the universe exactly equals the critical density so that Ω = 1, then the geometry of the universe is flat: as in Euclidean geometry, the sum of the angles of a triangle is 180 degrees and parallel lines continuously maintain the same distance. Measurements from the Wilkinson Microwave Anisotropy Probe have confirmed that the universe is flat to within a 0.4% margin of error.
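For a sense of scale, the critical density can be evaluated numerically from the definition given above; the Hubble-constant value used below (70 km/s/Mpc) is an assumed round figure for illustration, not a measurement quoted by this article.

```python
# Numerical estimate of the critical density rho_c = 3 H0^2 / (8 pi G).
# H0 = 70 km/s/Mpc is an assumed round value for illustration.
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22          # one megaparsec in metres
H0 = 70e3 / MPC_IN_M          # Hubble constant converted to s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)                       # kg/m^3
print(f"critical density ~ {rho_c:.2e} kg/m^3")             # ~9.2e-27 kg/m^3
print(f"~ {rho_c / 1.67e-27:.1f} hydrogen atoms per m^3")   # ~5.5
```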
In the absence of dark energy, a flat universe expands forever but at a continually decelerating rate, with expansion asymptotically approaching zero. With dark energy, the expansion rate of the universe initially slows, due to the effects of gravity, but eventually increases, and the ultimate fate of the universe becomes the same as that of an open universe.
Theories about the end of the universe
The fate of the universe may be determined by its density. The preponderance of evidence to date, based on measurements of the rate of expansion and the mass density, favors a universe that will continue to expand indefinitely, resulting in the "Big Freeze" scenario below. However, observations are not conclusive, and alternative models are still possible.
Big Freeze or Heat Death
The heat death of the universe, also known as the Big Freeze (or Big Chill), is a scenario under which continued expansion results in a universe that asymptotically approaches absolute zero temperature. Under this scenario, the universe eventually reaches a state of maximum entropy in which everything is evenly distributed and there are no energy gradients—which are needed to sustain information processing, one form of which is life. This scenario has gained ground as the most likely fate.
In this scenario, stars are expected to form normally for 10^12 to 10^14 (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. Eventually black holes will dominate the universe, but they will disappear over time as they emit Hawking radiation. Over infinite time, there could be a spontaneous entropy decrease by the Poincaré recurrence theorem, thermal fluctuations, and the fluctuation theorem.
The heat death scenario is compatible with any of the three spatial models, but it requires that the universe reaches an eventual temperature minimum. Without dark energy, it could occur only under a flat or hyperbolic geometry. With a positive cosmological constant, it could also occur in a closed universe.
Big Rip
The current Hubble constant defines a rate of acceleration of the universe not large enough to destroy local structures like galaxies, which are held together by gravity, but large enough to increase the space between them. A steady increase in the Hubble constant to infinity would result in all material objects in the universe, starting with galaxies and eventually (in a finite time) all forms, no matter how small, disintegrating into unbound elementary particles, radiation and beyond. As the energy density, scale factor and expansion rate become infinite, the universe ends as what is effectively a singularity.
In the special case of phantom dark energy, which has supposed negative kinetic energy that would result in a higher rate of acceleration than other cosmological constants predict, a more sudden big rip could occur.
Big Crunch
The Big Crunch hypothesis is a symmetric view of the ultimate fate of the universe. Just as the theorized Big Bang started as a cosmological expansion, this theory assumes that the average density of the universe will be enough to stop its expansion and the universe will begin contracting. The result is unknown; a simple estimation would have all the matter and spacetime in the universe collapse into a dimensionless singularity back into how the universe started with the Big Bang, but at these scales unknown quantum effects need to be considered (see Quantum gravity). Recent evidence suggests that this scenario is unlikely but has not been ruled out, as measurements have been available only over a relatively short period of time and could reverse in the future.
This scenario allows the Big Bang to occur immediately after the Big Crunch of a preceding universe. If this happens repeatedly, it creates a cyclic model, which is also known as an oscillatory universe. The universe could then consist of an infinite sequence of finite universes, with each finite universe ending with a Big Crunch that is also the Big Bang of the next universe. A problem with the cyclic universe is that it does not reconcile with the second law of thermodynamics, as entropy would build up from oscillation to oscillation and cause the eventual heat death of the universe. Current evidence also indicates the universe is not closed. This has caused cosmologists to abandon the oscillating universe model. A somewhat similar idea is embraced by the cyclic model, but this idea evades heat death because of an expansion of the branes that dilutes entropy accumulated in the previous cycle.
Big Bounce
The Big Bounce is a theorized scientific model related to the beginning of the known universe. It derives from the oscillatory universe or cyclic repetition interpretation of the Big Bang where the first cosmological event was the result of the collapse of a previous universe.
According to one version of the Big Bang theory of cosmology, in the beginning the universe was infinitely dense. Such a description seems to be at odds with other more widely accepted theories, especially quantum mechanics and its uncertainty principle. Therefore, quantum mechanics has given rise to an alternative version of the Big Bang theory, specifically that the universe tunneled into existence and had a finite density consistent with quantum mechanics, before evolving in a manner governed by classical physics. Also, if the universe is closed, this theory would predict that once this universe collapses it will spawn another universe in an event similar to the Big Bang after a universal singularity is reached or a repulsive quantum force causes re-expansion.
In simple terms, this theory states that the universe will continuously repeat the cycle of a Big Bang, followed by a Big Crunch.
Cosmic uncertainty
Each possibility described so far is based on a simple form for the dark energy equation of state. However, as the name is meant to imply, little is now known about the physics of dark energy. If the theory of inflation is true, the universe went through an episode dominated by a different form of dark energy in the first moments of the Big Bang, but inflation ended, indicating an equation of state more complex than those assumed for present-day dark energy. It is possible that the dark energy equation of state could change again, resulting in an event that would have consequences which are difficult to predict or parameterize. As the nature of dark energy and dark matter remain enigmatic, even hypothetical, the possibilities surrounding their coming role in the universe are unknown.
Other possible fates of the universe
There are also some possible events, such as the Big Slurp, which would seriously harm the universe, although the universe as a whole would not be completely destroyed as a result.
Big Slurp
This theory posits that the universe currently exists in a false vacuum and that it could become a true vacuum at any moment.
In order to best understand the false vacuum collapse theory, one must first understand the Higgs field which permeates the universe. Much like an electromagnetic field, it varies in strength based upon its potential. A true vacuum exists so long as the universe exists in its lowest energy state, in which case the false vacuum theory is irrelevant. However, if the vacuum is not in its lowest energy state (a false vacuum), it could tunnel into a lower-energy state. This is called vacuum decay. This has the potential to fundamentally alter the universe: in some scenarios, even the various physical constants could have different values, severely affecting the foundations of matter, energy, and spacetime. It is also possible that all structures will be destroyed instantaneously, without any forewarning.
However, only a portion of the universe would be destroyed by the Big Slurp while most of the universe would still be unaffected because galaxies located further than 4,200 megaparsecs (13 billion light-years) away from each other are moving away from each other faster than the speed of light while the Big Slurp itself cannot expand faster than the speed of light. To place this in context, the size of the observable universe is currently about 46 billion light years in all directions from earth. The universe is thought to be that size or larger.
Observational constraints on theories
Choosing among these rival scenarios is done by 'weighing' the universe, for example, measuring the relative contributions of matter, radiation, dark matter, and dark energy to the critical density. More concretely, competing scenarios are evaluated against data on galaxy clustering and distant supernovas, and on the anisotropies in the cosmic microwave background.
See also
Alan Guth
Andrei Linde
Anthropic principle
Arrow of time
Cosmological horizon
Cyclic model
Freeman Dyson
General relativity
John D. Barrow
Kardashev scale
Multiverse
Shape of the universe
Timeline of the far future
Zero-energy universe
References
Further reading
External links
Baez, J., 2004, "The End of the Universe".
Hjalmarsdotter, Linnea, 2005, "Cosmological parameters."
A Brief History of the End of Everything, a BBC Radio 4 series.
Cosmology at Caltech.
Jamal Nazrul Islam (1983): The Ultimate Fate of the Universe. Cambridge University Press, Cambridge, England. . (Digital print version published in 2009).
Physical cosmology | Ultimate fate of the universe | Physics,Astronomy | 3,626 |
7,999,138 | https://en.wikipedia.org/wiki/Golden%E2%80%93Thompson%20inequality | In physics and mathematics, the Golden–Thompson inequality is a trace inequality between exponentials of symmetric and Hermitian matrices proved independently by Golden (1965) and Thompson (1965). It has been developed in the context of statistical mechanics, where it has come to have a particular significance.
Statement
The Golden–Thompson inequality states that for (real) symmetric or (complex) Hermitian matrices A and B, the following trace inequality holds:

tr e^{A+B} ≤ tr(e^A e^B).

This inequality is well defined, since the quantities on either side are real numbers. For the expression on the right hand side of the inequality, this can be seen by rewriting it as tr(e^{A/2} e^B e^{A/2}) using the cyclic property of the trace.

Let ‖·‖_F denote the Frobenius norm; then the Golden–Thompson inequality is equivalently stated as

‖e^{(A+B)/2}‖_F ≤ ‖e^{A/2} e^{B/2}‖_F.
Motivation
The Golden–Thompson inequality can be viewed as a generalization of a stronger statement for real numbers. If a and b are two real numbers, then the exponential of a+b is the product of the exponential of a with the exponential of b:

e^{a+b} = e^a e^b.

If we replace a and b with commuting matrices A and B, then the same identity holds, so the two sides of the Golden–Thompson inequality are equal.

This relationship is not true if A and B do not commute. In fact, it has been proved that if A and B are two Hermitian matrices for which the Golden–Thompson inequality holds with equality, then the two matrices commute. The Golden–Thompson inequality shows that, even though e^{A+B} and e^A e^B are not equal, they are still related by an inequality.
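The inequality is easy to check numerically on random matrices. The following sketch (not part of the original article) assumes NumPy and SciPy are available and verifies the trace inequality for random complex Hermitian matrices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(n: int) -> np.ndarray:
    # Symmetrize a random complex matrix into a Hermitian one.
    x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (x + x.conj().T) / 2

for _ in range(1000):
    a, b = random_hermitian(4), random_hermitian(4)
    lhs = np.trace(expm(a + b)).real          # tr e^{A+B}
    rhs = np.trace(expm(a) @ expm(b)).real    # tr(e^A e^B)
    assert lhs <= rhs + 1e-9, (lhs, rhs)

print("Golden-Thompson inequality held in all 1000 random trials")
```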
Proof
Generalizations
Other norms
In general, if A and B are Hermitian matrices and ‖·‖ is a unitarily invariant norm, then

‖e^{A+B}‖ ≤ ‖e^{A/2} e^B e^{A/2}‖.
The standard Golden–Thompson inequality is a special case of the above inequality, where the norm is the Frobenius norm.
The general case is provable in the same way, since unitarily invariant norms also satisfy the Cauchy-Schwarz inequality.
Indeed, for a slightly more general case, essentially the same proof applies. For each q ≥ 1, let ‖·‖_q denote the Schatten q-norm.
Multiple matrices
The inequality has been generalized to three matrices by Lieb (1973) and furthermore to any arbitrary number of Hermitian matrices by Sutter, Berta and Tomamichel (2017). A naive attempt at generalization does not work: the inequality

tr e^{A+B+C} ≤ tr(e^A e^B e^C)

is false. For three matrices, the correct generalization takes the following form:

tr e^{A+B+C} ≤ tr(e^A T_{e^{-B}}(e^C)),

where the operator T_f is the derivative of the matrix logarithm at f, given by T_f(g) = ∫_0^∞ (f + t)^{-1} g (f + t)^{-1} dt.

Note that, if B and C commute, then T_{e^{-B}}(e^C) = e^{B+C}, and the inequality for three matrices reduces to the original from Golden and Thompson.
The Kostant convexity theorem has been used to generalize the Golden–Thompson inequality to all compact Lie groups.
References
Linear algebra
Matrix theory
Inequalities | Golden–Thompson inequality | Mathematics | 515 |
37,506,792 | https://en.wikipedia.org/wiki/Pfizer%20Award%20in%20Enzyme%20Chemistry | The Pfizer Award in Enzyme Chemistry, formerly known as the Paul-Lewis Award in Enzyme Chemistry was established in 1945. Consisting of a gold medal and honorarium, its purpose is to stimulate fundamental research in enzyme chemistry by scientists not over forty years of age. The award is administered by the Division of Biological Chemistry of the American Chemical Society and sponsored by Pfizer. The award was terminated in 2022.
Recipients
Source:
1946 – David E. Green
1947 – Van R. Potter
1948 – Albert L. Lehninger
1949 – Henry A. Lardy
1950 – Britton Chance
1951 – Arthur Kornberg
1952 – Bernard L. Horecker
1953 – Earl R. Stadtman
1954 – Alton Meister
1955 – Paul D. Boyer
1956 – Merton F. Utter
1957 – G. Robert Greenberg
1958 – Eugene P. Kennedy
1959 – Minor J. Coon
1960 – Arthur Pardee
1961 – Frank M. Huennekens
1962 – Jack L. Strominger
1963 – Charles Gilvarg
1964 – Marshall Nirenberg
1965 – Frederic M. Richards
1966 – Samuel B. Weiss
1967 – P. Roy Vagelos & Salih J. Wakil
1968 – William J. Rutter
1969 – Robert T. Schimke
1970 – Herbert Weissbach
1971 – Jack Preiss
1972 – Ekkehard K. F. Bautz
1973 – Howard M. Temin
1974 – Michael J. Chamberlin
1975 – Malcolm L. Gefter
1976 – Michael S. Brown & Joseph L. Goldstein
1977 – Stephen J. Benkovic
1978 – Paul Schimmel
1979 – Frederik C. Hartman
1980 – Thomas A. Steitz
1981 – Daniel V. Santi
1982 – Richard R. Burgess
1983 – Paul L. Modrich
1984 – Robert T.N. Tjian
1985 – Thomas R. Cech
1986 – JoAnne Stubbe
1987 – Gregory Petsko
1988 – John W. Kozarich
1989 – Kenneth A. Johnson
1990 – James A. Wells
1991 – Ronald Vale
1992 – Carl O. Pabo
1993 – Michael H. Gelb
1994 – Donald Hilvert
1995 – Gerald F. Joyce
1996 – P. Andrew Karplus
1997 – Daniel Herschlag
1998 – Ronald T. Raines
1999 – David W. Christianson
2000 – Eric T. Kool
2001 – Ruma Banerjee
2002 – Karin Musier-Forsyth
2003 – Dorothee Kern
2004 – Wilfred A. van der Donk
2005 – Nicole S. Sampson
2006 – James Berger
2007 – Neil L. Kelleher
2008 – Carsten Krebs
2009 – Virginia Cornish
2010 – Vahe Bandarian
2011 – Sarah O’Connor
2012 – Jin Zhang
2013 – Kate Carroll
2014 – Hening Lin
2015 – Douglas Mitchell
2016 – Michelle C. Chang
2017 – Emily Balskus
2018 – Mohammad Seyedsayamdost
2019 – Kenichi Yokoyama
2020 – Rahul Kohli
2021 – Amie K. Boal
See also
List of biochemistry awards
References
External links
Division of Biological Chemistry, American Chemical Society
Biochemistry awards
Awards established in 1945
American science and technology awards | Pfizer Award in Enzyme Chemistry | Chemistry,Biology | 623 |
2,328,567 | https://en.wikipedia.org/wiki/Heinke%20%28diving%20equipment%20manufacturer%29 | Heinke was a series of companies that made diving equipment in London, run by members of a Heinke family.
Timeline
Family background
Gotthilf Frederick Heinke was born in Meseritz, Prussia in 1786. He arrived in London in 1809 and worked initially as a victualler to build up capital. He married Sarah Smith, who bore him three sons and two daughters. The sons were John William Heinke (born 1816), Charles Edwin Heinke (born 1818), and Gotthilf Henry Heinke (born 1820). John married Louisa Margaret Leathart in 1840.
Ironmongery
In 1818, Gotthilf Frederick Heinke opened an ironmongery shop business in London and, in 1819, he got a workshop at 103 Great Portland Street in London. Gotthilf Frederick opened a second premises at 3 Old Jewry, London in 1839.
Start of making diving helmets
Around 1844, Charles Edwin Heinke made his first diving helmet. Inspired by William F. Saddler, Heinke started using solid brass for diving helmets' breastplates, instead of copper sheet. Heinke's diving helmets had three similarly shaped circular windows. They did not have the outer protective grills as in other helmets; thus they had better visibility for divers, and it was easier to keep the windows clean. Heinke's main competitor was Siebe Gorman who also made diving helmets, and Heinke constantly tried to improve on designs. He introduced an additional exhaust valve on the front side of the breastplate, which is now called the "peppermill" because of the holes in its cover. This exhaust made it possible for the diver to ascend and descend much faster.
In 1845, Charles brought in the "Pearler" helmet, with a square-pattern mould-cast (instead of oval and beaten) copper helmet. He became famous with this style. Their square breastplate made it easier for the diver to bend forwards to look for pearl oysters on the seabed. The idea was later copied by companies such as Siebe after Siebe took over Heinke, and even by Morse Diving in the USA.
1852: William Robert Foster and others began running a firm Foster and Williams supplying diving dresses and air hose at 87 Grange Road, Bermondsey, London.
1858: Gotthilf Frederick Heinke applied for British citizenship, and was granted it.
1863: Some members of the Heinke family (including Frederick William Heinke, son of John William Heinke) started a firm Heinke Brothers, 78-78 Great Portland Street, London, "Submarine Engineers"; that firm lasted until 1867.
1869: Charles Edwin Heinke died after being in ill-health for 2–3 years.
1870: John William Heinke died from congestion of the liver after an 11-month illness. These two deaths disorganized company business.
1871: Frederick William Heinke and one William Griffin Davis AICE formed a new firm Heinke & Davis at 176 Great Portland Street, London. It moved to 2 Brabant Court, Philpot Lane, London. It was bankrupt by January 1879.
1871: Gotthilf Frederick Heinke died.
1871: Gotthilf Henry Heinke became a sleeping partner in the business and took on a partner William Foster to manage the business; they started a new business C.E.Heinke & Co, Submarine Engineers.
1880: Frederick William Heinke was forced to seek work in North America, but died of a fever in 1883 in Tecomabaca, Oaxaca, Mexico.
1884: Gotthilf Henry Heinke retired for ill health, and sold his company to William Foster and Robert Fox (his brother in law) who had also become involved in the business, but continued to live on the upper floor of 79 Great Portland Street till his death in 1899.
20th century
1902: Robert Fox died. Foster and Williams was merged into C.E.Heinke & Co, Submarine Engineers.
1904: The lease on Great Portland Street expired. Production was moved to Foster and Williams's premises.
1905: The company acquired an additional of work area.
1905: All Heinke helmets made until 1905 had the butterfly style wingnuts; after that regular wingnuts were used.
1922: C.E.Heinke & Co, Submarine Engineers became a limited company C.E.Heinke & Co Ltd, Submarine Engineers, making a good living from standard diving equipment.
WWII and after
WWII blitz: Many company records were lost.
1950: After this date, the firm's fortunes declined, as with Siebe Gorman.
Mid to late 1950s: the firm started making "Heinke-Lung" aqualungs, Delta dry suits, Dolphin and Falla wetsuits, Hans Hass diving masks, swimming fins and snorkel tubes.
1958: Heinke donated the Heinke Trophy to the British Sub-Aqua Club (BSAC). This trophy is awarded annually to the BSAC branch judged to have done the most to further the interests of its own members and of the BSAC.
1961: The firm was incorporated into Siebe Gorman. The last Heinke diving helmet went out of production in 1961. A few helmets were given the tag of "Siebe-Heinke", but eventually the name Heinke completely disappeared.
1967-1968: Siebe Gorman stopped using the tradename 'Siebe Heinke'.
Unlike Siebe Gorman, who had only one series of serial numbers for their diving helmets, except for the last productions (which were meant most probably for the Russian Navy), Heinke used many series of serial numbers for them.
Notes
References
External links
Heinke
Early 1950s advertisement for a Heinke-Lung (aqualung made by Heinke)
Early 1950s advertisement with Hans Hass and Lotte Hass
Underwater diving equipment manufacturers
Underwater diving engineering
Defunct companies of the United Kingdom
Manufacturing companies based in London
Underwater diving in the United Kingdom | Heinke (diving equipment manufacturer) | Engineering | 1,249 |
54,485,006 | https://en.wikipedia.org/wiki/Formal%20criteria%20for%20adjoint%20functors | In category theory, a branch of mathematics, the formal criteria for adjoint functors are criteria for the existence of a left or right adjoint of a given functor.
One criterion is the following, which first appeared in Peter J. Freyd's 1964 book Abelian Categories, an Introduction to the Theory of Functors: a functor G : D → C, where D is complete and locally small, has a left adjoint if and only if G preserves all small limits and satisfies the solution set condition, i.e., for every object c of C there is a set of morphisms f_i : c → G(d_i) such that every morphism c → G(d) factors as G(h) ∘ f_i for some index i and some morphism h : d_i → d in D.
Another criterion is: a functor G : D → C has a left adjoint if and only if, for every object c of C, there exists a universal arrow from c to G; equivalently, if and only if the functor Hom_C(c, G(−)) : D → Set is representable for every object c of C.
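As a standard illustration of the first criterion (a textbook example, not taken from this article): the forgetful functor U : Grp → Set from groups to sets preserves all small limits and satisfies the solution set condition, so it has a left adjoint, the free group functor F, characterized by the natural bijection

```latex
\operatorname{Hom}_{\mathbf{Grp}}\bigl(F(S),\, G\bigr) \;\cong\; \operatorname{Hom}_{\mathbf{Set}}\bigl(S,\, U(G)\bigr).
```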
See also
Anafunctor
References
Bibliography
Further reading
External link
Adjoint functors | Formal criteria for adjoint functors | Mathematics | 92 |
4,857,087 | https://en.wikipedia.org/wiki/Aluminosilicate | Aluminosilicate refers to materials containing anionic Si-O-Al linkages. Commonly, the associate cations are sodium (Na+), potassium (K+) and protons (H+). Such materials occur as minerals, coal combustion products and as synthetic materials, often in the form of zeolites. Both synthetic and natural aluminosilicates are of technical significance as structural materials, catalysts, and reagents.
Important representatives
Feldspar is a common tectosilicate aluminosilicate mineral made of potassium, sodium, and calcium cations surrounded by a negatively charged network of silicon, aluminium and oxygen atoms.
Many aluminosilicates are synthesized by reactions of silicates, aluminates, and other compounds. They have the general formula M_x[(AlO2)_x(SiO2)_y], where M+ is usually H+ and Na+. The Si/Al ratio is variable, which provides a means to tune the properties. Many of these materials are porous and exhibit properties of industrial value. Naturally occurring microporous, hydrous aluminosilicate minerals are also referred to as zeolites.
See also
Aluminium silicate
Geopolymer cement
Silicate minerals
Calcium aluminosilicate
Sodium aluminosilicate
Gorilla Glass – a type of aluminosilicate glass
References
Glass compositions
Nesosilicates | Aluminosilicate | Chemistry | 281 |
55,191,186 | https://en.wikipedia.org/wiki/WIL%20Research%20Laboratories | WIL Research Laboratories, LLC (acquired in 2016 and renamed Charles River Laboratories Ashland, LLC) was a contract research organization (CRO), privately held for 40 years, that provided product safety toxicological research, metabolism, bioanalytical, pharmacological, and formulation services to the pharmaceutical, biotechnology, chemical, agrochemical, and food products industries, as well as manufacturing support for clinical trials. WIL Research was well-known internationally in many disciplines, and considered by many industry experts to be the premier laboratory in the world for developmental and reproductive toxicology (DART).
Early history
WIL Research Laboratories was founded in 1976 in Cincinnati, Ohio by G. Bruce Briggs, Ralph S. Hodgdon, and Robert W. Brigham, with Briggs serving as the company's first president. The company was initially a limited mammalian toxicological testing laboratory that conducted short-term studies for several clients in the Cincinnati area. In 1978, Great Lakes Chemical Corporation acquired WIL Research Laboratories. By 1980, WIL Research outgrew its facilities in Cincinnati, subsequently acquired the 75-acre Hess & Clark research facility on the outskirts of Ashland, Ohio, and by 1982 had moved its operations to the new location. The move to Ashland enabled WIL to conduct a larger number of studies as it began to expand its client base.
Sustained organic growth
Dr. Joseph F. Holson was named President and Director of WIL Research Laboratories in 1988. Under his leadership over the next 20 years, WIL Research grew from 31 employees into a dynamic contract research organization employing more than 600 individuals at the Ashland site. This success was attributed to the company's entrepreneurial scientific management, study director-centric business model, internationally recognized scientific prowess (particularly in DART), internally developed innovations (including the industry's first protocol-driven toxicology data management software system), and strong involvement in the Ashland community.
Entrepreneurial scientific management
During Holson's tenure, WIL Research continuously expanded its scientific capabilities, facilities, and staffing levels. During this period, the company grew from a limited mammalian toxicology research laboratory into a robust interdisciplinary CRO offering developmental and reproductive toxicology, neurotoxicology, inhalation toxicology, developmental neurotoxicology, large animal toxicology, juvenile toxicology, safety pharmacology, metabolism, analytical and bioanalytical chemistry, and formulation services to a globally diverse client base. Underpinning the continuous expansion of service capabilities was a steady expansion of the company's facilities from approximately 30,000 square feet to more than 300,000 square feet of dedicated laboratory, vivarium, and support services space.
At the heart of Dr. Holson's vision, though, was a drive to continually deepen the company's talent pool, as the number of employees in Ashland grew from 31 to more than 600. Joseph Holson was well-known as an energetic, outgoing leader with a vision for the company that revolved around the success of his staff and ongoing recruitment efforts. Critical to the success of WIL Research was a continuous investment in staff training, as new biologists typically underwent a 9-12 month training period and all employees regularly completed continuing education not only in their specific areas of expertise but also in the subjects of animal care and welfare, Good Laboratory Practices, and research integrity. Many of the internal training programs developed at WIL Research were highly regarded and requested by clients and industry partners.
Study director-centric business model
A key driver of WIL's steady growth was its study director-centric business model, which viewed each study director as an individual business unit with scientific, project management, and marketing responsibilities. This approach was in contrast to the typical division within CROs between science and marketing. WIL Research emphasized direct scientist-to-scientist interaction as much as possible across the entire scope of each project, which gained the company numerous accolades from its clients. Examples of the types of projects undertaken by WIL Research included studies of drugs for the treatment of herpes, Alzheimer's disease, glaucoma, cancer, and AIDS, numerous pesticides, and replacement chemicals for ozone-depleting chlorofluorocarbons in fire extinguishers.
Global leadership in developmental and reproductive toxicology
Although highly respected in many disciplines, WIL Research was considered by many to be the leading laboratory in the world for developmental and reproductive toxicology (DART). This leadership was driven by Dr. Joseph Holson, an internationally recognized authority in the field. The DART division at WIL Research, led initially by Dr. Holson and subsequently by Mr. Mark D. Nemec and Dr. Donald G. Stump, became known not only for high-quality regulatory guideline studies, but also for innovative, specialized DART research.
Internal innovation
In 1978, as a result of expanding toxicology testing services, the WIL Toxicology Data Management System (WTDMS™) was developed. This protocol-driven software system was the first in the CRO industry and became the prototype for other major toxicology testing laboratories. WTDMS™ was licensed to several other toxicology testing laboratories, and was used continuously by WIL Research Laboratories for nearly forty years prior to its gradual replacement by the Provantis system.
Ashland Community
While WIL Research depended on the broader Ashland area for a steady supply of qualified personnel, it also contributed extensively to Ashland's economic growth, becoming one of the largest employers in the county. During Holson's tenure, the company invested approximately $62 million in facilities renovation and expansion. In a talk given to the local Rotary club, Holson added that WIL Research at that time served approximately 550 clients (domestic and international), most of whom regularly visited Ashland to monitor their studies. In 2006, WIL Research received the Golden Oak award from the mayor of Ashland, an award recognizing "the foresight, diligence and unselfishness of individuals or organizations who contribute to new growth, strengthen the roots or improve the overall community of Ashland."
WIL Research also actively supported Ashland University, with many of its senior scientists serving as adjunct professors in their areas of expertise, especially in the undergraduate toxicology program, which the company helped begin in 1984. Dr. Joseph Holson also served on Ashland University's Science Advisory Board (1990-2008) and Board of Trustees (1993-1998), and gave the initial lecture, entitled "Risk and Regulation," of a year-long lecture series in support of the university's Environmental Studies program in 1995.
Mergers and acquisitions
After nearly two decades of sustained organic growth, Joseph Holson led WIL Research through an initial period of private capital-financed expansion. In 2004, Holson and four other senior executives (Mark D. Nemec, Dr. Christopher P. Chengelis, Dr. Daniel W. Sved, and James M. Rudar) initiated a management buyout (in partnership with private equity firm Behrman Capital) from Great Lakes Chemical Corporation which led to the formation of a holding company (WRH, Inc.). The expansion continued with the merger of Biotechnics, LLC (Hillsborough, NC, led by Dr. George Parker) with WIL Research operations in Ashland, the acquisitions of Notox Beheer BV ('s-Hertogenbosch, Netherlands, led by Jan van der Hoeven, Dr. Wilbert Frieling, and Dr. Ilona Enninga) and QS Pharma LLC (Boothwyn, PA), and the subsequent $500 million sale of WRH, Inc. to American Capital, Ltd. (NASDAQ:ACAS) in 2007. After the sale to ACAS, Dr. Holson served as Vice President and Chief Scientific Officer of the global entity while continuing to serve as President and Director of WIL Research Laboratories in Ashland, Ohio until his retirement in November 2008. Upon Dr. Holson's retirement, Mr. Nemec was appointed President and Chief Operating Officer of the Ashland flagship facility, and Dr. Chengelis was named Vice President and Chief Scientific Officer.
Under the ownership of American Capital, David Spaight was named Chairman and CEO of the global holding company in 2010, which undertook a re-branding and global integration effort. During the ACAS-led period, growth of the company occurred primarily through additional acquisitions, including those of Midwest BioResearch, LLC (Skokie, IL, led by Dr. Michael Schlosser) and Ricerca Bioscience's pharmaceutical services facility in Lyon, France (led by Stéphane Bulle). In addition, a new safety assessment facility in Schaijk, Netherlands (close to the existing Den Bosch site) was opened in 2015 to augment the European operations. These activities combined to increase the total number of employees in the global entity to more than 1300, with total 2015 revenues of $215 million.
In early 2016, Wilmington, MA-based Charles River Laboratories International, Inc. (NYSE:CRL), led by James C. Foster, acquired the global holdings of WIL Research for $585 million in cash. The platform WIL Research Laboratories facility in Ashland, OH was subsequently renamed to Charles River Laboratories Ashland, LLC.
References
External links
Contract research organizations
Life sciences industry | WIL Research Laboratories | Biology | 1,879 |
4,242,881 | https://en.wikipedia.org/wiki/Early-warning%20radar | An early-warning radar is any radar system used primarily for the long-range detection of its targets, i.e., allowing defences to be alerted as early as possible before the intruder reaches its target, giving the air defences the maximum time in which to operate. This contrasts with systems used primarily for tracking or gun laying, which tend to offer shorter ranges but offer much higher accuracy.
EW radars tend to share a number of design features that improve their performance in the role. For instance, EW radar typically operates at lower frequencies, and thus longer wavelengths, than other types. This greatly reduces their interaction with rain and snow in the air, and therefore improves their performance in the long-range role where their coverage area will often include precipitation. This also has the side-effect of lowering their optical resolution, but this is not important in this role. Likewise, EW radars often use much lower pulse repetition frequency to maximize their range, at the cost of signal strength, and offset this with long pulse widths, which increases the signal at the cost of lowering range resolution.
The canonical EW radar is the British Chain Home system, which entered full-time service in 1938. It used a very low pulse repetition of 25 pps and very powerful transmissions (for the era) reaching 1 MW in late-war upgrades. The German Freya and US CXAM (Navy) and SCR-270 (Army) were similar. Post-war designs moved to the microwave range, with ever more powerful models that reached the 50 MW range by the 1960s. Since then, improvements in receiver electronics have greatly reduced the amount of signal needed to produce an accurate image, and in modern examples the transmitted power is much less; the AN/FPS-117, for instance, achieves long-range coverage from a 25 kW transmitter. EW radars are also highly susceptible to radar jamming and often include advanced frequency hopping systems to reduce this problem.
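The trade-offs mentioned above between pulse repetition frequency, range and pulse width follow from two standard radar relations, sketched below in Python. The 25 pps figure is the Chain Home value quoted above; the 20 microsecond pulse width is an assumed illustrative value, not a historical specification.

```python
C = 299_792_458.0  # speed of light in m/s

def max_unambiguous_range_km(prf_hz: float) -> float:
    # An echo must return before the next pulse is transmitted,
    # so R_max = c / (2 * PRF).
    return C / (2.0 * prf_hz) / 1000.0

def range_resolution_m(pulse_width_s: float) -> float:
    # Two echoes merge if the targets are closer than c * tau / 2.
    return C * pulse_width_s / 2.0

print(max_unambiguous_range_km(25.0))  # ~6000 km of unambiguous range at 25 pps
print(range_resolution_m(20e-6))       # ~3000 m resolution for a 20 microsecond pulse
```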
History
The first early-warning radars were the British Chain Home, the German Freya, the US CXAM (Navy) and SCR-270 (Army), and the Soviet Union's RUS-2. By modern standards these were quite short range. This "short" distance is a side effect of radio propagation at the long wavelengths being used at the time, which were generally limited to line-of-sight. Although techniques for long-range propagation were known and widely used for shortwave radio, the ability to process the complex return signal was simply not possible at the time.
Cold War
To counter the threat of Soviet bombers flying over the Arctic, the U.S. and Canada developed the DEW Line. Other examples (Pinetree Line) have since been built with even better performance. An alternative early warning design was the Mid-Canada Line, which provided "line breaking" indication across the middle of Canada, with no provision to identify the target's exact location or direction of travel. Starting in the 1950s, a number of over-the-horizon radars were developed that greatly extended detection ranges, generally by bouncing the signal off the ionosphere.
Modern day
Today the early warning role has been supplanted to a large degree by airborne early warning platforms. By placing the radar on an aircraft, the line-of-sight to the horizon is greatly extended. This allows the radar to use high-frequency signals, offering high resolution, while still offering long range. A major exception to this rule are radars intended to warn of ballistic missile attacks, like BMEWS, as the high-altitude exo-atmospheric trajectory of these weapons allows them to be seen at great ranges even from ground-based radars.
Early systems
Chain Home
Chain Home Low
SCR-270
AN/CPS-1
CXAM radar
Freya radar
1950s through 70s
Pinetree Line
Mid-Canada Line
Distant Early Warning Line
Duga radar
BMEWS
AMES Type 80
AMES Type 84
AMES Type 85
ROTOR
Dnestr radar
Dnepr radar
Daryal radar
Linesman/Mediator
Operational systems
AWACS, the US airborne system of surveillance radar plus command and control functions
Daryal radar, Soviet and Russian early warning radar
Dnestr radar, Soviet and Russian early warning radars
Don-2N radar, Russian missile defence radar in Moscow
Duga radar, Soviet over-the-horizon early-warning radar system
Dunay radar, Soviet missile defence radar
GIRAFFE, Swedish family of early warning radar systems
EL/M-2080 Green Pine, Israeli ground-based missile-defense radar by Elta
EL/M-2090, Israeli ground-based very long range early-warning radar system
Erieye, Swedish airborne system of surveillance radar
Jindalee, Australian over-the-horizon radar network
Long Range Discrimination Radar, the US radar system
North Warning System, a joint United States and Canadian early-warning radar system
NETRA, Indian airborne early warning and control aircraft
PAVE PAWS, the US early warning radar
RAF Fylingdales, British early warning radar
Red Color, Israeli early warning radar system
Sea-based X-band Radar, the US sea-based early-warning radar station
Voronezh radar, Russian early warning radar system
ESR-32A, Egyptian air surveillance and early-warning radar system
ASELSAN ALP-300G
References
Radar
Early warning systems | Early-warning radar | Technology | 1,074 |
46,273,602 | https://en.wikipedia.org/wiki/Prpl%20Foundation | The prpl Foundation is a non-profit open source software Foundation started in 2014 by Imagination Technologies and others to encourage use of the MIPS architecture (and “open to others”), through the promotion of standards and open source software, with a particular focus on equipment for data centers, networking (with a focus on residential gateways), and devices for the Internet of Things.
The Foundation manages projects in specific topic areas via “pWGs” (prpl Working Groups). The organization also collects and disseminates information of interest to its members, including patterns in consumer use of smart devices and security issues. In 2016 the organization released a study, "The prpl Foundation Smart Home Security Report". The group also finds and reports security issues in smart devices.
Members of prpl include: Verizon, Orange, AT&T, SoftAtHome, Vodafone, Broadcom, MaxLinear, and Qualcomm.
References
External links
Open source software supported by prpl Foundation
Computer science organizations
MIPS architecture | Prpl Foundation | Technology | 215 |
4,388,959 | https://en.wikipedia.org/wiki/Hans%20Kornberg | Sir Hans Leo Kornberg, FRS (14 January 1928 – 16 December 2019) was a British-American biochemist. He was Sir William Dunn Professor of Biochemistry in the University of Cambridge from 1975 to 1995, and Master of Christ's College, Cambridge from 1982 to 1995.
Early life and education
Kornberg was born in 1928 in Germany to Jewish parents, Max Kornberg (1889–1943) and Margarete (née Silberbach, 1890-1928), who died three weeks after his birth. In 1939, his father and stepmother Selma (née Nathan; 1886–1943) got him out of Nazi Germany (though they could not follow), first to an uncle in Amsterdam and eventually to the care of an uncle in Yorkshire. A few years later, his father and stepmother were murdered in the Holocaust. Initially he went to a school for German refugees, but later to Queen Elizabeth Grammar School in Wakefield.
On leaving school he became a junior laboratory technician for Hans Adolf Krebs at the University of Sheffield, who encouraged him to study further and apply for a scholarship at the same university. He graduated with a BSc Honours in Chemistry in 1949. His interest moved to biochemistry and he studied in the Faculty of Medicine, receiving a PhD degree in 1953 for his studies on urease in mammalian gastric mucosa.
Career
After receiving a Commonwealth Fund Fellowship and working for two years at Yale University and the Public Health Research Institute in the USA, he returned to the UK, where his mentor, Sir Hans Krebs, had moved to Oxford University and offered him a post there. This partnership produced a paper in Nature concerning their discovery of the glyoxylate cycle, and also a joint book entitled Energy Transformations in Living Matter in 1957.
In 1960, he was appointed to the first Chair in Biochemistry at the University of Leicester, which he held until 1975. Later, he was elected to the Sir William Dunn Chair of Biochemistry at the University of Cambridge. Kornberg was a lecturer at Worcester College between 1958 and 1961, and was also the first person to receive The Biochemical Society's annual Colworth Medal, in 1963.
He received a Fellowship of Christ's College in 1975 and was elected as the 34th Master of Christ's College, Cambridge, from 1982 to 1995. In 1995, he retired to take up a position as a Professor of Biology at Boston University, USA, where he taught biochemistry.
Honours and awards
He was elected to the Fellowship of the Royal Society in 1965, having been awarded the Colworth Medal of The Biochemical Society in 1963. In 1973, he was awarded the Otto Warburg Medal of the German Society for Biochemistry and Molecular Biology. In the 1978 Queen's Birthday Honours List he was knighted for "services to science". He has been awarded 11 honorary doctorates and has been elected into membership of:
The United States National Academy of Sciences
The German Academy of Sciences "Leopoldina"
The Italian National Academy of Sciences "Lincei"
The American Philosophical Society
The American Academy of Arts and Sciences
The American Society of Biological Chemistry and Molecular Biology
The Japanese Biochemical Society
Phi Beta Kappa Society
and Honorary Fellowship of
The Biochemical Society (UK)
The Royal Society of Biology
Brasenose College (Oxford)
Worcester College (Oxford)
Wolfson College (Cambridge)
The Foulkes Foundation (London)
Personal life
While at Oxford, he met and married his first wife, Monica King, in 1956 and had four children: Julia, Rachel, Jonathan and Simon. The children were raised Catholic. Monica died in 1989. In 1991, he married a Jewish woman, Donna Haber. Sir Hans Kornberg died on 16 December 2019.
References
External links
British Humanist Society Distinguished Supporters
Jewish Year Book, 2005, p. 214: List of Jewish Fellows of the Royal Society
Professor Sir Hans Kornberg FRS in Conversation with Sir James Baddiley FRS October 1990
Current research interests (Boston University Biology Department)
Sir Hans Kornberg (Boston University Biology Department)
1928 births
2019 deaths
Sir William Dunn Professors of Biochemistry
Academics of the University of Leicester
Alumni of the University of Sheffield
Boston University faculty
English biochemists
English emigrants to the United States
English humanists
English people of German-Jewish descent
Fellows of the Royal Society
Fellows of the American Academy of Arts and Sciences
Jewish emigrants from Nazi Germany to the United Kingdom
Jewish chemists
Knights Bachelor
Masters of Christ's College, Cambridge
Foreign associates of the National Academy of Sciences
Members of the German National Academy of Sciences Leopoldina
Naturalised citizens of the United Kingdom
People educated at Queen Elizabeth Grammar School, Wakefield
People from Herford
Presidents of the British Science Association
Presidents of the Association for Science Education
Presidents of the International Union of Biochemistry and Molecular Biology | Hans Kornberg | Chemistry | 942 |
34,369,888 | https://en.wikipedia.org/wiki/Tall%20cardinal | In mathematics, a tall cardinal is a large cardinal κ that is θ-tall for all ordinals θ, where a cardinal is called θ-tall if there is an elementary embedding j : V → M with critical point κ such that j(κ) > θ and Mκ ⊆ M.
Tall cardinals are equiconsistent with strong cardinals.
References
Large cardinals | Tall cardinal | Mathematics | 80 |
57,936,744 | https://en.wikipedia.org/wiki/Butroxydim | Butroxydim is a chemical used as a herbicide. It is a group A herbicide used to kill grass weeds in a range of broadacre crops. Structurally related herbicides against grasses are alloxydim, sethoxydim, clethodim, and cycloxydim.
References
Ketones
Ketoxime ethers
Herbicides
Ethoxy compounds | Butroxydim | Chemistry,Biology | 80 |
54,397,496 | https://en.wikipedia.org/wiki/Eyes%20of%20Things | Eyes of Things (EoT) is the name of a project funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement number 643924. The purpose of the project, which is funded under the Smart Cyber-physical systems topic, is to develop a generic hardware-software platform for embedded, efficient (i.e. battery-operated, wearable, mobile), computer vision, including deep learning inference.
On November 29, 2018, the European Space Agency announced that it was testing the suitability of the device for space applications in advance of a flight in a Cubesat.
Motivation
EoT is based on the following tenets:
Future embedded systems will have more intelligence and cognitive functionality. Vision is paramount to such intelligent capacity
Unlike other sensors, vision requires intensive processing. Power consumption must be optimized if vision is to be used in mobile and wearable applications
Cloud processing of edge-captured images is not sustainable. The sheer amount of visual data generated cannot be transferred to the cloud. Bandwidth is not sufficient and cloud servers cannot cope with it.
Partners
VISILAB group at University of Castilla–La Mancha (Coordinator)
Movidius
Awaiba
Thales Security Solutions & Systems
DFKI
Fluxguide
Evercam
nVISO
Awards
2019 Electronic Component and Systems Innovation Award by the European Commission
2018 HiPEAC Tech Transfer Award
2018 EC Innovation Radar - highlighting excellent innovations Award
2018 Internet of Things (IoT) Technology Research Award Pilot by Google
2016 Semifinalist "THE VISION SHOW STARTUP COMPETITION", Global Association for Vision Information, Boston US
See also
Wearable camera
Computer vision
Internet of Things
Embedded systems
Edge computing
References
Computer vision | Eyes of Things | Engineering | 335 |
68,238,187 | https://en.wikipedia.org/wiki/List%20of%20crocodilians | Crocodilia is an order of mostly large, predatory, semiaquatic reptiles, which includes true crocodiles, the alligators, and caimans; as well as the gharial and false gharial. A member of this order is called a crocodilian, or colloquially a crocodile.
The 9 genera and 28 species of Crocodilia are split into 3 families: Alligatoridae, alligators and caimans; Crocodylidae, true crocodiles; and Gavialidae, the gharial and false gharial.
Conventions
Conservation status codes listed follow the International Union for Conservation of Nature (IUCN) Red List of Threatened Species. Range maps are provided wherever possible; if a range map is not available, a description of the crocodilian's range is provided. Ranges are based on the IUCN red list for that species unless otherwise noted. All extinct species or subspecies listed alongside extant species went extinct after 1500 CE, and are indicated by a dagger symbol "†". Population figures are rounded to the nearest hundred.
Classification
The order Crocodilia consists of 28 extant species belonging to 9 genera. This does not include hybrid species or extinct prehistoric species. Modern molecular studies indicate that the 9 genera can be grouped into 3 families.
Family Alligatoridae (Alligators and caimans)
Genus Alligator: two species
Genus Caiman: three species
Genus Melanosuchus: one species
Genus Paleosuchus: two species
Family Crocodylidae (True crocodiles)
Genus Crocodylus: fourteen species
Genus Mecistops: two species
Genus Osteolaemus: two species
Family Gavialidae (Gharial and false gharial)
Genus Gavialis: one species
Genus Tomistoma: one species
Crocodilians
Family Alligatoridae
The extant Alligatoridae can be recognised by the broad snout, in which the fourth tooth of the lower jaw cannot be seen when the mouth is closed.
Family Crocodylidae
The extant Crocodylidae have a variety of snout shapes, but can be recognised because the fourth tooth of the lower jaw is visible when the mouth is closed.
Family Gavialidae
Gavialidae can be recognised by the long narrow snout, with an enlarged boss at the tip.
References
Crocodilians
Crocodilia | List of crocodilians | Biology | 471 |
29,957,222 | https://en.wikipedia.org/wiki/Steganography%20tools | A steganography software tool allows a user to embed hidden data inside a carrier file, such as an image or video, and later extract that data.
The message need not be concealed in the original file at all, in which case the original file is never modified and detection is difficult: if a given section of the file is simply subjected to successive bitwise manipulation to generate the ciphertext, there is no evidence in the original file to show that it is being used to encrypt a file.
Architecture
Carrier
The carrier is the signal, stream, or data file into which the hidden data is hidden by making subtle modifications. Examples include audio files, image files, documents, and executable files. In practice, the carrier should look and work the same as the original unmodified carrier, and should appear benign to anyone inspecting it.
Certain properties can raise suspicion that a file is carrying hidden data:
If the hidden data is large relative to the carrier content, as in an empty document that is a megabyte in size.
The use of obsolete formats or poorly-supported extensions which break commonly used tools.
It is a cryptographic requirement that the carrier (e.g. photo) is original, not a copy of something publicly available (e.g., downloaded). This is because the publicly available source data could be compared against the version with a hidden message embedded.
There is a weaker requirement that the embedded message not change the carrier's statistics (or other metrics) such that the presence of a message is detectable. For instance, if the least-significant bits of the red camera-pixel channel of an image have a Gaussian distribution given a constant colored field, simple image steganography which produces a random distribution of these bits could allow discrimination of stego images from unchanged ones.
The sheer volume of modern (ca. 2014) and inane high-bandwidth media (e.g., youtube.com, bittorrent sources, eBay, Facebook, spam, etc.) provides ample opportunity for covert communication.
Chain
Hidden data may be split among a set of files, producing a carrier chain, which has the property that all the carriers must be available, unmodified, and processed in the correct order in order to retrieve the hidden data. This additional security feature usually is achieved by:
using a different initialization vector for each carrier and storing it inside processed carriers -> CryptedIV_n = Crypt(IV_n, CryptedIV_{n-1}) (sketched in the example after this list)
using a different cryptography algorithm for each carrier and choosing it with a chain-order-dependent equiprobabilistic algorithm
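A minimal sketch of the IV-chaining idea (illustrative only: the XOR-based Crypt function and the all-zero seed below are placeholders standing in for a real cipher and shared secret):

```python
import os

def crypt(iv: bytes, key: bytes) -> bytes:
    # Placeholder "Crypt" primitive; a real tool would use an actual cipher.
    return bytes(a ^ b for a, b in zip(iv, key))

seed = b"\x00" * 16                       # stand-in for a shared secret
ivs = [os.urandom(16) for _ in range(4)]  # one fresh IV per carrier

chained = []
prev = seed
for iv in ivs:
    prev = crypt(iv, prev)   # CryptedIV_n = Crypt(IV_n, CryptedIV_{n-1})
    chained.append(prev)     # stored inside the n-th processed carrier
```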
Robustness and cryptography
Steganography tools aim to ensure robustness against modern forensic methods, such as statistical steganalysis. Such robustness may be achieved by a balanced mix of:
a stream-based cryptography process;
a data whitening process;
an encoding process.
If the data is detected, cryptography also helps to minimize the resulting damage, since the data is not exposed, only the fact that a secret was transmitted. The sender may be forced to decrypt the data once it is discovered, but deniable encryption can be leveraged to make the decrypted data appear benign.
Strong steganography software relies on a multi-layered architecture with a deep, documented obfuscation process.
Carrier engine
The carrier engine is the core of any steganography tool. Different file formats are modified in different ways, in order to covertly insert hidden data inside them. Processing algorithms include:
Injection (suspicious because of the content-unrelated file size increment)
Generation (suspicious because of the traceability of the generated carriers)
Ancillary data and metadata substitution
LSB or adaptive substitution (a minimal LSB sketch follows this list)
Frequency space manipulation
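A minimal sketch of plain LSB substitution, the simplest of the algorithms listed above (assumes NumPy; real steganography tools layer encryption, whitening and adaptive bit selection on top of this):

```python
import numpy as np

def embed_lsb(carrier: np.ndarray, payload: bytes) -> np.ndarray:
    # Write the payload bits into the least significant bit of each sample.
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = carrier.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for carrier")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(carrier.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

carrier = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
stego = embed_lsb(carrier, b"secret")
assert extract_lsb(stego, 6) == b"secret"
```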
See also
Steganography
BPCS-Steganography
Steganographic file system
Steganography detection
Articles
References
External links
Exhaustive directory of steganography software by Dr. Neil Johnson
Steganography
Espionage techniques
Applications of cryptography
Cryptographic software | Steganography tools | Mathematics | 826 |
7,637,128 | https://en.wikipedia.org/wiki/SN%20185 | SN 185 was a transient astronomical event observed in the year AD 185, likely a supernova. The transient occurred in the direction of Alpha Centauri, between the constellations Circinus and Centaurus, centered at RA Dec , in Circinus. This "guest star" was observed by Chinese astronomers in the Book of Later Han (后汉书), and might have been recorded in Roman literature. It remained visible in the night sky for eight months. This is believed to be the first supernova for which records exist.
History
The Book of Later Han gives the following description:
In the 2nd year of the epoch Zhongping [中平], the 10th month, on the day Guihai [癸亥] [December 7, Year 185], a 'guest star' appeared in the middle of the Southern Gate [南門] [an asterism consisting of ε Centauri and α Centauri], The size was half a bamboo mat. It displayed various colors, both pleasing and otherwise. It gradually lessened. In the 6th month of the succeeding year it disappeared.
The gaseous shell RCW 86 is probably the supernova remnant of this event and has a relatively large angular size of roughly 45 arc minutes (larger than the apparent size of the full moon, which varies from 29 to 34 arc minutes). The distance to RCW 86 is estimated to be . Recent X-ray studies show a good match for the expected age.
Infrared observations from NASA's Spitzer Space Telescope and Wide-field Infrared Survey Explorer (WISE) reveal how the supernova occurred and how its shattered remains ultimately spread out to great distances. The findings show that the stellar explosion took place in a hollowed-out cavity, allowing material expelled by the star to travel much faster and farther than it would have otherwise.
Differing modern interpretations of the Chinese records of the guest star have led to quite different suggestions for the astronomical mechanism behind the event, from a core-collapse supernova to a distant, slow-moving comet – with correspondingly wide-ranging estimates of its apparent visual magnitude (−8 to +4). The recent Chandra results suggest that it was most likely a Type Ia supernova (a type with consistent absolute magnitude), and therefore similar to Tycho's Supernova (SN 1572), which had apparent magnitude −4 at a similar distance.
Gallery
See also
List of supernovae
History of supernova observation
List of supernova remnants
List of supernova candidates
References
External links
BBC News – Ancient supernova mystery solved (25 October 2011)
185
Centaurus
Circinus
Supernova remnants
Supernovae
85
Historical supernovae | SN 185 | Chemistry,Astronomy | 543 |
67,820,384 | https://en.wikipedia.org/wiki/Generative%20Design%20in%20Minecraft | GDMC (short for Generative Design in Minecraft) is a programming competition to create procedurally generated settlements in Minecraft. The competition is organized by academics from New York University, the University of Hertfordshire and the Queen Mary University of London.
Organisers
Michael Cerny Green (New York University)
Christoph Salge (University of Hertfordshire)
Rodrigo Canaan (New York University)
Christian Guckelsberger (Queen Mary University of London)
Julian Togelius (New York University)
References
External links
Official Wiki
Programming contests
Recurring events established in 2018 | Generative Design in Minecraft | Technology | 114 |
13,718,304 | https://en.wikipedia.org/wiki/Terminal%20countdown%20demonstration%20test | A terminal countdown demonstration test (TCDT) is a simulation of the final hours of a launch countdown and serves as a practice exercise in which both the launch team and flight crew rehearse launch day timelines and procedures. In the specific case of a TCDT for the Space Shuttle, the test culminated in a simulated ignition and RSLS Abort (automated shutdown of the orbiter's main engines). Following the simulated abort, the flight crew was briefed on emergency egress procedures and use of the fixed service structure slidewire system. On some earlier shuttle missions, and Apollo missions, the test would conclude with the flight crew evacuating the launch pad by use of these emergency systems, but this is no longer part of the test.
Unmanned carrier rocket launches also undergo TCDTs, when countdown procedures are followed. These vary for specific rockets, for example solid-fuelled rockets would not simulate an engine shutdown, as it is impossible to shut down a solid rocket after it has been lit.
TCDTs typically are carried out a few days before launch.
See also
Space Shuttle program
Ares (rocket)
References
Ares (rocket family)
Space Shuttle program
Spaceflight
Time | Terminal countdown demonstration test | Physics,Astronomy,Mathematics | 243 |
8,003,764 | https://en.wikipedia.org/wiki/Bra%20size | Bra size (also known as brassiere measurement or bust size) indicates the size characteristics of a bra. While there are a number of bra sizing systems in use around the world, bra sizes usually consist of a number, indicating the size of the band around the woman's torso, and one or more letters that indicate the breast cup size. Bra cup sizes were invented in 1932 while band sizes became popular in the 1940s. For convenience, because of the impracticality of determining the size dimensions of each breast, the volume of the bra cup, or cup size, is based on the difference between band length and over-the-bust measurement.
Manufacturers try to design and manufacture bras that correctly fit the majority of women, while individual women try to identify correctly fitting bras among different styles and sizing systems.
The shape, size, position, symmetry, spacing, firmness, and sag of individual women's breasts vary considerably. Manufacturers' bra size labelling systems vary from country to country because no international standards exist. Even within a country, one study found that the bra size label was consistently different from the measured size. As a result of all these factors, about 25% of women have a difficult time finding a properly fitted bra, and some women choose to buy custom-made bras due to the unique shape of their breasts.
Measurement method origins
On 21 November 1911, Parisienne Madeleine Gabeau received a United States patent for a brassiere with soft cups and a metal band that supported and separated the breasts. To avoid the prevailing fashion that created a single "monobosom", her design provided: "...that the edges of the material d may be carried close along the inner and under contours of the breasts, so as to preserve their form, I employ an outlining band of metal b which is bent to conform to the lower curves of the breast."
Cup design origins
The term "cup" was not used to describe bras until 1916 when two patents were filed.
In October 1932, S.H. Camp and Company was the first to use letters of the alphabet (A, B, C and D) to indicate cup size, although the letters represented how pendulous the breasts were and not their volume. Camp's advertising in the February 1933 issue of Corset and Underwear Review featured letter-labeled profiles of breasts. Cup sizes A to D were not intended to be used for larger-breasted women.
In 1935, Warner's introduced its Alphabet Bra with cup sizes from size A to size D. Their bras incorporated breast volume into its sizing, and continues to be the system in use today. Before long, these cup sizes got nicknames: egg cup, tea cup, coffee cup and challenge cup, respectively. Two other companies, Model and Fay-Miss (renamed in 1935 as the Bali Brassiere Company), followed, offering A, B, C and D cup sizes in the late 1930s. Catalogue companies continued to use the designations Small, Medium and Large through the 1940s. Britain did not adopt the American cups in 1933, and resisted using cup sizes for its products until 1948. The Sears Company finally applied cup sizes to bras in its catalogue in the 1950s.
However, though various manufacturers used the same descriptions of bra sizes (e.g., A to D, small large, etc.), there was no standardisation of what these descriptions actually measured, so that each company had its own standards.
Band measurement origins
Multiple hook and eye closures were introduced in the 1930s that enabled adjustment of bands. Prior to the widespread use of bras, the undergarment of choice for Western women was a corset. To help women meet the perceived ideal female body shape, corset and girdle manufacturers used a calculation called hip spring, the difference between waist and hip measurement (usually ).
The band measurement system was created by U.S. bra manufacturers just after World War II.
Other innovations
The underwire was first added to a strapless bra in 1937 by André, a custom-bra firm. Patents for underwire-type devices in bras were issued in 1931 and 1932, but were not widely adopted by manufacturers until after World War II when metal shortages eased.
In the 1930s, Dunlop chemists were able to reliably transform rubber latex into elastic thread. After 1940, "whirlpool", or concentric stitching, was used to shape the cup structure of some designs. The synthetic fibres were quickly adopted by the industry because of their easy-care properties. Since a brassiere must be laundered frequently, easy-care fabric was in great demand.
Consumer fitting
For best results, the breasts should be measured twice: once when standing upright, once bending over at the waist with the breasts hanging down. If the difference between these two measurements is more than 10 cm, then the average is chosen for calculating the cup size. A number of reports, surveys and studies in different countries have found that between 80% and 85% of women wear incorrectly fitted bras.
In November 2005, Oprah Winfrey produced a show devoted to bras and bra sizes, during which she talked about research that eight out of ten women wear the wrong size bra.
Larger breasts and bra fit
Studies have revealed that the most common mistake made by women when selecting a bra was to choose too large a back band and too small a cup, for example, 38C instead of 34E, or 34B instead of 30D.
The heavier a person's build, the more difficult it is to obtain accurate measurements, as measuring tape sinks into the flesh more easily.
In a study conducted in the United Kingdom of 103 women seeking mammoplasty, researchers found a strong link between obesity and inaccurate back measurement. They concluded that "obesity, breast hypertrophy, fashion and bra-fitting practices combine to make those women who most need supportive bras the least likely to get accurately fitted bras."
One issue that complicates finding a correctly fitting bra is that band and cup sizes are not standardized, but vary considerably from one manufacturer to another, resulting in sizes that only provide an approximate fit. Women cannot rely on labeled bra sizes to identify a bra that fits properly. Scientific studies show that the current system of bra sizing may be inaccurate.
Manufacturers cut their bras differently, so, for example, two 34B bras from two companies may not fit the same person. Customers should pay attention to which sizing system is used by the manufacturer. The main difference is in how cup sizes increase, by 2 cm or 1 inch (= 2.54 cm, see below). Some French manufacturers also increase cup sizes by 3 cm. Unlike dress sizes, manufacturers do not agree on a single standard.
British bras currently range from A to LL cup size (with Rigby&Peller recently introducing bras by Elila which go up to US-N-Cup), while most Americans can find bras with cup sizes ranging from A to G. Some brands (Goddess, Elila) go as high as N, a size roughly equal to a British JJ-Cup. In continental Europe, Milena Lingerie from Poland produces up to cup R.
Larger sizes are usually harder to find in retail outlets. As the cup size increases, the labeled cup size of different manufacturers' bras tend to vary more widely in actual volume. One study found that the label size was consistently different from the measured size.
Even medical studies have attested to the difficulty of getting a correct fit. Research by plastic surgeons has suggested that bra size is imprecise because breast volume is not calculated accurately:
The use of the cup sizing and band measurement systems has evolved over time and continues to change. Experts recommend that women get fitted by an experienced person at a retailer offering the widest possible selection of bra sizes and brands.
Bad bra-fit symptoms
If the straps dig into the shoulder, leaving red marks or causing shoulder or neck pain, the bra band is not offering enough support. If breast tissue overflows the bottom of the bra, under the armpit, or over the top edge of the bra cup, the cup size is too small. Loose fabric in the bra cup indicates the cup size is too big. If the underwires poke the breast under the armpit or if the bra's center panel does not lie flat against the sternum, the cup size is too small. If the band rides up the torso at the back, the band size is too big. If it digs into the flesh, causing the flesh to spill over the edges of the band, the band is too small. If the band feels tight, this may be due to the cups being too small; instead of going up in band size, the wearer should try going up in cup size. Similarly, a band might feel too loose if the cup is too big. It is possible to test whether a bra band is too tight or too loose by reversing the bra on the torso so that the cups are at the back and then checking for fit and comfort. Generally, if the wearer must continually adjust the bra or experiences general discomfort, the bra is a poor fit and she should get a new fitting.
Obtaining best fit
Bra experts recommend that women, especially those whose cup sizes are D or larger, get a professional bra fitting from the lingerie department of a clothing store or a specialty lingerie store. However, even professional bra fitters in different countries including New Zealand and the United Kingdom produce inconsistent measurements of the same person. There is significant heterogeneity in breast shape, density, and volume. As such, current methods of bra fitting may be insufficient for this range of chest morphology.
A 2004 study by Consumers Reports in New Zealand found that 80% of department store bra fittings resulted in a poor fit. However, because manufacturers' standards vary widely, women cannot rely on their own measurements to obtain a satisfactory fit. Some bra manufacturers and distributors state that trying on and learning to recognize a properly fitting bra is the best way to determine a correct bra size, much like shoes.
A correctly fitting bra should meet the following criteria:
When viewed from the side, the edge of the chest band should be horizontal, should not ride up the back and should be firm but comfortable.
Each cup's underwire at the front should lie flat against the sternum (not the breast), along the inframammary fold, and should not dig into the chest or the breasts, rub or poke out at the front.
The breasts should be enclosed by the cups and there should be a smooth line where the fabric at the top of the cup ends.
The apex of the breast, the nipple, must be in the center of the cup.
The breast should not bulge over the top or out the sides of the cups, even with a low-cut style such as the balconette bra.
The straps of a correctly fitted bra should not dig into or slip off the shoulder, which suggests a too-large band.
The back of the bra should not ride up and the chest band should remain parallel to the floor when viewed from the back.
The breasts should be supported primarily by the band around the rib cage, rather than by the shoulder straps.
The woman should be able to breathe and move easily without the bra slipping around.
Confirming bra fit
One method to confirm that the bra is the best fit has been nicknamed the Swoop and Scoop. After identifying a well-fitting bra, the woman bends forward (the swoop), allowing her breasts to fall into the bra, filling the cup naturally, and then fastening the bra on the outermost set of hooks. When the woman stands up, she uses the opposite hand to place each breast gently into the cup (the scoop), and she then runs her index finger along the inside top edge of the bra cup to make sure her breast tissue does not spill over the edges.
Experts suggest that women choose a bra band that fits well on the outermost hooks. This allows the wearer to use the tighter hooks on the bra strap as it stretches during its lifetime of about eight months. The band should be tight enough to support the bust, but the straps should not provide the primary support.
Consumer measurement difficulties
A bra is one of the most complicated articles of clothing to make. A typical bra design has between 20 and 48 parts, including the band, hooks, cups, lining, and straps. Major retailers place orders from manufacturers in batches of 10,000. Orders of this size require a large-scale operation to manage the cutting, sewing and packing required.
Constructing a properly fitting brassiere is difficult. Adelle Kirk, formerly a manager at the global Kurt Salmon management consulting firm that specializes in the apparel and retail businesses, said that making bras is complex:
Asymmetric breasts
Obtaining the correct size is complicated by the fact that up to 25% of women's breasts display a persistent, visible breast asymmetry, which is defined as differing in size by at least one cup size. For about 5% to 10% of women, their breasts are severely different, with the left breast being larger in 62% of cases. Minor asymmetry may be resolved by wearing a padded bra, but severe cases of developmental breast deformity — commonly called "Amazon's Syndrome" by physicians — may require corrective surgery due to morphological alterations caused by variations in shape, volume, position of the breasts relative to the inframammary fold, the position of the nipple-areola complex on the chest, or both.
Breast volume variation
Obtaining the correct size is further complicated by the fact that the size and shape of the breasts change during the menstrual cycle (for women who menstruate) and can grow unusually or unexpectedly rapidly due to pregnancy, weight gain or loss, or medical conditions. Even breathing can substantially alter the measurements.
Some women's breasts can change shape by as much as 20% per month:
Increases in average bra size
In 2010, the most common bra size sold in the UK was 36D. In 2004, market research company Mintel reported that bust sizes in the United Kingdom had increased from 1998 to 2004 in younger as well as older consumers, while a more recent study showed that the most often sold bra size in the US in 2008 was 36D.
Researchers ruled out increases in population weight as the explanation and suggested it was instead likely due to more women wearing the correct, larger size.
Consumer measurement methods
Bra retailers recommend several methods for measuring band and cup size. These are based on two primary measurements, taken either under the bust or over the bust, and sometimes both. Calculating the correct bra band size is complicated by a variety of factors. The American National Standards Institute states that while a voluntary consensus of sizes exists, there is much confusion about the 'true' size of clothing. As a result, bra measurement can be considered an art and a science. Online and in-person bra shopping experiences may differ because online recommendations are based on averages, while in-person shopping can be completely personalized, so the shopper may easily try on band sizes above and below her measured size. A woman with a large cup size whose measurement falls between band sizes may find her size unavailable in local stores and may have to shop online, where most large cup sizes are readily available on certain sites. Others recommend rounding to the nearest whole number.
Band measurement methods
There are several possible methods for measuring the bust.
Underbust +0
A measuring tape is pulled around the torso at the inframammary fold. The tape is then pulled tight while remaining horizontal and parallel to the floor. The measurement (in inches) is then rounded to the nearest even number for the band size. Kohl's uses this method for its online fitting guide.
Underbust +4
This method begins the same way as the underbust +0 method, where a measuring tape is pulled tight around the torso under the bust while remaining horizontal. If the measurement (in inches) is even, 4 is added to calculate the band size. If it is odd, 5 is added. Kohl's used this method in 2013. The "war on plus four" was a name given to a campaign (circa 2011) against this method, with underbust +0 supporters claiming that the then-ubiquitous +4 method fails to fit a majority of women. The underbust +4 method generally applies only to US and UK sizes.
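Both band-size rules reduce to simple arithmetic. The following is a minimal sketch, not taken from any retailer's published code; it assumes measurements in inches and applies the rounding conventions exactly as described above (nearest even number for underbust +0; add 4 to an even measurement or 5 to an odd one for underbust +4).

```python
import math

def band_underbust_plus_zero(underbust_inches: float) -> int:
    # Underbust +0: round the underbust measurement to the nearest even number.
    # Exact odd-inch measurements are equidistant between two even numbers;
    # they are rounded up here, since the source does not give a tie-breaking rule.
    return int(math.floor(underbust_inches / 2 + 0.5)) * 2

def band_underbust_plus_four(underbust_inches: int) -> int:
    # Underbust +4: add 4 if the measurement (in whole inches) is even, 5 if odd.
    return underbust_inches + (4 if underbust_inches % 2 == 0 else 5)

# Example: a 31-inch underbust gives a 32 band under +0 and a 36 band under +4.
print(band_underbust_plus_zero(31), band_underbust_plus_four(31))
```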
Sizing chart
Currently, many large U.S. department stores determine band size by starting with the measurement taken underneath the bust similar to the aforementioned underbust +0 and underbust +4 methods. A sizing chart or calculator then uses this measurement to determine the band size. Band sizes calculated using this method vary between manufacturers.
Underarm/upper bust
A measuring tape is pulled around the torso under the armpit and above the bust. Because band sizes are most commonly manufactured in even numbers, the wearer must round to the closest even number.
Cup measurement methods
Bra-wearers can calculate their cup size by finding the difference between their bust size and their band size. The bust size, bust line measure, or over-bust measure is the measurement around the torso over the fullest part of the breasts, with the crest of the breast halfway between the elbow and shoulder, usually over the nipples, ideally while standing straight with arms to the side and wearing a properly fitted bra, because this practice assumes the current bra fits correctly. The measurements are made in the same units as the band size, either inches or centimetres. The cup size is calculated by subtracting the band size from the over-the-bust measurement.
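In the inch-based systems, the difference computed this way maps to a cup letter one step per inch. The sketch below is illustrative only: it assumes the British cup ladder listed later in this article (AA, A, B, C, D, DD, E, F, FF, ...) and anchors a zero-inch difference at AA, while real manufacturers' ladders and starting points vary.

```python
# British-style cup ladder, as listed in the UK section of this article.
UK_CUPS = ["AA", "A", "B", "C", "D", "DD", "E", "F", "FF",
           "G", "GG", "H", "HH", "J", "JJ", "K", "KK", "L"]

def cup_letter(bust_inches: float, band_size: int) -> str:
    # One cup step per inch of difference between the over-the-bust measurement
    # and the band size; a zero-inch difference is taken as AA here (an assumption).
    diff = max(0, round(bust_inches - band_size))
    return UK_CUPS[min(diff, len(UK_CUPS) - 1)]

# Example: a 38-inch bust over a 34 band is a 4-inch difference, a D cup on this ladder.
print(cup_letter(38, 34))
```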
The meaning of cup sizes varies
Cup sizes vary from one country to another. For example, a U.S. H-cup does not have the same size as an Australian H-cup, even though both are based on measurements in inches. The larger the cup size, the bigger the variation.
Surveys of bra sizes tend to be very dependent on the population studied and how it was obtained. For instance, one U.S. study reported that the most common size was 34B, followed by 34C, that 63% were size 34 and 39% cup size B. However, the survey sample was drawn from 103 Caucasian student volunteers at a Midwest U.S. university aged 18–25, and excluded pregnant and nursing women.
Plastic Surgeon Measuring System
Bra-wearers who have difficulty calculating a correct cup size may be able to find a correct fit using a method adopted by plastic surgeons. Using a flexible tape measure, position the tape at the outside of the chest, under the arm, where the breast tissue begins. Measure across the fullest part of the breast, usually across the nipple, to where the breast tissue stops at the breast bone.
Conversion of the measurement to cup size is shown in the "Measuring cup size" table.
Note that, in general, countries that employ metric cup sizing (like in § Continental Europe) have their own system of increments that result in cup sizes which differ from those using inches, since does not equal .
These cup measurements are only correct for converting cup sizes for a band to cm using this particular method, because cup size is relative to band size. This principle means that bras of differing band size can have the same volume. For example, the cup volume is the same for 30D, 32C, 34B, and 36A. These related bra sizes of the same cup volume are called sister sizes. For a list of such sizes, refer to § Calculating cup volume and breast weight.
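The sister-size rule can be enumerated mechanically: moving down one band size (2 inches) while moving up one cup step keeps the cup volume roughly the same, and vice versa. A minimal sketch, using an illustrative inch-based cup ladder:

```python
CUPS = ["A", "B", "C", "D", "DD", "E"]  # illustrative inch-based ladder

def sister_sizes(band: int, cup: str) -> list[str]:
    # Each 2-inch decrease in band size is paired with a one-step increase in cup,
    # so every size in the returned list has approximately the same cup volume.
    i = CUPS.index(cup)
    return [f"{band - 2 * step}{CUPS[i + step]}" for step in range(-i, len(CUPS) - i)]

# Example: the sister sizes of 34B include 36A, 32C and 30D.
print(sister_sizes(34, "B"))
```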
Consumer fit research
A 2012 study by White and Scurr of the University of Portsmouth compared the traditional method used in many United Kingdom lingerie shops, in which 4 is added to an over-the-bust measurement to obtain the band size, with measurements obtained using a professional method. The study relied on the professional bra-fitting method described by McGhee and Steele (2010) and used a five-step approach to obtain the best-fitting bra size for an individual. The study measured 45 women using the traditional selection method, then had the women try bras on until they obtained the best fit based on professional bra-fitting criteria. The researchers found that 76% of women overestimated their band size and 84% underestimated their cup size. When women wear bras with too big a band, breast support is reduced; too small a cup size may cause skin irritation. They noted that "ill-fitting bras and insufficient breast support can lead to the development of musculoskeletal pain and inhibit women participating in physical activity." The study recommended that women should be educated about the criteria for finding a well-fitting bra, and that women measure under the bust to determine their band size rather than using the traditional over-the-bust measurement.
Manufacturer design standards
Bra-labeling systems used around the world are at times misleading and confusing. Cup and band sizes vary around the world. In countries that have adopted the European EN 13402 dress-size standard, the torso is measured in centimetres and rounded to the nearest multiple of 5 cm. Bra-fitting experts in the United Kingdom state that many women who buy off the rack without professional assistance wear up to two sizes too small.
Manufacturer Fruit of the Loom attempted to solve the problem of finding a well-fitting bra for asymmetrical breasts by introducing Pick Your Perfect Bra, which allow women to choose a bra with two different cup sizes, although it is only available in A through D cup sizes.
One very prominent discrepancy between the sizing systems is the fact that the US band sizes, based on inches, do not correspond to their centimetre-based EU counterparts.
There are several sizing systems in different countries.
Cup size is determined by one of two methods: in the US and UK, the cup size increases by one step for every inch of difference; in all other systems, it increases by one step for every two centimetres. Since one inch equals 2.54 centimetres, there is considerable discrepancy between the systems, which becomes more exaggerated as cup sizes increase. Many bras are available in only 36 sizes.
UK
The UK and US use the inch system. The difference in chest circumference between the cup sizes is always one inch, or 2.54 cm. The difference between 2 band sizes is 2 inches or 5.08 cm.
Leading brands and manufacturers, including Panache, Bestform, Gossard, Freya, Curvy Kate, Bravissimo and Fantasie, use the British standard band sizes (where underbust measurement equals band size): 28-30-32-34-36-38-40-42-44, and so on. Cup sizes are designated AA-A-B-C-D-DD-E-F-FF-G-GG-H-HH-J-JJ-K-KK-L.
However, some clothing retailers and mail order companies have their own house brands and use a custom sizing system. Marks and Spencers uses AA-A-B-C-D-DD-E-F-G-GG-H-J, leaving out FF and HH, in addition to following the US band sizing convention. As a result, their J-Cup is equal to a British standard H-cup. Evans and ASDA sell bras (ASDA as part of their George clothing range) whose sizing runs A-B-C-D-DD-E-F-G-H. Their H-Cup is roughly equal to a British standard G-cup.
Some retailers reserve AA for young teens, and use AAA for women.
Australia/New Zealand
Australia and New Zealand cup and band sizes use metric increments of 2 cm per cup, similar to many European brands. Cup labelling methods and sizing schemes are inconsistent and there is great variability between brands. In general, cup sizes AA-DD follow UK labels but thereafter split off from this system and employ European labels (no double letters, with cups progressing from F-G-H etc. for every 2 cm increase). However, a great many local manufacturers employ unique labelling systems. Australia and New Zealand bra band sizes are labelled in dress sizes, although they are obtained by underbust measurement, whilst dress sizes utilise bust-waist-hip. In practice very few of the leading Australian manufacturers produce sizes F+ and many disseminate sizing misinformation. The Australian demand for DD+ is largely met by various UK, US and European major brands. This has introduced further sizing scheme confusion that is poorly understood even by specialist retailers.
United States
Bra sizing in the United States is very similar to that of the United Kingdom. Band sizes use the same designation in inches and the cups also increase in 1-inch steps. However, some manufacturers use conflicting sizing methods. Some label bras beyond a C cup as D-DD-DDD-DDDD-E-EE-EEE-EEEE-F..., some use the variation D1, D2, D3, D4, D5..., many use the following system: A, B, C, D, DD, DDD, G, H, I, J, K, L, M, N, O, and others label them like the British system D-DD-E-F-FF... Comparing the larger cup sizes between different manufacturers can be difficult.
In 2013, underwear maker Jockey International offered a new way to measure bra and cup size. It introduced a system with ten cup sizes per band size that are numbered and not lettered, designated as 1–36, 2–36 etc. The company developed the system over eight years, during which they scanned and measured the breasts and torsos of 800 women. Researchers also tracked the women's use of their bras at home. To implement the system, women must purchase a set of plastic cups from the company to find their Jockey cup size. Some analysts were critical of the requirement to buy the measurement kit, since women must pay about US$20 to adopt Jockey's proprietary system, in addition to the cost of the bras themselves.
Europe / International
European bra sizes are based on centimeters. They are also known as International. Abbreviations such as EU, Intl and Int are all referring to the same European bra size convention. These sizes are used in most of Europe and large parts of the world.
The underbust measurement is rounded to the nearest multiple of 5 cm. Band sizes run 65, 70, 75, 80 etc., increasing in steps of 5 cm, similar to the English double inch. A person with a measured underbust circumference of 78–82 cm should wear a band size 80. The tightness or snugness of the measurement (e.g. with a tape measure) depends on the softness of the adipose tissue. Softer tissue requires tightening when measuring, to ensure that the bra band will fit snugly on the body and stay in place. A loose measurement can, and often does, differ from the tighter measurement. This causes some confusion: a person with a loose measurement of 84 cm might assume a band size of 85, but due to a lot of soft tissue the same person might have a snugger, tighter measurement of 79 cm and should choose the more appropriate band size of 80, or even a smaller band size.
The cup labels normally begin with "A" for a 13±1 cm difference between the bust circumference and the underbust circumference measured loosely (i.e. not tightly, as it is for the bra band size); that is, the difference is not taken between bust circumference and band size, which normally requires some tightening when measured. To clarify the important difference in measuring: the underbust measurement for the bra band is taken snugly and tight, while the underbust measurement for determining the cup is taken loosely. For people with much soft adipose tissue these two measurements will not be identical. In this sense the method for determining European sizes differs from the English systems, where the cup size is determined by comparing the bust measurement to the bra band size. European cups increase for every additional 2 cm of difference between bust and underbust measurement, instead of 2.5 cm or 1 inch, and, except for the initial cup sizes, letters are neither doubled nor skipped. In very large cup sizes this results in smaller cups than their English counterparts.
This system has been standardized in the European dress size standard EN 13402 introduced in 2006, but was in use in many European countries before that date.
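A minimal sketch of the European convention described above, assuming centimetre measurements: the band is the underbust circumference rounded to the nearest multiple of 5 cm, and cups advance one letter for every 2 cm of bust-minus-underbust difference, starting from "A" at roughly a 12–14 cm difference. The exact letter ladder and starting offset used here are illustrative assumptions; manufacturers differ.

```python
EU_CUPS = ["AA", "A", "B", "C", "D", "E", "F", "G", "H"]  # letters neither doubled nor skipped

def eu_band(underbust_cm: float) -> int:
    # Band size: underbust circumference rounded to the nearest multiple of 5 cm.
    return int(round(underbust_cm / 5)) * 5

def eu_cup(bust_cm: float, underbust_cm: float) -> str:
    # One cup step per 2 cm of bust-minus-underbust difference, with "A"
    # anchored at roughly a 12-14 cm difference (an illustrative assumption).
    index = int(round((bust_cm - underbust_cm - 13) / 2)) + 1  # "A" sits at index 1
    return EU_CUPS[max(0, min(index, len(EU_CUPS) - 1))]

# Example: underbust 79 cm and bust 94 cm give band 80 and a 15 cm difference, cup B.
print(eu_band(79), eu_cup(94, 79))
```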
South Korea/Japan
In South Korea and Japan the torso is measured in centimetres and rounded to the nearest multiple of 5 cm. Band sizes run 65-70-75-80..., increasing in steps of 5 cm, similar to the English double inch. A person with a loosely measured underbust circumference of 78–82 cm should wear a band size 80.
The cup labels begin with "AAA" for a 5±1.25 cm difference between bust and underbust circumference, i.e. comparing bust circumference and band size as in the English systems. They increase in steps of 2.5 cm and, except for the initial cup sizes, letters are neither doubled nor skipped.
Japanese sizes are the same as Korean ones, but the cup labels begin with "AA" for a 7.5±1.25 cm difference and usually precedes the bust designation, i.e. "B75" instead of "75B".
This system has been standardized in the Korea dress size standard KS K9404 introduced in 1999 and in Japan dress size standard JIS L4006 introduced in 1998.
France/Belgium/Spain
The French and Spanish system is a permutation of the Continental European sizing system. While cup sizes are the same, band sizes are exactly 15 cm larger than the European band size.
Italy
The Italian band size uses small consecutive integers instead of the underbust circumference rounded to the nearest multiple of 5 cm. Since it starts with size 0 for European size 60, the conversion consists of a division by 5 and then a subtraction of 12. The size designations are often given in Roman numerals.
Cup sizes have traditionally used a step size of 2.5 cm, which is close to the English inch of 2.54 cm, and featured some double letters for large cups, but in recent years some Italian manufacturers have switched over to the European 2-cm system.
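The French/Spanish and Italian band conversions described above are simple arithmetic on the European band number; the sketch below illustrates them (the Roman-numeral helper is only for display, since Italian sizes are often written that way).

```python
def french_band(eu_band: int) -> int:
    # French/Spanish band size: exactly 15 larger than the European band size.
    return eu_band + 15

def italian_band(eu_band: int) -> int:
    # Italian band size: divide the European size by 5, then subtract 12
    # (so Italian size 0 corresponds to European size 60).
    return eu_band // 5 - 12

def as_roman(n: int) -> str:
    # Italian sizes are often given in Roman numerals; size 0 is left as "0".
    numerals = ["0", "I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX", "X"]
    return numerals[n] if 0 <= n < len(numerals) else str(n)

# Example: a European 80 band is a French/Spanish 95 and an Italian size 4 (IV).
print(french_band(80), italian_band(80), as_roman(italian_band(80)))
```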
Here is a conversion table for bra sizes in Italy with respect to other countries:
Advertising and retail influence
Manufacturers' marketing and advertising often appeals to fashion and image over fit, comfort, and function. Since about 1994, manufacturers have re-focused their advertising, moving from advertising functional brassieres that emphasize support and foundation, to selling lingerie that emphasize fashion while sacrificing basic fit and function, like linings under scratchy lace.
Engineered Alternative to traditional bras
English mechanical engineer and professor John Tyrer from Loughborough University has devised a solution to problematic bra fit by re-engineering bra design. He started investigating the problem of bra design while on an assignment from the British government after his wife returned disheartened from an unsuccessful shopping trip. His initial research into the extent of fitting problems soon revealed that of women wear the wrong size of bra. He theorised that this widespread practice of purchasing the wrong size was due to the measurement system recommended by bra manufacturers. This sizing system employs a combination of maximum chest diameter (under bust) and maximum bust diameter (bust) rather than the actual breast volume which is to be accommodated by the bra. According to Tyrer, "to get the most supportive and fitted bra it's infinitely better if you know the volume of the breast and the size of the back." He says the A, B, C, D cup measurement system is flawed: "It's like measuring a motor car by the diameter of the gas cap." "The whole design is fundamentally flawed. It's an instrument of torture." Tyrer has developed a bra design with crossed straps in the back. These use the weight of one breast to lift the other using counterbalance. Standard designs constrict chest movement during breathing. One of the tools used in the development of Tyrer's design has been a projective differential shape body analyzer for .
Breasts weigh up to and not . Tyrer said, "By measuring the diameter of the chest and breasts current measurements are supposed to tell you something about the size and volume of each breast, but in fact it doesn't". Bra companies remain reluctant to manufacture Tyrer's prototype, which is a front closing bra with more vertical orientation and adjustable cups.
Calculating cup volume and breast weight
The average breast weighs about . Each breast contributes about 4–5% of the body fat. The density of fatty tissue is more or less equal to
If a cup is a hemisphere, its volume V is given by the following formula:
V = (2/3)πr³ = (π/12)D³
where r is the radius of the cup, and D is its diameter.
If the cup is a hemi-ellipsoid, its volume is given by the formula:
V = (2/3)πabc
where a, b and c are the three semi-axes of the hemi-ellipsoid, which can be estimated from cw, cd and wl, respectively the cup width, the cup depth and the length of the wire.
Cups give a hemi-spherical shape to breasts and underwires give shape to cups. So the curvature radius of the underwire is the key parameter to determine volume and weight of the breast. The same underwires are used for the cups of sizes 36A, 34B, 32C, 30D etc. ... so those cups have the same volume. The reference numbers of underwire sizes are based on a B cup bra, for example underwire size 32 is for 32B cup (and 34A, 30C...). An underwire size 30 width has a curvature diameter of and this diameter increases by by size. The table below shows volume calculations for some cups that can be found in a ready-to-wear large size shop.
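A short sketch of the volume and weight arithmetic described above. The hemispherical and hemi-ellipsoidal formulas follow directly from the geometry; the fatty-tissue density used here (about 0.9 g/cm³) is an assumed round figure, since the exact value cited by the article is not preserved in this text.

```python
import math

FAT_DENSITY_G_PER_CM3 = 0.9  # assumed approximate density of fatty tissue

def hemisphere_volume(diameter_cm: float) -> float:
    # V = (2/3) * pi * r^3 = (pi/12) * D^3 for a hemispherical cup of diameter D.
    return math.pi / 12 * diameter_cm ** 3

def hemi_ellipsoid_volume(a_cm: float, b_cm: float, c_cm: float) -> float:
    # V = (2/3) * pi * a * b * c for a hemi-ellipsoidal cup with semi-axes a, b, c.
    return 2 / 3 * math.pi * a_cm * b_cm * c_cm

def estimated_weight_g(volume_cm3: float) -> float:
    # Rough weight estimate, treating the breast as mostly fatty tissue.
    return volume_cm3 * FAT_DENSITY_G_PER_CM3

# Example: an underwire with a 12 cm curvature diameter bounds a hemispherical cup
# of about 452 cm^3, roughly 0.41 kg under the assumed density.
v = hemisphere_volume(12)
print(round(v), round(estimated_weight_g(v)))
```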
See also
History of bras
List of bra designs
Nursing bra
Underwire bra
Wonderbra
Notes
References
Further reading
Brassieres
Sizes in clothing
es:Sostén#Tallas y copas | Bra size | Physics,Mathematics | 7,059 |
3,877,946 | https://en.wikipedia.org/wiki/ZX8301 | The ZX8301 is an Uncommitted Logic Array (ULA) integrated circuit designed for the Sinclair QL microcomputer. Also known as the "Master Chip", it provides a Video Display Generator, the division of a 15 MHz crystal to provide the 7.5 MHz system clock, ZX8302 register address decoder, DRAM refresh and bus controller. The ZX8301 is IC22 on the QL motherboard.
The Sinclair Research business model had always been to work toward a maximum performance-to-price ratio (as was evidenced by the keyboard mechanisms in the QL and earlier Sinclair models). Unfortunately, this focus on price and performance often resulted in cost-cutting in the design and build of Sinclair's machines. One such cost-driven decision (failing to use a hardware buffer integrated circuit (IC) between the IC pins and the external RGB monitor connection) caused the ZX8301 to quickly develop a reputation for being fragile and easy to damage, particularly if the monitor plug was inserted or removed while the QL was powered up. Such action resulted in damage to the video circuitry and almost always required replacement of the ZX8301.
The ZX8301, when subsequently used in the International Computers Limited (ICL) One Per Desk featured hardware buffering, and the chip proved to be much more reliable in this configuration.
See also
Sinclair QL
One Per Desk
List of Sinclair QL clones
References
External links
http://www.worldofspectrum.org/qlfaq/Hardware
Gate arrays
Sinclair Research | ZX8301 | Technology,Engineering | 328 |
17,062,920 | https://en.wikipedia.org/wiki/DNA%20damage%20theory%20of%20aging | The DNA damage theory of aging proposes that aging is a consequence of unrepaired accumulation of naturally occurring DNA damage. Damage in this context is a DNA alteration that has an abnormal structure. Although both mitochondrial and nuclear DNA damage can contribute to aging, nuclear DNA is the main subject of this analysis. Nuclear DNA damage can contribute to aging either indirectly (by increasing apoptosis or cellular senescence) or directly (by increasing cell dysfunction).
Several review articles have shown that deficient DNA repair, allowing greater accumulation of DNA damage, causes premature aging, and that increased DNA repair facilitates greater longevity. For example, mouse models of nucleotide-excision-repair syndromes reveal a striking correlation between the degree to which specific DNA repair pathways are compromised and the severity of accelerated aging, strongly suggesting a causal relationship. Human population studies show that single-nucleotide polymorphisms in DNA repair genes, causing up-regulation of their expression, correlate with increases in longevity. Lombard et al. compiled a lengthy list of mouse mutational models with pathologic features of premature aging, all caused by different DNA repair defects. Freitas and de Magalhães presented a comprehensive review and appraisal of the DNA damage theory of aging, including a detailed analysis of many forms of evidence linking DNA damage to aging. As an example, they described a study showing that centenarians of 100 to 107 years of age had higher levels of two DNA repair enzymes, PARP1 and Ku70, than general-population old individuals of 69 to 75 years of age. Their analysis supported the hypothesis that improved DNA repair leads to longer life span. Overall, they concluded that while the complexity of responses to DNA damage remains only partly understood, the idea that DNA damage accumulation with age is the primary cause of aging remains an intuitive and powerful one.
In humans and other mammals, DNA damage occurs frequently and DNA repair processes have evolved to compensate. In estimates made for mice, DNA lesions occur on average 25 to 115 times per minute in each cell, or about 36,000 to 160,000 per cell per day. Some DNA damage may remain in any cell despite the action of repair processes. The accumulation of unrepaired DNA damage is more prevalent in certain types of cells, particularly in non-replicating or slowly replicating cells, such as cells in the brain, skeletal and cardiac muscle.
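As a quick consistency check of the figures above, the per-minute lesion estimates convert to the quoted daily totals by straightforward multiplication (a minimal sketch, assuming a 24-hour day):

```python
MINUTES_PER_DAY = 60 * 24

low_rate, high_rate = 25, 115  # estimated DNA lesions per minute per cell (mouse)
print(low_rate * MINUTES_PER_DAY, high_rate * MINUTES_PER_DAY)
# 36000 and 165600, matching the quoted range of roughly 36,000 to 160,000 per day.
```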
DNA damage and mutation
To understand the DNA damage theory of aging it is important to distinguish between DNA damage and mutation, the two major types of errors that occur in DNA. Damage and mutation are fundamentally different. DNA damage is any physical abnormality in the DNA, such as single and double strand breaks, 8-hydroxydeoxyguanosine residues and polycyclic aromatic hydrocarbon adducts. DNA damage can be recognized by enzymes, and thus can be correctly repaired using the complementary undamaged strand in DNA as a template or an undamaged sequence in a homologous chromosome if it is available for copying. If a cell retains DNA damage, transcription of a gene can be prevented and thus translation into a protein will also be blocked. Replication may also be blocked and/or the cell may die. Descriptions of reduced function, characteristic of aging and associated with accumulation of DNA damage, are described in the next section.
In contrast to DNA damage, a mutation is a change in the base sequence of the DNA. A mutation cannot be recognized by enzymes once the base change is present in both DNA strands, and thus a mutation cannot be repaired. At the cellular level, mutations can cause alterations in protein function and regulation. Mutations are replicated when the cell replicates. In a population of cells, mutant cells will increase or decrease in frequency according to the effects of the mutation on the ability of the cell to survive and reproduce. Although distinctly different from each other, DNA damages and mutations are related because DNA damages often cause errors of DNA synthesis during replication or repair and these errors are a major source of mutation.
Given these properties of DNA damage and mutation, it can be seen that DNA damages are a special problem in non-dividing or slowly dividing cells, where unrepaired damages will tend to accumulate over time. On the other hand, in rapidly dividing cells, unrepaired DNA damages that do not kill the cell by blocking replication will tend to cause replication errors and thus mutation. The great majority of mutations that are not neutral in their effect are deleterious to a cell's survival. Thus, in a population of cells comprising a tissue with replicating cells, mutant cells will tend to be lost. However, infrequent mutations that provide a survival advantage will tend to clonally expand at the expense of neighboring cells in the tissue. This advantage to the cell is disadvantageous to the whole organism, because such mutant cells can give rise to cancer. Thus, DNA damages in frequently dividing cells, because they give rise to mutations, are a prominent cause of cancer. In contrast, DNA damages in infrequently dividing cells are likely a prominent cause of aging.
The first person to suggest that DNA damage, as distinct from mutation, is the primary cause of aging was Alexander in 1967. By the early 1980s there was significant experimental support for this idea in the literature. By the early 1990s experimental support for this idea was substantial, and furthermore it had become increasingly evident that oxidative DNA damage, in particular, is a major cause of aging.
In a series of articles from 1970 to 1977, P. V. Narasimh Acharya, PhD (1924–1993), theorized and presented evidence that cells undergo "irreparable DNA damage", whereby DNA crosslinks occur when both normal cellular repair processes fail and cellular apoptosis does not occur. Specifically, Acharya noted that double-strand breaks and a "cross-linkage joining both strands at the same point is irreparable because neither strand can then serve as a template for repair. The cell will die in the next mitosis or in some rare instances, mutate."
Age-associated accumulation of DNA damage and changes in gene expression
In tissues composed of non- or infrequently replicating cells, DNA damage can accumulate with age and lead either to loss of cells, or, in surviving cells, loss of gene expression. Accumulated DNA damage is usually measured directly. Numerous studies of this type have indicated that oxidative damage to DNA is particularly important. The loss of expression of specific genes can be detected at both the mRNA level and protein level.
Another form of age-associated change in gene expression is increased transcriptional variability, which was first found in a selected panel of genes in heart cells and, more recently, in the whole transcriptomes of immune cells and human pancreas cells.
Brain
The adult brain is composed in large part of terminally differentiated non-dividing neurons. Many of the conspicuous features of aging reflect a decline in neuronal function. Accumulation of DNA damage with age in the mammalian brain has been reported during the period 1971 to 2008 in at least 29 studies. This DNA damage includes the oxidized nucleoside 8-oxo-2'-deoxyguanosine (8-oxo-dG), single- and double-strand breaks, DNA-protein crosslinks and malondialdehyde adducts (reviewed in Bernstein et al.). Increasing DNA damage with age has been reported in the brains of the mouse, rat, gerbil, rabbit, dog, and human.
Rutten et al. showed that single-strand breaks accumulate in the mouse brain with age. Young 4-day-old rats have about 3,000 single-strand breaks and 156 double-strand breaks per neuron, whereas in rats older than 2 years the level of damage increases to about 7,400 single-strand breaks and 600 double-strand breaks per neuron. Sen et al. showed that DNA damages which block the polymerase chain reaction in rat brain accumulate with age. Swain and Rao observed marked increases in several types of DNA damages in aging rat brain, including single-strand breaks, double-strand breaks and modified bases (8-OHdG and uracil). Wolf et al. also showed that the oxidative DNA damage 8-OHdG accumulates in rat brain with age. Similarly, it was shown that as humans age from 48 to 97 years, 8-OHdG accumulates in the brain.
Lu et al. studied the transcriptional profiles of the human frontal cortex of individuals ranging from 26 to 106 years of age. This led to the identification of a set of genes whose expression was altered after age 40. These genes play central roles in synaptic plasticity, vesicular transport and mitochondrial function. In the brain, promoters of genes with reduced expression have markedly increased DNA damage. In cultured human neurons, these gene promoters are selectively damaged by oxidative stress. Thus Lu et al. concluded that DNA damage may reduce the expression of selectively vulnerable genes involved in learning, memory and neuronal survival, initiating a program of brain aging that starts early in adult life.
Muscle
Muscle strength, and stamina for sustained physical effort, decline in function with age in humans and other species. Skeletal muscle is a tissue composed largely of multinucleated myofibers, elements that arise from the fusion of mononucleated myoblasts. Accumulation of DNA damage with age in mammalian muscle has been reported in at least 18 studies since 1971. Hamilton et al. reported that the oxidative DNA damage 8-OHdG accumulates in heart and skeletal muscle (as well as in brain, kidney and liver) of both mouse and rat with age. In humans, increases in 8-OHdG with age were reported for skeletal muscle. Catalase is an enzyme that removes hydrogen peroxide, a reactive oxygen species, and thus limits oxidative DNA damage. In mice, when catalase expression is increased specifically in mitochondria, oxidative DNA damage (8-OHdG) in skeletal muscle is decreased and lifespan is increased by about 20%. These findings suggest that mitochondria are a significant source of the oxidative damages contributing to aging.
Protein synthesis and protein degradation decline with age in skeletal and heart muscle, as would be expected, since DNA damage blocks gene transcription. In 2005, Piec et al. found numerous changes in protein expression in rat skeletal muscle with age, including lower levels of several proteins related to myosin and actin. Force is generated in striated muscle by the interactions between myosin thick filaments and actin thin filaments.
Liver
Liver hepatocytes do not ordinarily divide and appear to be terminally differentiated, but they retain the ability to proliferate when injured. With age, the mass of the liver decreases, blood flow is reduced, metabolism is impaired, and alterations in microcirculation occur. At least 21 studies have reported an increase in DNA damage with age in liver. For instance, Helbock et al. estimated that the steady state level of oxidative DNA base alterations increased from 24,000 per cell in the liver of young rats to 66,000 per cell in the liver of old rats.
One or two months after inducing DNA double-strand breaks in the livers of young mice, the mice showed multiple symptoms of aging similar to those seen in untreated livers of normally aged control mice.
Kidney
In kidney, changes with age include reduction in both renal blood flow and glomerular filtration rate, and impairment in the ability to concentrate urine and to conserve sodium and water. DNA damages, particularly oxidative DNA damages, increase with age (at least 8 studies). For instance Hashimoto et al. showed that 8-OHdG accumulates in rat kidney DNA with age.
Long-lived stem cells
Tissue-specific stem cells produce differentiated cells through a series of increasingly more committed progenitor intermediates. In hematopoiesis (blood cell formation), the process begins with long-term hematopoietic stem cells that self-renew and also produce progeny cells that upon further replication go through a series of stages leading to differentiated cells without self-renewal capacity. In mice, deficiencies in DNA repair appear to limit the capacity of hematopoietic stem cells to proliferate and self-renew with age. Sharpless and Depinho reviewed evidence that hematopoietic stem cells, as well as stem cells in other tissues, undergo intrinsic aging. They speculated that stem cells grow old, in part, as a result of DNA damage. DNA damage may trigger signalling pathways, such as apoptosis, that contribute to depletion of stem cell stocks. This has been observed in several cases of accelerated aging and may occur in normal aging too.
A key aspect of hair loss with age is the aging of the hair follicle. Ordinarily, hair follicle renewal is maintained by the stem cells associated with each follicle. Aging of the hair follicle appears to be due to the DNA damage that accumulates in renewing stem cells during aging.
Mutation theories of aging
A related theory is that mutation, as distinct from DNA damage, is the primary cause of aging. A comparison of somatic mutation rate across several mammal species found that the total number of accumulated mutations at the end of lifespan was roughly equal across a broad range of lifespans. The authors state that this strong relationship between somatic mutation rate and lifespan across different mammalian species suggests that evolution may constrain somatic mutation rates, perhaps by selection acting on different DNA repair pathways.
As discussed above, mutations tend to arise in frequently replicating cells as a result of errors of DNA synthesis when template DNA is damaged, and can give rise to cancer. However, in mice there is no increase in mutation in the brain with aging. Mice defective in a gene (Pms2) that ordinarily corrects base mispairs in DNA have about a 100-fold elevated mutation frequency in all tissues, but do not appear to age more rapidly. On the other hand, mice defective in one particular DNA repair pathway show clear premature aging, but do not have elevated mutation.
One variation of the idea that mutation is the basis of aging, that has received much attention, is that mutations specifically in mitochondrial DNA are the cause of aging. Several studies have shown that mutations accumulate in mitochondrial DNA in infrequently replicating cells with age. DNA polymerase gamma is the enzyme that replicates mitochondrial DNA. A mouse mutant with a defect in this DNA polymerase is only able to replicate its mitochondrial DNA inaccurately, so that it sustains a 500-fold higher mutation burden than normal mice. These mice showed no clear features of rapidly accelerated aging. Overall, the observations discussed in this section indicate that mutations are not the primary cause of aging.
Dietary restriction
In rodents, caloric restriction slows aging and extends lifespan. At least 4 studies have shown that caloric restriction reduces 8-OHdG damages in various organs of rodents. One of these studies showed that caloric restriction reduced accumulation of 8-OHdG with age in rat brain, heart and skeletal muscle, and in mouse brain, heart, kidney and liver. More recently, Wolf et al. showed that dietary restriction reduced accumulation of 8-OHdG with age in rat brain, heart, skeletal muscle, and liver. Thus reduction of oxidative DNA damage is associated with a slower rate of aging and increased lifespan.
Inherited defects that cause premature aging
If DNA damage is the underlying cause of aging, it would be expected that humans with inherited defects in the ability to repair DNA damages should age at a faster pace than persons without such a defect. Numerous examples of rare inherited conditions with DNA repair defects are known. Several of these show multiple striking features of premature aging, and others have fewer such features. Perhaps the most striking premature aging conditions are Werner syndrome (mean lifespan 47 years), Hutchinson–Gilford progeria (mean lifespan 13 years), and Cockayne syndrome (mean lifespan 13 years).
Werner syndrome is due to an inherited defect in an enzyme (a helicase and exonuclease) that acts in base excision repair of DNA (e.g. see Harrigan et al.).
Hutchinson–Gilford progeria is due to a defect in Lamin A protein which forms a scaffolding within the cell nucleus to organize chromatin and is needed for repair of double-strand breaks in DNA. A-type lamins promote genetic stability by maintaining levels of proteins that have key roles in the DNA repair processes of non-homologous end joining and homologous recombination. Mouse cells deficient for maturation of prelamin A show increased DNA damage and chromosome aberrations and are more sensitive to DNA damaging agents.
Cockayne Syndrome is due to a defect in a protein necessary for the repair process, transcription coupled nucleotide excision repair, which can remove damages, particularly oxidative DNA damages, that block transcription.
In addition to these three conditions, several other human syndromes, that also have defective DNA repair, show several features of premature aging. These include ataxia–telangiectasia, Nijmegen breakage syndrome, some subgroups of xeroderma pigmentosum, trichothiodystrophy, Fanconi anemia, Bloom syndrome and Rothmund–Thomson syndrome.
In addition to human inherited syndromes, experimental mouse models with genetic defects in DNA repair show features of premature aging and reduced lifespan (e.g. refs.). In particular, mutant mice defective in Ku70, or Ku80, or double mutant mice deficient in both Ku70 and Ku80 exhibit early aging. The mean lifespans of the three mutant mouse strains were similar to each other, at about 37 weeks, compared to 108 weeks for the wild-type control. Six specific signs of aging were examined, and the three mutant mice were found to display the same aging signs as the control mice, but at a much earlier age. Cancer incidence was not increased in the mutant mice. Ku70 and Ku80 form the heterodimer Ku protein essential for the non-homologous end joining (NHEJ) pathway of DNA repair, active in repairing DNA double-strand breaks. This suggests an important role of NHEJ in longevity assurance.
Defects in DNA repair cause features of premature aging
Many authors have noted an association between defects in the DNA damage response and premature aging (see e.g.). If a DNA repair protein is deficient, unrepaired DNA damages tend to accumulate. Such accumulated DNA damages appear to cause features of premature aging (segmental progeria). Table 1 lists 18 DNA repair proteins which, when deficient, cause numerous features of premature aging.
Increased DNA repair and extended longevity
Table 2 lists DNA repair proteins whose increased expression is connected to extended longevity.
Lifespan in different mammalian species
DNA repair capacity
Studies comparing DNA repair capacity in different mammalian species have shown that repair capacity correlates with lifespan. The initial study of this type, by Hart and Setlow, showed that the ability of skin fibroblasts of seven mammalian species to perform DNA repair after exposure to a DNA damaging agent correlated with lifespan of the species. The species studied were shrew, mouse, rat, hamster, cow, elephant and human. This initial study stimulated many additional studies involving a wide variety of mammalian species, and the correlation between repair capacity and lifespan generally held up. In one of the more recent studies, Burkle et al. studied the level of a particular enzyme, Poly ADP ribose polymerase, which is involved in repair of single-strand breaks in DNA. They found that the lifespan of 13 mammalian species correlated with the activity of this enzyme.
The DNA repair transcriptomes of the liver of humans, naked mole-rats and mice were compared. The maximum lifespans of humans, naked mole-rat, and mouse are respectively ~120, 30 and 3 years. The longer-lived species, humans and naked mole rats expressed DNA repair genes, including core genes in several DNA repair pathways, at a higher level than did mice. In addition, several DNA repair pathways in humans and naked mole-rats were up-regulated compared with mouse. These findings suggest that increased DNA repair facilitates greater longevity.
Over the past decade, a series of papers have shown that the mitochondrial DNA (mtDNA) base composition correlates with animal species' maximum life span. The mitochondrial DNA base composition is thought to reflect the different nucleotide-specific (guanine, cytosine, thymine and adenine) mutation rates (i.e., accumulation of guanine in the mitochondrial DNA of an animal species is due to a low guanine mutation rate in the mitochondria of that species).
DNA damage accumulation and repair decline
The rate of accumulation of DNA damage (double-strand breaks) in the leukocytes of dolphins, goats, reindeer, American flamingos, and griffon vultures was compared to the longevity of individuals of these different species. The species with longer lifespans were found to have slower accumulation of DNA damage, a finding consistent with the DNA damage theory of aging. In healthy humans after age 50, endogenous DNA single- and double-strand breaks increase linearly, and other forms of DNA damage also increase with age in blood mononuclear cells. Also, after age 50 DNA repair capability decreases with age.
In mice, the DNA repair process of non-homologous end-joining, which repairs DNA double-strand breaks, declines in efficiency by 1.8- to 3.8-fold, depending on the specific tissue, when 5-month-old animals are compared to 24-month-old animals. A study of fibroblast cells from humans varying in age from 16 to 75 years showed that the efficiency and fidelity of non-homologous end joining, and the efficiency of homologous recombinational DNA repair, decline with age, leading to increased sensitivity to ionizing radiation in older individuals. In middle-aged human adults, oxidative DNA damage was found to be greater among individuals who were both frail and living in poverty.
Centenarians
Lymphoblastoid cell lines established from blood samples of humans who lived past 100 years (centenarians) have significantly higher activity of the DNA repair protein Poly (ADP-ribose) polymerase (PARP) than cell lines from younger individuals (20 to 70 years old). The lymphocytic cells of centenarians have characteristics typical of cells from young people, both in their capability of priming the mechanism of repair after H2O2 sublethal oxidative DNA damage and in their PARP capacity.
Among centenarians, those with the most severe cognitive impairment have the lowest activity of the central DNA repair enzyme apurinic/apyrimidinic (AP) endonuclease 1. AP endonuclease 1 is employed in the DNA base excision repair pathway and its main role is the repair of damaged or mismatched nucleotides in DNA.
Menopause
As women age, they experience a decline in reproductive performance leading to menopause. This decline is tied to a decline in the number of ovarian follicles. Although 6 to 7 million oocytes are present at mid-gestation in the human ovary, only about 500 (about 0.05%) of these ovulate, and the rest are lost. The decline in ovarian reserve appears to occur at an increasing rate with age, and leads to nearly complete exhaustion of the reserve by about age 51. As ovarian reserve and fertility decline with age, there is also a parallel increase in pregnancy failure and meiotic errors resulting in chromosomally abnormal conceptions.
BRCA1 and BRCA2 are homologous recombination repair genes. The role of declining ATM-mediated DNA double-strand break (DSB) repair in oocyte aging was first proposed by Kutluk Oktay, MD, PhD, based on his observations that women with BRCA mutations produced fewer oocytes in response to ovarian stimulation. His laboratory has further studied this hypothesis and provided an explanation for the decline in ovarian reserve with age. They showed that as women age, double-strand breaks accumulate in the DNA of their primordial follicles. Primordial follicles are immature primary oocytes surrounded by a single layer of granulosa cells. An enzyme system is present in oocytes that normally accurately repairs DNA double-strand breaks. This repair system is referred to as homologous recombinational repair, and it is especially active during meiosis. Titus et al. from the Oktay Laboratory also showed that expression of four key DNA repair genes that are necessary for homologous recombinational repair (BRCA1, MRE11, Rad51 and ATM) declines in oocytes with age. This age-related decline in the ability to repair double-strand damages can account for the accumulation of these damages, which then likely contributes to the decline in ovarian reserve, as further explained by Turan and Oktay.
Women with an inherited mutation in the DNA repair gene BRCA1 undergo menopause prematurely, suggesting that naturally occurring DNA damages in oocytes are repaired less efficiently in these women, and this inefficiency leads to early reproductive failure. Genomic data from about 70,000 women were analyzed to identify protein-coding variation associated with age at natural menopause. Pathway analyses identified a major association with DNA damage response genes, particularly those expressed during meiosis and including a common coding variant in the BRCA1 gene.
Atherosclerosis
The most important risk factor for cardiovascular problems is chronological aging. Several research groups have reviewed evidence for a key role of DNA damage in vascular aging.
Atherosclerotic plaque contains vascular smooth muscle cells, macrophages and endothelial cells and these have been found to accumulate 8-oxoG, a common type of oxidative DNA damage. DNA strand breaks also increased in atherosclerotic plaques, thus linking DNA damage to plaque formation.
Werner syndrome (WS), a premature aging condition in humans, is caused by a genetic defect in a RecQ helicase that is employed in several DNA repair processes. WS patients develop a substantial burden of atherosclerotic plaques in their coronary arteries and aorta. These findings link excessive unrepaired DNA damage to premature aging and early atherosclerotic plaque development.
DNA damage and the epigenetic clock
Endogenous, naturally occurring DNA damages are frequent, and in humans include an average of about 10,000 oxidative damages per day and 50 double-strand DNA breaks per cell cycle.
Several reviews summarize evidence that the methylation enzyme DNMT1 is recruited to sites of oxidative DNA damage. Recruitment of DNMT1 leads to DNA methylation at the promoters of genes to inhibit transcription during repair. In addition, the 2018 review describes recruitment of DNMT1 during repair of DNA double-strand breaks. DNMT1 localization results in increased DNA methylation near the site of recombinational repair, associated with altered expression of the repaired gene. In general, repair-associated hyper-methylated promoters are restored to their former methylation level after DNA repair is complete. However, these reviews also indicate that transient recruitment of epigenetic modifiers can occasionally result in subsequent stable epigenetic alterations and gene silencing after DNA repair has been completed.
In human and mouse DNA, cytosine followed by guanine (CpG) is the least frequent dinucleotide, making up less than 1% of all dinucleotides (see CG suppression). At most CpG sites cytosine is methylated to form 5-methylcytosine. As indicated in the article CpG site, in mammals, 70% to 80% of CpG cytosines are methylated. However, in vertebrates there are CpG islands, about 300 to 3,000 base pairs long, with interspersed DNA sequences that deviate significantly from the average genomic pattern by being CpG-rich. These CpG islands are predominantly nonmethylated. In humans, about 70% of promoters located near the transcription start site of a gene (proximal promoters) contain a CpG island (see CpG islands in promoters). If the initially nonmethylated CpG sites in a CpG island become largely methylated, this causes stable silencing of the associated gene.
For humans, after adulthood is reached and during subsequent aging, the majority of CpG sequences slowly lose methylation (called epigenetic drift). However, the CpG islands that control promoters tend to gain methylation with age. The gain of methylation at CpG islands in promoter regions is correlated with age, and has been used to create an epigenetic clock (see article Epigenetic clock).
There may be some relationship between the epigenetic clock and epigenetic alterations accumulating after DNA repair. Both unrepaired DNA damage accumulated with age and accumulated methylation of CpG islands would silence genes in which they occur, interfere with protein expression, and contribute to the aging phenotype.
See also
References
DNA
Programmed cell death
Proximate theories of biological ageing
Senescence
Theories of biological ageing
Theories of ageing | DNA damage theory of aging | Chemistry,Biology | 6,089 |
25,451,300 | https://en.wikipedia.org/wiki/Remorse%20%28House%29 | "Remorse" is the 12th episode of the sixth season of House. It aired on Fox on January 25, 2010.
Plot
An attractive 27-year-old business consultant experiences intermittent episodes of excruciating ear pain. House is intrigued by the fact that Valerie (Beau Garrett) is very attractive while her husband is not.
While treating her ear pain, caused by supraventricular tachycardia, the men on the team are charmed by Valerie's beauty and personality. Only Thirteen looks beyond her superficial traits. During the half hour fMRI test, Valerie's brain bypasses the emotional centers and she uses the language sections of the brain to answer the questions, indicating that she knows what emotions are but does not feel them herself. Thus, Thirteen discovers that the patient is a psychopath. Valerie admits she drugged her co-worker to get him fired. She had been sleeping with him every Thursday; in exchange she got credit for his best work. She claims she is the same as other people except she openly admits it. More complications appear.
Throughout the differentials, Thirteen and Foreman continuously argue. House tells them to "have sex, fight, or quit," as he is tired of their bickering.
Following up on clues from a conversation with Thirteen, Valerie's husband finds out his wife was not attending weekly evening classes as claimed. Valerie counters with a sexual harassment complaint to the medical licensing board. Thirteen returns to verbally assault Valerie, but Foreman enters and removes her from the room and convinces her to ignore it. He also apologizes for firing her, and says he hopes they can work together again. Then he is paged because Valerie begins to spit up blood. Taub thinks it is primary hepatic fibrosis, so the team starts her on steroids.
Thirteen runs into Valerie's sister. Her sister recounts that Valerie protected her from their abusive father and mentions Valerie started displaying antisocial behavior at puberty. Thirteen sees a connection to Wilson's disease. After removing Valerie's nail polish to reveal blue fingernail beds (confirming copper accumulation), Valerie is started on chelation therapy.
Valerie insults her husband, saying that he is pathetic for loving her even after learning what she is. The husband, distraught, leaves. Thirteen reasons that Valerie would not have done that if she was still a psychopath. The treatment has cured her psychopathy, and she begins to feel emotions again. When Thirteen asks her what she feels, she replies that she does not know, but it hurts, indicating she is experiencing guilt for having manipulated her husband (and pushing him away as a result) but is not familiar with the sensation. Thirteen replies that "it will".
House also uncharacteristically reaches out to a former medical school colleague named Wibberly (Ray Abruzzo) whom he wronged. House had exchanged a medical school assignment with one Wibberly wrote, to test a hypothesis that a professor was going to give him a bad grade regardless of the quality of his work. Wibberly originally claimed he got a failing grade, triggering a series of events and forcing him out of medical school. Reportedly now working in a low-paid job, he was selling his home to pay bills. When House tried to give him a check to help with expenses, Wibberly admitted he got an A+ for the paper and had a successful medical practice but lost his money to a gambling habit. House drops the check in Wibberly's mailbox at the end of the episode anyway. Wilson points out that House has chosen to apologize to Wibberly because he has not seen him for years, which is much easier than saying sorry to Wilson or Cuddy.
At the end, Thirteen helps Foreman decipher and transcribe Taub's handwritten notes. While Foreman types, Thirteen looks at him with a softer expression. House heads for Cuddy's office and stops in front of the door when he sees Lucas and Cuddy happily looking at something on their laptop. House turns around and leaves.
Music
"Why Try to Change Me Now", written by Cy Coleman, sung by Fiona Apple
Reception
Zack Handlen of The A.V. Club graded the episode a B−, writing, "Valerie's mental illness just seems a little too controlled, a little too cool, somehow. Like Hopkins' Lecter, she's what we secretly wish crazy people were like—Machiavellian monsters who always make it to the final reel."
References
External links
House season 6 episodes
2010 American television episodes
Copper in health
Television episodes directed by Andrew Bernstein (director)
fr:Absence de conscience
it:Episodi di Dr. House - Medical Division (sesta stagione)#Rimorso | Remorse (House) | Chemistry | 970 |
66,071,267 | https://en.wikipedia.org/wiki/Virtual%20DOM | A virtual DOM is a lightweight JavaScript representation of the Document Object Model (DOM) used in declarative web frameworks such as React, Vue.js, and Elm. Since generating a virtual DOM is relatively fast, any given framework is free to rerender the virtual DOM as many times as needed relatively cheaply. The framework can then find the differences between the previous virtual DOM and the current one (diffing), and only makes the necessary changes to the actual DOM (reconciliation). While technically slower than using just vanilla JavaScript, the pattern makes it much easier to write websites with a lot of dynamic content, since markup is directly coupled with state.
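As an illustration of the idea (not the implementation of any particular framework), a virtual DOM node can be modelled as a plain object, and reconciliation as a function that compares two such objects before touching the real DOM. The following TypeScript sketch is hypothetical; the names h, render and patch are not part of any library's API.

type VNode = { tag: string; props: Record<string, string>; children: (VNode | string)[] };

// Build a lightweight description of the desired DOM without touching the page.
function h(tag: string, props: Record<string, string> = {}, ...children: (VNode | string)[]): VNode {
  return { tag, props, children };
}

// Turn a virtual node into real DOM nodes.
function render(node: VNode | string): Node {
  if (typeof node === 'string') return document.createTextNode(node);
  const el = document.createElement(node.tag);
  for (const [key, value] of Object.entries(node.props)) el.setAttribute(key, value);
  node.children.forEach(child => el.appendChild(render(child)));
  return el;
}

// Naive reconciliation: rebuild the subtree only when the virtual nodes differ.
function patch(parent: HTMLElement, previous: VNode | null, next: VNode): void {
  if (previous === null || previous.tag !== next.tag) {
    parent.replaceChildren(render(next));
    return;
  }
  // A real framework would diff props here and recurse into the children,
  // applying only the minimal set of changes to the existing DOM nodes.
}

Real implementations add keys, event handling, batching and many heuristics on top of this basic diff-and-patch cycle.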
Similar techniques include Ember.js' Glimmer and Angular's incremental DOM.
History
The JavaScript DOM API has historically been inconsistent across browsers, clunky to use, and difficult to scale for large projects. While libraries like jQuery aimed to improve the overall consistency and ergonomics of interacting with HTML, they too were prone to repetitive code that did not clearly describe the nature of the changes being made and that kept logic decoupled from markup.
The release of AngularJS in 2010 provided a major paradigm shift in the interaction between JavaScript and HTML with the idea of dirty checking. Instead of imperatively declaring and destroying event listeners and modifying individual DOM nodes, changes in variables were tracked and sections of the DOM were invalidated and rerendered when a variable in their scope changed. This digest cycle provided a framework to write more declarative code that coupled logic and markup in a more logical way.
While AngularJS aimed to provide a more declarative experience, it still required data to be explicitly bound to and watched by the DOM, and performance concerns were cited over the expensive process of dirty checking hundreds of variables. To alleviate these issues, React was the first major library to adopt a virtual DOM in 2013, which removed both the performance bottlenecks (since diffing and reconciling the DOM was relatively cheap) and the difficulty of binding data (since components were effectively just objects). Other benefits of a virtual DOM included improved security since XSS was effectively impossible and better extensibility since a component's state was entirely encapsulated. Its release also came with the advent of JSX, which further coupled HTML and JavaScript with an XML-like syntax extension.
Following React's success, many other web frameworks copied the general idea of an ideal DOM representation in memory, such as Vue.js in 2014, which used a template compiler instead of JSX and had fine-grained reactivity built as part of the framework.
In recent times, the virtual DOM has been criticized for being slow due to the additional time required for diffing and reconciling DOM nodes. This has led to the development of frameworks without a virtual DOM, such as Svelte, and frameworks that edit the DOM in-place such as Angular 2.
Implementations
React
React pioneered the use of a virtual DOM to make components declaratively. Virtual DOM nodes are constructed using the createElement() function, but are often transpiled from JSX to make writing components more ergonomic. In class-based React, virtual DOM nodes are returned from the render() function, while in functional hook-based components, the return value of the function itself serves as the page markup.
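A minimal sketch of the two equivalent forms, written in TypeScript with JSX (the component name Greeting is purely illustrative):

import React from 'react';

// A function component: each call returns a tree of virtual DOM nodes.
function Greeting({ name }: { name: string }) {
  // JSX form; a transpiler turns this into the createElement call shown below.
  return <h1 className="greeting">Hello, {name}</h1>;
}

// The equivalent element created without JSX, using createElement directly.
const element = React.createElement(Greeting, { name: 'world' });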
Vue.js
Vue.js uses a virtual DOM to handle state changes, but is usually not directly interacted with; instead, a compiler is used to transform HTML templates into virtual DOM nodes as an implementation detail. While Vue supports writing JSX and custom render functions, it's more typical to use the template compiler since a build step isn't required that way.
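For comparison, the render-function form of a Vue 3 component can be written directly in TypeScript with the h() helper; the template compiler produces equivalent calls from HTML templates. This is a minimal sketch rather than idiomatic single-file-component usage:

import { defineComponent, h, ref } from 'vue';

export default defineComponent({
  setup() {
    const count = ref(0);
    // Each invocation returns fresh virtual DOM nodes, which Vue diffs
    // against the previous tree before updating the real DOM.
    return () =>
      h('button', { onClick: () => count.value++ }, `Count: ${count.value}`);
  },
});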
Svelte
Svelte does not have a virtual DOM, with its creator Rich Harris calling the virtual DOM "pure overhead". Instead of diffing and reconciling DOM nodes at runtime, Svelte uses compile-time reactivity to analyze markup and generate JavaScript code that directly edits the HTML, drastically increasing performance.
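The kind of code a compiler-based approach emits can be illustrated, very loosely (this is not Svelte's actual output), by hand-written TypeScript that updates only the DOM node affected by a state change, with no diffing step:

let count = 0;

const button = document.createElement('button');
const label = document.createTextNode(String(count));
button.appendChild(label);
document.body.appendChild(button);

button.addEventListener('click', () => {
  count += 1;
  // Only the text node that depends on count is touched.
  label.data = String(count);
});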
See also
Shadow DOM
References
Web development
Object models | Virtual DOM | Engineering | 874 |
3,394,642 | https://en.wikipedia.org/wiki/Dioptra | A dioptra (sometimes also named dioptre or diopter, from ) is a classical astronomical and surveying instrument, dating from the 3rd century BC. The dioptra was a sighting tube or, alternatively, a rod with a sight at both ends, attached to a stand. If fitted with protractors, it could be used to measure angles.
Use
Greek astronomers used the dioptra to measure the positions of stars; both Euclid and Geminus refer to the dioptra in their astronomical works.
It continued in use as an effective surveying tool. Adapted to surveying, the dioptra is similar to the theodolite, or surveyor's transit, which dates to the sixteenth century. It is a more accurate version of the groma.
There is some speculation that it may have been used to build the Eupalinian aqueduct. Called "one of the greatest engineering achievements of ancient times," it is a tunnel long, excavated through a mountain on the Greek island of Samos during the reign of Polycrates in the sixth century BC. Scholars disagree, however, whether the dioptra was available that early.
An entire book about the construction and surveying usage of the dioptra is credited to Hero of Alexandria (also known as Heron; a brief description of the book is available online; see Lahanas link, below). Hero was "one of history’s most ingenious engineers and applied mathematicians."
The dioptra was used extensively on aqueduct building projects. Screw turns on several different parts of the instrument made it easy to calibrate for very precise measurements.
The dioptra was replaced as a surveying instrument by the theodolite.
See also
Alidade
References
Further reading
Isaac Moreno Gallo (2006) The Dioptra Tesis and reconstruction of the Dioptra.
Michael Jonathan Taunton Lewis (2001), Surveying Instruments of Greece and Rome, Cambridge University Press,
Lucio Russo (2004), The Forgotten Revolution: How Science Was Born in 300 BC and Why It Had To Be Reborn, Berlin: Springer. .
Evans, J., (1998) The History and Practice of Ancient Astronomy, pages 34–35. Oxford University Press.
External links
Michael Lahanas, Heron of Alexandria, Inventions, Biography, Science
Nathan Sidoli (2005), Heron's Dioptra 35 and Analemma Methods: An Astronomical Determination of the Distance between Two Cities, Centaurus, 47(3), 236-258
Bamber Gascoigne, History of Measurement, historyworld.net
Tom M. Apostol (2004), The Tunnel of Samos, Engineering and Science, 64(4), 30-40
Ancient Greek astronomy
Astrometry
Astronomical instruments
Historical scientific instruments
Angle measuring instruments
Surveying instruments | Dioptra | Astronomy | 564 |
23,843,040 | https://en.wikipedia.org/wiki/Gymnopilus%20hypholomoides | Gymnopilus hypholomoides is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
External links
Gymnopilus hypholomoides at Index Fungorum
hypholomoides
Taxa named by William Alphonso Murrill
Fungus species | Gymnopilus hypholomoides | Biology | 68 |
6,314 | https://en.wikipedia.org/wiki/Fire%20%28classical%20element%29 | Fire is one of the four classical elements along with earth, water and air in ancient Greek philosophy and science. Fire is considered to be both hot and dry and, according to Plato, is associated with the tetrahedron.
Greek and Roman tradition
Fire is one of the four classical elements in ancient Greek philosophy and science. It was commonly associated with the qualities of energy, assertiveness, and passion. In one Greek myth, Prometheus stole fire from the gods to protect the otherwise helpless humans, but was punished for this charity.
Fire was one of many archai proposed by the pre-Socratics, most of whom sought to reduce the cosmos, or its creation, to a single substance. Heraclitus considered fire to be the most fundamental of all elements. He believed fire gave rise to the other three elements: "All things are an interchange for fire, and fire for all things, just like goods for gold and gold for goods." He had a reputation for obscure philosophical principles and for speaking in riddles. He described how fire gave rise to the other elements as the: "upward-downward path", (), a "hidden harmony" or series of transformations he called the "turnings of fire", (), first into sea, and half that sea into earth, and half that earth into rarefied air. This is a concept that anticipates both the four classical elements of Empedocles and Aristotle's transmutation of the four elements into one another.
This world, which is the same for all, no one of gods or men has made. But it always was and will be: an ever-living fire, with measures of it kindling, and measures going out.
Heraclitus regarded the soul as being a mixture of fire and water, with fire being the more noble part and water the ignoble aspect. He believed the goal of the soul is to be rid of water and become pure fire: the dry soul is the best and it is worldly pleasures that make the soul "moist". He was known as the "weeping philosopher" and died of hydropsy, a swelling due to abnormal accumulation of fluid beneath the skin.
However, Empedocles of Akragas is best known for having selected all four elements as his archai, and by the time of Plato the four Empedoclean elements were well established. In the Timaeus, Plato's major cosmological dialogue, the Platonic solid he associated with fire was the tetrahedron, which is formed from four triangles and contains the least volume with the greatest surface area. This also makes fire the element with the smallest number of sides, and Plato regarded it as appropriate for the heat of fire, which he felt is sharp and stabbing (like one of the points of a tetrahedron).
Plato's student Aristotle did not maintain his former teacher's geometric view of the elements, but rather preferred a somewhat more naturalistic explanation for the elements based on their traditional qualities. Fire the hot and dry element, like the other elements, was an abstract principle and not identical with the normal solids, liquids and combustion phenomena we experience:
What we commonly call fire. It is not really fire, for fire is an excess of heat and a sort of ebullition; but in reality, of what we call air, the part surrounding the earth is moist and warm, because it contains both vapour and a dry exhalation from the earth.
According to Aristotle, the four elements rise or fall toward their natural place in concentric layers surrounding the center of the Earth and form the terrestrial or sublunary spheres.
In ancient Greek medicine, each of the four humours became associated with an element. Yellow bile was the humor identified with fire, since both were hot and dry. Other things associated with fire and yellow bile in ancient and medieval medicine included the season of summer, since it increased the qualities of heat and aridity; the choleric temperament (of a person dominated by the yellow bile humour); the masculine; and the eastern point of the compass.
In alchemy, the chemical element sulfur was often associated with fire; the alchemical symbol for fire was an upward-pointing triangle. In alchemic tradition, metals are incubated by fire in the womb of the Earth, and alchemists only accelerate their development.
Indian tradition
Agni is a Hindu and Vedic deity. The word agni is Sanskrit for fire (noun), cognate with Latin ignis (the root of English ignite), Russian огонь (fire), pronounced agon. Agni has three forms: fire, lightning and the sun.
Agni is one of the most important of the Vedic gods. He is the god of fire and the accepter of sacrifices. The sacrifices made to Agni go to the deities because Agni is a messenger from and to the other gods. He is ever-young, because the fire is re-lit every day, yet he is also immortal. In Indian tradition fire is also linked to Surya or the Sun and Mangala or Mars, and with the south-east direction.
Teukāya ekendriya is a name used in Jain tradition which refers to Jīvas said to be reincarnated as fire.
Ceremonial magic
Fire and the other Greek classical elements were incorporated into the Golden Dawn system. Philosophus (4=7) is the elemental grade attributed to fire; this grade is also attributed to the Qabalistic Sephirah Netzach and the planet Venus. The elemental weapon of fire is the Wand. Each of the elements has several associated spiritual beings. The archangel of fire is Michael, the angel is Aral, the ruler is Seraph, the king is Djin, and the fire elementals (following Paracelsus) are called salamanders. Fire is considered to be active; it is represented by the symbol for Leo and it is referred to the lower right point of the pentacle in the Supreme Invoking Ritual of the Pentacle. Many of these associations have since spread throughout the occult community.
Tarot
Fire in tarot symbolizes conversion or passion. Many references to fire in tarot are related to the usage of fire in the practice of alchemy, in which the application of fire is a prime method of conversion, and everything that touches fire is changed, often beyond recognition. The symbol of fire was a cue pointing towards transformation, the chemical variant being the symbol delta, which is also the classical symbol for fire. Conversion symbolized can be good, for example, refining raw crudities to gold, as seen in The Devil. Conversion can also be bad, as in The Tower, symbolizing a downfall due to anger. Fire is associated with the suit of rods/wands, and as such, represents passion from inspiration. As an element, fire has mixed symbolism because it represents energy, which can be helpful when controlled, but volatile if left unchecked.
Modern witchcraft
Fire is one of the five elements that appear in most Wiccan traditions influenced by the Golden Dawn system of magic, and Aleister Crowley's mysticism, which was in turn inspired by the Golden Dawn.
Freemasonry
In freemasonry, fire is present, for example, during the ceremony of winter solstice, a symbol also of renaissance and energy. Freemasonry takes the ancient symbolic meaning of fire and recognizes its double nature: creation, light, on the one hand, and destruction and purification, on the other.
See also
Fire
Fire god
Fire worship
Pyrokinesis
Pyromancy
Pyromania
References
Further reading
Frazer, Sir James George, Myths of the Origin of Fire, London: Macmillan, 1930.
External links
Different versions of the classical elements
Overview the 5 elements
Section on 4 elements in Buddhism
a virtual exhibition about the history of fire
Classical elements
Esoteric cosmology
Fire in culture
Technical factors of astrology
History of astrology
Concepts in ancient Greek metaphysics | Fire (classical element) | Astronomy | 1,644 |
20,982,792 | https://en.wikipedia.org/wiki/Marionnet | Marionnet is a virtual network laboratory: it allows users to define, configure and run complex computer networks without any need for physical setup. Only a single, possibly even non-networked Linux host machine is required to simulate a whole Ethernet network complete with computers, routers, hubs, switches, cables, and more.
Support is also provided for integrating the virtual network with the physical host network.
History
Marionnet was born in April 2005 as a simple textual interface to Netkit, written in OCaml by Jean-Vincent Loddo at the Paris 13 University for his own networking course.
The code has since been completely rewritten and redesigned in September 2005, in order to remove the dependency from Netkit and to ease the construction of a graphical interface, partly built on DOT.
The architecture was further extended and the system made more general when Luca Saiu joined the project in 2007, contributing in particular to the dynamic reconfiguration aspects.
The system is now in active use in several universities in Metropolitan France and other countries.
Current development
Marionnet has reached a fairly stable state and is being successfully used for teaching networks in several universities around the world. The current development is centered around making the system easier to use for the average end user, with a particular emphasis on documentation.
Internationalization is on the way (mostly in the branch) through GNU Gettext.
So far Marionnet has been presented at two international Computer Science conferences, many French events and at FOSDEM.
Design
On top of a Linux host, the emulation of guest machines is achieved through User-mode Linux technology, which allows many Linux kernels to be run in user space as regular processes.
The VDE (Virtual Distributed Ethernet) project is responsible for linking UML machines together in a virtual network; its purpose is to emulate cable, hub and switch devices, and it also allows perturbations to be introduced into the communication.
On top of this raw emulated network Marionnet acts as a coherent manager and as a GUI.
Marionnet is an example of a complex concurrent application written in a functional language, using relatively advanced programming techniques.
Features
Dynamic reconfiguration of the network.
Full binary compatibility with user-level Linux software which runs on virtual machines.
Ability to use the host X server to run graphical applications (Wireshark, ...).
Copy-on-write file systems, economizing on disk space usage.
"Gateway" device to connect virtual network to host network.
Intuitive GUI with the network diagram dynamically updated.
Performance
Marionnet has shown good performance with complex networks (~15 machines) even on relatively old machines, remaining very responsive.
The main concern is disk usage but that largely depends on the distribution of choice; pinocchio is the custom distro that was developed to meet average needs.
Uses
The main goal of Marionnet is the teaching of computer networks in university laboratories, although it could also be a valuable tool for high schools.
Despite being teaching-oriented Marionnet may be used to emulate networks for test or development purposes. It is quite easy to set up, fast even with complicated configurations and the possibility of reverting filesystem changes on virtual machines makes it quite flexible.
See also
User-mode Linux
OCaml
Gtk
DOT language
Network simulation
References
External links
Marionnet wiki, 2009-01-15
Linux
Computing platforms
Virtualization software
Free virtualization software
OCaml software
Free software programmed in OCaml
Virtualization software for Linux | Marionnet | Technology | 705 |
63,772,189 | https://en.wikipedia.org/wiki/Sobelivirales | Sobelivirales is an order of RNA viruses which infect eukaryotes. Member viruses have a positive-sense single-stranded RNA genome. The name of the group is a portmanteau of member orders "sobemovirus-like" and -virales which is the suffix for a virus order.
Taxonomy
The following families are recognized:
Alvernaviridae
Barnaviridae
Solemoviridae
References
Viruses | Sobelivirales | Biology | 90 |
32,998,741 | https://en.wikipedia.org/wiki/Leonid%20Elenin | Leonid Vladimirovich Elenin (; born 10 August 1981) is a Russian amateur astronomer working with the ISON-NM observatory (H15) via the International Scientific Optical Network (ISON), which is the first Russian remote observatory in the West.
Leonid Elenin works for the Keldysh Institute of Applied Mathematics and lives in Lyubertsy, Moscow region, Russia.
Leonid Elenin is best known for discovering the comet C/2010 X1 on 10 December 2010. Elenin then discovered comet P/2011 NO1 on 7 July 2011. , Elenin had discovered five comets.
The first asteroid discovered by Leonid Elenin was 2008 XE on 1 December 2008 at Tzec (H10). The first Amor asteroid (near-Earth object) discovered by Elenin was on 10 September 2010 at ISON-NM (H15).
Elenin has also discovered the trailing L5 Jupiter trojan on 23 August 2011, the Mars-crossing asteroid on 25 August 2011, and the amor asteroid (Near-Earth object) on 27 August 2011. The first numbered asteroid discovered by Elenin at ISON-NM is 365756 ISON ().
On 29 January 2013, the Minor Planet Center awarded Leonid Elenin a 2012 Edgar Wilson Award for the discovery of comets by amateurs.
List of discovered minor planets
Source:
References
1981 births
21st-century Russian astronomers
Amateur astronomers
Discoverers of minor planets
Living people
Moscow Aviation Institute alumni | Leonid Elenin | Astronomy | 304 |
74,158,460 | https://en.wikipedia.org/wiki/Short%20circuit%20ratio%20%28synchronous%20generator%29 | In a synchronous generator, the short circuit ratio is the ratio of the field current required to produce rated armature voltage at open circuit to the field current required to produce rated armature current at short circuit. This ratio can also be expressed as the inverse of the saturated direct-axis synchronous reactance (in p.u.):
SCR = If,oc / If,sc = 1 / xd(sat)
Effects of SCR values
A higher SCR requires a lower reactance, which in practice means a larger air gap.
Both high and low levels of SCR have their benefits:
low SCR:
in case of a short circuit, the current is proportional to SCR, therefore generators with low SCR require less protection and thus are cheaper;
low SCR allows a shorter air gap and a lower excitation field, both decreasing the size (and cost) of the generator;
with low SCR the amounts of iron and copper are reduced, lowering the cost;
high SCR:
generator with high SCR provides more power when overloaded, improving the system stability in case of a contingency;
high SCR inherently provides lower voltage variations in case of an oscillatory load, thus contributing to system stability;
high-SCR generator has more synchronizing power, making it easier to operate generators in parallel.
Therefore, in practice the design of a generator is seeking an SCR that balances benefits and drawbacks for a particular application.
SCR is a measure of the electrical stiffness (also known as the synchronizing coefficient) of the machine: the synchronizing coefficient is proportional to the SCR. A stiff machine has a higher SCR, is more loosely coupled to the network and thus is slower in following. A less stiff machine with a lower SCR (a typical situation for modern generators) will follow the grid faster. Stiffness is the ratio of the change in power output to the change in power angle. For example, if the system frequency decreases, the stiffer generator provides more power, thus contributing to system stability.
Effects of construction
The larger the SCR, the smaller the alternator's reactance (Xd) and inductance Ld. This is the result of larger air gaps in generator design (as in hydro generators or salient-pole machines). It results in a machine loosely coupled to the grid, whose response will be slow. This increases the machine's stability while operating on the grid, but simultaneously increases the short-circuit current delivery capability of the machine (higher short-circuit current) and, consequently, the machine's size and cost. Typical values of SCR for hydro alternators may be in the range of 1 to 1.5.
Conversely, the smaller the SCR, the larger the alternator's reactance (Xd) and inductance Ld. This results from small air gaps in machine design (as in turbo generators or cylindrical-rotor machines). Such machines are tightly coupled to the grid, and their response will be fast. This reduces the machine's stability while operating on the grid and reduces the short-circuit current delivery capability (lower short-circuit current), with a smaller machine size and lower cost as a consequence. Typical values of SCR for turbo alternators may be in the range of 0.45 to 0.9.
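As a small illustrative calculation (the helper name is arbitrary, not from any standard), the reciprocal relation and the typical ranges quoted above can be checked in TypeScript:

// SCR as the reciprocal of the saturated direct-axis synchronous reactance (p.u.).
function shortCircuitRatio(xdSaturatedPu: number): number {
  return 1 / xdSaturatedPu;
}

const hydro = shortCircuitRatio(0.8); // ≈ 1.25, within the 1–1.5 range typical of hydro alternators
const turbo = shortCircuitRatio(1.8); // ≈ 0.56, within the 0.45–0.9 range typical of turbo alternators
console.log(hydro.toFixed(2), turbo.toFixed(2));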
References
Sources
Electromechanical engineering | Short circuit ratio (synchronous generator) | Engineering | 670 |
741,731 | https://en.wikipedia.org/wiki/Engine%20room | On a ship, the engine room (ER) is the compartment where the machinery for marine propulsion is located. The engine room is generally the largest physical compartment of the machinery space. It houses the vessel's prime mover, usually some variations of a heat engine (steam engine, diesel engine, gas or steam turbine). On some ships, there may be more than one engine room, such as forward and aft, or port or starboard engine rooms, or may be simply numbered. To increase a vessel's safety and chances of surviving damage, the machinery necessary for the ship's operation may be segregated into various spaces.
The engine room is usually located near the bottom, at the rear or aft end of the vessel, and comprises few compartments. This design maximizes the cargo carrying capacity of the vessel and situates the prime mover close to the propeller, minimizing equipment cost and problems posed from long shaft lines. On some ships, the engine room may be situated mid-ship, such as on vessels built from 1900 to the 1960s, or forward and even high, such as on diesel-electric vessels.
Equipment
Engines
The engine room of a motor vessel typically contains several engines for different purposes. Main, or propulsion, engines are used to turn the ship's propeller and move the ship through the water. They typically burn diesel oil or heavy fuel oil, and may be able to switch between the two. There are many propulsion arrangements for motor vessels, some including multiple engines, propellers, and gearboxes.
Smaller, but still large engines drive electrical generators that provide power for the ship's electrical systems. Large ships typically have three or more synchronized generators to ensure smooth operation. The combined output of a ship's generators is well above the actual power requirement to accommodate maintenance or the loss of one generator.
On a steamship, power for both electricity and propulsion is provided by one or more large boilers giving rise to the alternate name boiler room. High pressure steam from the boiler is used to drive reciprocating engines or turbines for propulsion, and also turbo generators for electricity. Besides propulsion and auxiliary engines, a typical engine room contains many smaller engines, including generators, air compressors, feed pumps, and fuel pumps. Today, these machines are usually powered by small diesel engines or electric motors, but may also use low-pressure steam.
Engine cooling
The engine(s) get the required cooling from liquid-to-liquid heat exchangers connected to fresh seawater, or divertible to recirculate through tanks of seawater in the engine room. Both supplies draw heat from the engines via the coolant and oil lines. The heat exchangers are plumbed in so that oil lines are identified by a yellow mark on the pipe flanges and rely on paper-type gaskets to seal the mating faces of the pipes. Sea water, or brine, is identified by a green mark on the flanges, and internal coolant by blue marks on the flanges.
Thrusters
In addition to this array of equipment is the ship's thruster system (on modern vessels fitted with this equipment), typically operated by electric motors controlled from the bridge. These thrusters are laterally mounted propellers that can suck or blow water from port to starboard (i.e. left to right) or vice versa. They are normally used only in maneuvering, e.g. docking operations, and are often banned in tight confines, e.g. drydocks.
Thrusters, like main propellers, are reversible by hydraulic operation. Small embedded hydraulic motors rotate the blades up to 180 degrees to reverse the direction of the thrust. A variant on this is the azipod, which are propellers mounted in a swiveling pod that can rotate to direct thrust in any direction, making fine steering easier, and allowing a ship to move sideways up to a dock, when used in conjunction with a bow thruster.
Engine Control Room
Modern merchant vessels have a special space inside the engine room called the Engine Control Room (ECR). This is the place from which all machinery can be remotely observed and controlled. Most of the electrical breakers, or at least the main ones, are also located there. The ECR is connected with the bridge through the compulsory engine-room telegraph, which provides a visual indication of orders and responses. Other means of communication are phone and emergency phone lines, as well as LAN cables or fiber-optic cables, depending on distance.
Human presence is not required round the clock in the ER due to the high level of automation and computerization. Unattended machinery spaces are common practice nowadays.
Safety
Fire precautions
Engine rooms are noisy, hot, usually dirty, and potentially dangerous. The presence of flammable fuel, high voltage (HV) electrical equipment and internal combustion engines (ICE) means that a serious fire hazard exists in the engine room, which is monitored continuously by the ship's engine department and various monitoring systems.
Ventilation
If equipped with internal combustion or turbine engines, engine rooms employ some means of providing air for the operation of the engines and associated ventilation. If individuals are normally present in these rooms, additional ventilation should be available to keep engine room temperatures to acceptable limits. If personnel are not normally in the engine space, as in many pleasure boats, the ventilation need only be sufficient to supply the engines with intake air. This would require an unrestricted hull opening of the same size as the intake area of the engine itself, assuming the hull opening is in the engine room itself. Commonly, screens are placed over such openings and if this is done, airflow is reduced by approximately 50%, so the opening area is increased appropriately. The requirement for general ventilation and the requirement for sufficient combustion air are quite different. A typical arrangement might be to make the opening large enough to provide intake air plus 1000 Cubic Feet per Minute (CFM) for additional ventilation. Engines pull sufficient air into the engine room for their own operation. However, additional airflow for ventilation usually requires intake and exhaust blowers.
History
Engine rooms were separated from its associated fire room on fighting ships from the 1880s through the 1960s. If either experienced damage putting it out of action, the associated engine room could get steam from another fire room.
See also
Engine department
Marine propulsion
Marine fuel management
Mechanical room
Electrical room
Fire room
References
External links
Marine Engineers Network
Videos of engine rooms
Ship compartments
Marine propulsion | Engine room | Engineering | 1,284 |
4,496,516 | https://en.wikipedia.org/wiki/Blumlein%20pair | Blumlein pair is a stereo recording technique invented by Alan Blumlein for the creation of recordings that, upon replaying through headphones or loudspeakers, recreate the spatial characteristics of the recorded signal.
The pair consists of an array of two matched microphones that have a bi-directional ("figure-eight") polar pattern, positioned 90° from each other. Ideally, the transducers should occupy the same physical space; since this cannot be achieved, the microphone capsules are placed as close to each other as physically possible, generally with one centered directly above the other. The array is oriented so that the line bisecting the angle between the two microphones points towards the sound source to be recorded (see diagram). The pickup patterns of the pair, combined with their positioning, delivers a high degree of stereo separation in the source signal, as well as the room ambiance.
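How the crossed figure-eight patterns turn direction into a level difference can be sketched with the idealized cosine polar response (a figure-eight capsule's gain is the cosine of the angle between the source and the capsule's axis). The following TypeScript is illustrative only; real microphones deviate from the ideal pattern:

const rad = (degrees: number) => (degrees * Math.PI) / 180;

// Source angle measured from the array's centre line, positive to the right.
// The capsules are aimed 45° to the left (−45°) and 45° to the right (+45°).
function blumleinGains(sourceAngleDeg: number): { left: number; right: number } {
  return {
    left: Math.cos(rad(sourceAngleDeg + 45)),
    right: Math.cos(rad(sourceAngleDeg - 45)),
  };
}

console.log(blumleinGains(0));  // centre source: equal level in both channels
console.log(blumleinGains(45)); // source on the right capsule's axis: full level right, none left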
The Blumlein pair produces an exceptionally realistic stereo image, but the quality of recordings is highly dependent on the acoustics of the room and the size of the sound source.
Both ribbon and condenser microphones can be used for Blumlein-pair recording. A few types of stereo ribbon microphones (B & O, Royer, AEA) have even been purpose-built for just this type of recording. Several types of stereo condenser microphones (Neumann, AKG, Schoeps, Nevaton BPT) have also offered a Blumlein arrangement as one of their possible configurations. Individual microphones with variable patterns (such as those from Pearl/Milab) can be used. The Soundfield microphone used to make Ambisonic recordings can be adjusted to mimic two microphones of any pattern at any angle to each other, including a Blumlein pair.
In his early experiments at EMI with what he called "binaural" sound, Blumlein did not use this actual technique because he did not have access to figure-eight microphones. This meant that he had to develop ways of using omnidirectional microphones to record what we now know as stereo sound. In the claims he made in his U.K. patent application in 1931, as well as details of these techniques, he included the theoretical possibility of using directional microphones in what later became known as a Blumlein pair. During the period when Blumlein's patent (British Patent 394325) was being written, Harry F. Olson published a patent for the first practical ribbon microphone, and much of the later experimental work was carried out with this type of microphone
It is unclear when this approach became known as the Blumlein pair, although it does not appear to have been known by that name during his lifetime.
See also
ORTF stereo technique
References
External links
Visualization XY Stereo System – Blumlein Pair Eight/Eight 90° – Intensity Stereo
Microphone practices
Stereophonic techniques | Blumlein pair | Engineering | 597 |
27,165,009 | https://en.wikipedia.org/wiki/Istanbul%20Technical%20University%20Faculty%20of%20Mines | ITU Faculty of Mines (), located in Maslak campus, is one of the faculties in Istanbul Technical University, which has five departments. Among the notable faculty of the ITU Faculty of Mines are Galip Sağıroğlu, İhsan Ketin, Aykut Barka, Celal Şengör and Kazım Ergin.
The faculty's departments are:
Mining engineering
Geological engineering
Petroleum and natural gas engineering
Geophysical engineering
Mineral processing engineering
History
The Faculty of Mines was established March 1, 1953 in Istanbul. In its first years, the faculty was composed of mainly Turkish and German professors, and its program was similar to those days' famous mining schools such as RWTH Aachen University, Clausthal University of Technology then in West Germany and Freiberg University of Mining and Technology in East Germany. The Faculty of Mines accepted its first students in 1953 with a faculty of eleven scientists.
References
External links
ITU School of Mines
School of Mines anthem
Past deans
ITU Department of Mining engineering
ITU Department of Geological engineering
ITU Department of Petroleum and Natural Gas Engineering
Geophysical Engineering
Mineral Processing Engineering
Istanbul Technical University
Universities and colleges established in 1953
Schools of mines
1953 establishments in Turkey | Istanbul Technical University Faculty of Mines | Engineering | 245 |
4,501,291 | https://en.wikipedia.org/wiki/Sodium%20metasilicate | Sodium metasilicate is the chemical substance with formula Na2SiO3, which is the main component of commercial sodium silicate solutions. It is an ionic compound consisting of sodium cations Na+ and the polymeric metasilicate anions (SiO32−)n. It is a colorless, crystalline, hygroscopic and deliquescent solid, soluble in water (giving an alkaline solution) but not in alcohols.
Preparation and properties
The anhydrous compound can be prepared by fusing silicon dioxide (silica, quartz) with sodium oxide in 1:1 molar ratio.
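The fusion can be summarized by the balanced equation:
Na2O + SiO2 → Na2SiO3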
The compound crystallizes from solution as various hydrates, such as
pentahydrate Na2SiO3·5H2O (CAS 10213-79-3, EC 229-912-9, PubChem 57652358)
nonahydrate Na2SiO3·9H2O (CAS 13517-24-3, EC 229-912-9, PubChem 57654617)
Structure
In the anhydrous solid, the metasilicate anion is actually polymeric, consisting of corner-shared {SiO4} tetrahedra, and not a discrete SiO32− ion.
In addition to the anhydrous form, there are hydrates with the formula Na2SiO3·nH2O (where n = 5, 6, 8, 9), which contain the discrete, approximately tetrahedral anion SiO2(OH)22− with water of hydration. For example, the commercially available sodium silicate pentahydrate Na2SiO3·5H2O is formulated as Na2SiO2(OH)2·4H2O, and the nonahydrate Na2SiO3·9H2O is formulated as Na2SiO2(OH)2·8H2O. The pentahydrate and nonahydrate forms have their own CAS Numbers, 10213-79-3 and 13517-24-3 respectively.
Uses
Sodium metasilicate reacts with acids to produce silica gel.
Cements and Binders - dehydrated sodium metasilicate forms a cement or binding agent.
Pulp and Paper - sizing agent and buffer/stabilizing agent when mixed with hydrogen peroxide.
Soaps and Detergents - as an emulsifying and suspension agent.
Automotive applications - decommissioning of old engines (CARS program), cooling system sealant, exhaust repair.
Egg Preservative - seals eggs increasing shelf life.
Crafts - forms "stalagmites" by reacting with and precipitating metal ions. Also used as a glue called "soluble glass".
Hair coloring kits
See also
Potassium metasilicate
Sodium orthosilicate
Sodium pyrosilicate
References
Inorganic silicon compounds
Sodium compounds
Metasilicates | Sodium metasilicate | Chemistry | 579 |
753,756 | https://en.wikipedia.org/wiki/Lissajous%20curve | A Lissajous curve, also known as Lissajous figure or Bowditch curve, is the graph of a system of parametric equations
x = A sin(at + δ),  y = B sin(bt),
which describe the superposition of two perpendicular oscillations in x and y directions of different angular frequency (a and b). The resulting family of curves was investigated by Nathaniel Bowditch in 1815, and later in more detail in 1857 by Jules Antoine Lissajous (for whom it has been named). Such motions may be considered as a particular kind of complex harmonic motion.
The appearance of the figure is sensitive to the ratio a/b. For a ratio of 1, when the frequencies match (a = b), the figure is an ellipse, with special cases including circles (A = B, δ = π/2 radians) and lines (δ = 0). A small change to one of the frequencies will mean the x oscillation after one cycle will be slightly out of synchronization with the y motion, so the ellipse will fail to close and will trace a slightly adjacent curve during the next orbit, showing as a precession of the ellipse. The pattern closes if the frequencies are whole-number ratios, i.e. a/b is rational.
Another simple Lissajous figure is the parabola (b/a = 2, δ = π/4). Again, a small shift of one frequency away from the ratio 2 will result in the trace not closing but performing multiple, successively shifted loops, only closing if the ratio is rational, as before. A complex dense pattern may form (see below).
The visual form of such curves is often suggestive of a three-dimensional knot, and indeed many kinds of knots, including those known as Lissajous knots, project to the plane as Lissajous figures.
Visually, the ratio a/b determines the number of "lobes" of the figure. For example, a ratio of 3/1 or 1/3 produces a figure with three major lobes (see image). Similarly, a ratio of 5/4 produces a figure with five horizontal lobes and four vertical lobes. Rational ratios produce closed (connected) or "still" figures, while irrational ratios produce figures that appear to rotate. The ratio A/B determines the relative width-to-height ratio of the curve. For example, a ratio of 2 produces a figure that is twice as wide as it is high. Finally, the value of δ determines the apparent "rotation" angle of the figure, viewed as if it were actually a three-dimensional curve. For example, δ = 0 produces x and y components that are exactly in phase, so the resulting figure appears as an apparent three-dimensional figure viewed from straight on (0°). In contrast, any non-zero δ produces a figure that appears to be rotated, either as a left–right or an up–down rotation (depending on the ratio a/b).
Lissajous figures where a = 1, b = N (N is a natural number) and δ = (N − 1)/N · π/2
are Chebyshev polynomials of the first kind of degree N. This property is exploited to produce a set of points, called Padua points, at which a function may be sampled in order to compute either a bivariate interpolation or quadrature of the function over the domain [−1, 1] × [−1, 1].
The relation of some Lissajous curves to Chebyshev polynomials is clearer to understand if the Lissajous curve which generates each of them is expressed using cosine functions rather than sine functions.
Examples
The animation shows the curve's adaptation as the fraction a/b is continuously increased from 0 to 1 in steps of 0.01.
Below are examples of Lissajous figures with an odd natural number a, an even natural number b, and |a − b| = 1.
Generation
Prior to modern electronic equipment, Lissajous curves could be generated mechanically by means of a harmonograph.
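Numerically, points of a Lissajous figure can be obtained simply by sampling the parametric equations given above. The following TypeScript sketch (the function name is illustrative) generates the points and leaves the plotting to any graphics library:

function lissajousPoints(
  A: number, B: number, a: number, b: number, delta: number, samples = 1000
): Array<[number, number]> {
  const points: Array<[number, number]> = [];
  for (let i = 0; i <= samples; i++) {
    const t = (2 * Math.PI * i) / samples; // one full period suffices for integer a and b
    points.push([A * Math.sin(a * t + delta), B * Math.sin(b * t)]);
  }
  return points;
}

// Example: the three-lobed figure mentioned above (a : b = 3 : 1).
const figure = lissajousPoints(1, 1, 3, 1, Math.PI / 2);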
Practical application
Lissajous curves can also be generated using an oscilloscope (as illustrated). An octopus circuit can be used to demonstrate the waveform images on an oscilloscope. Two phase-shifted sinusoid inputs are applied to the oscilloscope in X-Y mode and the phase relationship between the signals is presented as a Lissajous figure.
In the professional audio world, this method is used for realtime analysis of the phase relationship between the left and right channels of a stereo audio signal. On larger, more sophisticated audio mixing consoles an oscilloscope may be built-in for this purpose.
On an oscilloscope, we suppose x is CH1 and y is CH2, A is the amplitude of CH1 and B is the amplitude of CH2, a is the frequency of CH1 and b is the frequency of CH2, so a/b is the ratio of frequencies of the two channels, and δ is the phase shift of CH1.
A purely mechanical application of a Lissajous curve with a = 1, b = 2 is in the driving mechanism of the Mars Light type of oscillating beam lamps popular with railroads in the mid-1900s. The beam in some versions traces out a lopsided figure-8 pattern lying on its side.
Application for the case of a = b
When the input to an LTI system is sinusoidal, the output is sinusoidal with the same frequency, but it may have a different amplitude and some phase shift. Using an oscilloscope that can plot one signal against another (as opposed to one signal against time) to plot the output of an LTI system against the input to the LTI system produces an ellipse that is a Lissajous figure for the special case of a = b. The aspect ratio of the resulting ellipse is a function of the phase shift between the input and output, with an aspect ratio of 1 (perfect circle) corresponding to a phase shift of ±90° and an aspect ratio of ∞ (a line) corresponding to a phase shift of 0° or 180°.
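One common way to read the phase shift off such an ellipse: with input x(t) = A sin(ωt) and output y(t) = B sin(ωt + φ), the trace crosses x = 0 when ωt = 0 or π, at which point y = ±B sin φ. The phase shift can therefore be estimated from the screen as
sin φ = y(at x = 0) / ymax,
with the remaining sign and quadrant ambiguity resolved by the direction of the ellipse's tilt.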
The figure below summarizes how the Lissajous figure changes over different phase shifts. The phase shifts are all negative so that delay semantics can be used with a causal LTI system (note that −270° is equivalent to +90°). The arrows show the direction of rotation of the Lissajous figure.
In engineering
A Lissajous curve is used in experimental tests to determine if a device may be properly categorized as a memristor. It is also used to compare two different electrical signals: a known reference signal and a signal to be tested.
In popular culture
In motion pictures
Lissajous figures were sometimes displayed on oscilloscopes meant to simulate high-tech equipment in science-fiction TV shows and movies in the 1960s and 1970s.
The title sequence by John Whitney for Alfred Hitchcock's 1958 feature film Vertigo is based on Lissajous figures.
Company logos
Lissajous figures are sometimes used in graphic design as logos. Examples include:
The Australian Broadcasting Corporation (, , )
The Lincoln Laboratory at MIT (, , )
The open air club Else in Berlin (, , )
The University of Electro-Communications, Japan (, , ).
Disney's Movies Anywhere streaming video application uses a stylized version of the curve
Facebook's rebrand into Meta Platforms is also a Lissajous Curve, echoing the shape of a capital letter M (, , ).
Home State Brewing Co. uses one as its logo, signifying a single moment as well as the passage of time (ichi-go ichi-e).
In modern art
The Dadaist artist Max Ernst painted Lissajous figures directly by swinging a punctured bucket of paint over a canvas.
In music education
Lissajous curves have been used in the past to graphically represent musical intervals through the use of the Harmonograph, a device that consists of pendulums oscillating at different frequency ratios. Because different tuning systems employ different frequency ratios to define intervals, these can be compared using Lissajous curves to observe their differences. Therefore, Lissajous curves have applications in music education by graphically representing differences between intervals and among tuning systems.
See also
Lissajous orbit
Blackburn pendulum
Lemniscate of Gerono
Plane curves
Spirograph
Notes
External links
Lissajous Curve at Mathworld
Interactive demos
3D Java applets depicting the construction of Lissajous curves in an oscilloscope:
Tutorial from the NHMFL
Physics applet by Chiu-king Ng
Detailed Lissajous figures simulation Drawing Lissajous figures with interactive sliders in Javascript
Lissajous Curves: Interactive simulation of graphical representations of musical intervals and vibrating strings
Interactive Lissajous curve generator – Javascript applet using JSXGraph
Animated Lissajous figures
Lissajous Figures demonstration with interactive sliders from Wolfram Mathematica
1815 introductions
1815 in science
Plane curves
Trigonometry
Articles containing video clips | Lissajous curve | Mathematics | 1,723 |
70,124,884 | https://en.wikipedia.org/wiki/Bernoulli%20umbra | In umbral calculus, the Bernoulli umbra B− is an umbra, a formal symbol, defined by the relation eval(B−^n) = Bn, where eval is the index-lowering operator, also known as the evaluation operator, and Bn are the Bernoulli numbers (with B1 = −1/2), called moments of the umbra. A similar umbra B+, defined as eval(B+^n) = Bn(1) (the convention with B1 = +1/2), is also often used and sometimes called Bernoulli umbra as well. They are related by the equality B+ = B− + 1. Along with the Euler umbra, Bernoulli umbra is one of the most important umbras.
In Levi-Civita field, Bernoulli umbras can be represented by elements with power series and , with lowering index operator corresponding to taking the coefficient of of the power series. The numerators of the terms are given in OEIS A118050 and the denominators are in OEIS A118051. Since the coefficients of are non-zero, the both are infinitely large numbers, being infinitely close (but not equal, a bit smaller) to and being infinitely close (a bit smaller) to .
In Hardy fields (which are generalizations of Levi-Civita field) umbra corresponds to the germ at infinity of the function while corresponds to the germ at infinity of , where is inverse digamma function.
Exponentiation
Since Bernoulli polynomials are a generalization of Bernoulli numbers, exponentiation of the Bernoulli umbra can be expressed via Bernoulli polynomials:
where is a real or complex number.
This can be further generalized using the Hurwitz zeta function:
From the Riemann functional equation for the zeta function it follows that
Derivative rule
Since and are the only two members of the sequences and that differ, the following rule holds for any analytic function :
Elementary functions of Bernoulli umbra
As a general rule, the following formula holds for any analytic function :
This makes it possible to derive expressions for elementary functions of the Bernoulli umbra.
Particularly,
Particularly,
,
,
Relations between exponential and logarithmic functions
The Bernoulli umbra makes it possible to establish closed-form relations between exponential, trigonometric and hyperbolic functions on one side, and logarithms, inverse trigonometric and inverse hyperbolic functions on the other side:
References
Polynomials
Finite differences
Combinatorics
Factorial and binomial topics | Bernoulli umbra | Mathematics | 466 |
2,951,818 | https://en.wikipedia.org/wiki/Mitomycins | The mitomycins are a family of aziridine-containing natural products isolated from Streptomyces caespitosus or Streptomyces lavendulae. They include mitomycin A, mitomycin B, and mitomycin C. When the name mitomycin occurs alone, it usually refers to mitomycin C, its international nonproprietary name. Mitomycin C is used as a medicine for treating various disorders associated with the growth and spread of cells.
Biosynthesis
In general, the biosynthesis of all mitomycins proceeds via combination of 3-amino-5-hydroxybenzoic acid (AHBA), D-glucosamine, and carbamoyl phosphate, to form the mitosane core, followed by specific tailoring steps. The key intermediate, AHBA, is a common precursor to other anticancer drugs, such as rifamycin and ansamycin.
Specifically, the biosynthesis begins with the addition of phosphoenolpyruvate (PEP) to erythrose-4-phosphate (E4P) by an as-yet-unidentified enzyme; the product is then ammoniated to give 4-amino-3-deoxy-D-arabino heptulosonic acid-7-phosphate (aminoDHAP). Next, DHQ synthase catalyzes a ring closure to give 4-amino-3-dehydroquinate (aminoDHQ), which then undergoes a double oxidation via aminoDHQ dehydratase to give 4-amino-dehydroshikimate (aminoDHS). The key intermediate, 3-amino-5-hydroxybenzoic acid (AHBA), is made via aromatization by AHBA synthase.
Synthesis of the key intermediate, 3-amino-5-hydroxy-benzoic acid.
The mitosane core is synthesized as shown below via condensation of AHBA and D-glucosamine, although no specific enzyme has been characterized that mediates this transformation. Once this condensation has occurred, the mitosane core is tailored by a variety of enzymes. Both the sequence and the identity of these steps are yet to be determined.
Complete reduction of C-6 – Likely via F420-dependent tetrahydromethanopterin (H4MPT) reductase and H4MPT:CoM methyltransferase
Hydroxylation of C-5, C-7 (followed by transamination), and C-9a. – Likely via cytochrome P450 monooxygenase or benzoate hydroxylase
O-Methylation at C-9a – Likely via SAM dependent methyltransferase
Oxidation at C-5 and C-8 – Unknown
Intramolecular amination to form aziridine – Unknown
Carbamoylation at C-10 – Carbamoyl transferase, with carbamoyl phosphate (C4P) being derived from L-citrulline or L-arginine
Biological effects
In the bacterium Legionella pneumophila, mitomycin C induces competence for transformation. Natural transformation is a process of DNA transfer between cells, and is regarded as a form of bacterial sexual interaction. In the fruit fly Drosophila melanogaster, exposure to mitomycin C increases recombination during meiosis, a key stage of the sexual cycle. In the plant Arabidopsis thaliana, mutant strains defective in genes necessary for recombination during meiosis and mitosis are hypersensitive to killing by mitomycin C.
Medicinal uses and research
Mitomycin C has been shown to have activity against stationary-phase persister cells of Borrelia burgdorferi, the causative agent of Lyme disease. Mitomycin C is used to treat pancreatic and stomach cancer, and is under clinical research for its potential to treat gastrointestinal strictures, wound healing from glaucoma surgery, corneal excimer laser surgery and endoscopic dacryocystorhinostomy.
References
DNA replication inhibitors
IARC Group 2B carcinogens
Quinones
Carbamates
Ethers
Aziridines
Nitrogen heterocycles
Heterocyclic compounds with 4 rings
Enones
Methoxy compounds | Mitomycins | Chemistry | 910 |
78,841,934 | https://en.wikipedia.org/wiki/Kolmogorov%20population%20model | In biomathematics, the Kolmogorov population model, also known as the Kolmogorov equations in population dynamics, is a mathematical framework developed by Soviet mathematician Andrei Kolmogorov in 1936 that generalizes predator-prey interactions and population dynamics. The model was an improvement over earlier predator-prey models, notably the Lotka–Volterra equations, by incorporating more realistic biological assumptions and providing a qualitative analysis of population dynamics.
History
The development of the Kolmogorov population model was influenced by Kolmogorov's early interest in biology during his schoolboy years. Despite being primarily known for his contributions to probability theory and information theory, Kolmogorov made several large contributions to biomathematics. The model was particularly inspired by the work of Italian physicist Vito Volterra, who had developed his predator-prey equations based on observations of fish populations in the Adriatic Sea during World War I. Volterra's work showed that during the war, when fishing was reduced due to military activities, the proportion of predator fish increased while prey fish decreased.
Definition
The Kolmogorov population model is expressed as a system of differential equations
where represents the prey population, represents the predator population, and and are continuously differentiable functions describing the growth rates of the respective populations. The rates of population change decrease as predator numbers increase:
.
The system must admit invasion by predators when prey is present in isolation; that is, , where represents the carrying capacity of the prey population.
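For concreteness, the system can be written in the standard Kolmogorov predator-prey form shown below. The symbol choices (x for prey, y for predators, f and g for the per-capita growth rates, K for the carrying capacity) are the conventional ones and are used here as an assumption rather than as a quotation of the original notation.

```latex
% Kolmogorov predator-prey form (conventional notation, assumed):
\frac{dx}{dt} = x\, f(x, y), \qquad \frac{dy}{dt} = y\, g(x, y),
% per-capita growth rates decrease as predator numbers increase:
\frac{\partial f}{\partial y} < 0, \qquad \frac{\partial g}{\partial y} < 0,
% and predators can invade a prey population at carrying capacity K:
g(K, 0) > 0.
```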
Applications
The Kolmogorov model addresses a limitation of the Volterra equations by imposing self-limiting growth in prey populations, preventing unrealistic exponential growth scenarios. It also provides a predictive model for the qualitative behavior of predator-prey systems without requiring explicit functional forms for the interaction terms.
The model's contributions to theoretical ecology were not immediately recognized, with significant appreciation only emerging in the 1960s through the work of American ecologists Michael Rosenzweig and Robert H. MacArthur. Their research demonstrated how the model can be used to understand non-transitory oscillations in ecological systems and the conditions for local stability of predator-prey interactions.
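One concrete system that fits this framework is the Rosenzweig–MacArthur model; the parameterization below is one standard variant, included only as an illustration of the Kolmogorov form rather than as material from the original text.

```latex
% Rosenzweig–MacArthur model written in Kolmogorov form (one standard variant):
\frac{dx}{dt} = x \left[ r\left(1 - \frac{x}{K}\right) - \frac{a y}{1 + a h x} \right] = x\, f(x, y),
\qquad
\frac{dy}{dt} = y \left[ \frac{e a x}{1 + a h x} - m \right] = y\, g(x, y),
% r: prey growth rate, K: carrying capacity, a: attack rate,
% h: handling time, e: conversion efficiency, m: predator mortality.
```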
Recent research has shown that Kolmogorov systems can exhibit complex behaviors, including the existence of strange attractors and robust permanent subsystems, implying that even deterministic predator-prey interactions can lead to unpredictable long-term dynamics.
See also
Mathematical biology
Predator-prey interactions
References
Population ecology
Mathematical modeling
Ecological theories
Population dynamics
Partial differential equations | Kolmogorov population model | Mathematics | 507 |
24,895,816 | https://en.wikipedia.org/wiki/Influenza%20Antiviral%20Drug%20Search | The Influenza Antiviral Drug Search was a distributed computing project that ran on the BOINC platform. It was a project of the University of Texas Medical Branch.
Project purpose
The Influenza Antiviral Drug Search conducted millions of virtual docking experiments in order to discover compounds that may be suitable for real-world clinical trials to combat new or drug resistant strains of influenza virus.
One vulnerability of all influenza strains is that they need viral neuraminidase, the NS1 influenza protein, and hemagglutinin in order to infect a host. A chemical compound that can disable one of these molecules has the potential to be an effective antiviral drug.
See also
BOINC
List of distributed computing projects
World Community Grid
External links
Influenza Antiviral Drug Search
References
Pharmaceutical industry
Drug discovery | Influenza Antiviral Drug Search | Chemistry,Technology,Biology | 161 |
39,161,129 | https://en.wikipedia.org/wiki/Mario%20Party%2010 | is a 2015 party video game developed by NDcube and published by Nintendo for the Wii U video game console. It is the tenth home console release in the Mario Party series and a part of the larger Mario franchise. Featuring gameplay similar to the prior series entries, players compete against each other and computer-controlled characters to collect the most mini-stars, traversing a game board and engaging in minigames and other challenges. There are multiple game modes, including one where players traverse a board in a vehicle, sabotaging each other and making choices to collect the most mini-stars by the end. Mario Party 10 adds two modes over its predecessors: Bowser Party, where four players compete in a team against a fifth who controls Bowser on the Wii U GamePad, and Amiibo Party, where players use Amiibo figures. Their gameplay is interspersed by over 70 minigames with various play styles.
Mario Party 10 was developed by NDcube, the developers of Mario Party 9. One of the goals during the development was to focus on gameplay features not found in previous titles. To do this, they concentrated on the Wii U GamePad and Amiibo, as well as made Bowser a playable character. The game was announced at E3 2014 and advertised throughout the year. It was further detailed in a January 2015 Nintendo Direct, alongside the announcement and release of the Amiibo figures. The game was released in Japan, North America, and Europe in March 2015.
Mario Party 10 received mixed reviews, being praised for its graphics and minigames and criticized for the gameplay and the Amiibo Party mode. The Bowser Party mode and use of the GamePad, as well as its continuation of gameplay that was established in Mario Party 9, attained a mixed reception. The game sold 2.27 million copies by September 2022, making it one of the best-selling Wii U games. It was the only Mario Party game released for the platform and was followed by Super Mario Party for the Nintendo Switch in 2018.
Gameplay
Mario Party 10's gameplay builds on that of its predecessors, principally Mario Party 9. In the base "Mario Party" game mode, four characters, controlled either by players or by artificial intelligence, traverse together in a vehicle across a game board. When the end of the board is reached, whoever has collected the most mini-stars wins. The players take turns serving as the vehicle's operator, who rolls a die to move forward by the rolled number of spaces. The spaces contain actions that the operator decides on, such as choosing which path to take when it splits. Two boss battles are on each board, and the four players work together to defeat them. Five maps are included in Mario Party: Mushroom Park, Haunted Trail, Whimsical Waters, Airship Central, and Chaos Castle.
Scattered across the board are minigame spaces, which, when landed on, trigger one of over 70 minigames. Minigames are split into three types: free-for-alls, where all four players compete against each other; two-versus-twos, where players are randomly split into two teams and compete against each other; and three-versus-ones, where three players compete against one with an advantage. Minigames consist of memory, puzzle, and platform gameplay. Some minigames use Wii Remote controls. Bowser is displayed imprisoned on the Wii U GamePad. If the players roll all six sides of the die throughout the course of the game, Bowser is freed and offers additional hindrances and challenges, such as requiring the player to come in last place in a minigame to win it. Each round of Mario Party is roughly 30 minutes in length.
In addition to Mario Party, the game introduces the Bowser Party and Amiibo Party modes. In Bowser Party, four players that make up "Team Mario" compete against a fifth player, "Team Bowser", who controls Bowser using the Wii U GamePad. In this mode, mini-stars are replaced with one end-goal star, and each player has hearts. Team Mario is tasked with reaching the end of the board without losing all of them. Landing on certain spaces will see the player either obtain special dice, get a chance to earn more hearts, be hindered by Bowser Jr., or impact how many dice the fifth player has on their turn. If a player on Team Mario has lost all of their hearts, they can be brought back into play if the others earn additional hearts on the board. While inactive, they can provide the group with special dice to use.
After the players on Team Mario have taken a turn each, the player on Team Bowser takes theirs by rolling four dice. If the total rolled is less than the number of spaces Team Mario is from them, they roll their dice a second time. If Team Bowser manages to catch up to Team Mario, a minigame takes place where Team Bowser uses the GamePad to attempt to weaken and defeat Team Mario. There are 12 such minigames. On some boards, Team Bowser gets to hinder Team Mario by tricking them into setting off traps or facing a disadvantage on a selected route. If Team Mario is close to the goal, Team Bowser may gain an advantage, such as adding more Bowser Jr. spaces to hinder the group. Should Team Mario reach the end of the board, the operator of the vehicle must find a star hidden behind one of three enemies to win the game. A wrong choice will remove that enemy and push the team back a few spaces. Reaching the end when only one enemy remains will win the game for Team Mario. Team Bowser wins the game if they can defeat all other players beforehand.
Amiibo Party involves purchasing and using Amiibo, a toys-to-life product line by Nintendo. Select Amiibo from the Super Mario and Super Smash Bros. lines function with the mode. Amiibo Party takes place on a small, circular board, with the goal of collecting the most stars within ten turns of gameplay. The players take turns rolling the die and advancing on the board by scanning their Amiibo, landing on spaces that give and remove coins, move them forward or backwards additional spaces, reward them with powerups, and engage in minigames. Mini-stars are replaced with stars that are purchased with coins. Whoever purchases the most stars by the end of the 10 turns wins. Each Amiibo alters the board's design and how powerups are distributed. Mario Party 10 also has several smaller games and modes: "Jewel Drop" sees two players compete to match falling jewels by color without them toppling over, "Badminton Bash" features badminton gameplay for up to four players, and a tournament challenge consisting only of minigames.
Development
Mario Party 10 was developed by NDcube, the developers of Mario Party 9, and published by Nintendo. Shuichiro Nishiya reprised his role as game director, and Jumpei Horita served as producer.
The developers reused the gameplay concept of having every player progress through the board together. They noticed how, in previous titles, anyone playing would stop paying attention if it was not their turn. By having every player progress together, their actions would affect the other players, thereby keeping everyone engaged. They also decreased the amount of text to help make the game move faster: before each minigame, the game plays a video demonstration instead of explaining the controls in writing, and the characters were made more expressive so their reactions would clue the players into what was happening without the need for text.
Ideas for minigames came from NDcube staff. The team involved with minigame creation took these ideas, usually just one-sentence descriptions or drawings, and expanded upon them or merged them with others. Inspiration was drawn from recent Mario games, including New Super Mario Bros. U and Super Mario 3D World. Nishiya, who was a part of the minigame team, observed his daily life and drew inspiration for minigame ideas from it. Prior Wii U party games from Nintendo, such as Nintendo Land and Wii Party U, featured Miis as the player characters, which kept the minigames grounded in reality. To help Mario Party 10 stand out from these titles, the developers based the minigames on "surreal" concepts and environments.
One of NDcube's goals in developing Mario Party 10 was to introduce concepts original to the series, including allowing Bowser, a recurring antagonist, to be playable and having the player compete against Mario. Another goal was to emphasize using the Wii U GamePad to allow for new kinds of gameplay, which resulted in the Bowser Party game mode. When first envisioning the mode, the developers conceptualized a large Bowser on screen with Mario charging toward him on the GamePad. This idea evolved into the concept included in the game, where Bowser is controlled using the GamePad instead. They considered how Bowser would attack Mario and turned these concepts into minigames. The biggest challenge in creating Bowser Party was balancing, as Mario Party is a series based on luck, which in their prototypes resulted in either team getting too far ahead for the other side to catch up. Consequently, the development team gave the losing side an extra boost if they fell too far behind. The developers also wanted to utilize Nintendo's line of Amiibo products, so they created the Amiibo Party game mode. Although they believed that "the Bowser Party and Mario Party modes alone give Mario Party 10 an appeal that surpasses that of any of the previous installment", they wanted Amiibo Party to use Amiibo in a way that was more than just a novelty.
Release
Mario Party 10 was announced at E3 2014, detailing the gameplay, Bowser Party game mode, and select playable characters. The Amiibo line was also announced, and it was specified that Mario Party 10 would support them. Television ads for the game focused on the Bowser Party and Amiibo functionality. When Nintendo sponsored the 2015 Nickelodeon Kids' Choice Awards, the ceremony advertised Mario Party 10. In a January 2015 Nintendo Direct, Amiibo Party was announced alongside the list of compatible Amiibo. Mario Party 10 was released on March 12, 2015, in Japan and on March 20 in North America and Europe. A bundle including the game and a Mario Amiibo from the Super Mario line was released in limited quantities.
Reception
Reviews
Mario Party 10 received "mixed or average reviews" according to the review aggregator website Metacritic, scoring a weighted average of 66/100. Some critics felt the game lacked adequate change from its predecessors.
The shorter runtime of rounds and its high-definition graphics were appreciated by critics, with Samuel Claiborn of IGN enjoying how much more discernible split-screen multiplayer was because of the latter. Kyle Hilliard of Game Informer also commended the graphics and highlighted the game's music, saying that "even the blandest remixes of Koji Kondo's assorted Nintendo scores are immensely enjoyable". Destructoid's Chris Carter applauded the minigame variation, appreciating the variety of minigame controls that were more than shaking the Wii Remote, as was usually the case in Mario Party 9 and Mario Party 8, but criticized the lack of Bowser minigames in Bowser Party.
Ray Carsillo of Electronic Gaming Monthly considered the use of the Wii U GamePad to be lackluster in the Mario Party game mode, a sentiment Hilliard shared, especially compared to the GamePad's use in Nintendo Land. In contrast, Claiborn referred to Mario Party 10 as "one of the best uses of the Wii U GamePad yet", mainly for its use in Bowser Party. A common criticism was that Mario Party 10 continued disliked gameplay mechanics from Mario Party 9, mainly the vehicle mechanic. Carter had become used to the gimmick but still considered it to be dull in comparison to individual movement, as it caused the game to lose variation, since spaces on the board with unique game mechanics could easily be passed over. He also criticized the lack of an ability to choose the length of each round. Other critics preferred the continued linear gameplay that was established in Mario Party 9 over that from prior series entries. Hilliard favored the progression as it sped up movement, an opinion shared by Nintendo Life's Martin Watts, who also cited more tactical gameplay, as moves directly affected the other players. Carsillo did not consider the continued trend of cooperative gameplay to be a negative, but he criticized the lack of variation between Mario Party 10 and Mario Party 9. Overall, he missed the competitive feel of previous titles in the series.
Bowser Party received a mixed reception. Hilliard lauded Bowser Party and considered it to be the best mode in the game for its emphasis on skill-based gameplay. Carter criticized Bowser Party for its linear progression, which often resulted in little interaction between the two teams as one would usually have an advantage or a string of luck that would keep a continuous distance between them. Carsillo also ridiculed Bowser Party for its lack of minigame variation and balance. Writing for GameSpot, Mark Walton considered Bowser to be the least entertaining role to play, mainly for the waiting times involved, but Claiborn called the role "a blast" because of its unique gameplay. Certain critics regarded Bowser Party as being the best mode in the game, which Dermot Creegan of Hardcore Gamer agreed on, but he still considered the mode "the most barren" with considerably fewer boards and minigames. Amiibo Party was received more positively overall but criticized for various technical reasons. Carter enjoyed the mode for its return to classic Mario Party rules but criticized the boards' size. Although he appreciated the detail found on each board in Amiibo Party, he criticized the necessity to purchase $100 worth of Amiibo in order to use all supported characters. While Hilliard initially enjoyed the function of scanning the Amiibo to move in Amiibo Party, he grew tired of continuously needing to do so after the novelty wore off. Walton generally enjoyed Amiibo Party but criticized it for not being much different from the Mario Party game mode, especially since Amiibo only change the game's look and not the gameplay.
Sales
In the United Kingdom, Mario Party 10 had the second-best launch in the series, behind Mario Party 8. In the United States, roughly 290,000 physical and downloaded copies had been sold by the end of March 2015. This was faster than Mario Party 9, which only sold 230,000 in around three weeks. In Japan, Mario Party 10 sold about 50,000 copies in its first week. As of September 30, 2022, the game has worldwide sales of 2.27 million copies and is the tenth-best-selling game on the Wii U.
Notes
References
2015 video games
Asymmetrical multiplayer video games
Mario Party
Multiplayer and single-player video games
Nintendo Cube games
Nintendo Network games
Party video games
Video game sequels
Video games about size change
Video games developed in Japan
Video games set in amusement parks
Video games that use Amiibo figurines
Wii U eShop games
Wii U games
Wii U-only games | Mario Party 10 | Physics | 3,093 |
42,492,930 | https://en.wikipedia.org/wiki/HZO | HZO manufactures thin film coatings applied to devices during their assembly processes to protect electronics from damage caused by exposure to corrosive liquids. HZO headquarters are located in Raleigh, North Carolina. The company also has Centers of Excellence in Shenzhen, China and Bac Ninh, Vietnam.
Background
HZO is funded through a combination of equity and debt financing. The company publicly unveiled its protective technology for electronics at the Consumer Electronics Show (CES) in 2012. In 2013 and early 2014 the company expanded its corporate offices and production facilities worldwide to include locations in China, Japan and California. The company also has several application partners throughout Asia with further expansion anticipated. HZO revenues increased by more than 90% in 2013 and the company expects continued growth in 2014 with a consequent increase in royalties. The company’s tagline is “protection from the inside out.”
Technology
HZO’s technology is applied directly to the circuitry of devices and components, creating a physical barrier that protects electronics against damage caused by water and other corrosive liquids, humidity, sweat, dust and debris. HZO’s early customer base included medical, military, automotive and consumer markets. Products implementing HZO technology include a Tag Heuer smartphone, along with a variety of wearable devices, which has helped secure HZO as a differentiating feature in wearable computing technology. In 2013, the company unveiled enhancements to its proprietary equipment and machinery, and the company began mass manufacturing with an international brand on a wearable device.
Awards
Since its official introduction in 2012, HZO has received awards and acknowledgments at a local, national and international level. These include the 2012 CES Innovations Design and Engineering Award for Embedded Technologies, recognition as one of the Top Emerging Nanotech Innovators in both 2012 and 2013 by the National NanoBusiness Commercialization Association, and Winner of 2012 Stoel Rives & Utah Technology Council Innovation Award.
The following year, HZO also received recognition with the following awards:
"Best Advanced Technology" at the Wearable Tech Expo, New York
"Best in Show" at the Wearable Tech Expo, Los Angeles
"Best New Product or Service of the Year" by the International Business Awards
Gold Stevie Award–Consumer Electronics
"Innovative Product of the Year"–Silver Winner, Best in Biz Awards
In 2014, HZO received "Innovation World Cup Finalist" at the Wearable Technologies Conference, Munich and “Best of State in Consumer Electronics”, Utah 2014.
Media
HZO has received critical acclaim with features on MSNBC in January 2012, The Huffington Post in January, 2012 and April 2012, BBC in January 2012, and The New York Times in January 2012.
References
Nanotechnology companies
Companies based in Utah
Technology companies established in 2009
2009 establishments in Utah | HZO | Materials_science | 570 |
77,091,094 | https://en.wikipedia.org/wiki/2%2C4%2C6-Tris%28dimethylaminomethyl%29phenol | 2,4,6-Tris(dimethylaminomethyl)phenol is an aromatic organic chemical that has tertiary amine and phenolic hydroxyl functionality in the same molecule. The formula is C15H27N3O and the CAS Registry Number is 90-72-2. It is REACH registered and the European Community Number is 202-013-9.
Uses
A key use is as a catalyst for epoxy resin chemistry. It can be used as a homopolymerization catalyst for epoxy resins and also as an accelerator with epoxy resin curing agents. It is then further used in coatings, sealants, composites, adhesives and elastomers. It has been stated that it is probably the most widely used room temperature accelerator for two-component epoxy resin systems. The kinetics of curing with and without this accelerator have been extensively studied. It is the usual benchmark or control used when other catalysts and accelerators are being developed and tested.
In addition to its use in epoxy chemistry, it is also used in polyurethane chemistry for example by grafting the molecule into the polymer backbone. It is also used as a trimerization catalyst with polymeric MDI.
Polyether ether ketones may also be grafted with the molecule which then finds use in lithium batteries.
The high functionality of the molecule means it can be used to complex some transition metals and this has also been studied.
Often cited weaknesses are yellowing and odor.
Manufacture
The material is a Mannich base and is manufactured by reacting phenol, formaldehyde and dimethylamine in a reactor under vacuum and removing the water produced.
Toxicity
It is classed as a high volume chemical and as such, its toxicity profile has been extensively studied.
References
Further reading
Tertiary amines
Phenols
Catalysts
Dimethylamino compounds | 2,4,6-Tris(dimethylaminomethyl)phenol | Chemistry | 392 |
14,771,927 | https://en.wikipedia.org/wiki/GeneMark | GeneMark is a generic name for a family of ab initio gene prediction algorithms and software programs developed at the Georgia Institute of Technology in Atlanta. Developed in 1993, the original GeneMark was used in 1995 as a primary gene prediction tool for annotation of the first completely sequenced bacterial genome, that of Haemophilus influenzae, and in 1996 for the first archaeal genome, that of Methanococcus jannaschii. The algorithm introduced inhomogeneous three-periodic Markov chain models of protein-coding DNA sequence that became standard in gene prediction, as well as a Bayesian approach to gene prediction in the two DNA strands simultaneously. Species-specific parameters of the models were estimated from training sets of sequences of known type (protein-coding and non-coding). The major step of the algorithm computes, for a given DNA fragment, the posterior probabilities of either being "protein-coding" (carrying genetic code) in each of six possible reading frames (including three frames in the complementary DNA strand) or being "non-coding". The original GeneMark (developed before the advent of HMM applications in bioinformatics) was an HMM-like algorithm; it could be viewed as an approximation to the posterior decoding algorithm known in HMM theory, applied to an appropriately defined HMM model of a DNA sequence.
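To make the core idea concrete, the sketch below is illustrative only: the training sequences, pseudocounts and function names are invented for the example and are not taken from GeneMark itself. It scores a DNA fragment with a three-periodic (phase-dependent) first-order Markov chain and compares that score against a model trained on different, non-coding-like data.

```python
# Illustrative sketch of three-periodic Markov-chain scoring (toy parameters).
import math
from collections import defaultdict

def train_periodic_model(training_seqs, period=3):
    """Estimate P(base | previous base, codon position) from in-frame training data."""
    counts = defaultdict(lambda: defaultdict(lambda: 1.0))  # pseudocount for observed transitions
    for seq in training_seqs:
        for i in range(1, len(seq)):
            counts[(i % period, seq[i - 1])][seq[i]] += 1.0
    return {ctx: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for ctx, nxt in counts.items()}

def log_likelihood(seq, model, period=3, floor=1e-4):
    """Log-probability of a fragment read in frame 0 under the periodic model."""
    return sum(math.log(model.get((i % period, seq[i - 1]), {}).get(seq[i], floor))
               for i in range(1, len(seq)))

# Toy usage: a "coding" model trained on an in-frame ORF-like string versus a
# model trained on a different, non-coding-like string.
coding_model = train_periodic_model(["ATGGCCATTGTAATGGGCCGC"])
noncoding_model = train_periodic_model(["ATATATATGCGCGCATATAT"])
fragment = "ATGGCCATT"
print(log_likelihood(fragment, coding_model),
      log_likelihood(fragment, noncoding_model))
```

In the real algorithm the two log-likelihoods (over all six frames plus the non-coding alternative) are combined with prior probabilities to give the posterior probability of each hypothesis for the fragment.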
Further improvements in the algorithms for gene prediction in prokaryotic genomes
The GeneMark.hmm algorithm (1998) was designed to improve the accuracy of prediction of short genes and gene starts. The idea was to use the inhomogeneous Markov chain models introduced in GeneMark for computing the likelihoods of the sequences emitted by the states of a hidden Markov model (more precisely, a semi-Markov HMM, or generalized HMM) describing the genomic sequence. The borders between coding and non-coding regions were formally interpreted as transitions between hidden states. Additionally, a ribosome binding site model was added to the GHMM to improve the accuracy of gene start prediction. The next important step in the algorithm's development was the introduction of self-training, or unsupervised training, of the model parameters in the new gene prediction tool GeneMarkS (2001). The rapid accumulation of prokaryotic genomes in the following years showed that the structure of sequence patterns related to gene expression regulation signals near gene starts may vary. It was also observed that a prokaryotic genome may exhibit GC content variability due to lateral gene transfer. The new algorithm, GeneMarkS-2, was designed to make automatic adjustments to the types of gene expression patterns and to the GC content changes along the genomic sequence. GeneMarkS, and then GeneMarkS-2, have been used in the NCBI pipeline for prokaryotic genome annotation (PGAP).
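As a highly simplified illustration of the GHMM idea (two hidden states only; the emission and transition probabilities below are invented for the example, while the real GeneMark.hmm uses duration models, six reading frames and ribosome binding site sub-models), Viterbi decoding labels each position as coding or non-coding, and the points where the label changes correspond to predicted gene borders:

```python
# Toy two-state Viterbi decoder: hidden states "C" (coding) and "N" (non-coding).
# All probabilities below are invented for illustration.
import math

STATES = ("C", "N")
TRANS = {("C", "C"): 0.99, ("C", "N"): 0.01,
         ("N", "N"): 0.99, ("N", "C"): 0.01}
EMIT = {"C": {"A": 0.2, "C": 0.3, "G": 0.3, "T": 0.2},   # GC-rich "coding" bias
        "N": {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}}

def viterbi(seq):
    """Return the most likely state path; label changes mark predicted borders."""
    score = {s: math.log(0.5) + math.log(EMIT[s][seq[0]]) for s in STATES}
    backpointers = []
    for base in seq[1:]:
        new_score, ptr = {}, {}
        for s in STATES:
            best_prev = max(STATES, key=lambda p: score[p] + math.log(TRANS[(p, s)]))
            new_score[s] = (score[best_prev] + math.log(TRANS[(best_prev, s)])
                            + math.log(EMIT[s][base]))
            ptr[s] = best_prev
        score = new_score
        backpointers.append(ptr)
    state = max(STATES, key=score.get)
    path = [state]
    for ptr in reversed(backpointers):
        state = ptr[state]
        path.append(state)
    return "".join(reversed(path))

print(viterbi("GCGCGCGCATATATATATGCGCGC"))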
Heuristic Models and Gene Prediction in Metagenomes and Metatranscriptomes
Accurate identification of species-specific parameters of a gene finding algorithm is a necessary condition for making accurate gene predictions. However, in studies of viral genomes one needs to estimate parameters from a rather short sequence that has no large genomic context. Importantly, starting in 2004, the same question had to be addressed for gene prediction in short metagenomic sequences. A surprisingly accurate answer was found through the introduction of parameter-generating functions depending on a single variable, the sequence G+C content (the "heuristic method", 1999). Subsequently, analysis of several hundred prokaryotic genomes led to the development of a more advanced heuristic method in 2010 (implemented in MetaGeneMark). Later, the need to predict genes in RNA transcripts led to the development of GeneMarkS-T (2015), a tool that identifies intron-less genes in long transcript sequences assembled from RNA-Seq reads.
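The idea of deriving model parameters from G+C content alone can be sketched as follows. This is entirely schematic: the linear interpolation between two invented parameter sets stands in for the published, genome-derived dependencies and is not the actual heuristic used by MetaGeneMark.

```python
# Schematic only: derive composition parameters as a function of a fragment's GC content.
def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

LOW_GC_PARAMS = {"A": 0.32, "C": 0.18, "G": 0.18, "T": 0.32}    # invented placeholder
HIGH_GC_PARAMS = {"A": 0.18, "C": 0.32, "G": 0.32, "T": 0.18}   # invented placeholder

def heuristic_params(seq, low=0.30, high=0.70):
    """Interpolate base-composition parameters from the fragment's own GC content."""
    w = min(max((gc_content(seq) - low) / (high - low), 0.0), 1.0)
    return {b: (1 - w) * LOW_GC_PARAMS[b] + w * HIGH_GC_PARAMS[b] for b in "ACGT"}

print(heuristic_params("ATGGCGCGCGCCATT"))
```

The point of the heuristic approach is that such functions let a gene finder build usable species-like parameters for a short anonymous fragment, without a large training genome.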
Eukaryotic gene prediction
In eukaryotic genomes, modeling of exon borders along with introns and intergenic regions presents a major challenge. The GHMM architecture of eukaryotic GeneMark.hmm includes hidden states for initial, internal, and terminal exons, introns, intergenic regions and single-exon genes located in both DNA strands. The initial version of the eukaryotic GeneMark.hmm required manual compilation of training sets of protein-coding sequences for estimation of the algorithm's parameters. However, in 2005, the first self-training eukaryotic gene finder, GeneMark-ES, was developed. A fungal version of GeneMark-ES, developed in 2008, features a more complex intron model and a hierarchical self-training strategy. In 2014, in GeneMark-ET, the self-training of parameters was aided by extrinsic hints generated by mapping short RNA-Seq reads to the genome. Extrinsic evidence is not limited to 'native' RNA sequences. Cross-species proteins collected in the vast protein databases can be a source of external hints, if homologous relationships between the already known proteins and the proteins encoded by yet unknown genes in the novel genome are established. This task was solved with the development of the new algorithm GeneMark-EP+ (2020). Integration of the RNA and protein sources of extrinsic hints was achieved in GeneMark-ETP (2023). The versatility and accuracy of the eukaryotic gene finders of the GeneMark family have led to their incorporation into a number of genome annotation pipelines. Also, since 2016, the pipelines BRAKER1, BRAKER2 and BRAKER3 have been developed to combine the strongest features of GeneMark and AUGUSTUS.
Notably, gene prediction in eukaryotic transcripts can be done by the new algorithm GeneMarkS-T (2015).
GeneMark Family of Gene Prediction Programs
Bacteria, Archaea
GeneMark
GeneMarkS
GeneMarkS-2
Metagenomes and Metatranscriptomes
MetaGeneMark
GeneMarkS-T
Eukaryotes
GeneMark
GeneMark.hmm
GeneMark-ES: ab initio gene finding algorithm for eukaryotic genomes with automatic (unsupervised) training.
GeneMark-ET: augments GeneMark-ES by integrating RNA-Seq read alignments into the self-training procedure.
GeneMark-EP+: augments GeneMark-ES by iterative finding genes in a novel genome, detecting similarities of predicted genes to known proteins, splice-aligning of the known proteins to the genome and generating hints for the next round of prediction, and correction based on the external evidence.
GeneMark-ETP: integrates genomic, transcript and protein evidence into the gene prediction
Viruses, phages and plasmids
Heuristic models
Transcripts assembled from RNA-Seq read
GeneMarkS-T
See also
List of gene prediction software
Gene prediction
References
Borodovsky M. and McIninch J. "GeneMark: parallel gene recognition for both DNA strands." Computers & Chemistry (1993) 17 (2): 123–133. DOI
Lukashin A. and Borodovsky M. "GeneMark.hmm: new solutions for gene finding." Nucleic Acids Research (1998) 26 (4): 1107–1115. DOI PMID
Besemer J. and Borodovsky M. "Heuristic approach to deriving models for gene finding." Nucleic Acids Research (1999) 27 (19): 3911–3920. DOI PMID
Besemer J., Lomsadze A., and Borodovsky M. "GeneMarkS: a self-training method for prediction of gene starts in microbial genomes. Implications for finding sequence motifs in regulatory regions." Nucleic Acids Research (2001) 29 (12): 2607–2618. DOI PMID
Mills R., Rozanov M., Lomsadze A., Tatusova T., and Borodovsky M. "Improving gene annotation in complete viral genomes." Nucleic Acids Research (2003) 31 (23): 7041–7055. DOI PMID
Besemer J. and Borodovsky M. "GeneMark: web software for gene finding in prokaryotes, eukaryotes and viruses." Nucleic Acids Research (2005) 33 (Web Server Issue): W451-454. DOI PMID
Lomsadze A., Ter-Hovhannisyan V., Chernoff Y., and Borodovsky M. "Gene identification in novel eukaryotic genomes by self-training algorithm." Nucleic Acids Research (2005) 33 (20): 6494–6506. DOI PMID
Ter-Hovhannisyan V., Lomsadze A., Chernoff Y., and Borodovsky M. "Gene prediction in novel fungal genomes using an ab initio algorithm with unsupervised training." Genome Research (2008) 18 (12): 1979-1990. DOI PMID
Zhu W., Lomsadze A., and Borodovsky M. "Ab initio gene identification in metagenomic sequences." Nucleic Acids Research (2010) 38 (12): e132. DOI PMID
Lomsadze A., Burns P.D., and Borodovsky M. "Integration of mapped RNA-Seq reads into automatic training of eukaryotic gene finding algorithm." Nucleic Acids Research (2014) 42 (15): e119. DOI PMID
Tang S., Lomsadze A., and Borodovsky M. "Identification of protein coding regions in RNA transcripts." Nucleic Acids Research (2015) 43 (12): e78. DOI PMID
Tatusova T., DiCuccio M., Badretdin A., Chetvernin V., Nawrocki E., Zaslavsky L., Lomsadze A., Pruitt K., Borodovsky M., and Ostell J. "NCBI prokaryotic genome annotation pipeline." Nucleic Acids Research (2016) 44 (14): 6614-6624. DOI PMID
Hoff K., Lange S., Lomsadze A., Borodovsky M., and Stanke M. "BRAKER1: Unsupervised RNA-Seq-Based Genome Annotation with GeneMark-ET and AUGUSTUS." Bioinformatics (2016) 32 (5): 767-769. DOI PMID
Lomsadze A., Gemayel K., Tang S., and Borodovsky M. "Modeling leaderless transcription and atypical genes results in more accurate gene prediction in prokaryotes." Genome Research (2018) 28 (7): 1079-1089. DOI PMID
Bruna T., Hoff K., Lomsadze A., Stanke M., and Borodovsky M. "BRAKER2: automatic eukaryotic genome annotation with GeneMark-EP+ and AUGUSTUS supported by a protein database." NAR Genomics and Bioinformatics (2021) 3 (1): lqaa108 DOI PMID
Bruna T., Lomsadze A., and Borodovsky M. "GeneMark-EP+: eukaryotic gene prediction with self-training in the space of genes and proteins." NAR Genomics and Bioinformatics (2022) 2 (2): lqaa026 DOI PMID
Bruna T., Lomsadze A., and Borodovsky M. "GeneMark-ETP: Automatic Gene Finding in Eukaryotic Genomes in Consistence with Extrinsic Data." bioRxiv (Jan 5, 2023) DOI PMID
Gabriel L., Brůna T., Hoff K., Ebel M., Lomsadze A., Borodovsky M., and Stanke M. "BRAKER3: Fully automated genome annotation using RNA-Seq and protein evidence with GeneMark-ETP, AUGUSTUS and TSEBRA." bioRxiv (Nov 27, 2023) DOI PMID
External links
Metagenomics software
Mathematical and theoretical biology
Genomics
Bioinformatics software
| GeneMark | Mathematics,Biology | 2,528 |
14,249,287 | https://en.wikipedia.org/wiki/Kimchi%20refrigerator | A kimchi refrigerator is a refrigerator designed specifically to meet the storage requirements of kimchi and facilitate different fermentation processes. The kimchi refrigerator aims to be colder, with more consistent temperature, more humidity, and less moving air than a conventional refrigerator, providing the ideal environment for fermentation of kimchi. Some models may include features such as a UV sterilizer.
In a consumer survey aimed at South Korean homemakers conducted by a top-ranking Korean media agency in 2004, the kimchi refrigerator was ranked first for most wanted household appliance.
History and design
The history of the kimchi refrigerator dates back to 1984. At that time, LG's predecessor, GoldStar (), first used the term 'kimchi refrigerator' (). The model name of the first kimchi refrigerator was 'GR-063'; according to the advertisement, the inside was made of stainless steel and the internal temperature could be set, but it is assumed that the direct cooling method was adopted. The volume of this product was 45 liters.
After many years of development work aimed at the fermentation and storage requirements of kimchi, WiniaMando () launched the commercial brand DIMCHAE () into the mass market in December 1995. The model name was 'CFR-052E'.
As of November 2007, more than a dozen home appliance manufacturers, including Samsung and LG, were involved in commercial production of kimchi refrigerators.
The top-loading or "lid-type" designs were introduced first. The initial design took up much space and heavy plastic kimchi containers had to be lifted to get to the bottom part. Some units are now designed instead with two deep drawers on the bottom, that are accessible from the outside. Top or bottom, the bins can be used to refrigerate anything from kimchi to fresh produce and meats.
The door-drawer types (the "stand-type" in Korea) are gaining popularity because of their space-saving ergonomic design. A single door with a wine-bar type design at the top can be opened in full or partially at the wine-bar section to save energy. The top portion can be used as a freezer.
References
Home appliances
Kimchi
Refrigerators | Kimchi refrigerator | Physics,Technology | 466 |
4,787,255 | https://en.wikipedia.org/wiki/PostBQP | In computational complexity theory, PostBQP is a complexity class consisting of all of the computational problems solvable in polynomial time on a quantum Turing machine with postselection and bounded error (in the sense that the algorithm is correct at least 2/3 of the time on all inputs).
Postselection is not considered to be a feature that a realistic computer (even a quantum one) would possess, but nevertheless postselecting machines are interesting from a theoretical perspective.
Removing either one of the two main features (quantumness, postselection) from PostBQP gives the following two complexity classes, both of which are subsets of PostBQP:
BQP is the same as PostBQP except without postselection
BPPpath is the same as PostBQP except that instead of quantum, the algorithm is a classical randomized algorithm (with postselection)
The addition of postselection seems to make quantum Turing machines much more powerful: Scott Aaronson proved PostBQP is equal to PP, a class which is believed to be relatively powerful, whereas BQP is not known even to contain the seemingly smaller class NP. Using similar techniques, Aaronson also proved that small changes to the laws of quantum computing would have significant effects. As specific examples, under either of the two following changes, the "new" version of BQP would equal PP:
if we broadened the definition of 'quantum gate' to include not just unitary operations but linear operations, or
if the probability of measuring a basis state were proportional to the p-th power of the absolute value of its amplitude, |α|^p, instead of the usual |α|^2, for any even integer p > 2.
Basic properties
In order to describe some of the properties of PostBQP we fix a formal way of describing quantum postselection. Define a quantum algorithm to be a family of quantum circuits (specifically, a uniform circuit family). We designate one qubit as the postselection qubit P and another as the output qubit Q. Then PostBQP is defined by postselecting upon the event that the postselection qubit is |1⟩. Explicitly, a language L is in PostBQP if there is a quantum algorithm A so that after running A on input x and measuring the two qubits P and Q,
P = 1 with nonzero probability
if the input x is in L then Pr[Q = 1|P = 1] ≥ 2/3
if the input x is not in L then Pr[Q = 1|P = 1] ≤ 1/3.
One can show that allowing a single postselection step at the end of the algorithm (as described above) and allowing intermediate postselection steps during the algorithm are equivalent.
Here are three basic properties of PostBQP (which also hold for BQP via similar proofs):
PostBQP is closed under complement. Given a language L in PostBQP and a corresponding deciding circuit family, create a new circuit family by flipping the output qubit after measurement, then the new circuit family proves the complement of L is in PostBQP.
You can do probability amplification in PostBQP. The definition of PostBQP is not changed if we replace the 2/3 value in its definition by any other constant strictly between 1/2 and 1. As an example, given a PostBQP algorithm A with success probability 2/3, we can construct another algorithm which runs three independent copies of A, outputs a postselection bit equal to the conjunction of the three "inner" ones, and outputs an output bit equal to the majority of the three "inner" ones; the new algorithm will be correct with conditional probability 20/27, greater than the original 2/3 (see the short calculation after this list).
PostBQP is closed under intersection. Suppose we have PostBQP circuit families for two languages L1 and L2, with respective postselection qubits P1, P2 and output qubits Q1, Q2. We may assume by probability amplification that both circuit families have success probability at least 5/6. Then we create a composite algorithm where the circuits for L1 and L2 are run independently, and we set P to the conjunction of P1 and P2, and Q to the conjunction of Q1 and Q2. It is not hard to see by a union bound that this composite algorithm correctly decides membership in L1 ∩ L2 with (conditional) probability at least 2/3.
More generally, combinations of these ideas show that PostBQP is closed under union and BQP truth-table reductions.
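The amplification step invoked in the second property above is elementary; as a worked check (added here for illustration), the majority vote of three independent runs, each correct with conditional probability 2/3, is correct with probability

```latex
\left(\tfrac{2}{3}\right)^{3}
+ \binom{3}{2}\left(\tfrac{2}{3}\right)^{2}\left(\tfrac{1}{3}\right)
= \tfrac{8}{27} + \tfrac{12}{27}
= \tfrac{20}{27} > \tfrac{2}{3}.
```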
PostBQP = PP
Scott Aaronson showed that the complexity classes PostBQP (postselected bounded-error quantum polynomial time) and PP (probabilistic polynomial time) are equal. The result was significant because this quantum computation reformulation of PP gave new insight and simpler proofs of properties of PP.
The usual definition of a PostBQP circuit family is one with two designated qubits P (postselection) and Q (output) with a single measurement of P and Q at the end such that the event P = 1 has nonzero probability, the conditional probability Pr[Q = 1|P = 1] ≥ 2/3 if the input x is in the language, and Pr[Q = 0|P = 1] ≥ 2/3 if the input x is not in the language. For technical reasons we tweak the definition of PostBQP as follows: we require that for some constant c depending on the circuit family. Note this choice does not affect the basic properties of PostBQP, and also it can be shown that any computation consisting of typical gates (e.g. Hadamard, Toffoli) has this property whenever .
Proving PostBQP ⊆ PP
Suppose we are given a PostBQP family of circuits to decide a language L. We assume without loss of generality (e.g. see the inessential properties of quantum computers) that all gates have transition matrices that are represented with real numbers, at the expense of adding one more qubit.
Let |ψ⟩ denote the final quantum state of the circuit before the postselecting measurement is made. The overall goal of the proof is to construct a PP algorithm to decide L. More specifically, it suffices to correctly compare the squared amplitude of |ψ⟩ in the states with Q = 1, P = 1 to the squared amplitude of |ψ⟩ in the states with Q = 0, P = 1, to determine which is bigger. The key insight is that the comparison of these amplitudes can be transformed into comparing the acceptance probability of a PP machine with 1/2.
Matrix view of PostBQP algorithms
Let n denote the input size, denote the total number of qubits in the circuit (inputs, ancillary, output and postselection qubits), and denote the total number of gates.
Represent the ith gate by its transition matrix Ai (a real unitary matrix) and let the initial state be |x⟩ (padded with zeroes). Then the final state is |ψ⟩ = AG ⋯ A2A1|x⟩. Define S1 (resp. S0) to be the set of basis states corresponding to Q = 1, P = 1 (resp. Q = 0, P = 1) and define the probabilities
The definition of ensures that either or according to whether x is in L or not.
Our machine will compare and . In order to do this, we expand the definition of matrix multiplication:
where the sum is taken over all lists of G basis vectors . Now and can be expressed as a sum of pairwise products of these terms. Intuitively, we want to design a machine whose acceptance probability is something like , since then would imply that the acceptance probability is , while would imply that the acceptance probability is .
Technicality: we may assume entries of the transition matrices are rationals with denominator for some polynomial f(n).
The definition of tells us that if x is in L, and that otherwise . Let us replace all entries of A by the nearest fraction with denominator for a large polynomial that we presently describe. What will be used later is that the new values satisfy if x is in L, and if x is not in L. Using the earlier technical assumption and by analyzing how the 1-norm of the computational state changes, this is seen to be satisfied if thus clearly there is a large enough f that is polynomial in n.
Constructing the PP machine
Now we provide the detailed implementation of our machine. Let denote the sequence and define the shorthand notation
,
then
We define our machine to
pick a basis state uniformly at random
if then STOP and accept with probability 1/2, reject with probability 1/2
pick two sequences of G basis states uniformly at random
compute (which is a fraction with denominator such that )
if then accept with probability and reject with probability (which takes at most coin flips)
otherwise (then ) accept with probability and reject with probability (which again takes at most coin flips)
Then it is straightforward to compute that this machine accepts with probability
so this is a machine for the language L, as needed.
Proving PP ⊆ PostBQP
Suppose we have a PP machine with time complexity T on input x of length n. Thus the machine flips a coin at most T times during the computation. We can thus view the machine as a deterministic function f (implemented, e.g. by a classical circuit) which takes two inputs (x, r) where r, a binary string of length T, represents the results of the random coin flips that are performed by the computation, and the output of f is 1 (accept) or 0 (reject). The definition of PP tells us that
Thus, we want a PostBQP algorithm that can determine whether the above statement is true.
Define s to be the number of random strings which lead to acceptance,
and so 2^T − s is the number of rejected strings.
It is straightforward to argue that without loss of generality, ; for details, see a similar without loss of generality assumption in the proof that is closed under complementation.
Aaronson's algorithm
Aaronson's algorithm for solving this problem is as follows. For simplicity, we will write all quantum states as unnormalized. First, we prepare the state . Second, we apply Hadamard gates to the second register (each of the first T qubits), measure the second register and postselect on it being the all-zero string. It is easy to verify that this leaves the last register (the last qubit) in the residual state
Where H denotes the Hadamard gate, we compute the state
.
Where are positive real numbers to be chosen later with , we compute the state and measure the second qubit, postselecting on its value being equal to 1, which leaves the first qubit in a residual state depending on which we denote
.
Visualizing the possible states of a qubit as a circle, we see that if , (i.e. if ) then lies in the open quadrant while if , (i.e. if ) then lies in the open quadrant . In fact for any fixed x (and its corresponding s), as we vary the ratio in , note that the image of is precisely the corresponding open quadrant. In the rest of the proof, we make precise the idea that we can distinguish between these two quadrants.
Analysis
Let , which is the center of , and let be orthogonal to . Any qubit in , when measured in the basis , gives the value less than 1/2 of the time. On the other hand, if and we had picked then measuring in the basis would give the value all of the time. Since we don't know s we also don't know the precise value of r*, but we can try several (polynomially many) different values for in hopes of getting one that is "near" r*.
Specifically, note and let us successively set to every value of the form for . Then elementary calculations show that for one of these values of i, the probability that the measurement of in the basis yields is at least
Overall, the algorithm is as follows. Let k be any constant strictly between 1/2 and .
We do the following experiment for each : construct and measure in the basis a total of times where C is a constant. If the proportion of measurements is greater than k, then reject. If we don't reject for any i, accept. Chernoff bounds then show that for a sufficiently large universal constant C, we correctly classify x with probability at least 2/3.
Note that this algorithm satisfies the technical assumption that the overall postselection probability is not too small: each individual measurement of has postselection probability and so the overall probability is .
Implications
See Quantum computation reformulation of PP
References
Articles containing proofs
Quantum complexity theory
Probabilistic complexity classes | PostBQP | Mathematics | 2,499 |
18,610 | https://en.wikipedia.org/wiki/Laplace%20transform | In mathematics, the Laplace transform, named after Pierre-Simon Laplace (), is an integral transform that converts a function of a real variable (usually , in the time domain) to a function of a complex variable (in the complex-valued frequency domain, also known as s-domain, or s-plane).
The transform is useful for converting differentiation and integration in the time domain into much easier multiplication and division in the Laplace domain (analogous to how logarithms are useful for simplifying multiplication and division into addition and subtraction). This gives the transform many applications in science and engineering, mostly as a tool for solving linear differential equations and dynamical systems by simplifying ordinary differential equations and integral equations into algebraic polynomial equations, and by simplifying convolution into multiplication. Once the transformed problem is solved, the inverse Laplace transform converts the solution back to the original domain.
The Laplace transform is defined (for suitable functions f) by the integral ℒ{f}(s) = ∫_0^∞ f(t) e^(−st) dt,
where s is a complex number. It is related to many other transforms, most notably the Fourier transform and the Mellin transform. Formally, the Laplace transform is converted into a Fourier transform by the substitution s = iω, where ω is real. However, unlike the Fourier transform, which gives the decomposition of a function into its components in each frequency, the Laplace transform of a function with suitable decay is an analytic function, and so has a convergent power series, the coefficients of which give the decomposition of a function into its moments. Also unlike the Fourier transform, when regarded in this way as an analytic function, the techniques of complex analysis, and especially contour integrals, can be used for calculations.
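As a quick worked example (added here for illustration), the transform of the exponential function can be computed directly from the defining integral:

```latex
% Laplace transform of f(t) = e^{at}, valid for \operatorname{Re}(s) > \operatorname{Re}(a):
\mathcal{L}\{e^{at}\}(s) = \int_0^\infty e^{at} e^{-st}\, dt
                         = \int_0^\infty e^{-(s-a)t}\, dt
                         = \left[\frac{-e^{-(s-a)t}}{s-a}\right]_0^\infty
                         = \frac{1}{s-a}.
```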
History
The Laplace transform is named after mathematician and astronomer Pierre-Simon, Marquis de Laplace, who used a similar transform in his work on probability theory. Laplace wrote extensively about the use of generating functions (1814), and the integral form of the Laplace transform evolved naturally as a result.
Laplace's use of generating functions was similar to what is now known as the z-transform, and he gave little attention to the continuous variable case which was discussed by Niels Henrik Abel.
From 1744, Leonhard Euler investigated integrals of the form
as solutions of differential equations, introducing in particular the gamma function. Joseph-Louis Lagrange was an admirer of Euler and, in his work on integrating probability density functions, investigated expressions of the form
which resembles a Laplace transform.
These types of integrals seem first to have attracted Laplace's attention in 1782, where he was following in the spirit of Euler in using the integrals themselves as solutions of equations. However, in 1785, Laplace took the critical step forward when, rather than simply looking for a solution in the form of an integral, he started to apply the transforms in the sense that was later to become popular. He used an integral of the form
akin to a Mellin transform, to transform the whole of a difference equation, in order to look for solutions of the transformed equation. He then went on to apply the Laplace transform in the same way and started to derive some of its properties, beginning to appreciate its potential power.
Laplace also recognised that Joseph Fourier's method of Fourier series for solving the diffusion equation could only apply to a limited region of space, because those solutions were periodic. In 1809, Laplace applied his transform to find solutions that diffused indefinitely in space. In 1821, Cauchy developed an operational calculus for the Laplace transform that could be used to study linear differential equations in much the same way the transform is now used in basic engineering. This method was popularized, and perhaps rediscovered, by Oliver Heaviside around the turn of the century.
Bernhard Riemann used the Laplace transform in his 1859 paper On the Number of Primes Less Than a Given Magnitude, in which he also developed the inversion theorem. Riemann used the Laplace transform to develop the functional equation of the Riemann zeta function, and this method is still used to relate the modular transformation law of the Jacobi theta function, which is simple to prove via Poisson summation, to the functional equation.
Hjalmar Mellin was among the first to study the Laplace transform, rigorously in the Karl Weierstrass school of analysis, and apply it to the study of differential equations and special functions, at the turn of the 20th century. At around the same time, Heaviside was busy with his operational calculus. Thomas Joannes Stieltjes considered a generalization of the Laplace transform connected to his work on moments. Other contributors in this time period included Mathias Lerch, Oliver Heaviside, and Thomas Bromwich.
In 1934, Raymond Paley and Norbert Wiener published the important work Fourier transforms in the complex domain, about what is now called the Laplace transform (see below). Also during the 1930s, the Laplace transform was instrumental in G. H. Hardy and John Edensor Littlewood's study of tauberian theorems, and this application was later expounded on by Widder (1941), who developed other aspects of the theory such as a new method for inversion. Edward Charles Titchmarsh wrote the influential Introduction to the theory of the Fourier integral (1937).
The current widespread use of the transform (mainly in engineering) came about during and soon after World War II, replacing the earlier Heaviside operational calculus. The advantages of the Laplace transform had been emphasized by Gustav Doetsch, to whom the name Laplace transform is apparently due.
Formal definition
The Laplace transform of a function $f(t)$, defined for all real numbers $t \ge 0$, is the function $F(s)$, which is a unilateral transform defined by
$$F(s) = \int_0^\infty f(t)\,e^{-st}\,dt,$$
where $s$ is a complex frequency-domain parameter
$$s = \sigma + i\omega,$$
with real numbers $\sigma$ and $\omega$.
An alternate notation for the Laplace transform is instead of , often written as in an abuse of notation.
The meaning of the integral depends on types of functions of interest. A necessary condition for existence of the integral is that must be locally integrable on . For locally integrable functions that decay at infinity or are of exponential type (), the integral can be understood to be a (proper) Lebesgue integral. However, for many applications it is necessary to regard it as a conditionally convergent improper integral at . Still more generally, the integral can be understood in a weak sense, and this is dealt with below.
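For illustration, here is a minimal symbolic sketch (not part of the original article) using SymPy's `laplace_transform`; the particular functions transformed are illustrative assumptions.

```python
# Minimal SymPy sketch of the unilateral Laplace transform; the example
# functions (a decaying exponential and t**2) are illustrative choices.
from sympy import symbols, laplace_transform, exp

t = symbols('t', positive=True)
s = symbols('s')

# L{e^{-2t}} = 1/(s + 2), converging for Re(s) > -2
F, a, cond = laplace_transform(exp(-2*t), t, s)
print(F)   # 1/(s + 2)
print(a)   # -2, the abscissa of convergence

# L{t^2} = 2/s^3, converging for Re(s) > 0
G = laplace_transform(t**2, t, s, noconds=True)
print(G)   # 2/s**3
```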
One can define the Laplace transform of a finite Borel measure $\mu$ by the Lebesgue integral
$$\mathcal{L}\{\mu\}(s) = \int_{[0,\infty)} e^{-st}\,d\mu(t).$$
An important special case is where is a probability measure, for example, the Dirac delta function. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a probability density function . In that case, to avoid potential confusion, one often writes
where the lower limit of is shorthand notation for
This limit emphasizes that any point mass located at is entirely captured by the Laplace transform. Although with the Lebesgue integral, it is not necessary to take such a limit, it does appear more naturally in connection with the Laplace–Stieltjes transform.
Bilateral Laplace transform
When one says "the Laplace transform" without qualification, the unilateral or one-sided transform is usually intended. The Laplace transform can be alternatively defined as the bilateral Laplace transform, or two-sided Laplace transform, by extending the limits of integration to be the entire real axis. If that is done, the common unilateral transform simply becomes a special case of the bilateral transform, where the definition of the function being transformed is multiplied by the Heaviside step function.
The bilateral Laplace transform is defined as follows:
$$\mathcal{B}\{f\}(s) = \int_{-\infty}^{\infty} f(t)\,e^{-st}\,dt.$$
An alternate notation for the bilateral Laplace transform is , instead of .
Inverse Laplace transform
Two integrable functions have the same Laplace transform only if they differ on a set of Lebesgue measure zero. This means that, on the range of the transform, there is an inverse transform. In fact, besides integrable functions, the Laplace transform is a one-to-one mapping from one function space into another in many other function spaces as well, although there is usually no easy characterization of the range.
Typical function spaces in which this is true include the spaces of bounded continuous functions, the space , or more generally tempered distributions on . The Laplace transform is also defined and injective for suitable spaces of tempered distributions.
In these cases, the image of the Laplace transform lives in a space of analytic functions in the region of convergence. The inverse Laplace transform is given by the following complex integral, which is known by various names (the Bromwich integral, the Fourier–Mellin integral, and Mellin's inverse formula):
$$f(t) = \mathcal{L}^{-1}\{F\}(t) = \frac{1}{2\pi i}\lim_{T\to\infty}\int_{\gamma-iT}^{\gamma+iT} e^{st}F(s)\,ds,$$
where $\gamma$ is a real number chosen so that the contour path of integration is in the region of convergence of $F(s)$. In most applications, the contour can be closed, allowing the use of the residue theorem. An alternative formula for the inverse Laplace transform is given by Post's inversion formula. The limit here is interpreted in the weak-* topology.
In practice, it is typically more convenient to decompose a Laplace transform into known transforms of functions obtained from a table and construct the inverse by inspection.
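As a hedged illustration of inversion by table lookup, the following SymPy sketch decomposes a rational transform into partial fractions and inverts it; the transform chosen is a hypothetical example, not one from the article.

```python
# Partial-fraction decomposition followed by term-by-term inversion,
# using an illustrative rational transform.
from sympy import symbols, apart, inverse_laplace_transform

s, t = symbols('s t', positive=True)

F = (3*s + 5) / (s**2 + 3*s + 2)      # hypothetical transform
print(apart(F, s))                     # 2/(s + 1) + 1/(s + 2)

f = inverse_laplace_transform(F, s, t)
print(f)                               # 2*exp(-t) + exp(-2*t), for t >= 0 (SymPy includes Heaviside(t))
```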
Probability theory
In pure and applied probability, the Laplace transform is defined as an expected value. If $X$ is a random variable with probability density function $f$, then the Laplace transform of $f$ is given by the expectation
$$\mathcal{L}\{f\}(s) = \operatorname{E}\!\left[e^{-sX}\right],$$
where the right-hand side is the expectation taken with respect to the random variable $X$.
By convention, this is referred to as the Laplace transform of the random variable itself. Here, replacing by gives the moment generating function of . The Laplace transform has applications throughout probability theory, including first passage times of stochastic processes such as Markov chains, and renewal theory.
Of particular use is the ability to recover the cumulative distribution function of a continuous random variable by means of the Laplace transform as follows:
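As an illustrative check (an assumption-laden sketch, not from the article), the expectation $\operatorname{E}[e^{-sX}]$ for an exponentially distributed random variable can be computed symbolically, recovering the standard result $\lambda/(\lambda + s)$.

```python
# Laplace transform of an Exponential(lambda) density as an expected value.
from sympy import symbols, integrate, exp, oo, simplify

x, s = symbols('x s', positive=True)
lam = symbols('lambda', positive=True)

pdf = lam * exp(-lam * x)                       # exponential density on [0, oo)
L = integrate(exp(-s*x) * pdf, (x, 0, oo))      # E[e^{-sX}]
print(simplify(L))                              # lambda/(lambda + s)
```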
Algebraic construction
The Laplace transform can be alternatively defined in a purely algebraic manner by applying a field of fractions construction to the convolution ring of functions on the positive half-line. The resulting space of abstract operators is exactly equivalent to Laplace space, but in this construction the forward and reverse transforms never need to be explicitly defined (avoiding the related difficulties with proving convergence).
Region of convergence
If is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform of converges provided that the limit
exists.
The Laplace transform converges absolutely if the integral
exists as a proper Lebesgue integral. The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former but not in the latter sense.
The set of values for which converges absolutely is either of the form or , where is an extended real constant with (a consequence of the dominated convergence theorem). The constant is known as the abscissa of absolute convergence, and depends on the growth behavior of . Analogously, the two-sided transform converges absolutely in a strip of the form , and possibly including the lines or . The subset of values of for which the Laplace transform converges absolutely is called the region of absolute convergence, or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence: this is a consequence of Fubini's theorem and Morera's theorem.
Similarly, the set of values for which converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at , then it automatically converges for all with . Therefore, the region of convergence is a half-plane of the form , possibly including some points of the boundary line .
In the region of convergence , the Laplace transform of can be expressed by integrating by parts as the integral
That is, can effectively be expressed, in the region of convergence, as the absolutely convergent Laplace transform of some other function. In particular, it is analytic.
There are several Paley–Wiener theorems concerning the relationship between the decay properties of , and the properties of the Laplace transform within the region of convergence.
In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output. This is equivalent to the absolute convergence of the Laplace transform of the impulse response function in the region . As a result, LTI systems are stable, provided that the poles of the Laplace transform of the impulse response function have negative real part.
This ROC is used in knowing about the causality and stability of a system.
Properties and theorems
The Laplace transform's key property is that it converts differentiation and integration in the time domain into multiplication and division by $s$ in the Laplace domain. Thus, the Laplace variable $s$ is also known as an operator variable in the Laplace domain: it acts as the derivative operator, while $1/s$ acts as the integration operator.
Given the functions and , and their respective Laplace transforms and ,
the following table is a list of properties of unilateral Laplace transform:
Initial value theorem
$$f(0^+) = \lim_{s\to\infty} sF(s).$$
Final value theorem
$$f(\infty) = \lim_{s\to 0} sF(s),$$ if all poles of $sF(s)$ are in the left half-plane.
The final value theorem is useful because it gives the long-term behaviour without having to perform partial fraction decompositions (or other difficult algebra). If has a pole in the right-hand plane or poles on the imaginary axis (e.g., if or ), then the behaviour of this formula is undefined.
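A hedged SymPy sketch of the final value theorem follows; the step response used is an illustrative choice, not one from the article.

```python
# Final value theorem check on a hypothetical first-order step response.
from sympy import symbols, limit, inverse_laplace_transform

s, t = symbols('s t', positive=True)

F = 5 / (s * (s + 2))   # illustrative transform

# lim_{t->oo} f(t) = lim_{s->0} s*F(s), valid because the only pole of
# s*F(s) is at s = -2, which lies in the left half-plane.
print(limit(s * F, s, 0))                 # 5/2

f = inverse_laplace_transform(F, s, t)
print(f)                                  # 5/2 - 5*exp(-2*t)/2 for t >= 0, which indeed tends to 5/2
```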
Relation to power series
The Laplace transform can be viewed as a continuous analogue of a power series. If is a discrete function of a positive integer , then the power series associated to is the series
where is a real variable (see Z-transform). Replacing summation over with integration over , a continuous version of the power series becomes
where the discrete function is replaced by the continuous one .
Changing the base of the power from to gives
For this to converge for, say, all bounded functions , it is necessary to require that . Making the substitution gives just the Laplace transform:
In other words, the Laplace transform is a continuous analog of a power series, in which the discrete parameter is replaced by the continuous parameter , and is replaced by .
Relation to moments
The quantities
$$\mu_n = \int_0^\infty t^n f(t)\,dt$$
are the moments of the function $f$. If the first $n$ moments of $f$ converge absolutely, then by repeated differentiation under the integral,
$$(-1)^n (\mathcal{L}f)^{(n)}(0) = \mu_n.$$
This is of special significance in probability theory, where the moments of a random variable $X$ are given by the expectation values $\mu_n = \operatorname{E}[X^n]$. Then, the relation holds
$$\mu_n = (-1)^n \frac{d^n}{ds^n}\operatorname{E}\!\left[e^{-sX}\right]\Big|_{s=0}.$$
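The moment relation can be verified symbolically; the sketch below (an illustrative assumption, using $f(t) = e^{-t}$, whose $n$th moment is $n!$) compares direct integration with repeated differentiation of the transform.

```python
# Moments from derivatives of the Laplace transform at s = 0.
from sympy import symbols, laplace_transform, diff, integrate, exp, oo

t, s = symbols('t s', positive=True)

f = exp(-t)
F = laplace_transform(f, t, s, noconds=True)   # 1/(s + 1)

for n in range(4):
    direct = integrate(t**n * f, (t, 0, oo))              # the n-th moment, here n!
    via_transform = ((-1)**n * diff(F, s, n)).subs(s, 0)  # (-1)^n F^(n)(0)
    print(n, direct, via_transform)                        # both columns agree
```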
Transform of a function's derivative
It is often convenient to use the differentiation property of the Laplace transform to find the transform of a function's derivative. This can be derived from the basic expression for a Laplace transform as follows:
yielding
$$\mathcal{L}\{f'(t)\}(s) = s\,\mathcal{L}\{f(t)\}(s) - f(0^-),$$
and in the bilateral case,
$$\mathcal{L}\{f'(t)\}(s) = s\int_{-\infty}^{\infty} e^{-st} f(t)\,dt = s\,\mathcal{L}\{f(t)\}(s).$$
The general result
$$\mathcal{L}\{f^{(n)}(t)\}(s) = s^n\,\mathcal{L}\{f(t)\}(s) - s^{n-1} f(0^-) - \cdots - f^{(n-1)}(0^-),$$
where $f^{(n)}$ denotes the $n$th derivative of $f$, can then be established with an inductive argument.
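A quick symbolic check of the derivative rule is sketched below; the test function $f(t) = \cos(3t)$ is an illustrative assumption (for a continuous function, $f(0^-) = f(0)$).

```python
# Verify L{f'}(s) = s*F(s) - f(0) for f(t) = cos(3t).
from sympy import symbols, laplace_transform, diff, cos, simplify

t, s = symbols('t s', positive=True)

f = cos(3*t)
F = laplace_transform(f, t, s, noconds=True)            # s/(s**2 + 9)

lhs = laplace_transform(diff(f, t), t, s, noconds=True) # transform of f'
rhs = s * F - f.subs(t, 0)
print(simplify(lhs - rhs))                               # 0, confirming the rule
```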
Evaluating integrals over the positive real axis
A useful property of the Laplace transform is the following:
under suitable assumptions on the behaviour of in a right neighbourhood of and on the decay rate of in a left neighbourhood of . The above formula is a variation of integration by parts, with the operators
and being replaced by and . Let us prove the equivalent formulation:
By plugging in the left-hand side turns into:
but assuming Fubini's theorem holds, by reversing the order of integration we get the desired right-hand side.
This method can be used to compute integrals that would otherwise be difficult to compute using elementary methods of real calculus. For example,
Relationship to other transforms
Laplace–Stieltjes transform
The (unilateral) Laplace–Stieltjes transform of a function $g$ is defined by the Lebesgue–Stieltjes integral
$$\{\mathcal{L}^{*}g\}(s) = \int_0^\infty e^{-st}\,dg(t).$$
The function is assumed to be of bounded variation. If is the antiderivative of :
then the Laplace–Stieltjes transform of and the Laplace transform of coincide. In general, the Laplace–Stieltjes transform is the Laplace transform of the Stieltjes measure associated to . So in practice, the only distinction between the two transforms is that the Laplace transform is thought of as operating on the density function of the measure, whereas the Laplace–Stieltjes transform is thought of as operating on its cumulative distribution function.
Fourier transform
Let be a complex-valued Lebesgue integrable function supported on , and let be its Laplace transform. Then, within the region of convergence, we have
which is the Fourier transform of the function .
Indeed, the Fourier transform is a special case (under certain conditions) of the bilateral Laplace transform. The main difference is that the Fourier transform of a function is a complex function of a real variable (frequency), whereas the Laplace transform of a function is a complex function of a complex variable. The Laplace transform is usually restricted to transformation of functions of $t$ with $t \ge 0$. A consequence of this restriction is that the Laplace transform of a function is a holomorphic function of the variable $s$. Unlike the Fourier transform, the Laplace transform of a distribution is generally a well-behaved function. Techniques of complex variables can also be used to directly study Laplace transforms. As a holomorphic function, the Laplace transform has a power series representation. This power series expresses a function as a linear superposition of moments of the function. This perspective has applications in probability theory.
Formally, the Fourier transform is equivalent to evaluating the bilateral Laplace transform with imaginary argument $s = i\omega$ when the condition explained below is fulfilled,
$$\hat{f}(\omega) = \mathcal{L}\{f\}(s)\big|_{s=i\omega} = \int_{-\infty}^{\infty} e^{-i\omega t} f(t)\,dt.$$
This convention of the Fourier transform ( in ) requires a factor of on the inverse Fourier transform. This relationship between the Laplace and Fourier transforms is often used to determine the frequency spectrum of a signal or dynamical system.
The above relation is valid as stated if and only if the region of convergence (ROC) of contains the imaginary axis, .
For example, the function has a Laplace transform whose ROC is . As is a pole of , substituting in does not yield the Fourier transform of , which contains terms proportional to the Dirac delta functions .
However, a relation of the form
holds under much weaker conditions. For instance, this holds for the above example provided that the limit is understood as a weak limit of measures (see vague topology). General conditions relating the limit of the Laplace transform of a function on the boundary to the Fourier transform take the form of Paley–Wiener theorems.
Mellin transform
The Mellin transform and its inverse are related to the two-sided Laplace transform by a simple change of variables.
If in the Mellin transform
we set we get a two-sided Laplace transform.
Z-transform
The unilateral or one-sided Z-transform is simply the Laplace transform of an ideally sampled signal with the substitution of
where is the sampling interval (in units of time e.g., seconds) and is the sampling rate (in samples per second or hertz).
Let
be a sampling impulse train (also called a Dirac comb) and
be the sampled representation of the continuous-time signal
The Laplace transform of the sampled signal is
This is the precise definition of the unilateral Z-transform of the discrete function
with the substitution of .
Comparing the last two equations, we find the relationship between the unilateral Z-transform and the Laplace transform of the sampled signal,
The similarity between the Z- and Laplace transforms is expanded upon in the theory of time scale calculus.
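A numeric sketch of this relationship follows (an illustration, with assumed values): summing the sampled exponential against $e^{-snT}$ reproduces the closed-form unilateral Z-transform evaluated at $z = e^{sT}$.

```python
# Laplace transform of a sampled exponential versus its Z-transform at z = e^{sT}.
import numpy as np

a, T = 0.5, 0.1            # illustrative decay rate and sampling interval
s = 2.0 + 1.0j             # illustrative point in the region of convergence
z = np.exp(s * T)

n = np.arange(0, 5000)
lhs = np.sum(np.exp(-a*n*T) * np.exp(-s*n*T))   # truncated transform of the sampled signal
rhs = z / (z - np.exp(-a*T))                    # closed-form Z-transform of e^{-a n T}
print(lhs, rhs)                                 # approximately equal
```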
Borel transform
The integral form of the Borel transform
is a special case of the Laplace transform for an entire function of exponential type, meaning that
for some constants and . The generalized Borel transform allows a different weighting function to be used, rather than the exponential function, to transform functions not of exponential type. Nachbin's theorem gives necessary and sufficient conditions for the Borel transform to be well defined.
Fundamental relationships
Since an ordinary Laplace transform can be written as a special case of a two-sided transform, and since the two-sided transform can be written as the sum of two one-sided transforms, the theory of the Laplace-, Fourier-, Mellin-, and Z-transforms are at bottom the same subject. However, a different point of view and different characteristic problems are associated with each of these four major integral transforms.
Table of selected Laplace transforms
The following table provides Laplace transforms for many common functions of a single variable. For definitions and explanations, see the Explanatory Notes at the end of the table.
Because the Laplace transform is a linear operator,
The Laplace transform of a sum is the sum of Laplace transforms of each term.
The Laplace transform of a multiple of a function is that multiple times the Laplace transformation of that function.
Using this linearity, and various trigonometric, hyperbolic, and complex number (etc.) properties and/or identities, some Laplace transforms can be obtained from others more quickly than by using the definition directly.
The unilateral Laplace transform takes as input a function whose time domain is the non-negative reals, which is why all of the time domain functions in the table below are multiples of the Heaviside step function, .
The entries of the table that involve a time delay are required to be causal (meaning that ). A causal system is a system where the impulse response is zero for all time prior to . In general, the region of convergence for causal systems is not the same as that of anticausal systems.
s-domain equivalent circuits and impedances
The Laplace transform is often used in circuit analysis, and simple conversions of circuit elements to the s-domain can be made. Circuit elements can be transformed into impedances, very similar to phasor impedances.
Here is a summary of equivalents:
Note that the resistor is exactly the same in the time domain and the s-domain. The sources are put in if there are initial conditions on the circuit elements. For example, if a capacitor has an initial voltage across it, or if the inductor has an initial current through it, the sources inserted in the s-domain account for that.
The equivalents for current and voltage sources are simply derived from the transformations in the table above.
Examples and applications
The Laplace transform is used frequently in engineering and physics; the output of a linear time-invariant system can be calculated by convolving its unit impulse response with the input signal. Performing this calculation in Laplace space turns the convolution into a multiplication; the latter being easier to solve because of its algebraic form. For more information, see control theory. The Laplace transform is invertible on a large class of functions. Given a simple mathematical or functional description of an input or output to a system, the Laplace transform provides an alternative functional description that often simplifies the process of analyzing the behavior of the system, or in synthesizing a new system based on a set of specifications.
The Laplace transform can also be used to solve differential equations and is used extensively in mechanical engineering and electrical engineering. The Laplace transform reduces a linear differential equation to an algebraic equation, which can then be solved by the formal rules of algebra. The original differential equation can then be solved by applying the inverse Laplace transform. English electrical engineer Oliver Heaviside first proposed a similar scheme, although without using the Laplace transform; and the resulting operational calculus is credited as the Heaviside calculus.
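The sketch below illustrates this route on a hypothetical initial value problem (the equation and initial condition are assumptions, not taken from the article): transforming $y' + 3y = 0$, $y(0) = 2$ gives the algebraic equation $sY(s) - 2 + 3Y(s) = 0$, which is solved and inverted.

```python
# Solving a simple ODE via the Laplace-domain algebraic equation.
from sympy import symbols, Eq, solve, inverse_laplace_transform

s, t = symbols('s t', positive=True)
Y = symbols('Y')

y0 = 2
alg = Eq(s*Y - y0 + 3*Y, 0)        # transform of y' + 3y = 0 with y(0) = 2
Ys = solve(alg, Y)[0]
print(Ys)                          # 2/(s + 3)
print(inverse_laplace_transform(Ys, s, t))   # 2*exp(-3*t), for t >= 0
```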
Evaluating improper integrals
Let . Then (see the table above)
From which one gets:
In the limit , one gets
provided that the interchange of limits can be justified. This is often possible as a consequence of the final value theorem. Even when the interchange cannot be justified, the calculation can be suggestive. For example, with , proceeding formally one has
The validity of this identity can be proved by other means. It is an example of a Frullani integral.
Another example is Dirichlet integral.
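A numeric check of a Frullani-type integral of this kind is sketched below, under the illustrative assumption $f(t) = e^{-at} - e^{-bt}$ with $a = 1$, $b = 3$, for which the integral equals $\ln(b/a)$.

```python
# Numeric verification of integral_0^inf (e^{-at} - e^{-bt})/t dt = ln(b/a).
import numpy as np
from scipy.integrate import quad

a, b = 1.0, 3.0

def integrand(t):
    # the limit as t -> 0 is (b - a); guard avoids a 0/0 evaluation at the endpoint
    return (np.exp(-a*t) - np.exp(-b*t)) / t if t > 0 else b - a

value, _ = quad(integrand, 0, np.inf)
print(value, np.log(b/a))   # both approximately 1.0986
```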
Complex impedance of a capacitor
In the theory of electrical circuits, the current flow in a capacitor is proportional to the capacitance and rate of change in the electrical potential (with equations as for the SI unit system). Symbolically, this is expressed by the differential equation
$$i(t) = C\,\frac{dv(t)}{dt},$$
where $C$ is the capacitance of the capacitor, $i = i(t)$ is the electric current through the capacitor as a function of time, and $v = v(t)$ is the voltage across the terminals of the capacitor, also as a function of time.
Taking the Laplace transform of this equation, we obtain
$$I(s) = C\bigl(sV(s) - v_0\bigr),$$
where
$$I(s) = \mathcal{L}\{i(t)\}, \qquad V(s) = \mathcal{L}\{v(t)\},$$
and $v_0$ is the initial voltage across the capacitor at $t = 0$.
Solving for $V(s)$ we have
$$V(s) = \frac{I(s)}{sC} + \frac{v_0}{s}.$$
The definition of the complex impedance $Z$ (in ohms) is the ratio of the complex voltage $V$ divided by the complex current $I$ while holding the initial state at zero:
$$Z(s) = \left.\frac{V(s)}{I(s)}\right|_{v_0 = 0}.$$
Using this definition and the previous equation, we find:
$$Z(s) = \frac{1}{sC},$$
which is the correct expression for the complex impedance of a capacitor. In addition, the Laplace transform has large applications in control theory.
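A small numeric sketch of this impedance, $Z(s) = 1/(sC)$, evaluated on the imaginary axis $s = j\omega$, follows; the component values are illustrative assumptions.

```python
# Capacitor impedance Z(s) = 1/(s*C) evaluated at s = j*omega.
import numpy as np

C = 1e-6                      # 1 microfarad (illustrative)
omega = 2 * np.pi * 1e3       # 1 kHz (illustrative)
s = 1j * omega

Z = 1 / (s * C)
print(abs(Z))                 # magnitude, approximately 159.15 ohms
print(np.angle(Z, deg=True))  # phase of -90 degrees, as expected for an ideal capacitor
```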
Impulse response
Consider a linear time-invariant system with transfer function
The impulse response is simply the inverse Laplace transform of this transfer function:
Partial fraction expansion
To evaluate this inverse transform, we begin by expanding using the method of partial fraction expansion,
The unknown constants and are the residues located at the corresponding poles of the transfer function. Each residue represents the relative contribution of that singularity to the transfer function's overall shape.
By the residue theorem, the inverse Laplace transform depends only upon the poles and their residues. To find the residue , we multiply both sides of the equation by to get
Then by letting , the contribution from vanishes and all that is left is
Similarly, the residue is given by
Note that
and so the substitution of and into the expanded expression for gives
Finally, using the linearity property and the known transform for exponential decay (see Item #3 in the Table of Laplace Transforms, above), we can take the inverse Laplace transform of to obtain
which is the impulse response of the system.
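Because the pole locations are not spelled out in the text above, the following sketch uses hypothetical values to illustrate the same partial-fraction procedure with SymPy.

```python
# Partial-fraction expansion and inversion of an illustrative transfer function.
from sympy import symbols, apart, inverse_laplace_transform

s, t = symbols('s t', positive=True)

H = 1 / ((s + 1) * (s + 4))                # hypothetical poles at -1 and -4
print(apart(H, s))                         # 1/(3*(s + 1)) - 1/(3*(s + 4))
print(inverse_laplace_transform(H, s, t))  # (exp(-t) - exp(-4*t))/3, for t >= 0
```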
Convolution
The same result can be achieved using the convolution property as if the system is a series of filters with transfer functions and . That is, the inverse of
is
Phase delay
Starting with the Laplace transform,
we find the inverse by first rearranging terms in the fraction:
We are now able to take the inverse Laplace transform of our terms:
This is just the sine of the sum of the arguments, yielding:
We can apply similar logic to find that
Statistical mechanics
In statistical mechanics, the Laplace transform of the density of states defines the partition function. That is, the canonical partition function is given by
and the inverse is given by
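As an illustrative numeric sketch (the density of states is an assumption), taking $g(E) = \sqrt{E}$ gives the closed form $Z(\beta) = \sqrt{\pi}/(2\beta^{3/2})$, which the Laplace integral reproduces.

```python
# Partition function as the Laplace transform of a density of states g(E) = sqrt(E).
import numpy as np
from scipy.integrate import quad

beta = 2.0
Z_numeric, _ = quad(lambda E: np.sqrt(E) * np.exp(-beta * E), 0, np.inf)
Z_exact = np.sqrt(np.pi) / (2 * beta**1.5)
print(Z_numeric, Z_exact)   # both approximately 0.3133
```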
Spatial (not time) structure from astronomical spectrum
The wide and general applicability of the Laplace transform and its inverse is illustrated by an application in astronomy. For an astronomical source of radio-frequency thermal radiation too distant to resolve as more than a point, the method provides some information on the spatial distribution of matter in the source from its flux density spectrum, rather than relating the time domain with the spectrum (frequency domain).
Assuming certain properties of the object, e.g. spherical shape and constant temperature, calculations based on carrying out an inverse Laplace transformation on the spectrum of the object can produce the only possible model of the distribution of matter in it (density as a function of distance from the center) consistent with the spectrum. When independent information on the structure of an object is available, the inverse Laplace transform method has been found to be in good agreement with it.
Birth and death processes
Consider a random walk, with steps occurring with probabilities . Suppose also that the time step is a Poisson process, with parameter . Then the probability of the walk being at the lattice point at time is
This leads to a system of integral equations (or equivalently a system of differential equations). However, because it is a system of convolution equations, the Laplace transform converts it into a system of linear equations for
namely:
which may now be solved by standard methods.
Tauberian theory
The Laplace transform of the measure on is given by
It is intuitively clear that, for small , the exponentially decaying integrand will become more sensitive to the concentration of the measure on larger subsets of the domain. To make this more precise, introduce the distribution function:
Formally, we expect a limit of the following kind:
Tauberian theorems are theorems relating the asymptotics of the Laplace transform, as , to those of the distribution of as . They are thus of importance in asymptotic formulae of probability and statistics, where often the spectral side has asymptotics that are simpler to infer.
Two tauberian theorems of note are the Hardy–Littlewood tauberian theorem and the Wiener tauberian theorem. The Wiener theorem generalizes the Ikehara tauberian theorem, which is the following statement:
Let A(x) be a non-negative, monotonic nondecreasing function of x, defined for 0 ≤ x < ∞. Suppose that
converges for ℜ(s) > 1 to the function ƒ(s) and that, for some non-negative number c,
has an extension as a continuous function for ℜ(s) ≥ 1.
Then the limit as x goes to infinity of e−x A(x) is equal to c.
This statement can be applied in particular to the logarithmic derivative of Riemann zeta function, and thus provides an extremely short way to prove the prime number theorem.
See also
Analog signal processing
Bernstein's theorem on monotone functions
Continuous-repayment mortgage
Hamburger moment problem
Hardy–Littlewood Tauberian theorem
Laplace–Carson transform
Moment-generating function
Nonlocal operator
Post's inversion formula
Signal-flow graph
Transfer function
Notes
References
Modern
Historical
Further reading
Mathews, Jon; Walker, Robert L. (1970), Mathematical methods of physics (2nd ed.), New York: W. A. Benjamin.
- See Chapter VI. The Laplace transform.
J. A. C. Weidman and Bengt Fornberg: "Fully numerical Laplace transform methods", Numerical Algorithms, vol. 92 (2023), pp. 985–1006. https://doi.org/10.1007/s11075-022-01368-x
External links
Online Computation of the transform or inverse transform, wims.unice.fr
Tables of Integral Transforms at EqWorld: The World of Mathematical Equations.
Good explanations of the initial and final value theorems
Laplace Transforms at MathPages
Computational Knowledge Engine allows Laplace transforms and their inverse transforms to be calculated easily.
Laplace Calculator to calculate Laplace Transforms online easily.
Code to visualize Laplace Transforms and many example videos.
Differential equations
Fourier analysis
Mathematical physics
Integral transforms | Laplace transform | Physics,Mathematics | 6,322 |
12,947,872 | https://en.wikipedia.org/wiki/Adult%20development | Adult development encompasses the changes that occur in biological and psychological domains of human life from the end of adolescence until the end of one's life. Changes occur at the cellular level and are partially explained by biological theories of adult development and aging. Biological changes influence psychological and interpersonal/social developmental changes, which are often described by stage theories of human development. Stage theories typically focus on "age-appropriate" developmental tasks to be achieved at each stage. Erik Erikson and Carl Jung proposed stage theories of human development that encompass the entire life span, and emphasized the potential for positive change very late in life.
The concept of adulthood has legal and socio-cultural definitions. The legal definition of an adult is a person who is fully grown or developed. This is referred to as the age of majority, which is age 18 in most cultures, although there is a variation from 15 to 21. The typical perception of adulthood is that it starts at age 20 or 21. Middle-aged adulthood starts at about age 40, followed by old age around age 60. The socio-cultural definition of being an adult is based on what a culture normatively views as being the required criteria for adulthood, which, in turn, influences the lives of individuals within that culture. This may or may not coincide with the legal definition. Current views on adult development in late life focus on the concept of successful aging, defined as "...low probability of disease and disease-related disability, high cognitive and physical functional capacity, and active engagement with life."
Biomedical theories hold that one can age successfully by caring for physical health and minimizing loss in function, whereas psychosocial theories posit that capitalizing upon social and cognitive resources, such as a positive attitude or social support from neighbors, family, and friends, is key to aging successfully. Jeanne Louise Calment exemplifies successful aging as the longest living person, dying at 122 years old. Her long life can be attributed to her genetics (both parents lived into their 80s), her active lifestyle and an optimistic attitude. She enjoyed many hobbies and physical activities, and believed that laughter contributed to her longevity. She poured olive oil on all of her food and skin, which she believed also contributed to her long life and youthful appearance.
Contemporary and classic theories
Adult development is a relatively new area of study in the field of psychology. Previously, it was assumed that development ceased at the end of adolescence. Further research has concluded that development continues well after adolescence and into late adulthood. This new field of research was influenced by the aging of the "baby boomer" generation. The population of Americans aged 65 or over was roughly 9 million in 1940; in just 60 years, that total grew to over 35 million people. This rise in population and life expectancy has drawn attention to development throughout adulthood.
Changes in adulthood have been described by several theories and metatheories, which serve as a framework for adult development research. One of these is Erik Erikson's theory, which went beyond childhood stages and introduced the concept of development continuing until death.
Lifespan development theory
Life span development can be defined as age-relating experiences that occur from birth to the entirety of a human's life. The theory considers the lifelong accumulation of developmental additions and subtractions, with the relative proportion of gains to losses diminishing over an individual's lifetime. According to this theory, life span development has multiple trajectories (positive, negative, stable) and causes (biological, psychological, social, and cultural). Individual variation is a hallmark of this theory – not all individuals develop and age at the same rate and in the same manner.
Bronfenbrenner's ecological theory
Bronfenbrenner's ecological theory is an environmental system theory and social ecological model which focuses on five environmental systems:
Microsystem: This system is the immediate environment of an individual. It includes relationships and interactions that are closest to the individual, therefore, having a very significant and direct impact. Structures in the microsystem may include family, school, peers, or work environments.
Mesosystem: This system portrays the connections and interactions between an individual's microsystem structures. This could be demonstrated by the relationship between an individual's family and school.
Exosystem: This system contains structures that an individual does not directly interact with and is not directly impacted by; rather, the structures indirectly affect the individual through one of their microsystems. If the individual were a child, their exosystem might include elements such as legal services, their parents' work, or the school board. These elements do not directly impact the child, but they may impact some of the child's microsystems (such as their parents/family) which do directly affect the child.
Macrosystem: This system is considered to be the outermost layer of an individual's environment. It encompasses the culture and society in which a person lives in and is affected by. It includes the values, beliefs, laws, and customs by which a culture/society is dictated. The macrosystem ultimately influences the structures within the other systems and their interactions.
Chronosystem: This system encompasses the changes that occur throughout time in an individual's life. These changes may entail personal events, such as reaching puberty and the passing of a family member, as well as societal events, such as wars and technological advancements.
Jeffrey Arnett's theory of Emerging Adulthood
The theory of Emerging Adulthood was developed by Jeffrey Arnett in the early 2000s. The theory is centered on changes often experienced during the transition from adolescence to adulthood, a period that usually takes place between the ages of 18 and 29. The concept of emerging adulthood is new, and likely developed due to growing rates of college attendance and other social, economic, and cultural changes that have delayed the typical markers of being an "adult".
There are five main characteristics describing what Emerging Adulthood looks like. To examine them, in 1995 Jeffrey Arnett interviewed 300 young adults aged 18 to 29 about what they wanted out of life. From these interviews, Arnett identified five characteristics: the Age of Identity Exploration, the Age of Instability, the Age of Self-Focus, the Age of Feeling in Between, and the Age of Possibilities.
The Age of Identity Exploration is the one Arnett found to be most prevalent in young people's lives, as most people at this stage are trying to figure out what they want from life and what their values are. The Age of Instability is a period in which circumstances change frequently, with dramatic shifts in areas such as relationships and schooling; individuals have not yet established who they want to become or what their careers will look like. Many see this time of life in a negative light, but it is when the foundation for one's future is laid. The Age of Self-Focus is the time when individuals begin to decide who they are and what they want to become; they work harder on themselves, see significant personal growth, and become far more independent and self-motivated. The Age of Feeling in Between describes a period when individuals cannot yet do everything on their own but are starting to move out from under their parents' authority. The Age of Possibilities is the stage in which many emerging adults see many different futures ahead of them and feel optimistic about the opportunities life has to offer; many also believe they have the opportunity to build better lives than their parents had before them.
When emerging adults in America between the ages of 18 and 25 are asked whether they are adults, many are unable to give a definitive "yes" or "no," instead answering that they feel like adults in some respects but not in others. The theory of Emerging Adulthood has also drawn criticism: some argue that it neglects other social classes, that it is tied too closely to the present time period, and that it focuses too heavily on Western culture.
Erik Erikson's theory of psychosocial development
Erik Erikson developed stages of ego development that extended through childhood, adolescence, and adulthood. He was trained in psychoanalysis and was highly influenced by Freud, but unlike Freud, Erikson believed that social interaction is very important to the individual's psychosocial development. His stage theory consists of 8 stages in life from birth to old age, each of which is characterized by a specific developmental task. During each stage, one developmental task is dominant, but may be carried forward into later stages as well. According to Erikson, individuals may experience tension when advancing to new stages of development, and seek to establish equilibrium within each stage. This tension is often referred to as a "crisis," a psycho-social conflict, in which an individual experiences conflict between their inner and outer worlds that are relative to whichever stage they are in. If equilibrium is not found for each task, there are potential negative outcomes called maladaptations (abnormally positive) and malignancies (abnormally negative), with malignancy being the worse of the two. The theory posits eight sequential stages of individual human development influenced by biological, psychological, and social factors throughout the lifespan. This bio-psychosocial approach has influenced several fields of study, including gerontology, personality development, identity formation, life cycle development, and more.
Stage 1 – Trust vs. Mistrust (0 to 1.5 years)
Trust vs. Mistrust is experienced in the first years of life. Trust in infancy helps the child be secure about the world around them. Because an infant is completely dependent, they start building trust based on the dependability and quality of their caregivers. If a child successfully develops trust, he or she will feel safe and secure.
Maladaptation – sensory distortion (e.g. unrealistic, spoilt, deluded)
Malignancy – withdrawal (e.g. neurotic, depressive, afraid)
Stage 2 – Autonomy vs. Shame and Doubt (1.5 – 3 years)
After gaining trust in their caregivers, infants learn that they are responsible for their actions. They begin to make judgments and move on their own. When toddlers are punished too severely or too often, they are likely to experience shame and self-doubt.
Maladaptation – impulsivity (e.g. reckless, inconsiderate, thoughtless)
Malignancy – compulsion (e.g. anal, constrained, self-limiting)
Stage 3 – Initiative vs. Guilt (3 – 6 years)
During preschool years children start to use their power and control over the world through playing and other social interactions. Children who successfully pass this stage feel capable and able to lead others, while those who do not are left with a sense of guilt, self-doubt, and lack of initiative.
Maladaptation – ruthlessness (e.g. exploitative, uncaring, dispassionate)
Malignancy – inhibition (e.g. risk-averse, unadventurous)
Stage 4 – Industry vs. Inferiority (6 years to puberty)
When children interact with others, they start to develop a sense of pride in their abilities and accomplishments. When parents, teachers, or peers commend and encourage children, they begin to feel confident in their skills. Successfully completing this stage leads to a strong belief in one's ability to handle the tasks at hand.
Maladaptation – narrow virtuosity (e.g. workaholic, obsessive, specialist)
Malignancy – inertia (e.g. lazy, apathetic, purposeless)
Stage 5 – Identity vs. Role Confusion (adolescence)
During adolescent years, children begin to find out who they are. They explore their independence and develop a sense of self. This is Erikson's fifth stage, Identity vs Confusion. Completing this stage leads to fidelity, an ability that Erikson described as useful to live by society's standards and expectations.
Maladaptation – fanaticism (e.g. self-important, extremist)
Malignancy – repudiation (e.g. socially disconnected, cut-off)
Stage 6 – Intimacy vs. Isolation (early adulthood)
In early adulthood, individuals begin to experience intimate relationships in which they must either commit to relating and connecting to others on a personal level or retreat into isolation, afraid of commitment or vulnerability. Being intimate with someone does not always mean having a sexual component; in a platonic relationship, closeness might take the form of self-disclosure. After reaching this stage, a person is equipped to build strong, enduring relationships with other people. According to psychologist Robert Sternberg's "Triangular Theory of Love," companionate love is founded on deep affection, trust, and commitment and develops over time and becomes more prominent in long-term partnerships. In contrast, passionate love is characterized by intense feelings, physical attraction, and excitement and is usually present at the beginning of a relationship. Both types of love are thought to be distinct types that can coexist within a relationship.
Maladaptation – promiscuity (e.g. sexually needy, vulnerable)
Malignancy – exclusivity (e.g. loner, cold, self-contained)
Stage 7 – Generativity vs. Stagnation (middle adulthood)
This stage usually begins when an individual has established a career and has a family. In this stage, an individual must either contribute significantly to their careers, families and communities in order to ensure success in the next generation or they stagnate, creating a threat to their well-being which can be referred to as a "mid-life crisis." When individuals feel they have successfully fostered growth in themselves and their relationships, they will feel satisfied in their successes and contributions to the world.
Maladaptation – overextension (e.g. do-gooder, busy-body, meddling)
Malignancy – rejectivity (e.g. disinterested, cynical)
Stage 8 – Integrity vs. Despair (late adulthood)
This stage often occurs when an older individual is in retirement and expecting the end of their life. They reflect on their life and either come to the conclusion that they have found meaning and peace, or their lives were not fulfilling, and they didn't achieve what they wanted to. The former is self-accepting of who they've become, while the latter is not accepting of themselves or their circumstances in life, which leads to despair.
Maladaptation – presumption (e.g. conceited, pompous, arrogant)
Malignancy – disdain (e.g. miserable, unfulfilled, blaming)
Michael Commons's theory
Michael Commons's Model of Hierarchical Complexity (MHC) is an enhancement and simplification of Bärbel Inhelder and Jean Piaget's developmental model. It offers a standard method of examining the universal pattern of development. This model of hierarchical complexity explains development in stages that are not tied to a person's age but to the person's ability to complete increasingly complex hierarchical tasks. For one task to be more hierarchically complex than another, the new task must meet three requirements: 1) It must be defined in terms of the lower stage actions; 2) it must coordinate the lower stage actions; 3) it must do so in a non-arbitrary way. The following are the stages of development in Commons's model (orders 0 through 15), which demonstrate the increasingly complex nature of development.
0 Calculatory- exact computations only, no generalizations are made
1 Sensory and motor- organisms respond to a single stimulus in a reflexive way
2 Circular sensory-motor- basic movements like turning head, moving limbs, view objects and movements
3 Sensory-motor- form concepts, respond to stimuli in a class
4 Nominal- make connections between concepts
5 Sentential- imitate and acquire sequences, follow commands and short sequential acts
6 Preoperational- make simple deductions, follow longer lists of sequential acts, tell stories
7 Primary- apply simple logical rules, able to perform simple arithmetic
8 Concrete- able to do complex arithmetic, plan deals
9 Abstract- discriminate variables and stereotypes, make propositions
10 Formal- argue using linear, one dimensional logic
11 Systematic- construct multivariate systems and matrices
12 Metasystematic- combine or compare systems to make multi-systems
13 Paradigmatic- put metasystems together to create paradigms
14 Cross-paradigmatic- put paradigms together to form fields
15 Meta-Cross-paradigmatic- reflect on the cross-paradigmatic implications and limitations
Carl Jung's theory
Carl Jung, a Swiss psychoanalyst, formulated four stages of development and believed that development was a function of reconciling opposing forces.
Childhood: (birth to puberty) Childhood has two substages. The archaic stage is characterized by sporadic consciousness, while the monarchic stage represents the beginning of logical and abstract thinking; the ego starts to develop. Jung believed that consciousness forms in a child starting when the child can say the word "I", and that the more a child distinguishes him/herself from others and the world, the more the ego develops. According to Jung, the psyche does not assume definite content until puberty; that is when a teenager struggles through difficulties and also begins to fantasize.
Youth: (Age 15-39) Maturing sexuality, growing consciousness, and a realization that the carefree days of childhood are gone forever. People strive to gain independence, find a mate, and raise a family.
Middle Age: (Age 40-64) The realization that you will not live forever creates tension. If you desperately try to cling to youth, you will fail in the process of self-realization. Jung believed that in midlife, one confronts one's shadow. Religiosity may increase during this period, according to Jung.
Old Age: (Age 65 and over) Consciousness is reduced. Jung thought that death is the ultimate goal of life. By realizing this, people will not face death with fear, but with a hope for rebirth.
Daniel Levinson's theory
Daniel Levinson's theory, influenced by Erikson's theory of development, explains a set of psychosocial 'seasons' through which adults must pass as they move through early adulthood and midlife. Each of these seasons is characterized by a crisis to overcome. Stages are created by the challenges of building or maintaining a life structure and by the social norms that apply to particular age groups, particularly concerning relationships and career. Levinson also emphasized that a common part of adult development is the midlife crisis. The process that underlies all these stages is individuation - a movement towards balance and wholeness over time. The key stages that he discerned in early adulthood and midlife were as follows:
Early Adult Transition (Ages 16–24)
Forming a Life Structure (Ages 24–28)
Settling down (Ages 29–34)
Becoming One's Own Man (Ages 35–40)
Midlife Transition (The early forties)
Restabilization, into Late Adulthood (Age 45 and on)
Levinson's work includes research on differences in the lives of men and women. He published The Seasons of a Man's Life and The Seasons of a Woman's Life, with findings that men and women went through essentially the same crises but differed in "The Dream." The author wrote that men's dreams are centered around occupations and women's are conflicted between occupation and marriage and family.
A biopsychosocial metatheory of adult development
The 'biopsychosocial' approach to adult development states that to understand human development in its fullness, biological, psychological, and social levels of analysis must be included. There are a variety of biopsychosocial meta-models, but all entail a commitment to the following four premises:
Human development happens concurrently at biological, psychological, and social levels throughout life, and a full descriptive account of development must include all three levels.
Development at each of these three levels reciprocally influences the other two levels; therefore nature (biology) and nurture (social environment) are in constant complex interaction when considering how and why psychological development occurs.
Biological, psychological and social descriptions, and explanations are all as valid as each other, and no level has causal primacy over the other two.
Any aspect of human development is best described and explained in relation to the whole person and their social context, as well as to their biological and cognitive-affective parts. This can be called a holistic or contextualist viewpoint, and can be contrasted with the reductionist approach to development, which tends to focus solely on biological or mechanistic explanations.
Robert Kegan's theory
Robert Kegan is an American developmental psychologist as well as the author or co-author of books such as In Over Our Heads, The Evolving Self, How the Way We Talk Can Change the Way We Work, and An Everyone Culture: Becoming a Deliberately Developmental Organization among other works. Kegan was also a professor at Harvard Graduate School of Education.
In The Evolving Self, Kegan explores human life problems through meaning-making, the process of making sense of experience by discovering problems and resolving problems. This book assists professional helpers with ways to understand how their clients make sense of their problems. Kegan proposes a framework of six evolutionary balances (developmental stages) that each have a culture of embeddedness. The culture of embeddedness can be examined in terms of three functions in development: confirmation (holding on), contradiction (letting go), and continuity (staying put). In this book, Kegan describes the process of emergence for the six evolutionary balances. These evolutionary balances have analogues to theories from Piaget, Kohlberg, Loevinger, Maslow, McClelland/Murray, and Erikson. In Over Our Heads further elaborates Kegan's perspective on adult development.
The book How the Way We Talk Can Change the Way We Work presents a practical method called the "immunity map" to help people overcome an immunity to change, an obstacle to further psychological development. The map is made of a four-column worksheet that guides a process of self-reflective inquiry.
In the book An Everyone Culture: Becoming a Deliberately Developmental Organization, Kegan and colleagues connect the concept of deliberately developmental organizations (DDOs) with adult development theories and argue for the importance of transitioning from a socialized mind to a self-authoring mind, and then from a self-authoring mind to a self-transforming mind.
Normative physical changes in adulthood
Physical development in midlife and beyond includes changes at the biological level (senescence) and at the larger organ and musculoskeletal levels. Sensory changes and degeneration begin to be common in midlife. Degeneration can include the breakdown of muscle, bones, and joints, which leads to physical ailments such as sarcopenia or arthritis.
At the sensory level, changes occur to vision, hearing, taste, touch, and smell. Two common sensory changes that begin in midlife are declines in the ability to see close objects and in the ability to hear high pitches. Other developmental changes to vision might include cataracts, glaucoma, and the loss of the central visual field with macular degeneration. Hearing also becomes impaired in midlife and aging adults, particularly in men; in the past 30 years, hearing impairment has doubled. Hearing aids help compensate for hearing loss, but many users remain dissatisfied with their quality of hearing. Changes in olfaction and the sense of taste can co-occur. "Olfactory dysfunction can impair quality of life and may be a marker for other deficits and illnesses" and can also lead to decreased satisfaction with taste when eating. Losses to the sense of touch are usually noticed when there is a decline in the ability to detect a vibratory stimulus. The loss of the sense of touch can harm a person's fine motor skills, such as writing and using utensils. The ability to feel painful stimuli is usually preserved in aging, but the decline of touch is accelerated in those with diabetes.
Physical deterioration to the body begins to increase in midlife and late life, and includes degeneration of muscle, bones, and joints. Sarcopenia, a normal developmental change, is the degeneration of muscle mass, which includes both strength and quality. This change occurs even in those who consider themselves athletes, and is accelerated by physical inactivity. Contributing factors that may cause sarcopenia include neuronal and hormonal changes, inadequate nutrition, and physical inactivity. Apoptosis has also been suggested as an underlying mechanism in the progression of sarcopenia. The prevalence of sarcopenia increases as people age and is associated with the increased likelihood of disability and restricted independence among elderly people. Approaches to preventing and treating sarcopenia are being explored by researchers. A specific preventive approach includes progressive resistance training, which is safe and effective for the elderly.
Developmental changes to various organs and organ systems occur throughout life. These changes affect responses to stress and illness, and can compromise the body's ability to cope with the demand for organs. The altered functioning of the heart, lungs, and even skin in old age can be attributed to factors like cell death or endocrine hormones. There are changes to the reproductive system in midlife adults, most notably menopause for women, the permanent end of fertility. In men, hormonal changes also affect their reproductive and sexual physiology, but these changes are not as extreme as those experienced by women.
Illnesses associated with aging
As adult bodies undergo a variety of physical changes that cause health to decline, the risk of contracting a variety of illnesses, both physical and mental, increases.
Cancer
Scientists have made a distinct connection between aging and cancer. It has been shown that the majority of cancer cases occur in those over 50 years of age. This may be due to the decline in the strength of the immune system as one ages or to co-existing conditions. There are a variety of symptoms associated with cancer; commonly, growths or tumors may be indicators. Radiation, chemotherapy, and in some cases surgery are used to treat cancer.
The following are the most common types of cancer in the elderly:
Breast Cancer
Breast cancer is the second most common cancer among women, with a five-year survival rate of 93.2%. The incidence of breast cancer in Korea was 24.2% in 2018, and the number of breast cancer survivors there has been consistently increasing. Whereas breast cancer commonly affects those aged 50 years or older in the United States and Europe, it has the highest incidence among those in their 40s in Korea. The earlier onset of breast cancer means a longer period lived as a breast cancer survivor.
Prostate Cancer
Prostate cancer (PCa) is the second most frequent cancer diagnosed in men worldwide, only behind lung cancer. In 2020, over 1,414,259 new PCa cases and 375,304 deaths were estimated for PCa worldwide.
Lung Cancer
Lung cancer is the second most common cancer in both men and women (not counting skin cancer). Lung cancer is the leading cause of death from cancer, making up almost 25% of all cancer deaths. Mortality from lung cancer is high due to its frequent presentation at a late stage. According to 2020 statistics from the American Cancer Society, 228,820 new lung cancer cases were expected to be diagnosed in 2020, with 135,720 deaths due to lung cancer in the USA. Each year more people die of lung cancer than of colon, breast, and prostate cancers combined.
Bowel Cancer
Bowel cancer is a general term for cancer that begins in the large bowel. Depending on where the cancer starts, bowel cancer is sometimes called colon or rectal cancer. Bowel cancer is one of the most common types of cancer diagnosed in the UK. Most people diagnosed with it are over the age of 60.
Arthritis
Osteoarthritis is one of the most commonly experienced illnesses in adults as they age. Although there are a variety of types of arthritis, they all include very similar symptoms: aching joints, stiff joints, continued joint pain, and problems moving joints.
Cardiovascular disease
It has been found that older age does increase the risk factor of contracting cardiovascular disease. Hypertension and high cholesterol have also been found to increase the likelihood of acquiring cardiovascular disease, which is also commonly found in older adults. Cardiovascular diseases include a variety of heart conditions that may induce a heart attack or other heart-related problems. Healthy eating, exercise, and avoiding smoking are usually used to prevent cardiovascular disease.
Immune system
Infection occurs more easily as one ages, as the immune system starts to slow and become less effective. Aging also changes how the immune system reacts to infection, making new infections harder to detect and attack. Essentially, the immune system has a higher chance of being compromised the older one gets.
Type 2 Diabetes
Type 2 diabetes is a chronic illness that affects how the body processes glucose. It becomes far more prevalent in those over 45. Both type 1 and type 2 diabetes may lead to serious complications such as strokes, heart attacks, nerve damage, kidney damage, and blindness.
Adult neurogenesis and neuroplasticity
New neurons are constantly formed from stem cells in parts of the adult brain throughout adulthood, a process called adult neurogenesis. The hippocampus is the area of the brain that is most active in neurogenesis. Research shows that thousands of new neurons are produced in the hippocampus every day. The brain constantly changes and rewires itself throughout adulthood, a process known as neuroplasticity. Evidence suggests that the brain changes in response to diet, exercise, social environment, stress, and toxin intake. These same external factors also influence genetic expression throughout adult life - a phenomenon known as genetic plasticity.
Non-normative cognitive changes in adulthood
Dementia is characterized by persistent, multiple cognitive deficits in domains including, but not limited to, memory, language, and visuospatial skills, and can result from central nervous system dysfunction. Two forms of dementia exist: degenerative and nondegenerative. The progression of nondegenerative dementias, such as those resulting from head trauma and brain infections, can be slowed or halted, but degenerative forms of dementia, like Parkinson's disease, Alzheimer's disease, and Huntington's disease, are irreversible and incurable.
Alzheimer's disease
Alzheimer's disease (AD) was discovered in 1907 by Dr. Alois Alzheimer, a German neuropathologist and psychiatrist. Physiological abnormalities associated with AD include neuritic plaques and neurofibrillary tangles. Neuritic plaques, which form in the outer regions of the cortex, consist of withering neuronal material surrounding deposits of the protein amyloid-beta. Neurofibrillary tangles, paired helical filaments containing hyperphosphorylated tau protein, are located within the nerve cell. Early symptoms of AD include difficulty remembering names and events, while later symptoms include impaired judgment, disorientation, confusion, behavior changes, and difficulty speaking, swallowing, and walking. After initial diagnosis, a person with AD can live, on average, an additional 3 to 10 years with the disease. In 2024, it was estimated that 6.9 million Americans age 65 and older had AD. Environmental factors such as head trauma, high cholesterol, and type 2 diabetes can increase the likelihood of AD.
The impact of Alzheimer's disease on individuals and their families has made ongoing research into treatments a priority. Recent studies of the drug lecanemab have shown promising results for people with Alzheimer's disease: the medication, which targets early symptoms of cognitive deterioration, has been approved for phase three clinical trials.
Huntington's disease
Huntington's disease (HD), named after George Huntington, is a disorder caused by an inherited defect in a single gene on chromosome 4, resulting in a progressive loss of mental faculties and physical control. HD affects personality and leads to involuntary muscle movements, cognitive impairment, and deterioration of the nervous system. Symptoms usually appear between the ages of 30 and 50 but can occur at any age, including adolescence. There is currently no cure for HD, and treatments focus on managing symptoms and quality of life. Current estimates claim that 1 in 10,000 Americans have HD; however, 1 in 250,000 are at risk of inheriting it from a parent. Most individuals with HD live 10 to 20 years after diagnosis.
Parkinson's disease
Parkinson's disease (PD) was first described by James Parkinson in his 1817 publication An Essay on the Shaking Palsy. It typically affects people over the age of 50 and affects about 0.3% of developed populations. PD is related to damaged nerve cells that produce dopamine. Common symptoms experienced by people with PD include trembling of the hands, arms, legs, jaw, or head; rigidity (stiffness in limbs and the midsection); bradykinesia; and postural instability, leading to impaired balance and/or coordination. Other areas such as speech, swallowing, olfaction, and sleep may be affected. No cure for PD is available, but diagnosis and treatment can help relieve symptoms. Treatment options include medications like carbidopa/levodopa (L-dopa), which reduce the severity of motor symptoms in patients. Alternative treatment options include non-pharmacological therapy. Surgery (pallidotomy, thalamotomy) is often viewed as the last viable option.
Around 80% of patients who have Parkinson's disease also experience tremor. Tremor severity is influenced by dopamine levels and other factors. Gait disturbances caused by Parkinson's disease may lead to falls. Non-experts need to be aware of the features of Parkinson's disease and should have a basic understanding of how the condition should be managed between primary and secondary care. Some cases of secondary parkinsonism have been described as iatrogenic, occurring after the use of certain drugs such as phenothiazines and reserpine. The vast majority of parkinsonism is still of unknown etiology, and many hypotheses have been proposed.
Mental health in adulthood and old age
Older adults represent a significant proportion of the population, and this proportion is expected to increase with time. Mental health concerns of older adults are important at treatment and support levels, as well as policy issues. The prevalence of suicide among older adults is higher than in any other age group.
Depression
Depression is one of the most common disorders present in old age and is usually comorbid with other physical and psychiatric conditions, perhaps due to the stress induced by these conditions. In older adults, depression often presents as impairments already associated with age, such as in memory and psychomotor speed. Research indicates that higher levels of exercise can decrease the likelihood of depression in older adults even after taking into consideration factors such as chronic conditions, body mass index, and social relationships. In addition to exercise, behavioral rehabilitation and prescribed antidepressants, which are generally well tolerated in older adults, can be used to treat depression. Some research has indicated that a diet rich in folic acid and vitamin B12 may help prevent the development of depression among older adults.
Anxiety
Anxiety is a relatively uncommon diagnosis in older adults, and it is difficult to determine its prevalence. Anxiety disorders in late life are more likely to be under-diagnosed because of medical comorbidity, cognitive decline, and changes in life circumstances that younger adults do not face. However, in the Epidemiological Catchment Area Project, researchers found that 6-month prevalence rates for anxiety disorders were lowest for the 65 years of age and older cohort. A recent study found that the prevalence of generalized anxiety disorder (GAD) in adults aged 55 or older in the United States was 33.7%, with onset before the age of 50.
Loneliness in adulthood plays a major role in depression and anxiety. According to Cacioppo, loneliness is a state in which a person feels emotionally sad and senses a void in their life where social interaction should be. Older adults tend to be lonelier due to the death of a spouse or children moving away as a result of marriage or careers. Another factor is that friends sometimes lose their mobility and cannot socialize as they used to; socialization plays an important role in protecting people from becoming lonely. Loneliness is categorized into three types - intimate loneliness, relational loneliness, and collective loneliness - all of which relate to a person's personal environment. Older adults sometimes depend on a child, spouse, or friend to provide daily social interaction and help with everyday chores. Loneliness is treated mostly through social involvement, such as building social skills and social support.
Attention deficit hyperactivity disorder (ADHD)
ADHD is generally believed to be a children's disorder and is not commonly studied in adults. Research suggests that the overall percentage of adults with ADHD is 4.4%. However, ADHD in adults results in lower household incomes, less educational achievement as well as a higher risk of marital issues and substance abuse. Activities such as driving can be affected; adults with inattentiveness due to ADHD experience increased rates of car accidents. ADHD impairs the driver's ability to drive in such a way that it may resemble intoxicated driving. Adults with ADHD tend to be more creative, vibrant, aware of multiple activities, and are able to multitask when interested in a certain topic.
Other mental disorders
The impact of mental disorders such as schizophrenia, delusional disorders, paraphrenia, schizoaffective disorder, and bipolar disorder in adulthood is largely mediated by the environmental context. Those in hospitals and nursing homes differ in risk for a multitude of disorders in comparison to community-dwelling older adults. Differences in how these environments treat mental illness and provide social support could help explain disparities and lead to a better knowledge of how these disorders are manifested in adulthood.
Optimizing health and mental well-being in adulthood
Exercising four to six times a week for thirty to sixty minutes has physical and cognitive effects such as lowering blood sugar and increasing neural plasticity. Physical activity reduces the loss of function by 10% each decade after the age of 60 and active individuals drop their rate of decline in half. Cardio activities like walking promote endurance while strength, flexibility, and balance can all be improved through Tai Chi, yoga, and water aerobics. Diets containing foods with calcium, fiber, and potassium are especially important for good health while eliminating foods with high sodium or fat content. A well-balanced diet can increase resistance to disease and improve management of chronic health problems thus making nutrition an important factor for health and well-being in adulthood.
The effects that both aerobic and resistance training can have on the older population can extend as far as increasing lifespan. Research has shown that the type of exercise chosen can make a major difference in results. Resistance training has been found to increase cognitive function not only in older adults but also in people with intellectual disabilities. A high percentage of that population are older adults, but the fact that this form of exercise makes a difference in other populations as well shows how broadly valuable it is. Although the benefits of resistance training for cognitive function in older adults have been demonstrated repeatedly, these results are not always immediate; in some cases, changes to cognitive function can take years to occur. The results are also not only physical: resistance training has been found to play a major role in decreasing depressive mood and isolation from friends. Within the older adult population, Alzheimer's disease is the most common form of dementia, and its major symptoms can interfere with and decrease the ability to perform daily tasks such as going to the grocery store or even standing and sitting. Physical activity, especially resistance training, can help improve the overall functionality of this population. This increase in function stems from the positive effects that resistance training has on brain function; resistance training has been found to play a positive role in neuronal plasticity, neurogenesis, neuronal signaling, neuron receptors, and neuronal networks.
Cognitive decline, including dementia and Alzheimer's disease, continues to be a health condition that many older adults struggle with. This group of neural diseases tends to inhibit the nervous system's ability to properly send signals for everyday activities, sometimes even killing neurons. Due to nervous tissue's limited regenerative ability, people with cognitive decline are often left with lifelong issues remembering information, judging situations, communicating with others, or thinking in general. The National Institutes of Health (NIH) estimates that 66% of Americans experience some level of cognitive decline in their lifetime. Physical activity has been suggested as a form of preventative medicine to slow cognitive decline; many propose this is due to its positive effects on physical, mental, and emotional quality of life. The combination of the physical changes that come with continuous exercise and its effects on mental health and emotional connection is the broad focus of many reviews on the effectiveness of exercise as preventative medicine and treatment. There are various other proposed explanations, including increased neuroplasticity and neurogenesis, secretion of substances that protect neural tissue, and improved cardiovascular fitness. For example, a research team in Japan conducted studies comparing physically active with physically inactive mice and found that the active mice had higher levels of circulating irisin, a peptide made in contracting muscles that has a role in neurogenesis and other cognitive factors. Most studies and literature reviews similarly conclude that moderate-intensity exercise with long-term adherence yields the best results for retaining cognitive function in older adults. Some researchers argue that the association is weaker than previously thought because of the scarcity of long-term follow-up studies: a large collection of clinical trial results showed that many studies did not follow participants beyond 10 years and found a weaker dose-response relationship between consistent exercise and reduced symptoms of cognitive decline. Further research is needed to understand all the factors linking physical activity and cognitive decline. Overall, consistent moderate-intensity exercise should be a significant part of the lives of older adults in order to prevent and treat cognitive decline.
While there is a certain level of individual variation, several lines of research suggest that healthy physical and mental aging can be supported by focusing on cognitive health, muscle retention, and curbing the effects of neurodegenerative disease. One study reports that consistent exercise can boost the cognitive function of older adults, with both immediate and long-term benefits, while also noting barriers such as depression and social isolation that can make these benefits harder to achieve; it argues that physical activity helps preserve cognitive health. A second study focuses on whey protein supplementation and how it supports muscle retention during aging; although its cognitive assessments included tests of reaction time and working memory, its main conclusions concern physical muscle health rather than the preservation of cognition or the prevention of cognitive disorders. A third study looks explicitly at Alzheimer's disease and how physical activity and exercise may slow its progression, exploring the process of mitophagy - the removal of damaged mitochondria - and suggesting that exercise could theoretically reduce oxidative stress, which in turn supports a healthy brain and slows the progression of Alzheimer's. Taken together, these findings support the idea that physical activity and exercise help maintain cognitive function in older adults and can potentially curb the effects of neurodegenerative disease.
Mental stimulation and optimism are vital to health and well-being in late adulthood. Adults who participate in intellectually stimulating activities every day are more likely to maintain their cognitive faculties and are less likely to show a decline in memory abilities. Mental exercise activities such as crossword puzzles, spatial reasoning tasks, and other mentally stimulating activities can help adults increase their brain fitness. Additionally, researchers have found that optimism, community engagement, physical activity, and emotional support can help older adults maintain their resiliency as they continue through their life span.
Managing stress and developing coping strategies
Cognitive, physical, and social losses, as well as gains, are to be expected throughout the lifespan. Older adults typically self-report having a higher sense of well-being than their younger counterparts because of their emotional self-regulation. Researchers use Selective Optimization with Compensation Theory to explain how adults compensate for changes to their mental and physical abilities, as well as their social realities. Older adults can use both internal and external resources to help cope with these changes.
The loss of loved ones and ensuing grief and bereavement are inevitable parts of life. Positive coping strategies are used when faced with emotional crises, as well as when coping with everyday mental and physical losses. Adult development comes with both gains and losses, and it is important to be aware and plan ahead for these changes in order to age successfully.
Personality in adulthood
Personality change and stability occur in adulthood. For example, self-confidence, warmth, self-control, and emotional stability increase with age, whereas neuroticism and openness to experience tend to decline with age. As people grow older, they experience not only physical changes but psychological ones that can change throughout one's lifespan.
Personality change in adulthood
Two types of statistics are used to classify personality change over the life span. Rank-order change refers to a change in an individual's personality trait relative to other individuals. Mean-level change refers to absolute change in the individual's level of a certain trait over time. Typically, it appears that as individuals' age they show increased self-confidence, warmth, self-control, and emotional stability. These changes seem to mostly take place between the ages of 20 and 40.
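The distinction can be illustrated with a small computation on hypothetical trait scores (the numbers below are invented for illustration; numpy and scipy are assumed to be available):

```python
# Sketch: rank-order stability vs. mean-level change across two waves.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical conscientiousness scores for the same five people at age 20 and age 40.
wave_1 = np.array([3.1, 4.0, 2.5, 3.6, 4.4])
wave_2 = np.array([3.6, 4.4, 3.0, 4.0, 4.9])

# Rank-order change: does each person keep their standing relative to others?
rho, _ = spearmanr(wave_1, wave_2)
print("rank-order stability (Spearman rho):", rho)   # 1.0 here: no rank-order change

# Mean-level change: has the average level of the trait shifted over time?
print("mean-level change:", round(wave_2.mean() - wave_1.mean(), 2))  # positive shift
```

In this made-up example everyone keeps their relative standing (high rank-order stability) while the whole group shifts upward (positive mean-level change), showing that the two statistics capture different things.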
Controversy
The plaster hypothesis holds that personality traits tend to stabilize by age 30. Stability in personality throughout adulthood has been observed in longitudinal and sequential research. However, personality also changes. Research on the Big Five personality traits includes findings of a decrease in openness and extraversion in adulthood, an increase in agreeableness with age, peak conscientiousness in middle age, and a decrease in neuroticism late in life. The concepts of both adjustment and growth as developmental processes help reconcile the large body of evidence for personality stability and the growing body of evidence for personality change.
Intelligence in adulthood
According to the lifespan approach, intelligence is a multidimensional and multidirectional construct characterized by plasticity and interindividual variability. Intellectual development throughout the lifespan is characterized by decline as well as stability and improvement. Mechanics of intelligence, the basic architecture of information processing, decreases with age. Pragmatic intelligence, knowledge acquired through culture and experience, remains relatively stable with age.
The psychometric approach assesses intelligence based on scores on standardized tests such as the Wechsler Adult Intelligence Scale and Stanford Binet for children. The Cognitive Structural approach measures intelligence by assessing the ways people conceptualize and solve problems, rather than by test scores.
Developmental trends in intelligence
Primary mental abilities are independent groups of factors that contribute to intelligent behavior and include word fluency, verbal comprehension, spatial visualization, number facility, associative memory, reasoning, and perceptual speed. Primary mental abilities decline around the age of 60 and may interfere with life functioning. Secondary mental abilities include crystallized intelligence (knowledge acquired through experience) and fluid intelligence (abilities of flexible and abstract thinking). Fluid intelligence declines steadily in adulthood while crystallized intelligence increases and remains fairly stable with age until very late in life.
Relationships
A combination of friendships and family is the support system for many individuals and an integral part of their lives from young adulthood to old age.
Family
Family relationships tend to be some of the most enduring bonds created within one's lifetime. As adults age, their children often feel a sense of filial obligation, in which they feel obligated to care for their parents. Adult children can often be informal caregivers to their parents as they help them with personal needs, chores, and finances.
Marital satisfaction remains high in older couples, oftentimes increasing shortly after retirement. This can be attributed to increased maturity and reduced conflict within the relationship. However, when health problems arise, the relationship can become strained. Studies of spousal caregivers of individuals with Alzheimer's disease show marital satisfaction is significantly lower than in couples who are not affected. Most people will experience the loss of a family member by death within their lifetime. This life event is usually accompanied by some form of bereavement, or grief. There is no set time frame for a mourning period after a loved one passes away, rather every person experiences bereavement in a different form and manner.
In the United States, Hispanic populations tend to have better disease outcomes than non-Hispanic whites. The support that individuals receive when diagnosed with health problems has been shown to have a significant impact on how well they manage the condition. Social support from not only family but also friends can influence a person's survival rate and health outcome. The cultural differences in social relationships between Hispanics and non-Hispanic whites may help to explain this paradox: Hispanic families may be more resilient against poor disease outcomes because they typically have a stronger support system of family and friends.
Friends
Friendships, similar to family relationships, are often the support system for many individuals and a fundamental aspect of life from young adulthood to old age. Social friendships are important to emotional fulfillment, behavioral adjustment, and cognitive function. Research has shown that emotional closeness in relationships greatly increases with age even though the number of social relationships and the development of new relationships begin to decline. In young adulthood, friendships are grounded in similar aged peers with similar goals, though these relations might be less permanent than other relationships. In older adulthood, friendships have been found to be much deeper and longer lasting. While small in number, the quality of relationships is generally thought to be much stronger for older adults.
Retirement
Retirement, the point at which a person stops employment entirely, is often either a time of psychological distress or a time of high quality of life and enhanced subjective well-being. Most individuals choose to retire between the ages of 50 and 70, and researchers have examined how this transition affects subjective well-being in old age. One study examined subjective well-being in retirement as a function of marital quality, life course, and gender. Results indicated higher well-being for married couples who retire around the same time compared with couples in which one spouse retires while the other continues to work.
Retirement communities
Retirement communities provide for individuals who want to live independently but do not wish to maintain a home. Residents can maintain their autonomy while living in a community of people who are similar in age and at the same stage of life. The senior living industry has transformed greatly since its formation in the early 1960s. Newer active adult communities offer added services to better accommodate those who might otherwise feel they are missing something compared to their previous lifestyle. These improved retirement communities are meant to create a standard of living that strengthens engagement, socialization, and, most importantly, a sense of purpose for residents.
Compared to the previous generation, older adults born between 1946 and 1964 typically seek a lifestyle that consists of the ability to continue their life and search for the "next" best thing, which can take the form of a career change, volunteer opportunities, learning a new skill, a new degree, or simply a renewed focus on their health and wellness. An integration of technology into these communities allows for applications for convenience, nonintrusive monitoring of vitals, and the ability for members of the community to be in contact with family members and friends 24/7. Residents are reported to have a greater awareness of their wellness factors and to set their goals more effectively.
Phased Retirement
Jobs often become part of people's identities: after working somewhere for a long time, the workplace becomes part of who they are and a place where they feel they belong. Abrupt retirement does not allow them to come to terms with losing this part of their identity. This can create considerable psychological distress and can make people not want to retire at all. Fully retiring all at once can be painful and unhealthy compared with phasing into retirement; however, phased retirement has costs along with its benefits.
Retirement is a major life transition that can be complicated. Phasing into retirement makes people more flexible during the transition, can make life less stressful and easier, and can make the act of retiring much more tolerable. Phasing into retirement means gradually scaling back work, usually by moving from full-time to part-time employment. Going from full-time to part-time work often gives people a sense of relief that allows them to realize how taxing their jobs really were, and it helps them appreciate their new free time. They are able to spend that free time on hobbies and other recreational activities that were set aside while they worked full time.
Although phased retirement has many benefits, its main cost is reduced pay, which is a minor issue for some but far from ideal for those under greater financial strain. It is an especially unattractive option for workers not yet eligible for Social Security or Medicare. Different workplaces have different plans and ways to help, but the main stumbling block for many people is the loss of income. Phasing also requires people to lessen their involvement and commitment, which can still be hard, as they may feel less needed and still sense the loss of part of their identity. They can start to feel disconnected, since they are not around as much and may feel that they miss out on the daily things that happen at work.
Long-term Care
Assisted living facilities are housing options for older adults that provide a supportive living arrangement for people who need assistance with personal care, such as bathing or taking medications, but are not so impaired that they need 24-hour care. These facilities provide older adults with a home-like environment and personal control while helping to meet residents' daily routines and special needs.
Adult daycare is designed to provide social support, supervision, companionship, healthcare, and other services for adult family members who may pose safety risks if left at home alone while another family member, typically a caregiver, must work or otherwise leave the home. Adults who have cognitive impairments should be carefully introduced to adult daycare.
Nursing home facilities provide residents with 24-hour skilled medical or intermediate care. A nursing home is typically seen as a decision of last resort for many family members. While the patient is receiving comprehensive care, the cost of nursing homes can be very high with only a few insurance companies choosing to cover it. There is research that looks into other methods of care, such as independent care or independent living.
Independent living communities are facilities where people may have access to fully furnished homes or private apartments. Independent living communities are useful for seniors who want to preserve their independence while dealing with a limited number of medical issues. Independent living communities are noted for their strong sense of community, which is enhanced by social outings and other recreational activities. These continuing care communities offer this type of care to residents as a way to maintain a comprehensive continuum of care and other services as their needs fluctuate.
See also
Positive adult development
Notes
External links
Developmental psychology | Adult development | Biology | 11,754 |
14,263,763 | https://en.wikipedia.org/wiki/Calcium%20malate | Calcium malate is a compound with formula Ca(C2H4O(COO)2). It is the calcium salt of malic acid. As a food additive, it has the E number E352.
It is related to, but different from, calcium citrate malate.
References
Malates
Calcium compounds
E-number additives | Calcium malate | Chemistry | 73 |
50,221,650 | https://en.wikipedia.org/wiki/Gurtam | Gurtam (Belarusian “гуртам” — together) is a developer and provider of software for GPS monitoring, vehicle telematics and fleet management.
As of September 2021, Wialon, the GPS tracking system developed by Gurtam, monitored more than 3 000 000 assets worldwide. The company's business network involves over 2,200 fleet management service providers in more than 130 countries. According to the report of Berg Insight research agency, Wialon is the leading GPS monitoring system in CIS, occupying about 40% of the CIS commercial carrier market.
Operations
The most popular Gurtam services are Wialon Hosting (SaaS) and Wialon Local (server-based solution).
Gurtam constantly participates in industrial expos and events worldwide including CeBIT, GSMA Mobile World Congress, Securex South Africa, Expo Seguridad Mexico, CTIA, GITEX, Telematics Conference SE Europe etc.
History
Random facts
In 2009, the company provided Wialon software for vehicle tracking at the international automobile race “A National Sport – Bridge of Friendship”. The company has since cooperated with other automobile racing organizers, including those engaged in the “Beijing-Paris” multi-stage rally, the Can-Am Trophy Russia open off-road ATV series, and the “Ukraine Trophy 2012” trophy-raid.
In 2014, Gurtam arranged Kilimanjaro expedition to test hardware and proprietary tracking system in extreme conditions.
According to Berg Insight market research agency, Wialon was recognized a leading satellite monitoring system in the CIS and the Russian Federation in 2014 and 2015.
Since July 2021 Gurtam has joined the IoT M2M Council (IMC) as a Sustaining Member company.
References
Satellite navigation
Software companies of Lithuania
Vehicle technology | Gurtam | Engineering | 363 |
21,647,771 | https://en.wikipedia.org/wiki/Genus%20theory | In the mathematical theory of games, genus theory in impartial games is a theory by which some games played under the misère play convention can be analysed, to predict the outcome class of games.
Genus theory was first published in the book On Numbers and Games, and later in Winning Ways for your Mathematical Plays Volume 2.
Unlike the Sprague–Grundy theory for normal play impartial games, genus theory is not a complete theory for misère play impartial games.
Genus of a game
The genus of a game is defined using the mex (minimum excludant) of the options of a game.
g+ is the Grundy value or nimber of a game under the normal play convention.
g− or λ0 is the outcome class of a game under the misère play convention.
More specifically, to find g+, *0 is defined to have g+ = 0, and every other game has g+ equal to the mex of the g+ values of its options.
To find g−, *0 has g− = 1, and every other game has g− equal to the mex of the g− values of its options.
λ1, λ2, ... are the g− values of the game with one, two, ... copies of the nim game *2 added to it; that is, λn is the g− value of the game plus n copies of *2.
Thus the genus of a game is written g^(λ0λ1λ2...).
*0 has genus value 0^(120...). The superscript continues indefinitely, but in practice it is written with a finite number of digits, because it can be proven that the last two digits eventually alternate indefinitely.
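A minimal computational sketch of these definitions (not taken from the sources cited below; the representation and function names are chosen for illustration) treats a game as a frozenset of its options, with *0 as the empty set:

```python
# Sketch: computing g+, g- and a genus prefix for impartial games.
from functools import lru_cache

def mex(values):
    """Minimum excludant: the smallest non-negative integer not in `values`."""
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def g_plus(game):
    # Normal play: mex over the g+ values of the options (the empty set gives 0).
    return mex({g_plus(opt) for opt in game})

@lru_cache(maxsize=None)
def g_minus(game):
    # Misere play: *0 has g- = 1; otherwise mex over the options' g- values.
    if not game:
        return 1
    return mex({g_minus(opt) for opt in game})

def nim_heap(n):
    """The nim heap *n: its options are *0, *1, ..., *(n-1)."""
    heaps = [frozenset()]
    for _ in range(n):
        heaps.append(frozenset(heaps))
    return heaps[n]

@lru_cache(maxsize=None)
def game_sum(a, b):
    """Disjunctive sum: a move is made in exactly one component."""
    return frozenset({game_sum(x, b) for x in a} | {game_sum(a, y) for y in b})

def genus(game, terms=4):
    """Return (g+, [lambda_0, lambda_1, ...]), where lambda_n is the g- value
    of the game with n copies of *2 added to it."""
    lams, pos = [], game
    for _ in range(terms):
        lams.append(g_minus(pos))
        pos = game_sum(pos, nim_heap(2))
    return g_plus(game), lams

print(genus(nim_heap(0)))           # (0, [1, 2, 0, 2])  -> genus 0^(1202...)
print(genus(nim_heap(2), terms=3))  # (2, [2, 0, 2])     -> genus 2^(202...)
```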
Outcomes of sums of games
It can be used to predict the outcome of:
The sum of any nimbers and any tame games
The sum of any one game given its genus, any number of nim games *1, *2 or *3, and optionally one other nim game with nimber 4 or higher
The sum of a restive game and any number of nim games of any size
In addition, some restive or restless pairs can form tame games, if they are equivalent. Two games are equivalent if they have the same options, where the same options are defined as options to equivalent games. Adding an option from which there is a reversible move does not affect equivalency.
Some restive pairs, when added to another restive game of the same species, are still tame.
A half tame game, added to itself, is equivalent to *0.
Reversible moves
It is important for further understanding of genus theory to know how reversible moves work. Suppose there are two games A and B that have the same options (moves available); then they are, of course, equivalent.
If B has an extra option, say to a game X, then A and B are still equivalent if there is a move from X to A.
That is, B is the same as A in every way, except for an extra move (X), which can be reversed.
Types of games
Different games (positions) can be classified into several types:
Nim
Tame
Restive
Restless
Half tame
Wild
Nim
This does not mean that a position is exactly like a nim heap under the misère play convention, but classifying a game as nim means that it is equivalent to a nim heap.
A game is a nim game, if:
it has a genus 0^1, 1^0, 2^2, 3^3, ...
it has moves only to single nim heaps, i.e. move to a position *1, or *2, but not e.g. *x+*y (but see next point)
it may also have moves to games which are not nim, provided they are not required to determine the genus, and those games each have at least one option to a nim game of the same genus
Tame
These are positions which we can pretend are nim positions (note difference between nim positions, which can be many nim heaps added together, and a single nim heap, which can only be 1 nim heap). A game G is tame if:
it has a genus 0^1, 1^0, or 0^0, 1^1, 2^2, 3^3, ...
all options of G are tame
G may also have wild options (positions which are not tame or nim) if they do not affect the genus, and each such option has reversible moves to tame games with genus g^? and ?^λ.
Note that the moves to g^? and ?^λ may actually be the same option. ? means any number.
See also
Indistinguishability quotient
References
On Numbers and Games by John Horton Conway
Winning Ways for Your Mathematical Plays by Elwyn Berlekamp, John Conway and Richard Guy.
Combinatorial game theory | Genus theory | Mathematics | 999 |
9,536,893 | https://en.wikipedia.org/wiki/Armature%20%28computer%20animation%29 | An armature is a kinematic chain used in computer animation to simulate the motions of virtual human or animal characters. In the context of animation, the inverse kinematics of the armature is the most relevant computational algorithm.
There are two types of digital armatures: Keyframing (stop-motion) armatures and real-time (puppeteering) armatures. Keyframing armatures were initially developed to assist in animating digital characters without basing the movement on a live performance. The animator poses a device manually for each keyframe, while the character in the animation is set up with a mechanical structure equivalent to the armature. The device is connected to the animation software through a driver program and each move is recorded for a particular frame in time. Real-time armatures are similar, but they are puppeteered by one or more people and captured in real time.
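As an illustration of the kind of computation such systems rely on, the following is a minimal sketch (independent of any particular animation package; the function and its parameter names are invented for the example) of analytic inverse kinematics for a planar two-bone chain, the calculation an animation system performs when the end of a limb is dragged to a target:

```python
# Sketch: place the tip of a two-segment bone chain at a 2-D target point.
import math

def two_bone_ik(target_x, target_y, len1, len2):
    """Return (root_angle, joint_angle) in radians for a planar two-bone chain
    rooted at the origin, so that the chain's tip reaches the target."""
    dist_sq = target_x ** 2 + target_y ** 2
    dist = math.sqrt(dist_sq)
    if dist > len1 + len2 or dist < abs(len1 - len2):
        raise ValueError("target is out of reach for this bone chain")
    # Law of cosines gives the bend at the middle joint.
    cos_joint = (dist_sq - len1 ** 2 - len2 ** 2) / (2 * len1 * len2)
    joint = math.acos(max(-1.0, min(1.0, cos_joint)))
    # Root angle: direction to the target, corrected for the bent joint.
    k1 = len1 + len2 * math.cos(joint)
    k2 = len2 * math.sin(joint)
    root = math.atan2(target_y, target_x) - math.atan2(k2, k1)
    return root, joint

print(two_bone_ik(1.0, 1.0, 1.0, 1.0))  # ~ (0.0, pi/2): reach (1, 1) with two unit bones
```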
See also
Linkages
Skeletal animation
References
3D graphics software
Computational physics | Armature (computer animation) | Physics | 202 |
7,709,259 | https://en.wikipedia.org/wiki/Time%20in%20Uzbekistan | Uzbekistan time is the standard time in Uzbekistan; it is 5 hours ahead of UTC, UTC+05:00. The standard time uses no daylight saving time, though there has been constant debate whether to adopt it in order to increase leisure time.
After the breakup of the Soviet Union there were two time zones in Uzbekistan. In the Soviet era most time zones observed daylight time in the winter and double daylight time in the summer. The western part of the country observed Samarkand Time, 5 or 6 hours ahead of UTC. The eastern part observed Tashkent Time, 6 or 7 hours ahead of UTC. In 1991 the clocks did not move forward in the spring, so that only single daylight time was kept in the summer. That fall a unified time zone, 5 hours ahead of UTC, was adopted.
See also
GMT
Time zone
UTC+05:00
Uzbekistan
References
Geography of Uzbekistan | Time in Uzbekistan | Physics | 174 |
62,155,660 | https://en.wikipedia.org/wiki/Methoxymethanol | Methoxymethanol is a chemical compound which is both an ether and an alcohol, a hemiformal. The structural formula can be written as CH3OCH2OH. It has been discovered in space.
Formation
Methoxymethanol forms spontaneously when an aqueous solution of formaldehyde is mixed with methanol, or when formaldehyde is bubbled through methanol.
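This is the usual hemiformal (hemiacetal) addition of an alcohol to an aldehyde, which can be summarised as:
CH3OH + CH2O ⇌ CH3OCH2OH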
In space methoxymethanol can form when methanol radicals (CH2OH or CH3O) react. These are radiolysis products derived when ultraviolet light or cosmic rays hit frozen methanol.
Methanol can react with carbon dioxide and hydrogen at 80°C and some pressure with a ruthenium or cobalt catalyst, to yield some methoxymethanol.
Properties
Different conformations of the molecule are gauche-gauche (Gg), gauche-gauche' (Gg'), and trans-gauche (Tg).
References
Ethers
Primary alcohols | Methoxymethanol | Chemistry | 209 |
3,029,842 | https://en.wikipedia.org/wiki/Quarterland | A Quarterland or Ceathramh (Scottish Gaelic) was a Scottish land measurement. It was used mainly in the west and north.
It was supposed to be equivalent to eight fourpennylands, roughly equivalent to a quarter of a markland. However, in Islay, a quarterland was equivalent to a quarter of an ounceland. Half of a quarterland would be an ochdamh (i.e. one-eighth), and in Islay a quarter of a quarterland was a leothras (i.e. one-sixteenth).
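Taking the nominal West Highland equivalence of one ounceland to twenty pennylands (listed below), the Islay reckoning works out as: one quarterland = 1/4 ounceland = 5 pennylands; one ochdamh = 1/8 ounceland = 2.5 pennylands; one leothras = 1/16 ounceland = 1.25 pennylands.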
The name appears in many Scottish placenames, notably Kirriemuir.
Kerrowaird – Ceathramh àrd (High Quarterland)
Kerrowgair – Ceathramh geàrr (Rough Quarterland)
Kerry (Cowal) - An Ceathramh Còmh’lach (The Cowal Quarterland)
Kerrycroy - An Ceathramh cruaidh (The Hard Quarterland)
Kirriemuir – An Ceathramh Mòr/Ceathramh Mhoire (either "The Big Quarterland" or "Mary’s Quarterland")
Ceathramh was also used in Gàidhlig for a bushel and a firlot (or four pecks), as was Feòirling, the term used for a farthingland.
Isle of Man
The Isle of Man retained a similar system into historic times: in the traditional land divisions of treens (c.f. the Scottish Gaelic word trian, a third part) which are in turn subdivided into smaller units called quarterlands.
See also
Obsolete Scottish units of measurement
In the East Highlands:
Rood
Scottish acre = 4 roods
Oxgang (Damh-imir) = the area an ox could plow in a year (around 20 acres)
Ploughgate (?) = 8 oxgangs
Daugh (Dabhach) = 4 ploughgates
In the West Highlands:
Markland (Marg-fhearann) = 8 Ouncelands (varied)
Ounceland (Tir-unga) =20 Pennylands
Pennyland (Peighinn) = basic unit; sub-divided into half penny-land and farthing-land
Other terms in use: Quarterland (Ceathramh), variable value; Groatland (Còta bàn)
Townland
Township (Scotland)
References
Obsolete Scottish units of measurement
History of the Isle of Man
Units of area | Quarterland | Mathematics | 507 |
7,872,461 | https://en.wikipedia.org/wiki/Mechanoluminescence | Mechanoluminescence is light emission resulting from any mechanical action on a solid. It can be produced through ultrasound, or through other means.
Electrochemiluminescence is the emission induced by an electrochemical stimulus.
Fractoluminescence is caused by stress that results in the formation of fractures, which in turn yield light.
Piezoluminescence is caused by pressure that results in elastic deformation and large polarization from the piezoelectric effect.
Sonoluminescence is the emission of short bursts of light from imploding bubbles in a liquid when excited by sound.
Triboluminescence is nominally caused by rubbing, but sometimes occurs because of resulting fractoluminescence. The term is often used as a synonym for mechanoluminescence.
See also
List of light sources
References
External links
Ultrasound Generates Intense Mechanoluminescence
Luminescence
Light sources | Mechanoluminescence | Chemistry | 180 |
7,803,772 | https://en.wikipedia.org/wiki/Compiled%20Wireless%20Markup%20Language | In networking for mobile devices, WMLC is a format for the efficient transmission of WML web pages over Wireless Application Protocol (WAP). Its primary purpose is to compress (or rather tokenise) a WML page for transport over low-bandwidth internet connections such as GPRS/2G.
WMLC is apparently synonymous with Wireless Application Protocol Binary XML (WBXML).
Description
WMLC is most efficient for pages that contain frequently repeated strings of characters. Commonly used phrases such as "www." and "http://www." are tokenised and replaced with a single byte just before transmission and then re-inserted at the destination.
WMLC has an added advantage in that the data can be progressively decoded, unlike some compression algorithms that require all of the data to be available before decompression begins. As soon as the first few bytes of WMLC data are available, the WAP browser can start creating the page; this means the user can see the page being constructed as it is downloaded.
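A toy sketch of the tokenisation idea follows (the phrase table and token values below are invented for illustration and are not the real WMLC/WBXML token assignments):

```python
# Sketch: replacing well-known substrings with single-byte tokens.
TOKEN_TABLE = {           # hypothetical phrase-to-token mapping
    "http://www.": 0x01,
    "www.":        0x02,
    ".com/":       0x03,
}

def tokenise(text):
    """Emit a token byte for known phrases, literal UTF-8 bytes otherwise."""
    out = bytearray()
    i = 0
    while i < len(text):
        for phrase, token in TOKEN_TABLE.items():
            if text.startswith(phrase, i):
                out.append(token)
                i += len(phrase)
                break
        else:
            out.extend(text[i].encode("utf-8"))
            i += 1
    return bytes(out)

page = "<a href='http://www.example.com/'>example</a>"
print(len(page.encode("utf-8")), "bytes ->", len(tokenise(page)), "bytes")
```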
Content type is application/vnd.wap.wmlc.
References
Internet protocols | Compiled Wireless Markup Language | Technology | 232 |
70,452,732 | https://en.wikipedia.org/wiki/Cool%20Companions%20on%20Ultrawide%20Orbits | The COol Companions ON Ultrawide orbiTS (COCONUTS) program is a large-scale survey for wide-orbit planetary and substellar companions considered the first survey of this type of celestial bodies. In 2021, the team announced COCONUTS-2b, the closest exoplanet directly imaged ever. The program is a dedicated large-scale search for wide-orbit giant planets and brown dwarf companions, targeting a sample of 300,000 stars. By using multi-wavelength photometry and multi-epoch astrometry, astronomers are able to assess the candidates' companionship and ultracool nature.
List of discoveries
See also
List of exoplanet search projects
References
Exoplanet search projects | Cool Companions on Ultrawide Orbits | Astronomy | 142 |
5,538,293 | https://en.wikipedia.org/wiki/Antioxidant%20effect%20of%20polyphenols%20and%20natural%20phenols | A polyphenol antioxidant is a hypothesized type of antioxidant studied in vitro. Numbering over 4,000 distinct chemical structures mostly from plants, such polyphenols have not been demonstrated to be antioxidants in vivo.
In vitro at high experimental doses, polyphenols may affect cell-to-cell signaling, receptor sensitivity, inflammatory enzyme activity or gene regulation. None of these hypothetical effects has been confirmed in humans by high-quality clinical research.
Sources of polyphenols
The main source of polyphenols is dietary, since they are found in a wide array of phytochemical-bearing foods. For example, honey; most legumes; fruits such as apples, blackberries, blueberries, cantaloupe, pomegranate, cherries, cranberries, grapes, pears, plums, raspberries, aronia berries, and strawberries (berries in general have high polyphenol content) and vegetables such as broccoli, cabbage, celery, onion and parsley are rich in polyphenols. Red wine, chocolate, black tea, white tea, green tea, olive oil and many grains are sources. Ingestion of polyphenols occurs by consuming a wide array of plant foods.
Biochemical theory
The regulation theory considers a polyphenolic ability to scavenge free radicals and up-regulate certain metal chelation reactions. Various reactive oxygen species, such as singlet oxygen, peroxynitrite and hydrogen peroxide, must be continually removed from cells to maintain healthy metabolic function. Diminishing the concentrations of reactive oxygen species can have several benefits possibly associated with ion transport systems and so may affect redox signaling. There is no substantial evidence, however, that dietary polyphenols have an antioxidant effect in vivo.
The “deactivation” of oxidant species by polyphenolic antioxidants (PhOH) is based, with regard to food systems that are degraded by peroxyl radicals (R•), on the donation of hydrogen, which interrupts chain reactions:
R• + PhOH → R-H + PhO•
Phenoxyl radicals (PhO•) generated according to this reaction may be stabilized through resonance and/or intramolecular hydrogen bonding, as proposed for quercetin, or combine to yield dimerisation products, thus terminating the chain reaction:
PhO• + PhO• → PhO-OPh
Potential biological consequences
Consuming dietary polyphenols have been evaluated for biological activity in vitro, but there is no evidence from high-quality clinical research that they have effects in vivo. Preliminary research has been conducted and regulatory status was reviewed in 2009 by the U.S. Food and Drug Administration (FDA), with no recommended intake values established, indicating absence of proof for nutritional value. Other possible effects may result from consumption of foods rich in polyphenols, but are not yet proved scientifically in humans; accordingly, health claims on food labels are not allowed by the FDA.
Difficulty in analyzing effects of specific chemicals
It is difficult to evaluate the physiological effects of specific natural phenolic antioxidants, since such a large number of individual compounds may occur even in a single food and their fate in vivo cannot be measured.
Other more detailed chemical research has elucidated the difficulty of isolating individual phenolics. Because significant variation in phenolic content occurs among various brands of tea, there are possible inconsistencies among epidemiological studies implying beneficial health effects of phenolic antioxidants of green tea blends. The Oxygen Radical Absorbance Capacity (ORAC) test is a laboratory indicator of antioxidant potential in foods and dietary supplements. However, ORAC results cannot be confirmed to be physiologically applicable and have been designated as unreliable.
Practical aspects of dietary polyphenols
There is debate regarding the total body absorption of dietary intake of polyphenolic compounds. While some indicate potential health effects of certain specific polyphenols, most studies demonstrate low bioavailability and rapid excretion of polyphenols, indicating their potential roles only in small concentrations in vivo. More research is needed to understand the interactions between a variety of these chemicals acting in concert within the human body.
Topical application of polyphenols
There is no substantial evidence that reactive oxygen species play a role in the process of skin aging. The skin is exposed to various exogenous sources of oxidative stress, including ultraviolet radiation whose spectral components may be responsible for the extrinsic type of skin aging, sometimes termed photoaging. Controlled long-term studies on the efficacy of low molecular weight antioxidants in the prevention or treatment of skin aging in humans are absent.
Combination of antioxidants in vitro
Experiments on linoleic acid subjected to 2,2′-azobis (2-amidinopropane) dihydrochloride-induced oxidation with different combinations of phenolics show that binary mixtures can lead to either a synergetic effect or to an antagonistic effect.
Antioxidant levels of purified anthocyanin extracts were much higher than expected from their anthocyanin content, indicating a synergistic effect of anthocyanin mixtures.
Antioxidant capacity tests
Oxygen radical absorbance capacity (ORAC)
Ferricyanide reducing power
2,2-diphenyl-1-picrylhydrazyl radical scavenging activity
See also
List of phytochemicals in food
List of antioxidants in food
Health effects of polyphenols
Free-radical theory
Nitric oxide
Resveratrol
Astaxanthin
References
Angiology
Chemopreventive agents
Antioxidants | Antioxidant effect of polyphenols and natural phenols | Chemistry | 1,203 |
625,338 | https://en.wikipedia.org/wiki/3494%20Purple%20Mountain | 3494 Purple Mountain, provisional designation , is a bright Vestian asteroid and a formerly lost minor planet from the inner regions of the asteroid belt, approximately in diameter. First observed in 1962, it was officially discovered on 7 December 1980, by Chinese astronomers at the Purple Mountain Observatory in Nanking, China, and later named in honor of the discovering observatory. The V-type asteroid has a rotation period of 5.9 hours.
Orbit and classification
Purple Mountain is a core member of the Vesta family (), a giant asteroid family of typically bright V-type asteroids. Vestian asteroids have a composition akin to cumulate eucrites (HED meteorites) and are thought to have originated deep within 4 Vesta's crust, possibly from the Rheasilvia crater, a large impact crater on its southern hemisphere near the South pole, formed as a result of a subcatastrophic collision. Vesta is the main belt's second-largest and second-most-massive body after . Based on osculating Keplerian orbital elements, the asteroid has also been classified as a member of the Flora family (), a giant asteroid family and the largest family of stony asteroids in the main-belt.
Purple Mountain orbits the Sun in the inner asteroid belt at a distance of 2.0–2.7 AU once every 3 years and 7 months (1,315 days; semi-major axis of 2.35 AU). Its orbit has an eccentricity of 0.13 and an inclination of 6° with respect to the ecliptic. The body's observation arc begins with a precovery taken at Palomar Observatory in December 1951, or 29 years prior to its official discovery observation.
Lost asteroid
Purple Mountain was formerly a lost minor planet. It was first observed in November 1962 at Goethe Link Observatory. A total of three additional observations were taken at Crimea–Nauchnij in 1969 and 1972, each time under a different provisional designation, but the asteroid was subsequently lost, with no follow-up observations until its official discovery at Nanking in 1980.
Physical characteristics
Based on the Moving Object Catalog (MOC) of the Sloan Digital Sky Survey, Purple Mountain is a common, stony S-type asteroid, with a sequential best-type taxonomy of SV. The Collaborative Asteroid Lightcurve Link (CALL) also assumes it to be a stony S-type.
In the SMASS-I classification by Xu, the asteroid is a V-type. This agrees with its measured high albedo (see below) often seen among the core members of the Vesta family. In 2013, a spectroscopic analysis showed it to have a composition very similar to the cumulate eucrite meteorites, which also suggests that the basaltic asteroid has originated from the crust of 4 Vesta.
Rotation period
In June 2015, a rotational lightcurve of Purple Mountain was obtained from photometric observations by astronomers at Texas A&M University, using the SARA-telescopes of the Southeastern Association for Research and Astronomy consortium. The 0.9-meter SARA-North telescope is located at Kitt Peak National Observatory, Arizona, while the 0.6-meter SARA-South telescope is hosted at the Cerro Tololo Inter-American Observatory in Chile. Lightcurve analysis gave a rotation period of 5.857 hours with a brightness variation of 0.32 magnitude (). One month later, in July 2015, another period of 2.928 hours and an amplitude of 0.40 magnitude was measured at MIT's George R. Wallace Jr. Observatory (). The results are in good agreement, apart from the fact that the latter is an alternative, monomodal solution with half the period of the former. CALL adopts the longer, bimodal period solution as the better result in its Lightcurve Data Base, due to the lightcurve's distinct amplitude and the small phase angle of the first observation.
Diameter and albedo
According to the survey carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Purple Mountain measures 6.507 kilometers in diameter and its surface has an albedo of 0.347, while CALL assumes an albedo of 0.24 – derived from the body's classification into the Flora family – and consequently calculates a larger diameter of 7.82 kilometers based on an absolute magnitude of 12.7.
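The diameter quoted by CALL follows from the conventional relation between an asteroid's diameter D, geometric albedo pV and absolute magnitude H (a standard conversion, not specific to this object):
D ≈ (1329 km / √pV) × 10^(−H/5), which for pV = 0.24 and H = 12.7 gives D ≈ (1329 / 0.49) × 10^(−2.54) km ≈ 7.8 km.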
Naming
This minor planet was named in honor of the Purple Mountain Observatory (PMO), an astronomical observatory located in Nanking (Nanjing), China. Built in 1934, the observatory is known for its astrometric observations and for its numerous discoveries of small Solar System bodies. It has played an important role in developing modern Chinese astronomy. The official naming citation was published by the Minor Planet Center on 29 November 1993 ().
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
Dictionary of Minor Planet Names, Google books
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
Named minor planets
Recovered astronomical objects | 3494 Purple Mountain | Astronomy | 1,052 |
44,441,540 | https://en.wikipedia.org/wiki/Targeted%20immunization%20strategies | Targeted immunization strategies are approaches designed to increase the immunization level of populations and decrease the chances of epidemic outbreaks. Though often in regards to use in healthcare practices and the administration of vaccines to prevent biological epidemic outbreaks, these strategies refer in general to immunization schemes in complex networks, biological, social or artificial in nature. Identification of at-risk groups and individuals with higher odds of spreading the disease often plays an important role in these strategies, since targeted immunization in high-risk groups is necessary for effective eradication efforts and has a higher return on investment than immunizing larger but lower-risk groups.
Background
The success of vaccines in preventing major outbreaks relies on the mechanism of herd immunity, also known as community immunity, where the immunization of individuals provides protection for not only the individuals, but also the community at large. In cases of biological contagions such as influenza, measles, and chicken pox, immunizing a critical community size can provide protection against the disease for members who cannot be vaccinated themselves (infants, pregnant women, and immunocompromised individuals). Often, however, these vaccine programmes require the immunization of a large majority of the population to provide herd immunity. A few successful vaccine programmes have led to the eradication of infectious diseases like smallpox and rinderpest, and the near eradication of polio, which plagued the world before the second half of the 20th century.
Network-based strategies
More recently researchers have looked at exploiting network connectivity properties to better understand and design immunization strategies to prevent major epidemic outbreaks. Many real networks like the Internet, World Wide Web, and even sexual contact networks have been shown to be scale-free networks and as such exhibit a power-law distribution for the degree distribution. In large networks this results in the vast majority of nodes (individuals in social networks) having few connections or low degree k, while a few "hubs" have many more connections than the average <k>. This wide variability (heterogeneity) in degree offers immunization strategies based on targeting members of the network according to their connectivity rather than random immunization of the network. In epidemic modeling on scale-free networks, targeted immunization schemes can considerably lower the vulnerability of a network to epidemic outbreaks over random immunization schemes. Typically these strategies result in the need for far fewer nodes to be immunized in order to provide the same level of protection to the entire network as in random immunization. In circumstances where vaccines are scarce, efficient immunization strategies become necessary to preventing infectious outbreaks.
Examples
A common approach for targeted immunization studies in scale-free networks focuses on targeting the highest degree nodes for immunization. These nodes are the most highly connected in the network, making them more likely to spread the contagion if infected. Immunizing this segment of the network can drastically reduce the impact of the disease on the network and requires the immunization of far fewer nodes compared to randomly selecting nodes. However, this strategy relies on knowing the global structure of the network, which may not always be practical.
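A minimal simulation sketch of this idea is given below. It is illustrative only: the Barabási–Albert network model, the 5% immunization fraction, the random seed and the use of the networkx library are assumptions made for the example, not details taken from the studies cited here. The size of the largest connected component that remains after immunized nodes are removed is used as a rough proxy for the network's remaining epidemic potential.

```python
import random
import networkx as nx

def largest_component_after_immunization(G, immunized):
    """Remove immunized nodes and return the size of the largest remaining component."""
    H = G.copy()
    H.remove_nodes_from(immunized)
    if H.number_of_nodes() == 0:
        return 0
    return max(len(c) for c in nx.connected_components(H))

# Illustrative scale-free network (Barabási–Albert model) -- an assumption, not a real contact network.
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)
fraction = 0.05                      # immunize 5% of the population (assumed)
k = int(fraction * G.number_of_nodes())

# Targeted: immunize the k highest-degree nodes ("hubs").
hubs = sorted(G.nodes, key=lambda v: G.degree(v), reverse=True)[:k]
# Random: immunize k nodes chosen uniformly at random.
random.seed(42)
randoms = random.sample(list(G.nodes), k)

print("largest component, no immunization:", largest_component_after_immunization(G, []))
print("largest component, random immunization:", largest_component_after_immunization(G, randoms))
print("largest component, targeted immunization:", largest_component_after_immunization(G, hubs))
```

On scale-free graphs of this kind, removing the hubs typically fragments the network far more than removing the same number of randomly chosen nodes, which is the qualitative point of the degree-targeted strategy.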
A recent centrality measure, Percolation Centrality, introduced by Piraveenan et al., is particularly useful in identifying nodes for vaccination based on the network topology. Unlike node degree, which depends on topology alone, percolation centrality takes into account the topological importance of a node as well as its distance from infected nodes in deciding its overall importance. Piraveenan et al. have shown that percolation centrality-based vaccination is particularly effective when the proportion of people already infected is on the same order of magnitude as the number of people who could be vaccinated before the disease spreads much further. If the infection spread is in its infancy, then ring-vaccination surrounding the source of infection is most effective, whereas if the proportion of people already infected is much higher than the number of people that could be vaccinated quickly, then vaccination will only help those who are vaccinated and herd immunity cannot be achieved. Percolation centrality-based vaccination is most effective in the critical scenario where the infection has already spread too far to be completely surrounded by ring-vaccination, yet has not spread so widely that it cannot be contained by strategic vaccination. Nevertheless, Percolation Centrality also needs the full network topology to be computed, and thus is more useful at higher levels of abstraction (for example, networks of townships rather than social networks of individuals), where the corresponding network topology can more readily be obtained.
Increasing immunization coverage
Millions of children worldwide do not receive all of the routine vaccinations as per their national schedule. As immunization is a powerful public health strategy for improving child survival, it is important to determine what strategies work best to increase coverage. A Cochrane review assessed the effectiveness of intervention strategies to boost and sustain high childhood immunization coverage in low- and middle-income countries. Forty-one trials were included but most of the evidence was of low quality. Providing parents and other community members with information on immunization, health education at facilities in combination with redesigned immunization reminder cards, regular immunization outreach with and without household incentives, home visits, and integration of immunization with other services may improve childhood immunization coverage in low-and middle-income countries.
See also
Influenza vaccine
Immunization
Vaccine-preventable diseases
Smallpox eradication
Poliomyelitis eradication
Infectious diseases
ILOVEYOU (computer worm epidemic in 2000)
Epidemiology
Epidemic model
Network Science
Critical community size
Scale-free network
Complex network
Percolation theory
Pandemic
References
Vaccination
Social networks
Epidemiology
Epidemics
Preventive medicine
Pandemics | Targeted immunization strategies | Biology,Environmental_science | 1,192 |
17,235,035 | https://en.wikipedia.org/wiki/5%2C6-Dichloro-1-%CE%B2-D-ribofuranosylbenzimidazole | 5,6-Dichloro-1-β-D-ribofuranosylbenzimidazole (DRB) is a chemical compound that inhibits transcription elongation by RNA Polymerase II. Sensitivity to DRB is dependent on DRB sensitivity inducing factor (DSIF), negative elongation factor (NELF), and positive transcription elongation factor b (P-TEFb). DRB is a nucleoside analog and also inhibits some protein kinases.
References
Nucleosides
Benzimidazoles
Organochlorides
Ribosides | 5,6-Dichloro-1-β-D-ribofuranosylbenzimidazole | Chemistry,Biology | 159 |
361,449 | https://en.wikipedia.org/wiki/Descent%20%28mathematics%29 | In mathematics, the idea of descent extends the intuitive idea of 'gluing' in topology. Since the topologists' glue is the use of equivalence relations on topological spaces, the theory starts with some ideas on identification.
Descent of vector bundles
The case of the construction of vector bundles from data on a disjoint union of topological spaces is a straightforward place to start.
Suppose X is a topological space covered by open sets Xi. Let Y be the disjoint union of the Xi, so that there is a natural mapping
p : Y → X.
We think of Y as 'above' X, with the Xi projection 'down' onto X. With this language, descent implies a vector bundle on Y (so, a bundle given on each Xi), and our concern is to 'glue' those bundles Vi, to make a single bundle V on X. What we mean is that V should, when restricted to Xi, give back Vi, up to a bundle isomorphism.
The data needed is then this: on each overlap Xij, the
intersection of Xi and Xj, we'll require mappings
fij : Vi → Vj (restricted to Xij)
to use to identify Vi and Vj there, fiber by fiber. Further the fij must satisfy conditions based on the reflexive, symmetric and transitive properties of an equivalence relation (gluing conditions). For example, the composition
fjk ∘ fij = fik
for transitivity (and choosing apt notation). The fii should be identity maps and hence symmetry becomes fij = fji⁻¹ (so that it is fiberwise an isomorphism).
These are indeed standard conditions in fiber bundle theory (see transition map). One important application to note is change of fiber: if the fij are all you need to make a bundle, then there are many ways to make an associated bundle. That is, we can take essentially the same fij, acting on various fibers.
Another major point is the relation with the chain rule: the discussion of the way there of constructing tensor fields can be summed up as 'once you learn to descend the tangent bundle, for which transitivity is the Jacobian chain rule, the rest is just naturality of tensor constructions'.
To move closer towards the abstract theory we need to interpret the disjoint union of the Xij now as
Y ×_X Y,
the fiber product (here an equalizer) of two copies of the projection p.
Therefore, by going to a more abstract level one can eliminate the combinatorial side (that is, leave out the indices) and get something that makes sense for p not of the special form of covering with which we began. This then allows a category theory approach: what remains to do is to re-express the gluing conditions.
History
The ideas were developed in the period 1955–1965 (which was roughly the time at which the requirements of algebraic topology were met but those of algebraic geometry were not). From the point of view of abstract category theory the work of comonads of Beck was a summation of those ideas; see Beck's monadicity theorem.
The difficulties of algebraic geometry with passage to the quotient are acute. The urgency (to put it that way) of the problem for the geometers accounts for the title of the 1959 Grothendieck seminar TDTE on theorems of descent and techniques of existence (see FGA) connecting the descent question with the representable functor question in algebraic geometry in general, and the moduli problem in particular.
Fully faithful descent
Let . Each sheaf F on X gives rise to a descent datum
,
where satisfies the cocycle condition
.
The fully faithful descent says: The functor is fully faithful. Descent theory tells conditions for which there is a fully faithful descent, and when this functor is an equivalence of categories.
See also
Grothendieck connection
Stack (mathematics)
Galois descent
Grothendieck topology
Fibered category
Beck's monadicity theorem
Cohomological descent
References
SGA 1, Ch VIII – this is the main reference
A chapter on the descent theory is more accessible than SGA.
Further reading
Other possible sources include:
Angelo Vistoli, Notes on Grothendieck topologies, fibered categories and descent theory
Mattieu Romagny, A straight way to algebraic stacks
External links
What is descent theory?
Topology
Category theory
Algebraic geometry | Descent (mathematics) | Physics,Mathematics | 899 |
53,942,957 | https://en.wikipedia.org/wiki/Smash%20and%20Grab%20%28biology%29 | Smash and Grab is the name given to a technique developed by Charles S. Hoffman and Fred Winston used in molecular biology to rescue plasmids from yeast transformants into Escherichia coli, also known as E. coli, in order to amplify and purify them. In addition, it can be used to prepare yeast genomic DNA (and DNA from tissue samples) for Southern blot analyses or polymerase chain reaction (PCR).
References
Biology terminology | Smash and Grab (biology) | Biology | 99 |
11,460,522 | https://en.wikipedia.org/wiki/Stemphylium%20sarciniforme | Stemphylium sarciniforme is a plant pathogen infecting lentil, red clover and chickpea.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Pulse crop diseases
Pleosporaceae
Fungus species | Stemphylium sarciniforme | Biology | 54 |
19,058,424 | https://en.wikipedia.org/wiki/TESEO | Tecnica Empirica Stima Errori Operatori (TESEO) is a technique in the field of human reliability assessment (HRA) that evaluates the probability of a human error occurring throughout the completion of a specific task. From such analyses, measures can then be taken to reduce the likelihood of errors occurring within a system and therefore lead to an improvement in the overall levels of safety. There exist three primary reasons for conducting an HRA: error identification, error quantification and error reduction. As there exist a number of techniques used for such purposes, they can be split into one of two classifications: first generation techniques and second generation techniques. First generation techniques work on the basis of the simple dichotomy of 'fits/doesn't fit' in the matching of the error situation in context with related error identification and quantification, and second generation techniques are more theory based in their assessment and quantification of errors. HRA techniques have been utilised in a range of industries including healthcare, engineering, nuclear, transportation and the business sector; each technique has varying uses within different disciplines.
This is a time based model that describes the probability of a system operator's failure as a multiplicative function of 5 main factors. These factors are as follows:
K1: The type of task to be executed
K2: The time available to the operator to complete the task
K3: The operator's level of experience/characteristics
K4: The operator's state of mind
K5: The environmental and ergonomic conditions prevalent
Using these figures, an overall Human Error Probability (HEP) can be calculated with the formulation provided below:
HEP = K1 × K2 × K3 × K4 × K5
The specific value of each of the above functions can be obtained by consulting standard tables that take account of the method in which the HEP is derived.
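Because the model is purely multiplicative, it can be captured in a few lines of code. The sketch below is an illustrative helper (the function name and the printed scenarios are assumptions, not part of the published TESEO method); the K values themselves must still be looked up in the standard tables.

```python
from math import prod

def teseo_hep(k1: float, k2: float, k3: float, k4: float, k5: float) -> float:
    """Human Error Probability as the product of the five TESEO factors."""
    return prod((k1, k2, k3, k4, k5))

# Values reproduce the worked example given later in this article.
print(teseo_hep(0.01, 0.5, 1, 1, 10))  # 0.05
# Improved environmental/ergonomic conditions (K5 = 1), as discussed in the example's result:
print(teseo_hep(0.01, 0.5, 1, 1, 1))   # 0.005
```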
Background
Developed in 1980 by Bello and Colombari, TESEO was created for the purpose of conducting HRA in the process industries. The methodology is relatively straightforward and easy to use but is also limited; it is useful for quick overview HRA assessments, as opposed to highly detailed and in-depth assessments. It is widely acknowledged within the field of HRA that the technique lacks a firm theoretical foundation.
TESEO Methodology
When putting this technique into practice, it is necessary for the designated HRA assessor to thoroughly consider the task requiring assessment and therefore also consider the value for Kn that applies in the context. Once this value has been decided upon, the tables, previously mentioned, are then consulted from which a related value for each of the identified factors is found to allow the HEP to be calculated.
Worked Example
Provided below is an example of how TESEO methodology can be used in practice; each of the stages of the process described above are worked through in order.
Context
An operator works on a production transfer line that operates between two tanks. His role is to ensure the correct product is selected for transfer from one tank to the other by operating remotely located valves. The essential valves must be opened to perform the task.
The operator possesses average experience for this role. The individual is in a control room that has a relatively noisy environment and poor lighting. There is a time window of five minutes for the required task.
Method
The figures for the HEP calculation, obtained from the relevant tables, are given as follows:
The type of task to be executed: K1 = 0.01
Time available to complete the task: K2 = 0.5
Level of experience: K3 = 1
Operator's state of mind: K4 = 1
Environmental and ergonomic conditions: K5 = 10
The final HEP figure is therefore calculated as:
HEP = K1 × K2 × K3 × K4 × K5
= 0.01 × 0.5 × 1 × 1 × 10
= 0.05
Result
Given this result, it can be deduced that if the control room were notified of the valves' positions and the microclimate were better, K5 would be unity and the HEP would therefore be 0.005, an improvement of one order of magnitude.
Advantages of TESEO
The technique of TESEO is typically quick and straightforward in comparison to other HRA tools, not only in producing a final result, but also in sensitivity analysis, e.g. identifying the effects that improvements in human factors have on the overall human reliability of a task. It is widely applicable to various control room designs and to procedures with varying characteristics.
Disadvantages of TESEO
There is limited work published with regard to the theoretical foundations of this technique, in particular relating to the justification of the five-factor methodology. Regardless of the situation, it must be assumed that these five factors suffice for an accurate assessment of human performance; since no other factors are considered, relying solely on these five factors to describe the full range of error-producing conditions is unlikely to be realistic. Further to this, the values of K1–K5 are unsubstantiated, and the suggested multiplicative relationship lacks sufficient theoretical or empirical justification.
References
Human reliability | TESEO | Engineering | 1,070 |
6,549,572 | https://en.wikipedia.org/wiki/Woodward%20cis-hydroxylation | The Woodward cis-hydroxylation (also known as the Woodward reaction) is the chemical reaction of alkenes with iodine and silver acetate in wet acetic acid to form cis-diols (the conversion of an olefin into a cis-diol).
The reaction is named after its discoverer, Robert Burns Woodward.
This reaction has found application in steroid synthesis.
Reaction mechanism
The reaction of the iodine with the alkene is promoted by the silver acetate, thus forming an iodonium ion (3). The iodonium ion is opened via an SN2 reaction by acetic acid (or silver acetate) to give the first intermediate, the iodo-acetate (4). Through anchimeric assistance, the iodine is displaced via another SN2 reaction to give an oxonium ion (5), which is subsequently hydrolyzed to give the mono-ester (6).
References
See also
Prévost reaction
Organic oxidation reactions
Substitution reactions
Name reactions | Woodward cis-hydroxylation | Chemistry | 208 |
75,391,562 | https://en.wikipedia.org/wiki/Solbinsiran | Solbinsiran is a GalNAc-conjugated small interfering RNA (siRNA) therapy that targets angiopoietin-like 3. It is being developed by Eli Lilly and Company to reduce the level of apolipoprotein B and the risk of cardiovascular disease.
Mechanism of action
Solbinsiran is a GalNAc-conjugated Dicer-substrate siRNA (DsiRNA) that targets ANGPTL3 expression in the liver. ANGPTL3 plays a role in regulating lipid metabolism, and by inhibiting its expression, Solbinsiran aims to lower lipid levels, particularly triglycerides and low-density lipoprotein cholesterol (LDL-C).
Preclinical and Clinical Research
In preclinical studies, Solbinsiran demonstrated significant reductions in human ANGPTL3 mRNA expression in hepatocytes and a substantial reduction in circulating ANGPTL3 protein levels in cynomolgus monkeys. In Phase 1 studies, it showed potential as a therapeutic option for reducing ANGPTL3 levels and triglycerides (TG) in patients with dyslipidemia.
Clinical applications
The therapy is currently investigational and has undergone testing in clinical settings for cardiovascular diseases.
References
Drugs developed by Eli Lilly and Company
Hypolipidemic agents | Solbinsiran | Chemistry | 298 |
22,714,156 | https://en.wikipedia.org/wiki/Journal%20of%20Biological%20Inorganic%20Chemistry | Journal of Biological Inorganic Chemistry (JBIC) is a peer-reviewed scientific journal. It is an official publication of the Society of Biological Inorganic Chemistry and published by Springer Science+Business Media.
Biological inorganic chemistry is a growing field of science that embraces the principles of biology and inorganic chemistry and impacts other fields ranging from medicine to the environment. JBIC seeks to promote this field internationally. The journal is primarily concerned with advances in understanding the role of metal ions within a biological matrix—be it a protein, DNA/RNA, or a cell, as well as appropriate model studies. Manuscripts describing high-quality original research on the above topics in English are invited for submission to this journal. The journal publishes original articles, minireviews, perspective articles, protocols, and commentaries on debated issues.
Scope of the journal
Areas of research covered in the journal include: advances in the understanding of systems involving one or more metal ions set in a biological matrix - particularly metalloproteins and metal-nucleic acid complexes - in order to understand biological function at the molecular level. Synthetic analogues mimicking function, structure and spectroscopy of naturally occurring biological molecules are also covered.
The journal is abstracted/indexed in Chemical Abstracts Service, Current Contents/Life Sciences, PubMed/MEDLINE, and the Science Citation Index.
The journal is indexed by ISI. The Journal of Biological Inorganic Chemistry received an impact factor of 2.538, as reported in the 2014 Journal Citation Reports by Thomson Reuters, ranking it 157th out of 289 journals in the category Biochemistry & Molecular Biology and 9th out of 44 journals in the category Chemistry, Inorganic & Nuclear.
Editor in chief
The current editor in chief of JBIC is Nils Metzler-Nolte (Ruhr University Bochum). He followed Lawrence Que (University of Minnesota) who led the journal from 1999 to 2020 and who succeeded Ivano Bertini (University of Florence) who was the founding editor of JBIC.
References
English-language journals
Inorganic chemistry journals
Biochemistry journals
Springer Science+Business Media academic journals
Academic journals established in 1996 | Journal of Biological Inorganic Chemistry | Chemistry | 415 |
39,457,035 | https://en.wikipedia.org/wiki/Fontaine%27s%20period%20rings | In mathematics, Fontaine's period rings are a collection of commutative rings first defined by Jean-Marc Fontaine that are used to classify p-adic Galois representations.
The ring BdR
The ring is defined as follows. Let denote the completion of . Let
So an element of is a sequence of elements
such that . There is a natural projection map given by . There is also a multiplicative (but not additive) map defined by , where the are arbitrary lifts of the to . The composite of with the projection is just . The general theory of Witt vectors yields a unique ring homomorphism such that for all , where denotes the Teichmüller representative of . The ring is defined to be completion of with respect to the ideal . The field is just the field of fractions of .
Notes
References
Algebraic number theory
Galois theory
Representation theory of groups
Hodge theory | Fontaine's period rings | Mathematics,Engineering | 178 |
37,830,616 | https://en.wikipedia.org/wiki/Eco-friendly%20dentistry | Eco-friendly dentistry (also called environmentally friendly dentistry, green dentistry or sustainable dentistry) aims at reducing the detrimental impact of dental services on the environment while still being able to adhere to the regulations and standards of the dental industries in their respective countries.
There are no official governing agencies that certify an office as meeting eco-friendly standards. Dental offices in the United States of America can be recognised as eco-friendly offices by becoming members of the Eco Dentistry Association. Within England there are audit programmes available from the National Union of Students such as the Green Impact tool. People who want to be involved and discuss sustainable dentistry in a free and open forum are invited to be members at the Centre for Sustainable Healthcare.
History
The term eco-friendly dentistry has roots originating from the environmental movement and environmentalism, which, in the Western world, is often perceived as having begun in the 1960s and 1970s. The rise of this movement is often credited to Rachel Carson, conservationist and author of the book Silent Spring. Subsequently, legislation in many countries throughout the world began gaining momentum in the 1970s and continues to the present day.
Eco-friendliness also has meaning in another context as a marketing term. It is used by companies to appeal to consumers of goods and services as having a low impact on the environment. Market research has found that an increasing number of consumers purchase goods and services that appeal to the values of environmental philosophy. The dental industry has adopted the concept of eco-friendliness both in a well-meaning, philosophical context and as a marketing term so that patients who subscribe to principles of sustainability can choose to visit these offices.
The term has been criticised as being used for "greenwashing", which is the practice of deceptively promoting a product or service as environmentally friendly. Legislation in countries around the world have Trade Commissions and such to stop companies profiting with baseless claims on their goods and services. Individuals and bodies that work in the dental industry have also subsequently adopted the principles of sustainability and environmentalism and also as an advertisement to patients, clients and consumers. The Eco Dentistry Association is an accreditation organisation in the United States which has proposed outcomes towards becoming more sustainable.
In 2008, the Eco Dentistry Association (EDA) was co-founded by Dr. Fred Pockrass and his wife, Ina Pockrass. The EDA provides "education, standards and connection" to patients and dentists who practice green dentistry. The EDA aims to help dentists "come up with safe and reusable alternatives that lower a dentists' operating cost by replacing paper with digital media whenever possible." As of February 2011, the EDA has approximately 600 members. After the inception of the EDA, the dental industry in America saw more dentists and oral surgeons choosing to make their offices environmentally friendly.
In 2011, the Australian Dental Association implemented a policy of sustainability to provide guidelines to assist in the environmental sustainability of dental offices in Australia. In August 2017 the FDI World Dental Federation adopted a sustainability in dentistry policy.
Elements of eco-friendly dentistry
There is a growing amount of scientific information regarding the carbon footprint of the dental industry. These include papers by Duane relating to work carried out in Scotland and more recently England.
Recently, Public Health England published a report on the carbon footprint of NHS England dentistry. The report based on 2014 data provides a number of recommendations for the dental team in England to consider. The report demonstrated the considerable contribution of staff and patient travel to the overall carbon footprint.
To be environmentally responsible, offices can incorporate the four R's of environmental responsibility. The four R's are: reduce, reuse, recycle and rethink.
Reduce
Having a paperless dental office reduces or eliminates the use of paper by going digital. This involves converting patient files, medical histories and other documentation to an electronic system. Going paperless not only makes information sharing easier and more accessible but also helps keep personal information secure. It saves money, boosts productivity and saves space, as there is no need for filing cabinets, and helps ensure that clinical records are more accurate.
Using digital radiography allows all of a patient's records to be kept in one place and reduces the amount of radiation exposure; images and clinical photographs can be shared without losing image quality.
Reuse
Clean Water
In many countries around the world there are strict mandatory limits on the use of mercury and the levels found in wastewater. Mercury is traditionally used in dental restorations known as amalgam. In October 2013, Australia's Department of the Environment and Energy signed the Minamata Convention in a call for the reduction of amalgam usage by means of nine measures aiming to eventually phase out the use of amalgam. Mercury can be released into the environment when amalgam is placed, finished and polished or removed from a patient's mouth, and can be either rinsed into sewage systems or disposed of in landfill. Complying with the Australian Dental Association (ADA) Policy 6.11 and the current edition of the International Organization for Standardization ISO 11143 Dentistry – Amalgam Separators reduces the amount of mercury entering the environment, by means of installing amalgam separators and traps that collect and separate amalgam waste before it enters the sewage system. Amalgam collected from traps is then recycled for reuse.
The phasing out of manual processing of radiographs in favour of digital radiography means that offices no longer have to purchase developing liquids; these liquids are harmful to the environment and need to be collected to be disposed of correctly.
Water management
• Installing a water meter to monitor water usage.
• Handwashing sinks with motion-activated taps.
• Collect the water bills for the last year to benchmark a water usage audit.
• Place interpretive signs about water conservation in staff rooms, toilets and surgeries.
• Maintain and repair taps or fittings.
• Use a non-water-based approach to cleaning where possible.
• Retrofit flow controllers in key usage areas.
• Install 4-, 5- or 6-star water efficient appliances where appropriate.
Recycle
Dental practices can recycle paper, cardboard, aluminum and plastics from plastic barriers and other waste products, contributing to sustainable, environmentally friendly practice. Autoclave bags can be separated after opening and the paper and plastic recycled separately.
To become more eco-friendly or environmentally friendly, dental practices can purchase biodegradable products, therefore allowing more of the waste associated with the running of the practice to be recycled. Shredding paper documents and recycling the shredded paper will also contribute to sustainable practices.
References
External links
Eco Dentistry Association
Dentistry
Sustainable building | Eco-friendly dentistry | Engineering | 1,337 |
18,962,147 | https://en.wikipedia.org/wiki/Water%20footprint | A water footprint shows the extent of water use in relation to consumption by people. The water footprint of an individual, community, or business is defined as the total volume of fresh water used to produce the goods and services consumed by the individual or community or produced by the business. Water use is measured in water volume consumed (evaporated) and/or polluted per unit of time. A water footprint can be calculated for any well-defined group of consumers (e.g., an individual, family, village, city, province, state, or nation) or producers (e.g., a public organization, private enterprise, or economic sector), for a single process (such as growing rice) or for any product or service.
Traditionally, water use has been approached from the production side, by quantifying the following three columns of water use: water withdrawals in the agricultural, industrial, and domestic sector. While this does provide valuable data, it is a limited way of looking at water use in a globalised world, in which products are not always consumed in their country of origin. International trade of agricultural and industrial products in effect creates a global flow of virtual water, or embodied water (akin to the concept of embodied energy).
In 2002, the water footprint concept was introduced in order to have a consumption-based indicator of water use, that could provide useful information in addition to the traditional production-sector-based indicators of water use. It is analogous to the ecological footprint concept introduced in the 1990s. The water footprint is a geographically explicit indicator, not only showing volumes of water use and pollution, but also the locations. The global issue of water footprinting underscores the importance of fair and sustainable resource management. Due to increasing water shortages, climate change, and environmental concerns, transitioning towards a fair impact of water use is critical. The water footprint concept offers detailed insights for adequate and equitable water resource management. It advocates for a balanced and sustainable water-use approach, aiming to tackle global challenges. This approach is essential for responsible and equitable water resource utilization globally. Thus, it gives a grasp on how economic choices and processes influence the availability of adequate water resources and other ecological realities across the globe (and vice versa).
Definition and measures
There are many different aspects to water footprint and therefore different definitions and measures to describe them. Blue water footprint refers to groundwater or surface water usage, green water footprint refers to rainwater, and grey water footprint refers to the amount of water needed to dilute pollutants.
Blue water footprint
A blue water footprint refers to the volume of water that has been sourced from surface or groundwater resources (lakes, rivers, wetlands and aquifers) and has either evaporated (for example while irrigating crops), or been incorporated into a product or taken from one body of water and returned to another, or returned at a different time. Irrigated agriculture, industry and domestic water use can each have a blue water footprint.
Green water footprint
A green water footprint refers to the amount of water from precipitation that, after having been stored in the root zone of the soil (green water), is either lost by evapotranspiration or incorporated by plants. It is particularly relevant for agricultural, horticultural and forestry products.
Grey water footprint
A grey water footprint refers to the volume of water that is required to dilute pollutants (industrial discharges, seepage from tailing ponds at mining operations, untreated municipal wastewater, or nonpoint source pollution such as agricultural runoff or urban runoff) to such an extent that the quality of the water meets agreed water quality standards. It is calculated as:
WF_grey = L / (c_max − c_nat)
where L is the pollutant load (as mass flux), c_max the maximum allowable concentration and c_nat the natural concentration of the pollutant in the receiving water body (both expressed in mass/volume).
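As an illustrative worked example of this formula (hypothetical numbers, not taken from a specific source): for a pollutant load of L = 10 kg per day, a maximum allowable concentration c_max = 10 mg/L (0.010 kg/m3) and a natural background concentration c_nat = 0, the grey water footprint is
WF_grey = (10 kg/day) / (0.010 kg/m3 − 0) = 1,000 m3/day.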
Calculation for different factors
The water footprint of a process is expressed as volumetric flow rate of water. That of a product is the whole footprint (sum) of processes in its complete supply chain divided by the number of product units. For consumers, businesses and geographic area, water footprint is indicated as volume of water per time, in particular:
That of a consumer is the sum of footprint of all consumed products.
That of a community or a nation is the sum for all of its members resp. inhabitants.
That of a business is the footprint of all produced goods.
That of a geographically delineated area is the footprint of all processes undertaken in this area. The virtual water balance of an area is its net import of virtual water Vi,net, defined as its gross import Vi of virtual water minus its gross export Ve. The water footprint of national consumption, WFcons,nat, results from this as the sum of the water footprint within the national area, WFarea,nat, and its net import of virtual water (see the relations below).
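Restated compactly in the symbols used above (a minimal formulaic summary of the text, not an additional definition):
Vi,net = Vi − Ve
WFcons,nat = WFarea,nat + Vi,net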
History
The concept of a water footprint was coined in 2002, by Arjen Hoekstra, Professor in water management at the University of Twente, Netherlands, and co-founder and scientific director of the Water Footprint Network, whilst working at the UNESCO-IHE Institute for Water Education, as a metric to measure the amount of water consumed and polluted to produce goods and services along their full supply chain. Water footprint is one of a family of ecological footprint indicators, which also includes carbon footprint and land footprint. The water footprint concept is further related to the idea of virtual water trade introduced in the early 1990s by Professor John Allan (2008 Stockholm Water Prize Laureate). The most elaborate publications on how to estimate water footprints are a 2004 report on the Water footprint of nations from UNESCO-IHE, the 2008 book Globalization of Water, and the 2011 manual The water footprint assessment manual: Setting the global standard. Cooperation between global leading institutions in the field has led to the establishment of the Water Footprint Network in 2008.
Water Footprint Network (WFN)
The Water Footprint Network is an international learning community (a non-profit foundation under Dutch law) which serves as a platform for sharing knowledge, tools and innovations among governments, businesses and communities concerned about growing water scarcity and increasing water pollution levels, and their impacts on people and nature. The network consists of around 100 partners from all sectors – producers, investors, suppliers and regulators – as well as non-governmental organisations and academics. It describes its mission as follows: To provide science-based, practical solutions and strategic insights that empower companies, governments, individuals and small-scale producers to transform the way we use and share fresh water within earth's limits.
International standard
In February 2011, the Water Footprint Network, in a global collaborative effort of environmental organizations, companies, research institutions and the UN, launched the Global Water Footprint Standard. In July 2014, the International Organization for Standardization issued ISO 14046:2014, Environmental management—Water footprint—Principles, requirements and guidelines, to provide practical guidance to practitioners from various backgrounds, such as large companies, public authorities, non-governmental organizations, academic and research groups as well as small and medium enterprises, for carrying out a water footprint assessment. The ISO standard is based on life-cycle assessment (LCA) principles and can be applied for different sorts of assessment of products and companies.
Life-cycle assessment of water use
Life-cycle assessment (LCA) is a systematic, phased approach to assessing the environmental aspects and potential impacts that are associated with a product, process or service. "Life cycle" refers to the major activities connected with the product's life-span, from its manufacture, use, and maintenance, to its final disposal, and also including the acquisition of raw material required to manufacture the product. Thus a method for assessing the environmental impacts of freshwater consumption was developed. It specifically looks at the damage to three areas of protection: human health, ecosystem quality, and resources. The consideration of water consumption is crucial where water-intensive products (for example agricultural goods) are concerned that need to therefore undergo a life-cycle assessment. In addition, regional assessments are equally as necessary as the impact of water use depends on its location. In short, LCA is important as it identifies the impact of water use in certain products, consumers, companies, nations, etc. which can help reduce the amount of water used.
Water positive
The Water Positive initiative can be defined as the concept where an entity, such as a company, community, or individual, goes beyond simply conserving water and actively contributes to the sustainable management and restoration of water resources. A commercial or residential development is considered water positive when it generates more water than it consumes. This involves implementing practices and technologies that reduce water consumption, improve water quality, and enhance water availability. The goal of being water positive is to leave a positive impact on water ecosystem and ensure that more water is conserved and restored than is used or depleted.
Water availability
Globally, about 4 percent of precipitation falling on land each year (about ), is used by rain-fed agriculture and about half is subject to evaporation and transpiration in forests and other natural or quasi-natural landscapes. The remainder, which goes to groundwater replenishment and surface runoff, is sometimes called "total actual renewable freshwater resources". Its magnitude was in 2012 estimated at /year. It represents water that can be used either in-stream or after withdrawal from surface and groundwater sources. Of this remainder, about were withdrawn in 2007, of which , or 69 percent, were used by agriculture, and , or 19 percent, by other industry. Most agricultural use of withdrawn water is for irrigation, which uses about 5.1 percent of total actual renewable freshwater resources. World water use has been growing rapidly in the last hundred years.
Water footprint of products (agricultural sector)
The water footprint of a product is the total volume of freshwater used to produce the product, summed over the various steps of the production chain. The water footprint of a product refers not only to the total volume of water used; it also refers to where and when the water is used. The Water Footprint Network maintains a global database on the water footprint of products: WaterStat. About 70% of the water supply worldwide is used in the agricultural sector.
The water footprints involved in various diets vary greatly, and much of the variation tends to be associated with levels of meat consumption. The following table gives examples of estimated global average water footprints of popular agricultural products.
(For more product water footprints: see the Product Gallery of the Water Footprint Network )
Water footprint of companies (industrial sector)
The water footprint of a business, the 'corporate water footprint', is defined as the total volume of freshwater that is used directly or indirectly to run and support a business. It is the total volume of water use to be associated with the use of the business outputs. The water footprint of a business consists of water used for producing/manufacturing or for supporting activities and the indirect water use in the producer's supply chain.
The Carbon Trust argue that a more robust approach is for businesses to go beyond simple volumetric measurement to assess the full range of water impact from all sites. Its work with leading global pharmaceutical company GlaxoSmithKline (GSK) analysed four key categories: water availability, water quality, health impacts, and licence to operate (including reputational and regulatory risks) in order to enable GSK to quantitatively measure, and credibly reduce, its year-on-year water impact.
The Coca-Cola Company operates over a thousand manufacturing plants in about 200 countries. Making its drink uses a lot of water. Critics say its water footprint has been large. Coca-Cola has started to look at its water sustainability. It has now set out goals to reduce its water footprint such as treating the water it uses so it goes back into the environment in a clean state. Another goal is to find sustainable sources for the raw materials it uses in its drinks, such as sugarcane, oranges, and maize. By making its water footprint better, the company can reduce costs, improve the environment, and benefit the communities in which it operates.
Water footprint of individual consumers (domestic sector)
The water footprint of an individual refers to the sum of their direct and indirect freshwater use. The direct water use is the water used at home, while the indirect water use relates to the total volume of freshwater that is used to produce the goods and services consumed.
The average global water footprint of an individual is 1,385 m3 per year. Residents of some example nations have water footprints as shown in the table:
Water footprint of nations
The water footprint of a nation is the amount of water used to produce the goods and services consumed by the inhabitants of that nation. Analysis of the water footprint of nations illustrates the global dimension of water consumption and pollution, by showing that several countries rely heavily on foreign water resources and that (consumption patterns in) many countries significantly and in various ways impact how, and how much, water is being consumed and polluted elsewhere on Earth. International water dependencies are substantial and are likely to increase with continued global trade liberalisation. The largest share (76%) of the virtual water flows between countries is related to international trade in crops and derived crop products. Trade in animal products and industrial products contributed 12% each to the global virtual water flows. The four major direct factors determining the water footprint of a country are: volume of consumption (related to the gross national income); consumption pattern (e.g. high versus low meat consumption); climate (growth conditions); and agricultural practice (water use efficiency).
Production or consumption
The assessment of total water use in connection to consumption can be approached from both ends of the supply chain. The water footprint of production estimates how much water from local sources is used or polluted in order to provide the goods and services produced in that country. The water footprint of consumption of a country looks at the amount of water used or polluted (locally, or in the case of imported goods, in other countries) in connection with all the goods and services that are consumed by the inhabitants of that country. The water footprint of production and that of consumption, can also be estimated for any administrative unit such as a city, province, river basin or the entire world.
Absolute or per capita
The absolute water footprint is the total sum of water footprints of all people. A country's per capita water footprint (that nation's water footprint divided by its number of inhabitants) can be used to compare its water footprint with those of other nations.
The global water footprint in the period 1996–2005 was 9,087 Gm3/yr (billion cubic metres per year, or about 9,087,000,000,000,000 litres/year), of which 74% was green, 11% blue and 15% grey. This is an average of 1,385 m3/yr per capita, or 3,800 litres per person per day. On average 92% of this is embedded in agricultural products consumed, 4.4% in industrial products consumed, and 3.6% is domestic water use. The global water footprint related to producing goods for export is 1,762 Gm3/yr.
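As a rough consistency check of these averages (illustrative arithmetic only): 1,385 m3 per year divided by 365 days is about 3.8 m3 per day, i.e. roughly 3,800 litres per person per day.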
In absolute terms, India is the country with the largest water footprint in the world, a total of 987 Gm3/yr. In relative terms (i.e. taking population size into account), the people of the USA have the largest water footprint, with 2480 m3/yr per capita, followed by the people in south European countries such as Greece, Italy and Spain (2300–2400 m3/yr per capita). High water footprints can also be found in Malaysia and Thailand. In contrast, the Chinese people have a relatively low per capita water footprint with an average of 700 m3/yr. (These numbers are also from the period 1996-2005.)
Internal or external
The internal water footprint is the amount of water used from domestic water resources; the external water footprint is the amount of water used in other countries to produce goods and services imported and consumed by the inhabitants of the country. When assessing the water footprint of a nation, it is crucial to take into account the international flows of virtual water (also called embodied water, i.e. the water used or polluted in connection to all agricultural and industrial commodities) leaving and entering the country. When taking the use of domestic water resources as a starting point for calculating a nation's water footprint, one should subtract the virtual water flows that leave the country and add the virtual water flows that enter the country.
The external part of a nation's water footprint varies strongly from country to country. Some African nations, such as Sudan, Mali, Nigeria, Ethiopia, Malawi and Chad have hardly any external water footprint, simply because they have little import. Some European countries on the other hand—e.g. Italy, Germany, the UK and the Netherlands—have external water footprints that constitute 50–80% of their total water footprint. The agricultural products that on average contribute most to the external water footprints of nations are: bovine meat, soybean, wheat, cocoa, rice, cotton and maize.
The top 10 gross virtual water exporting nations, which together account for more than half of the global virtual water export, are the United States (314 Gm3/year), China (143 Gm3/year), India (125 Gm3/year), Brazil (112 Gm3/year), Argentina (98 Gm3/year), Canada (91 Gm3/year), Australia (89 Gm3/year), Indonesia (72 Gm3/year), France (65 Gm3/year), and Germany (64 Gm3/year).
The top 10 gross virtual water importing nations are the United States (234 Gm3/year), Japan (127 Gm3/year), Germany (125 Gm3/year), China (121 Gm3/year), Italy (101 Gm3/year), Mexico (92 Gm3/year), France (78 Gm3/year), the United Kingdom (77 Gm3/year), and The Netherlands (71 Gm3/year).
Water use in continents
Europe
Each EU citizen consumes 4,815 litres of water per day on average; 44% is used in power production primarily to cool thermal plants or nuclear power plants. Energy production annual water consumption in the EU 27 in 2011 was, in billion m3: for gas 0.53, coal 1.54 and nuclear 2.44. Wind energy avoided the use of 387 million cubic metres (mn m3) of water in 2012, avoiding a cost of €743 million.
Asia
In south India, the state of Tamil Nadu is one of the main agricultural producers in India and relies largely on groundwater for irrigation. Over the ten years from 2002 to 2012, the Gravity Recovery and Climate Experiment calculated that groundwater levels declined by 1.4 m yr−1, which "is nearly 8% more than the annual recharge rate."
Environmental water use
Although agriculture's water use includes provision of important terrestrial environmental values (as discussed in the "Water footprint of products" section above), and much "green water" is used in maintaining forests and wild lands, there is also direct environmental use (e.g. of surface water) that may be allocated by governments. For example, in California, where water use issues are sometimes severe because of drought, about 48 percent of "dedicated water use" in an average water year is for the environment (somewhat more than for agriculture). Such environmental water use is for keeping streams flowing, maintaining aquatic and riparian habitats, keeping wetlands wet, etc.
Criticism
Insufficient consideration of consequences of proposed water saving policies to farm households
According to Dennis Wichelns of the International Water Management Institute: "Although one goal of virtual water analysis is to describe opportunities for improving water security, there is almost no mention of the potential impacts of the prescriptions arising from that analysis on farm households in industrialized or developing countries. It is essential to consider more carefully the inherent flaws in the virtual water and water footprint perspectives, particularly when seeking guidance regarding policy decisions."
Regional water scarcity should be taken into account when interpreting water footprint
The application and interpretation of water footprints may sometimes be used to promote industrial activities that lead to facile criticism of certain products. For example, the 140 litres required for coffee production for one cup might be of no harm to water resources if its cultivation occurs mainly in humid areas, but could be damaging in more arid regions. Other factors such as hydrology, climate, geology, topography, population and demographics should also be taken into account. Nevertheless, high water footprint calculations do suggest that environmental concern may be appropriate.
Many of the criticisms, including the above ones, compare the description of the water footprint of a water system to generated impacts, which is about its performance. Such a comparison between descriptive and performance factors and indicators is basically flawed.
The use of the term footprint can also confuse people familiar with the notion of a carbon footprint, because the water footprint concept includes sums of water quantities without necessarily evaluating related impacts. This is in contrast to the carbon footprint, where carbon emissions are not simply summarized but normalized by CO2 emissions, which are globally identical, to account for the environmental harm. The difference is due to the somewhat more complex nature of water; while involved in the global hydrological cycle, it is expressed in conditions both local and regional through various forms like river basins, watersheds, on down to groundwater (as part of larger aquifer systems). Furthermore, looking at the definition of the footprint itself, and comparing ecological footprint, carbon footprint and water footprint, we realize that the three terms are indeed legitimate.
Sustainable water use
Sustainable water use involves the rigorous assessment of all sources of clean water to establish the current and future rates of use, the impacts of that use both downstream and in the wider area where the water may be used, and the impact of contaminated water streams on the environment and economic well-being of the area. It also involves the implementation of social policies such as water pricing in order to manage water demand. In some localities, water may also have spiritual relevance and the use of such water may need to take account of such interests. For example, the Maori believe that water is the source and foundation of all life and have many spiritual associations with water and places associated with water. On a national and global scale, water sustainability requires strategic and long-term planning to ensure appropriate sources of clean water are identified and the environmental and economic impacts of such choices are understood and accepted. The re-use and reclamation of water is also part of sustainability, including downstream impacts on both surface waters and ground waters.
Sustainability assessment
Water footprint accounting has advanced substantially in recent years, however, water footprint analysis also needs sustainability assessment as its last phase. One of the developments is to employ sustainable efficiency and equity ("Sefficiency in Sequity"), which present a comprehensive approach to assessing the sustainable use of water.
Sectoral distributions of withdrawn water use
Several nations estimate sectoral distribution of use of water withdrawn from surface and groundwater sources. For example, in Canada, in 2005, 42 billion m3 of withdrawn water were used, of which about 38 billion m3 were freshwater. Distribution of this use among sectors was: thermoelectric power generation 66.2%, manufacturing 13.6%, residential 9.0%, agriculture 4.7%, commercial and institutional 2.7%, water treatment and distribution systems 2.3%, mining 1.1%, and oil and gas extraction 0.5%. The 38 billion m3 of freshwater withdrawn in that year can be compared with the nation's annual freshwater yield (estimated as streamflow) of 3,472 billion m3. Sectoral distribution is different in many respects in the US, where agriculture accounts for about 39% of fresh water withdrawals, thermoelectric power generation 38%, industrial 4%, residential 1%, and mining (including oil and gas) 1%.
Within the agricultural sector, withdrawn water use is for irrigation and for livestock. Whereas all irrigation in the US (including loss in conveyance of irrigation water) is estimated to account for about 38 percent of US withdrawn freshwater use, the irrigation water used for production of livestock feed and forage has been estimated to account for about 9 percent, and other withdrawn freshwater use for the livestock sector (for drinking, washdown of facilities, etc.) is estimated at about 0.7 percent. Because agriculture is a major user of withdrawn water, changes in the magnitude and efficiency of its water use are important. In the US, from 1980 (when agriculture's withdrawn water use peaked) to 2010, there was a 23 percent reduction in agriculture's use of withdrawn water, while US agricultural output increased by 49 percent over that period.
In the US, irrigation water application data are collected in the quinquennial Farm and Ranch Irrigation Survey, conducted as part of the Census of Agriculture. Such data indicate great differences in irrigation water use within various agricultural sectors. For example, about 14 percent of corn-for-grain land and 11 percent of soybean land in the US are irrigated, compared with 66 percent of vegetable land, 79 percent of orchard land and 97 percent of rice land.
See also
References
External links
Water Footprint Network
Personal Water Footprint Calculator
Amount of water needed per day for livestock and household use
Water
Water resources management
Water and the environment
Environmental terminology
Macroeconomic indicators
Economic globalization
Environmental indices | Water footprint | Environmental_science | 5,208 |
943,917 | https://en.wikipedia.org/wiki/Laguerre%20polynomials | In mathematics, the Laguerre polynomials, named after Edmond Laguerre (1834–1886), are nontrivial solutions of Laguerre's differential equation:
x y'' + (1 - x) y' + n y = 0,
which is a second-order linear differential equation. This equation has nonsingular solutions only if n is a non-negative integer.
Sometimes the name Laguerre polynomials is used for solutions of
x y'' + (\alpha + 1 - x) y' + n y = 0,
where n is still a non-negative integer.
Then they are also named generalized Laguerre polynomials, as will be done here (alternatively associated Laguerre polynomials or, rarely, Sonine polynomials, after their inventor Nikolay Yakovlevich Sonin).
More generally, a Laguerre function is a solution when n is not necessarily a non-negative integer.
The Laguerre polynomials are also used for Gauss–Laguerre quadrature to numerically compute integrals of the form
\int_0^\infty f(x)\, e^{-x}\, dx.
These polynomials, usually denoted L_0, L_1, ..., are a polynomial sequence which may be defined by the Rodrigues formula,
L_n(x) = \frac{e^x}{n!} \frac{d^n}{dx^n}\left(e^{-x} x^n\right) = \frac{1}{n!} \left(\frac{d}{dx} - 1\right)^n x^n,
reducing to the closed form of a following section.
They are orthogonal polynomials with respect to the inner product
\langle f, g \rangle = \int_0^\infty f(x)\, g(x)\, e^{-x}\, dx.
The rook polynomials in combinatorics are more or less the same as Laguerre polynomials, up to elementary changes of variables. Further see the Tricomi–Carlitz polynomials.
The Laguerre polynomials arise in quantum mechanics, in the radial part of the solution of the Schrödinger equation for a one-electron atom. They also describe the static Wigner functions of oscillator systems in quantum mechanics in phase space. They further enter in the quantum mechanics of the Morse potential and of the 3D isotropic harmonic oscillator.
Physicists sometimes use a definition for the Laguerre polynomials that is larger by a factor of n! than the definition used here. (Likewise, some physicists may use somewhat different definitions of the so-called associated Laguerre polynomials.)
The first few polynomials
These are the first few Laguerre polynomials:
L_0(x) = 1
L_1(x) = -x + 1
L_2(x) = \tfrac{1}{2}(x^2 - 4x + 2)
L_3(x) = \tfrac{1}{6}(-x^3 + 9x^2 - 18x + 6)
L_4(x) = \tfrac{1}{24}(x^4 - 16x^3 + 72x^2 - 96x + 24)
Recursive definition, closed form, and generating function
One can also define the Laguerre polynomials recursively, defining the first two polynomials as
L_0(x) = 1, \qquad L_1(x) = 1 - x,
and then using the following recurrence relation for any k \ge 1:
L_{k+1}(x) = \frac{(2k + 1 - x)\, L_k(x) - k\, L_{k-1}(x)}{k + 1}.
Furthermore,
In solution of some boundary value problems, the characteristic values can be useful:
The closed form is
L_n(x) = \sum_{k=0}^{n} \binom{n}{k} \frac{(-1)^k}{k!} x^k.
The generating function for them likewise follows,
\sum_{n=0}^{\infty} t^n L_n(x) = \frac{1}{1 - t}\, e^{-tx/(1 - t)}.
The operator form is
Polynomials of negative index can be expressed using the ones with positive index:
L_{-n}(x) = e^x L_{n-1}(-x).
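A small numerical sketch can make the recursive definition and the closed form of this section concrete. The function names below are illustrative; the two formulas evaluated are exactly the recurrence and the closed-form sum stated above.

```python
# Evaluate L_n(x) via the three-term recurrence and cross-check against the
# closed-form sum; both definitions should agree to rounding error.
from math import comb, factorial

def laguerre_recurrence(n, x):
    """L_{k+1} = ((2k + 1 - x) L_k - k L_{k-1}) / (k + 1), with L_0 = 1, L_1 = 1 - x."""
    if n == 0:
        return 1.0
    prev, curr = 1.0, 1.0 - x
    for k in range(1, n):
        prev, curr = curr, ((2 * k + 1 - x) * curr - k * prev) / (k + 1)
    return curr

def laguerre_closed_form(n, x):
    """L_n(x) = sum_k C(n, k) (-1)^k x^k / k!."""
    return sum(comb(n, k) * (-x) ** k / factorial(k) for k in range(n + 1))

for n in range(6):
    assert abs(laguerre_recurrence(n, 1.7) - laguerre_closed_form(n, 1.7)) < 1e-9
```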
Generalized Laguerre polynomials
For arbitrary real α the polynomial solutions of the differential equation
x y'' + (\alpha + 1 - x) y' + n y = 0
are called generalized Laguerre polynomials, or associated Laguerre polynomials.
One can also define the generalized Laguerre polynomials recursively, defining the first two polynomials as
L_0^{(\alpha)}(x) = 1, \qquad L_1^{(\alpha)}(x) = 1 + \alpha - x,
and then using the following recurrence relation for any k \ge 1:
L_{k+1}^{(\alpha)}(x) = \frac{(2k + 1 + \alpha - x)\, L_k^{(\alpha)}(x) - (k + \alpha)\, L_{k-1}^{(\alpha)}(x)}{k + 1}.
The simple Laguerre polynomials are the special case α = 0 of the generalized Laguerre polynomials:
L_n^{(0)}(x) = L_n(x).
The Rodrigues formula for them is
L_n^{(\alpha)}(x) = \frac{x^{-\alpha} e^x}{n!} \frac{d^n}{dx^n}\left(e^{-x} x^{n+\alpha}\right).
The generating function for them is
\sum_{n=0}^{\infty} t^n L_n^{(\alpha)}(x) = \frac{1}{(1 - t)^{\alpha+1}}\, e^{-tx/(1 - t)}.
Explicit examples and properties of the generalized Laguerre polynomials
Laguerre functions are defined by confluent hypergeometric functions and Kummer's transformation as where is a generalized binomial coefficient. When is an integer the function reduces to a polynomial of degree . It has the alternative expression in terms of Kummer's function of the second kind.
The closed form for these generalized Laguerre polynomials of degree is derived by applying Leibniz's theorem for differentiation of a product to Rodrigues' formula.
Laguerre polynomials have a differential operator representation, much like the closely related Hermite polynomials. Namely, let and consider the differential operator . Then .
The first few generalized Laguerre polynomials are:
The coefficient of the leading term is (-1)^n / n!;
The constant term, which is the value at 0, is
L_n^{(\alpha)}(0) = \binom{n+\alpha}{n} = \frac{\Gamma(n+\alpha+1)}{n!\,\Gamma(\alpha+1)};
If is non-negative, then Ln(α) has n real, strictly positive roots (notice that is a Sturm chain), which are all in the interval
The polynomials' asymptotic behaviour for large , but fixed and , is given by and summarizing by where is the Bessel function.
As a contour integral
Given the generating function specified above, the polynomials may be expressed in terms of a contour integral
where the contour circles the origin once in a counterclockwise direction without enclosing the essential singularity at 1
Recurrence relations
The addition formula for Laguerre polynomials:
Laguerre's polynomials satisfy the recurrence relations
in particular
and
or
moreover
They can be used to derive the four 3-point-rules
combined they give these additional, useful recurrence relations
Since is a monic polynomial of degree in ,
there is the partial fraction decomposition
The second equality follows by the following identity, valid for integer i and and immediate from the expression of in terms of Charlier polynomials:
For the third equality apply the fourth and fifth identities of this section.
Derivatives of generalized Laguerre polynomials
Differentiating the power series representation of a generalized Laguerre polynomial times leads to
This points to a special case () of the formula above: for integer the generalized polynomial may be written
the shift by sometimes causing confusion with the usual parenthesis notation for a derivative.
Moreover, the following equation holds:
which generalizes with Cauchy's formula to
The derivative with respect to the second variable has the form,
The generalized Laguerre polynomials obey the differential equation
which may be compared with the equation obeyed by the kth derivative of the ordinary Laguerre polynomial,
where for this equation only.
In Sturm–Liouville form the differential equation is
which shows that is an eigenvector for the eigenvalue .
Orthogonality
The generalized Laguerre polynomials are orthogonal over [0, \infty) with respect to the measure with weighting function x^\alpha e^{-x}:
\int_0^\infty x^\alpha e^{-x}\, L_n^{(\alpha)}(x)\, L_m^{(\alpha)}(x)\, dx = \frac{\Gamma(n+\alpha+1)}{n!}\, \delta_{n,m},
which follows from
If g(x; \alpha+1, 1) = x^\alpha e^{-x}/\Gamma(\alpha+1) denotes the gamma distribution density then the orthogonality relation can be written as
\int_0^\infty g(x; \alpha+1, 1)\, L_n^{(\alpha)}(x)\, L_m^{(\alpha)}(x)\, dx = \binom{n+\alpha}{n}\, \delta_{n,m}.
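The orthogonality relation above can be spot-checked numerically. The sketch below uses SciPy's generalized Laguerre evaluator; the particular values of n, m and α are arbitrary illustrative choices.

```python
# Numerical spot-check of the weighted orthogonality of generalized Laguerre polynomials.
import numpy as np
from math import gamma, factorial
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

alpha = 0.5

def inner(n, m):
    """Weighted inner product int_0^inf x^alpha e^{-x} L_n^(a)(x) L_m^(a)(x) dx."""
    integrand = lambda x: x**alpha * np.exp(-x) * \
        eval_genlaguerre(n, alpha, x) * eval_genlaguerre(m, alpha, x)
    value, _ = quad(integrand, 0, np.inf)
    return value

print(inner(2, 3))                                        # ~0: different degrees are orthogonal
print(inner(3, 3), gamma(3 + alpha + 1) / factorial(3))   # both ~ Gamma(n + alpha + 1) / n!
```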
The associated, symmetric kernel polynomial has the representations (Christoffel–Darboux formula)
recursively
Moreover,
Turán's inequalities can be derived here, which is
The following integral is needed in the quantum mechanical treatment of the hydrogen atom,
Series expansions
Let a function have the (formal) series expansion
Then
The series converges in the associated Hilbert space if and only if
Further examples of expansions
Monomials are represented as
while binomials have the parametrization
This leads directly to
for the exponential function. The incomplete gamma function has the representation
In quantum mechanics
In quantum mechanics the Schrödinger equation for the hydrogen-like atom is exactly solvable by separation of variables in spherical coordinates. The radial part of the wave function is a (generalized) Laguerre polynomial.
Vibronic transitions in the Franck-Condon approximation can also be described using Laguerre polynomials.
Multiplication theorems
Erdélyi gives the following two multiplication theorems
Relation to Hermite polynomials
The generalized Laguerre polynomials are related to the Hermite polynomials:
H_{2n}(x) = (-1)^n\, 2^{2n}\, n!\, L_n^{(-1/2)}(x^2)
H_{2n+1}(x) = (-1)^n\, 2^{2n+1}\, n!\, x\, L_n^{(1/2)}(x^2)
where the H_n(x) are the Hermite polynomials based on the weighting function e^{-x^2}, the so-called "physicist's version."
Because of this, the generalized Laguerre polynomials arise in the treatment of the quantum harmonic oscillator.
Relation to hypergeometric functions
The Laguerre polynomials may be defined in terms of hypergeometric functions, specifically the confluent hypergeometric functions, as
L_n^{(\alpha)}(x) = \binom{n+\alpha}{n}\, M(-n, \alpha+1, x) = \frac{(\alpha+1)_n}{n!}\, {}_1F_1(-n; \alpha+1; x),
where (a)_n is the Pochhammer symbol (which in this case represents the rising factorial).
Hardy–Hille formula
The generalized Laguerre polynomials satisfy the Hardy–Hille formula
where the series on the left converges for and . Using the identity
(see generalized hypergeometric function), this can also be written as
This formula is a generalization of the Mehler kernel for Hermite polynomials, which can be recovered from it by using the relations between Laguerre and Hermite polynomials given above.
Physics Convention
The generalized Laguerre polynomials are used to describe the quantum wavefunction for hydrogen atom orbitals. The convention used throughout this article expresses the generalized Laguerre polynomials as
where is the confluent hypergeometric function.
In the physics literature, the generalized Laguerre polynomials are instead defined as
The physics version is related to the standard version by
There is yet another, albeit less frequently used, convention in the physics literature
Umbral Calculus Convention
Generalized Laguerre polynomials are linked to Umbral calculus by being Sheffer sequences for when multiplied by . In Umbral Calculus convention, the default Laguerre polynomials are defined to be where are the signless Lah numbers. is a sequence of polynomials of binomial type, i.e. they satisfy
See also
Orthogonal polynomials
Rodrigues' formula
Angelescu polynomials
Bessel polynomials
Denisyuk polynomials
Transverse mode, an important application of Laguerre polynomials to describe the field intensity within a waveguide or laser beam profile.
Notes
References
G. Szegő, Orthogonal polynomials, 4th edition, Amer. Math. Soc. Colloq. Publ., vol. 23, Amer. Math. Soc., Providence, RI, 1975.
B. Spain, M.G. Smith, Functions of mathematical physics, Van Nostrand Reinhold Company, London, 1970. Chapter 10 deals with Laguerre polynomials.
Eric W. Weisstein, "Laguerre Polynomial", From MathWorld—A Wolfram Web Resource.
External links
Polynomials
Orthogonal polynomials
Special hypergeometric functions | Laguerre polynomials | Mathematics | 1,884 |
23,682,119 | https://en.wikipedia.org/wiki/Molecular%20Endocrinology | Molecular Endocrinology is a peer-reviewed journal that publishes research on the molecular processes of hormones.
References
Academic journals established in 1987
Molecular and cellular biology journals
English-language journals
Monthly journals | Molecular Endocrinology | Chemistry | 40 |
76,916,780 | https://en.wikipedia.org/wiki/NGC%203950 | NGC 3950 is an elliptical galaxy of type E, in Ursa Major. Its redshift is 0.074602, which corresponds to a distance of about 1.03 billion light-years (316 Mpc) from Earth, based on Hubble's law. This high redshift makes NGC 3950 one of the most distant New General Catalogue objects.
NGC 3950 has apparent dimensions of 0.30 × 0.3 arcmin, meaning the galaxy is about 90,000 light-years across. It was discovered by Lawrence Parsons on April 27, 1875, and he described it as "extremely faint, 2.6 arcmin north of h 1009".
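The quoted distance and size follow from simple Hubble-law and small-angle arithmetic. The sketch below assumes a Hubble constant of 70 km/s/Mpc and ignores relativistic and cosmological corrections, so it only reproduces the published figures approximately.

```python
import math

c = 299_792.458          # speed of light, km/s
H0 = 70.0                # assumed Hubble constant, km/s/Mpc
z = 0.074602             # redshift of NGC 3950

d_mpc = c * z / H0                       # Hubble-law distance in Mpc
d_gly = d_mpc * 3.2616e6 / 1e9           # Mpc -> light-years -> billions of ly

theta_rad = (0.3 / 60) * math.pi / 180   # 0.3 arcmin in radians
size_kpc = theta_rad * d_mpc * 1000      # small-angle physical size in kpc
size_kly = size_kpc * 3.2616             # kpc -> thousands of light-years

print(f"distance ~ {d_mpc:.0f} Mpc ~ {d_gly:.2f} billion ly")
print(f"diameter ~ {size_kpc:.0f} kpc ~ {size_kly:.0f} thousand ly")
```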
In a research article published in 1990, NGC 3950 was believed to be a dwarf galaxy and a close companion of a larger spiral galaxy, NGC 3949. However, further research in 2005 involving measurement of its redshift showed that NGC 3950 lies much further away in the background. Together with NGC 3949, it forms an optical galaxy pair called HOLM 301.
References
3950
Ursa Major
Elliptical galaxies
Discoveries by Lawrence Parsons
Astronomical objects discovered in 1875
037294
+08-22-030 | NGC 3950 | Astronomy | 241 |
11,230,289 | https://en.wikipedia.org/wiki/KCNE2 | Potassium voltage-gated channel subfamily E member 2 (KCNE2), also known as MinK-related peptide 1 (MiRP1), is a protein that in humans is encoded by the KCNE2 gene on chromosome 21. MiRP1 is a voltage-gated potassium channel accessory subunit (beta subunit) associated with Long QT syndrome. It is ubiquitously expressed in many tissues and cell types. Because of this and its ability to regulate multiple different ion channels, KCNE2 exerts considerable influence on a number of cell types and tissues. Human KCNE2 is a member of the five-strong family of human KCNE genes. KCNE proteins contain a single membrane-spanning region, extracellular N-terminal and intracellular C-terminal. KCNE proteins have been widely studied for their roles in the heart and in genetic predisposition to inherited cardiac arrhythmias. The KCNE2 gene also contains one of 27 SNPs associated with increased risk of coronary artery disease. More recently, roles for KCNE proteins in a variety of non-cardiac tissues have also been explored.
Discovery
Steve Goldstein (then at Yale University) used a BLAST search strategy, focusing on KCNE1 sequence stretches known to be important for function, to identify related expressed sequence tags (ESTs) in the NCBI database. Using sequences from these ESTs, KCNE2, 3 and 4 were cloned.
Tissue distribution
KCNE2 protein is most readily detected in the choroid plexus epithelium, gastric parietal cells, and thyroid epithelial cells. KCNE2 is also expressed in atrial and ventricular cardiomyocytes, the pancreas, pituitary gland, and lung epithelium. In situ hybridization data suggest that KCNE2 transcript may also be expressed in various neuronal populations.
Structure
Gene
The KCNE2 gene resides on chromosome 21 at the band 21q22.11 and contains 2 exons. Since human KCNE2 is located ~79 kb from KCNE1 and in the opposite direction, KCNE2 is proposed to originate from a gene duplication event.
Protein
This protein belongs to the potassium channel KCNE family and is one of five single transmembrane domain voltage-gated potassium (Kv) channel ancillary subunits. KCNE2 is composed of three major domains: the N-terminal domain, the transmembrane domain, and the C-terminal domain. The N-terminal domain protrudes out of the extracellular side of the cell membrane and is, thus, soluble in the aqueous environment. Meanwhile, the transmembrane and C-terminal domains are lipid-soluble to enable the protein to incorporate into the cell membrane. The C-terminal faces the intracellular side of the membrane and may share a putative PKC phosphorylation site with other KCNE proteins.
Like other KCNEs, KCNE2 forms a heteromeric complex with the Kv α subunits.
Function
Choroid plexus epithelium
KCNE2 protein is most readily detected in the choroid plexus epithelium, at the apical side. KCNE2 forms complexes there with the voltage-gated potassium channel α subunit, Kv1.3. In addition, KCNE2 forms reciprocally regulating tripartite complexes in the choroid plexus epithelium with the KCNQ1 α subunit and the sodium-dependent myo-inositol transporter, SMIT1. Kcne2-/- mice exhibit increased seizure susceptibility, reduced immobility time in the tail suspension test, and reduced cerebrospinal fluid myo-inositol content, compared to wild-type littermates. Mega-dosing of myo-inositol reverses all these phenotypes, suggesting a link between myo-inositol and the seizure susceptibility and behavioral alterations in Kcne2-/- mice.
Gastric epithelium
KCNE2 is also highly expressed in parietal cells of the gastric epithelium, also at the apical side. In these cells, KCNQ1-KCNE2 K+ channels, which are constitutively active, provide a conduit to return K+ ions back to the stomach lumen. The K+ ions enter the parietal cell through the gastric H+/K+-ATPase, which swaps them for protons as it acidifies the stomach. While KCNQ1 channels are inhibited by low extracellular pH, KCNQ1-KCNE2 channels activity is augmented by extracellular protons, an ideal characteristic for their role in parietal cells.
Thyroid epithelium
KCNE2 forms constitutively active K+ channels with KCNQ1 in the basolateral membrane of thyroid epithelial cells. Kcne2-/- mice exhibit hypothyroidism, particularly apparent during gestation or lactation. KCNQ1-KCNE2 is required for optimal iodide uptake into the thyroid by the basolateral sodium iodide symporter (NIS). Iodide is required for biosynthesis of thyroid hormones.
Heart
KCNE2 was originally discovered to regulate hERG channel function. KCNE2 decreases macroscopic and unitary current through hERG, and speeds hERG deactivation. hERG generates IKr, the most prominent repolarizing current in human ventricular cardiomyocytes. hERG, and IKr, are highly susceptible to block by a range of structurally diverse pharmacological agents. This property means that many drugs or potential drugs have the capacity to impair human ventricular repolarization, leading to drug-induced long QT syndrome. KCNE2 may also regulate hyperpolarization-activated, cyclic-nucleotide-gated (HCN) pacemaker channels in human heart and in the hearts of other species, as well as the Cav1.2 voltage-gated calcium channel.
In mice, mERG and KCNQ1, another Kv α subunit regulated by KCNE2, are neither influential nor highly expressed in adult ventricles. However, Kcne2-/- mice exhibit QT prolongation at baseline at 7 months of age, or earlier if provoked with a QT-prolonging agent such as sevoflurane. This is because KCNE2 is a promiscuous regulatory subunit that forms complexes with Kv1.5 and with Kv4.2 in adult mouse ventricular myocytes. KCNE2 increases currents though Kv4.2 channels and slows their inactivation. KCNE2 is required for Kv1.5 to localize to the intercalated discs of mouse ventricular myocytes. Kcne2 deletion in mice reduces the native currents generated in ventricular myocytes by Kv4.2 and Kv1.5, namely Ito and IKslow, respectively.
Clinical Significance
Gastric epithelium
Kcne2-/- mice exhibit achlorhydria, gastric hyperplasia, and mis-trafficking of KCNQ1 to the parietal cell basal membrane. The mis-trafficking occurs because KCNE3 is upregulated in the parietal cells of Kcne2-/- mice, and hijacks KCNQ1, taking it to the basolateral membrane. When both Kcne2 and Kcne3 are germline-deleted in mice, KCNQ1 traffics to the parietal cell apical membrane but the gastric phenotype is even worse than for Kcne2-/- mice, emphasizing that KCNQ1 requires KCNE2 co-assembly for functional attributes other than targeting in parietal cells. Kcne2-/- mice also develop gastritis cystica profunda and gastric neoplasia. Human KCNE2 downregulation is also observed in sites of gastritis cystica profunda and gastric adenocarcinoma.
Thyroid epithelium
Positron emission tomography data show that without KCNE2, 124I uptake by the thyroid is impaired. Kcne2 deletion does not impair organification of iodide once it has been taken up by NIS. Pups raised by Kcne2-/- dams are particularly severely affected because they receive less milk (hypothyroidism of the dams impairs milk ejection), the milk they receive is deficient in T4, and they themselves cannot adequately transport iodide into the thyroid. Kcne2-/- pups exhibit stunted growth, alopecia, cardiomegaly and reduced cardiac ejection fraction, all of which are alleviated by thyroid hormone supplementation of pups or dams. Surrogating Kcne2-/- pups with Kcne2+/+ dams also alleviates these phenotypes, highlighting the influence of maternal genotype in this case.
Heart
As observed for hERG mutations, KCNE2 loss-of-function mutations are associated with inherited long QT syndrome, and hERG-KCNE2 channels carrying the mutations show reduced activity compared to wild-type channels. In addition, some KCNE2 mutations and also more common polymorphisms are associated with drug-induced long QT syndrome. In several cases, specific KCNE2 sequence variants increase the susceptibility to hERG-KCNE2 channel inhibition by the drug that precipitated the QT prolongation in the patient from which the gene variant was isolated. Long QT syndrome predisposes to potentially lethal ventricular cardiac arrhythmias including torsades de pointe, which can degenerate into ventricular fibrillation and sudden cardiac death. Moreover, KCNE2 gene variation can disrupt HCN1-KCNE2 channel function and this may potentially contribute to cardiac arrhythmogenesis. KCNE2 is also associated with familial atrial fibrillation, which may involve excessive KCNQ1-KCNE2 current caused by KCNE2 gain-of-function mutations.
Recently, a battery of extracardiac effects were discovered in Kcne2-/- mice that may contribute to cardiac arrhythmogenesis in Kcne2-/- mice and could potentially contribute to human cardiac arrhythmias if similar effects are observed in human populations. Kcne2 deletion in mice causes anemia, glucose intolerance, dyslipidemia, hyperkalemia and elevated serum angiotensin II. Some or all of these might contribute to predisposition to sudden cardiac death in Kcne2-/- mice in the context of myocardial ischemia and post-ischemic arrhythmogenesis.
Clinical Marker
A multi-locus genetic risk score study based on a combination of 27 loci, including the KCNE2 gene, identified individuals at increased risk for both incident and recurrent coronary artery disease events, as well as an enhanced clinical benefit from statin therapy. The study was based on a community cohort study (the Malmo Diet and Cancer study) and four additional randomized controlled trials of primary prevention cohorts (JUPITER and ASCOT) and secondary prevention cohorts (CARE and PROVE IT-TIMI 22).
See also
Voltage-gated potassium channel
KCNE1
KCNE3
KCNQ1
Notes
References
Further reading
External links
GeneReviews/NIH/NCBI/UW entry on Romano-Ward Syndrome
Ion channels
Articles containing video clips | KCNE2 | Chemistry | 2,439 |
8,215,394 | https://en.wikipedia.org/wiki/Fern%20spike | In paleontology, a fern spike is the occurrence of unusually high spore abundance of ferns in the fossil record, usually immediately (in a geological sense) after an extinction event. The spikes are believed to represent a large, temporary increase in the number of ferns relative to other terrestrial plants after the extinction or thinning of the latter. Fern spikes are strongly associated with the Cretaceous–Paleogene extinction event, although they have been found in other points of time and space such as at the Triassic-Jurassic boundary. Outside the fossil record, fern spikes have been observed to occur in response to local extinction events, such as the 1980 Mount St. Helens eruption.
Causes
Extinction events have historically been caused by massive environmental disturbances, such as meteor strikes. Volcanic eruptions can also wipe out local ecosystems through pyroclastic flows and landslides, leaving the ground bare for new colonization. For a population to recover and thrive after such an event, it must be able to tolerate the conditions of the disturbed environment. Ferns have multiple characteristics which predispose them to grow in those environments.
Spore characteristics
Plants generally reproduce with spores or seeds, meaning those will be what germinates in a disaster's aftermath. But spores have advantages over seeds in the environmental conditions produced by a disaster. They are generally produced in higher numbers than seeds, and are smaller, aiding wind dispersal. While many wind-dispersed pollens of seed plants are smaller and farther dispersed than spores, pollen cannot germinate into a plant and must land in a receptive flower. Some seed plants also require animals to disperse their seeds, which may not be present after a disaster. These characteristics allow ferns to rapidly colonize an area with their spores.
Fern spores require light to germinate. Following major disturbances that clear or reduce plant life, the ground would receive ample sunlight that may promote spore germination. Some species' spores contain chlorophyll, which hastens germination and may aid rapid colonization of clear ground.
Environmental tolerance
After the eruption of El Chichón, the fern Pityrogramma calomelanos was observed to regenerate from rhizomes buried by ash, even though the plants' leaves were destroyed. The rhizomes tolerated exposure to heat and sulfur from the volcanic matter. Their survival suggests resilience of ferns to the harsh environmental conditions imposed by certain kinds of disasters, and rhizome regeneration may have been a factor in fern recovery after other environmental events.
Ecology
Fern spikes follow the pattern of ecological succession. In the past and in modern times, ferns have been observed to act as pioneer species. Eventually, their abundance at a site decreases as other plants such as gymnosperms begin to grow.
Spore availability
Fern spikes cannot occur without ferns already existing in the area, so spikes occur primarily in regions where ferns are already a prominent part of the ecosystem. At the Cretaceous-Paleogene extinction event, a fern spike occurred in the New Zealand area, where ferns made up 25% of plant abundance pre-extinction. After the event, fern abundance increased to 90%.
Detection
Prehistoric fern spikes can be detected by sampling sediment. Sources include sediment that has been accumulating in a lake since the event of interest and sedimentary rocks such as sandstone. Because sediment accumulates over time and thus shows superposition, layers can be assigned to certain times. Spore concentration in a layer can be compared to the concentration at different times, and to the concentration of other particles such as pollen grains. A fern spike is characterized by a suddenly higher abundance of fern spores following a disaster, generally accompanied by a decrease in other plant species as indicated by their pollen. Eventually fern abundance will decrease, hence the term "spike" describing the pattern.
Modern fern spikes can simply be directly observed, and allow for observation of factors contributing to the spike that may not be detectable otherwise, such as rhizomes persisting in ash.
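The detection procedure described above amounts to comparing the fern-spore fraction of each sediment layer with the pre-event background. The counts in the sketch below are invented purely for illustration; only the method (flagging a sudden jump in the spore fraction) reflects the text.

```python
# Illustrative (made-up) spore/pollen counts per sediment layer, oldest first,
# showing how a fern spike appears as a sudden jump in the fern-spore fraction.
layers = [  # (fern spores, pollen grains) counted per sample
    (120, 400), (110, 390), (130, 410),   # pre-event background
    (900, 60), (850, 90),                 # spike immediately after the event
    (300, 350), (150, 400),               # recovery of other plants
]

fractions = [s / (s + p) for s, p in layers]
background = sum(fractions[:3]) / 3       # mean pre-event fern fraction

for i, f in enumerate(fractions):
    flag = "  <-- fern spike" if f > 3 * background else ""
    print(f"layer {i}: fern fraction = {f:.2f}{flag}")
```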
Significance
Because fern spikes generally coincide with certain disasters such as meteorite strikes and volcanic eruptions, their presence in the fossil record can indicate those events. A fern spike is believed to support a meteorite impact as cause of the Triassic-Jurassic extinction event, similar to the one later causing extinction at the end of the Cretaceous period.
Known events
A fern spike followed a fungal spike after the Permian–Triassic extinction event (252 Ma). It has been observed in Australia.
After the Triassic-Jurassic extinction event (201.3 Ma), ferns drastically increased in abundance while seed plants became scarce. The spike has been detected in eastern North America and Europe.
A very widespread fern spike occurred after the Cretaceous–Paleogene extinction event (66 Ma). The spike has been predominantly observed in North America, with just one occurrence observed outside the continent, in Japan.
Fern spikes today are often observed after volcanic eruptions. The areas affected by the eruptions of Mount St. Helens (May 18, 1980) and El Chichón (March—April 1982) exhibited such a pattern.
See also
Ecological succession
Paleoecology
Pioneer species
References
Extinction
Extinction events
Fossil record
Paleontological concepts and hypotheses
Ferns
Paleobotany | Fern spike | Biology | 1,041 |
1,241,059 | https://en.wikipedia.org/wiki/Mark%20Kilgard | Mark J. Kilgard is a graphics software engineer working at Nvidia.
Prior to joining Nvidia, Mark Kilgard worked at Compaq and Silicon Graphics. While at Silicon Graphics, he authored the OpenGL Utility Toolkit, better known as GLUT, to make it easy to write OpenGL-based 3D examples and demos. The primary reason for this was the lack of a windowing and input API with OpenGL using GLX.
Mark Kilgard wrote and released many OpenGL technical sample programs during the pushback against Microsoft's competitive FUD against the API, and his GLUT toolkit (ported to Windows by Nate Robins) allowed these examples to run cross platform on Windows PC systems as well as SGI workstations.
At Nvidia, Mark Kilgard has helped design important parts of 3D graphics APIs. He has written key whitepapers, including "Cg in Two Pages". He is the lead author of the NV_path_rendering extension, a GPU-accelerated method for rendering vector graphics.
Kilgard graduated from Rice University. He has written two books: OpenGL for the X Window System (1996), and The Cg Tutorial (2003), co-authored with Randima Fernando.
References
Year of birth missing (living people)
Living people
Rice University alumni
Computer science writers
Silicon Graphics people
American computer programmers
Computer graphics professionals
Nvidia people | Mark Kilgard | Technology | 296 |
4,318,651 | https://en.wikipedia.org/wiki/Eddy%20%28fluid%20dynamics%29 | In fluid dynamics, an eddy is the swirling of a fluid and the reverse current created when the fluid is in a turbulent flow regime. The moving fluid creates a space devoid of downstream-flowing fluid on the downstream side of the object. Fluid behind the obstacle flows into the void creating a swirl of fluid on each edge of the obstacle, followed by a short reverse flow of fluid behind the obstacle flowing upstream, toward the back of the obstacle. This phenomenon is naturally observed behind large emergent rocks in swift-flowing rivers.
An eddy is a movement of fluid that deviates from the general flow of the fluid. An example for an eddy is a vortex which produces such deviation. However, there are other types of eddies that are not simple vortices. For example, a Rossby wave is an eddy which is an undulation that is a deviation from mean flow, but does not have the local closed streamlines of a vortex.
Swirl and eddies in engineering
The propensity of a fluid to swirl is used to promote good fuel/air mixing in internal combustion engines.
In fluid mechanics and transport phenomena, an eddy is not a property of the fluid, but a violent swirling motion caused by the position and direction of turbulent flow.
Reynolds number and turbulence
In 1883, scientist Osborne Reynolds conducted a fluid dynamics experiment involving water and dye, where he adjusted the velocities of the fluids and observed the transition from laminar to turbulent flow, characterized by the formation of eddies and vortices. Turbulent flow is defined as the flow in which the system's inertial forces are dominant over the viscous forces. This phenomenon is described by the Reynolds number, a dimensionless number used to determine when turbulent flow will occur. Conceptually, the Reynolds number is the ratio between inertial forces and viscous forces.
The general form of the Reynolds number for flow through a tube of radius r (or diameter D) is:
\mathrm{Re} = \frac{\rho v D}{\mu} = \frac{2 \rho v r}{\mu},
where v is the velocity of the fluid, ρ is its density, r is the radius of the tube, and μ is the dynamic viscosity of the fluid. A turbulent flow in a fluid is defined by the critical Reynolds number; for a closed pipe this works out to approximately
\mathrm{Re}_c \approx 2000.
In terms of the critical Reynolds number, the critical velocity is represented as
v_c = \frac{\mathrm{Re}_c\, \mu}{\rho D}.
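As a worked example of the pipe-flow Reynolds number defined above, the sketch below classifies a flow as laminar or turbulent against the approximate critical value of 2000; the fluid properties and pipe dimensions are illustrative values for water at room temperature.

```python
def reynolds_number(velocity, diameter, density, dynamic_viscosity):
    """Re = rho * v * D / mu for flow in a pipe of diameter D."""
    return density * velocity * diameter / dynamic_viscosity

rho = 998.0      # kg/m^3, water
mu = 1.0e-3      # Pa*s, water
D = 0.05         # m, pipe diameter
v = 0.1          # m/s, mean velocity

re = reynolds_number(v, D, rho, mu)
print(f"Re = {re:.0f} ->", "turbulent" if re > 2000 else "laminar")
```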
Research and development
Computational fluid dynamics
These are turbulence models in which the Reynolds stresses, as obtained from a Reynolds averaging of the Navier–Stokes equations, are modelled by a linear constitutive relationship with the mean flow straining field, as:
-\rho \overline{u_i' u_j'} = 2 \mu_t S_{ij} - \frac{2}{3} \rho k \delta_{ij}
where
\mu_t is the coefficient termed turbulence "viscosity" (also called the eddy viscosity)
k is the mean turbulent kinetic energy
S_{ij} is the mean strain rate
Note that the inclusion of \frac{2}{3} \rho k \delta_{ij} in the linear constitutive relation is required for tensorial algebra purposes when solving for two-equation turbulence models (or any other turbulence model that solves a transport equation for k).
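The eddy-viscosity closure above can be written out directly. The sketch below evaluates the modelled Reynolds-stress tensor for an illustrative strain-rate tensor, turbulent kinetic energy and eddy viscosity; all numbers are placeholders.

```python
import numpy as np

rho = 1.2            # kg/m^3, fluid density (illustrative)
mu_t = 1.8e-3        # Pa*s, turbulence (eddy) viscosity (illustrative)
k = 0.4              # m^2/s^2, mean turbulent kinetic energy (illustrative)

S = np.array([[0.0, 5.0, 0.0],      # mean strain-rate tensor S_ij, 1/s
              [5.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

# -rho <u_i' u_j'> = 2 mu_t S_ij - (2/3) rho k delta_ij
modelled_reynolds_stress = 2 * mu_t * S - (2.0 / 3.0) * rho * k * np.eye(3)
print(modelled_reynolds_stress)
```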
Hemodynamics
Hemodynamics is the study of blood flow in the circulatory system. Blood flow in straight sections of the arterial tree is typically laminar (high, directed wall stress), but branches and curvatures in the system cause turbulent flow. Turbulent flow in the arterial tree can cause a number of concerning effects, including atherosclerotic lesions, postsurgical neointimal hyperplasia, in-stent restenosis, vein bypass graft failure, transplant vasculopathy, and aortic valve calcification.
Industrial processes
Lift and drag properties of golf balls are customized by the manipulation of dimples along the surface of the ball, allowing for the golf ball to travel further and faster in the air. The data from turbulent-flow phenomena has been used to model different transitions in fluid flow regimes, which are used to thoroughly mix fluids and increase reaction rates within industrial processes.
Fluid currents and pollution control
Oceanic and atmospheric currents transfer particles, debris, and organisms all across the globe. While the transport of organisms, such as phytoplankton, are essential for the preservation of ecosystems, oil and other pollutants are also mixed in the current flow and can carry pollution far from its origin. Eddy formations circulate trash and other pollutants into concentrated areas which researchers are tracking to improve clean-up and pollution prevention. The distribution and motion of plastics caused by eddy formations in natural water bodies can be predicted using Lagrangian transport models. Mesoscale ocean eddies play crucial roles in transferring heat poleward, as well as maintaining heat gradients at different depths.
Environmental flows
Modeling eddy development, as it relates to turbulence and fate-and-transport phenomena, is vital to understanding environmental systems. By understanding the transport of both particulate and dissolved solids in environmental flows, scientists and engineers can efficiently formulate remediation strategies for pollution events. Eddy formations play a vital role in the fate and transport of solutes and particles in environmental flows such as rivers, lakes, oceans, and the atmosphere. Upwelling in stratified coastal estuaries drives the formation of dynamic eddies which distribute nutrients out from beneath the boundary layer to form plumes. Shallow waters, such as those along the coast, play a complex role in the transport of nutrients and pollutants due to the proximity of the upper boundary driven by the wind and the lower boundary near the bottom of the water body.
Mesoscale ocean eddies
Eddies are common in the ocean, and range in diameter from centimeters to hundreds of kilometers. The smallest scale eddies may last for a matter of seconds, while the larger features may persist for months to years.
Eddies that are between about 10 and 500 km (6 and 300 miles) in diameter and persist for periods of days to months are known in oceanography as mesoscale eddies.
Mesoscale eddies can be split into two categories: static eddies, caused by flow around an obstacle (see animation), and transient eddies, caused by baroclinic instability.
When the ocean contains a sea surface height gradient this creates a jet or current, such as the Antarctic Circumpolar Current. This current as part of a baroclinically unstable system meanders and creates eddies (in much the same way as a meandering river forms an oxbow lake). These types of mesoscale eddies have been observed in many major ocean currents, including the Gulf Stream, the Agulhas Current, the Kuroshio Current, and the Antarctic Circumpolar Current, amongst others.
Mesoscale ocean eddies are characterized by currents that flow in a roughly circular motion around the center of the eddy. The sense of rotation of these currents may either be cyclonic or anticyclonic (such as Haida Eddies). Oceanic eddies are also usually made of water masses that are different from those outside the eddy. That is, the water within an eddy usually has different temperature and salinity characteristics to the water outside the eddy. There is a direct link between the water mass properties of an eddy and its rotation. Warm eddies rotate anti-cyclonically, while cold eddies rotate cyclonically.
Because eddies may have a vigorous circulation associated with them, they are of concern to naval and commercial operations at sea. Further, because eddies transport anomalously warm or cold water as they move, they have an important influence on heat transport in certain parts of the ocean.
Influences on apex predators
The sub-tropical Northern Atlantic is known to have both cyclonic and anticyclonic eddies, associated with high surface chlorophyll and low surface chlorophyll, respectively. The presence of elevated chlorophyll allows this region to support a higher biomass of phytoplankton, aided by areas of increased vertical nutrient fluxes and the transport of biological communities. This area of the Atlantic is also thought to be an ocean desert, which creates an interesting paradox because it nevertheless hosts a variety of large pelagic fish populations and apex predators.
These mesoscale eddies have been shown to be useful for developing ecosystem-based management and food web models that better describe how both apex predators and their prey use the eddies. Gaube et al. (2018) used “Smart” Position or Temperature Transmitting tags (SPOT) and Pop-Up Satellite Archival Transmitting tags (PSAT) to track the movement and diving behavior of two female white sharks (Carcharodon carcharias) within the eddies. The eddies were defined using sea surface height (SSH), with contours based on a horizontal speed-based radius scale. This study found that the white sharks dove in both cyclonic and anticyclonic eddies but favored the anticyclones, which had three times as many dives as the cyclonic eddies. Additionally, in the Gulf Stream eddies, the anticyclonic eddies were 57% more common and had more, and deeper, dives than the open-ocean eddies and Gulf Stream cyclonic eddies.
Within these anticyclonic eddies, the isotherm was displaced 50 meters downward, allowing warmer water to penetrate deeper into the water column. This displacement of warmer water may allow the white sharks to make longer dives without the added energetic cost of thermoregulation in the cooler cyclones. Even though these anticyclonic eddies had lower levels of chlorophyll than the cyclonic eddies, the warmer water at depth may allow for a deeper mixed layer and a higher concentration of diatoms, which in turn result in higher rates of primary productivity. Furthermore, prey populations could be more concentrated within these eddies, attracting these larger female sharks to forage in the mesopelagic zone. This diving pattern may follow a diel vertical migration, but without more evidence on the biomass of their prey within this zone, such conclusions cannot be drawn from this circumstantial evidence alone.
The biomass in the mesopelagic zone is still understudied, so the biomass of fish within this layer may be underestimated. A more accurate measurement of this biomass could benefit the commercial fishing industry by identifying additional fishing grounds within this region. Moreover, further understanding of this region of the open ocean, and of how the removal of fish there may affect the pelagic food web, is crucial for the fish populations and apex predators that may rely on this food source, and for making better ecosystem-based management plans.
See also
Vortex
Eddy pumping - component of vertical motion in eddies relevant for biology and biogeochemistry
Eddy diffusion
Haida Eddies
Irminger Rings
Reynolds number - a dimensionless constant used to predict the onset of turbulent flow
Reynolds experiment
Kármán vortex street
Whirlpool
Whirlwind
River eddies in whitewater
Wake turbulence
Computational fluid dynamics
Laminar flow
Hemodynamics
Modons, or dipole eddy pairs.
References
Fluid dynamics
Vortices | Eddy (fluid dynamics) | Chemistry,Mathematics,Engineering | 2,251 |
36,216,833 | https://en.wikipedia.org/wiki/MRI%20RF%20shielding | RF shielding for MRI rooms is necessary to prevent external radio-frequency noise from entering the MRI scanner and distorting the image. The three main types of shielding used for MRIs are copper, steel, and aluminum. Copper is generally considered the best shielding for MRI rooms.
RF shielding should not be confused with magnetic shielding, which is used to prevent the magnetic field of the MRI magnet from interfering with pacemakers and other equipment outside of the MRI room.
After the MRI room has been completely shielded, all utility services such as electrical for lights, air conditioning, fire sprinklers and other penetrations into the room must be routed through specialized filters provided by the RF shielding vendor.
References
(Also see the publisher's site)
Electromagnetism | MRI RF shielding | Physics,Chemistry,Materials_science | 153 |
49,669,723 | https://en.wikipedia.org/wiki/Lansweeper | Lansweeper is an IT discovery & inventory platform which delivers insights into the status of users, devices, and software within IT environments. This platform inventories connected IT devices, enabling organizations to centrally manage their IT infrastructure. Lansweeper's automated processes identify and compile a list of connected devices, including computers, routers, servers, and printers. It furnishes device-specific information covering installed software, applied updates & patches, and user details.
History
Lansweeper was founded in Belgium in 2004.
In October 2020, Lansweeper announced the acquisition of Fing, a network scanning and device recognition platform.
In June 2021, Lansweeper received a €130 million investment from Insight Partners to accelerate further growth.
Description
Lansweeper's main purpose is a discovery phase in which it sweeps a local area network (LAN) and maintains an inventory of the hardware assets and the software deployed on those assets. Reports generated from the inventory give a complete hardware and software picture of the devices and can be used to identify problems. Lansweeper can collect information on all Windows, Linux and Mac devices as well as IP-addressable network appliances.
The software incorporates an integrated ticket-based Help Desk module used to assist issues to be captured and tracked through to completion. There is also a software module that allows Lansweeper to orchestrate software updates on Windows computers.
The Lansweeper central inventory database must be located on either an SQL Local DB or SQL Server database on a Microsoft Windows machine. Lansweeper claims that while a minimum default configuration can be supported by placing all its components on a single server, the application has the capability to scale up to hundreds of thousands of devices. While Lansweeper can be set up agentless, it may be recommended to use agents for more complex configurations.
Lansweeper has a freeware version of the product, but it is limited in the number of devices available and functionality provided unless appropriate commercial licenses are purchased.
Notes
References
External links
Network management
Utility software
IT infrastructure | Lansweeper | Technology,Engineering | 418 |
489,253 | https://en.wikipedia.org/wiki/Laser%20broom | A laser broom is a proposed ground-based laser beam-powered propulsion system that sweeps space debris out of the path of artificial satellites (such as the International Space Station) to prevent collateral damage to space equipment. It heats up one side of the debris to shift its orbit trajectory, altering the path to hit the atmosphere sooner. Space researchers have proposed that a laser broom may help mitigate Kessler syndrome, a runaway cascade of collision events between orbiting objects. Additionally, laser broom systems mounted on satellites or space station have also been proposed.
Mechanism
Laser brooms are proposed to target space debris between in diameter. Collisions with such high-velocity debris not only cause considerable damage to satellites but also generate secondary fragmented debris from the collided satellite parts. A laser broom is intended to be used at a high power to penetrate through the atmosphere and ablate material from the targeted debris. The ablated material imparts a small thrust that lowers the debris's orbital perigee towards the upper atmosphere, thereby increasing drag so that its remaining orbital life is cut short. The laser would operate in a pulsed fashion to prevent the target from self-shielding behind its ablated plasma. The power levels of lasers in this concept are well below the power levels in concepts for more rapidly effective anti-satellite weapons.
Research in this field has identified the precise physical constraints required, noting the significant influence of the debris's orientation on the resultant trajectory of the ablated object. Using a laser guide star and adaptive optics, a sufficiently large ground-based laser (a 1 megajoule pulsed HF laser) could shift the orbits of dozens of debris objects daily at a reasonable cost.
History
The Space Shuttle routinely showed evidence of "tiny" impacts upon post-flight inspection.
Orion was a proposed ground-based laser broom project in the 1990s, estimated to cost $500 million.
A space-based laser also called "Project Orion" was planned to be installed on the International Space Station in 2003. In 2015, Japanese researchers proposed adding laser broom capabilities to the Extreme Universe Space Observatory telescope, to be launched to the ISS in 2017.
In 2014, the European CLEANSPACE project published a report studying a global architecture of debris tracking and removal laser stations.
References
Further reading
2000 Earth Orbital Debris - NASA Research on Satellite and Spacecraft Effects by World Spaceflight News, CD-ROM: 862 pages
External links
BBC News report on Laser broom
Space Station gets high tech broom ABC
NASA Hopes Laser Broom Will Help Clean Up Space Debris Agence France-Presse story via SpaceDaily
Orbiting Junk Continues to Threaten International Space Station Space.com
Shuttle to test space junk broom New Scientist
SpaceViews July 1997: Articles ORION: A Solution to the Orbital Debris Problem by Claude Phipps
Wired October 2011: Space Junk Crisis: Time to Bring in the Lasers story on Wired
Removing Orbital Debris with Pulsed Lasers
Broom, laser
Spacecraft propulsion
Space debris | Laser broom | Technology | 572 |
13,979,602 | https://en.wikipedia.org/wiki/John%20Doyle%20%28engineer%29 | John Comstock Doyle is the Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and BioEngineering at the California Institute of Technology. He is known for his work in control theory and his current research interests are in theoretical foundations for complex networks in engineering, biology, and multiscale physics.
Education
He earned Bachelor of Science and Master of Science degrees in electrical engineering from the Massachusetts Institute of Technology in 1977 and a Ph.D. in mathematics from the University of California, Berkeley, in 1984 with his thesis titled Matrix interpolation theory and optimal control.
Career
Doyle's early work was in the mathematics of robust control, linear-quadratic-Gaussian control robustness, (structured) singular value analysis, and H-infinity methods. He has co-authored books and software toolboxes, and a control analysis tool for high performance commercial and military aerospace systems, as well as other industrial systems.
Awards
Doyle earned the IEEE W.R.G. Baker Prize Paper Award (1991), the IEEE Automatic Control Transactions Axelby Award twice, and the AACC Schuck award. He also has been awarded the AACC Donald P. Eckman Award, the 2004 IEEE Control Systems Award and the Centennial Outstanding Young Engineer Award.
References
External links
John C. Doyle's homepage
Discover Magazine article
California Institute of Technology faculty
Control theorists
UC Berkeley College of Letters and Science alumni
MIT School of Engineering alumni
Living people
Year of birth missing (living people) | John Doyle (engineer) | Engineering | 306 |
40,374,554 | https://en.wikipedia.org/wiki/Point-set%20registration | In computer vision, pattern recognition, and robotics, point-set registration, also known as point-cloud registration or scan matching, is the process of finding a spatial transformation (e.g., scaling, rotation and translation) that aligns two point clouds. The purpose of finding such a transformation includes merging multiple data sets into a globally consistent model (or coordinate frame), and mapping a new measurement to a known data set to identify features or to estimate its pose. Raw 3D point cloud data are typically obtained from Lidars and RGB-D cameras. 3D point clouds can also be generated from computer vision algorithms such as triangulation, bundle adjustment, and more recently, monocular image depth estimation using deep learning. For 2D point set registration used in image processing and feature-based image registration, a point set may be 2D pixel coordinates obtained by feature extraction from an image, for example corner detection. Point cloud registration has extensive applications in autonomous driving, motion estimation and 3D reconstruction, object detection and pose estimation, robotic manipulation, simultaneous localization and mapping (SLAM), panorama stitching, virtual and augmented reality, and medical imaging.
As a special case, registration of two point sets that only differ by a 3D rotation (i.e., there is no scaling and translation), is called the Wahba Problem and also related to the orthogonal procrustes problem.
Formulation
The problem may be summarized as follows:
Let \mathcal{M} and \mathcal{S} be two finite size point sets in a finite-dimensional real vector space \mathbb{R}^d, which contain M and N points respectively (e.g., d = 3 recovers the typical case when \mathcal{M} and \mathcal{S} are 3D point sets). The problem is to find a transformation to be applied to the moving "model" point set \mathcal{M} such that the difference (typically defined in the sense of point-wise Euclidean distance) between \mathcal{M} and the static "scene" set \mathcal{S} is minimized. In other words, a mapping from \mathbb{R}^d to \mathbb{R}^d is desired which yields the best alignment between the transformed "model" set and the "scene" set. The mapping may consist of a rigid or non-rigid transformation. The transformation model may be written as T, using which the transformed, registered model point set is:
T(\mathcal{M}) = \{ T(m) : m \in \mathcal{M} \}.
The output of a point set registration algorithm is therefore the optimal transformation T^* such that T^*(\mathcal{M}) is best aligned to \mathcal{S}, according to some defined notion of distance function \operatorname{dist}(\cdot, \cdot):
T^* = \underset{T \in \mathcal{T}}{\arg\min}\; \operatorname{dist}\big(T(\mathcal{M}), \mathcal{S}\big),
where \mathcal{T} is used to denote the set of all possible transformations that the optimization tries to search for. The most popular choice of the distance function is to take the square of the Euclidean distance for every pair of points:
\operatorname{dist}\big(T(\mathcal{M}), \mathcal{S}\big) = \sum_{m \in \mathcal{M}} \big\| T(m) - s_m \big\|_2^2,
where \|\cdot\|_2 denotes the vector 2-norm, and s_m is the corresponding point in set \mathcal{S} that attains the shortest distance to a given point m in set \mathcal{M} after transformation. Minimizing such a function in rigid registration is equivalent to solving a least squares problem.
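As an illustration of this objective, the sketch below applies a candidate rigid transform to a model point set and sums squared distances to each transformed point's nearest neighbour in the scene set. The function and variable names are illustrative, not from any particular registration library.

```python
import numpy as np
from scipy.spatial import cKDTree

def registration_cost(model, scene, R, t):
    """dist(T(M), S) = sum_m || R m + t - s_m ||^2, s_m = closest scene point."""
    transformed = model @ R.T + t          # apply T to every model point
    tree = cKDTree(scene)
    dists, _ = tree.query(transformed)     # distance to closest scene point
    return np.sum(dists**2)

model = np.random.rand(100, 3)
scene = model + np.array([0.1, 0.0, 0.0])  # scene is a shifted copy of the model
print(registration_cost(model, scene, np.eye(3), np.array([0.1, 0.0, 0.0])))  # ~0
```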
Types of algorithms
When the correspondences (i.e., ) are given before the optimization, for example, using feature matching techniques, then the optimization only needs to estimate the transformation. This type of registration is called correspondence-based registration. On the other hand, if the correspondences are unknown, then the optimization is required to jointly find out the correspondences and transformation together. This type of registration is called simultaneous pose and correspondence registration.
Rigid registration
Given two point sets, rigid registration yields a rigid transformation which maps one point set to the other. A rigid transformation is defined as a transformation that does not change the distance between any two points. Typically such a transformation consists of translation and rotation. In rare cases, the point set may also be mirrored. In robotics and computer vision, rigid registration has the most applications.
Non-rigid registration
Given two point sets, non-rigid registration yields a non-rigid transformation which maps one point set to the other. Non-rigid transformations include affine transformations such as scaling and shear mapping. However, in the context of point set registration, non-rigid registration typically involves nonlinear transformation. If the eigenmodes of variation of the point set are known, the nonlinear transformation may be parametrized by the eigenvalues. A nonlinear transformation may also be parametrized as a thin plate spline.
Other types
Some approaches to point set registration use algorithms that solve the more general graph matching problem. However, the computational complexity of such methods tend to be high and they are limited to rigid registrations.
In this article, we will only consider algorithms for rigid registration, where the transformation is assumed to contain 3D rotations and translations (possibly also including a uniform scaling).
The PCL (Point Cloud Library) is an open-source framework for n-dimensional point cloud and 3D geometry processing. It includes several point registration algorithms.
Correspondence-based registration
Correspondence-based methods assume that putative correspondences (a_i, b_i), i = 1, ..., N, are given for every point. Therefore, we arrive at a setting where both point sets \mathcal{A} = \{a_i\} and \mathcal{B} = \{b_i\} have N points and the correspondences (a_i, b_i) are given.
Outlier-free registration
In the simplest case, one can assume that all the correspondences are correct, meaning that the points are generated as follows:
b_i = s R a_i + t + \epsilon_i,
where s > 0 is a uniform scaling factor (in many cases s = 1 is assumed), R \in SO(3) is a proper 3D rotation matrix (SO(3) is the special orthogonal group of degree 3), t \in \mathbb{R}^3 is a 3D translation vector and \epsilon_i models the unknown additive noise (e.g., Gaussian noise). Specifically, if the noise is assumed to follow a zero-mean isotropic Gaussian distribution with standard deviation \sigma_i, i.e., \epsilon_i \sim \mathcal{N}(0, \sigma_i^2 I_3), then the following optimization can be shown to yield the maximum likelihood estimate for the unknown scale, rotation and translation:
\min_{s > 0,\; R \in SO(3),\; t \in \mathbb{R}^3} \sum_{i=1}^{N} \frac{1}{\sigma_i^2} \left\| b_i - s R a_i - t \right\|_2^2.
Note that when the scaling factor is 1 and the translation vector is zero, then the optimization recovers the formulation of the Wahba problem. Despite the non-convexity of the optimization () due to non-convexity of the set SO(3), seminal work by Berthold K.P. Horn showed that () actually admits a closed-form solution, by decoupling the estimation of scale, rotation and translation. Similar results were discovered by Arun et al. In addition, in order to find a unique transformation, at least 3 non-collinear points in each point set are required.
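A compact sketch of the SVD-based closed-form solution (in the spirit of Arun et al., for the case s = 1) is given below; the determinant check keeps the estimate a proper rotation rather than a reflection. Function names are illustrative.

```python
import numpy as np

def rigid_closed_form(A, B):
    """A, B: (N, 3) arrays of corresponding points with B ~ R A + t; returns (R, t)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)              # centroids
    H = (A - ca).T @ (B - cb)                            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # reflection fix
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

A = np.random.rand(50, 3)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
B = A @ R_true.T + np.array([0.5, -0.2, 1.0])
R_est, t_est = rigid_closed_form(A, B)
print(np.allclose(R_est, R_true), t_est)
```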
More recently, Briales and Gonzalez-Jimenez have developed a semidefinite relaxation using Lagrangian duality, for the case where the model set contains different 3D primitives such as points, lines and planes (which is the case when the model is a 3D mesh). Interestingly, the semidefinite relaxation is empirically tight, i.e., a certifiably globally optimal solution can be extracted from the solution of the semidefinite relaxation.
Robust registration
The least squares formulation () is known to perform arbitrarily badly in the presence of outliers. An outlier correspondence is a pair of measurements that departs from the generative model (). In this case, one can consider a different generative model as follows:
b_i = s R a_i + t + o_i + \epsilon_i,
where if the i-th pair (a_i, b_i) is an inlier, then o_i = 0 and it obeys the outlier-free model (), i.e., b_i is obtained from a_i by a spatial transformation plus some small noise; however, if the i-th pair is an outlier, then o_i (and hence b_i) can be any arbitrary vector. Since one does not know which correspondences are outliers beforehand, robust registration under the generative model () is of paramount importance for computer vision and robotics deployed in the real world, because current feature matching techniques tend to output highly corrupted correspondences where over of the correspondences can be outliers.
Next, we describe several common paradigms for robust registration.
Maximum consensus
Maximum consensus seeks to find the largest set of correspondences that are consistent with the generative model () for some choice of spatial transformation (s, R, t). Formally speaking, maximum consensus solves the following optimization:
\max_{\mathcal{I} \subseteq \{1, \dots, N\},\; s, R, t} |\mathcal{I}| \quad \text{subject to} \quad \left\| b_i - s R a_i - t \right\|_2 \le \xi \;\; \forall\, i \in \mathcal{I},
where |\mathcal{I}| denotes the cardinality of the set \mathcal{I}. The constraint in () enforces that every pair of measurements in the inlier set \mathcal{I} must have residuals smaller than a pre-defined threshold \xi. Unfortunately, recent analyses have shown that globally solving problem () is NP-Hard, and global algorithms typically have to resort to branch-and-bound (BnB) techniques that take exponential-time complexity in the worst case.
Although solving consensus maximization exactly is hard, there exist efficient heuristics that perform quite well in practice. One of the most popular heuristics is the Random Sample Consensus (RANSAC) scheme. RANSAC is an iterative hypothesize-and-verify method. At each iteration, the method first randomly samples 3 out of the total number of correspondences and computes a hypothesis using Horn's method, then the method evaluates the constraints in () to count how many correspondences actually agree with such a hypothesis (i.e., it computes the residual and compares it with the threshold for each pair of measurements). The algorithm terminates either after it has found a consensus set that has enough correspondences, or after it has reached the total number of allowed iterations. RANSAC is highly efficient because the main computation of each iteration is carrying out the closed-form solution in Horn's method. However, RANSAC is non-deterministic and only works well in the low-outlier-ratio regime (e.g., below ), because its runtime grows exponentially with respect to the outlier ratio.
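The hypothesize-and-verify loop described above can be sketched in a few lines. The solver passed in as solve_rigid is assumed to be a closed-form three-point fit such as the one sketched earlier; the threshold and iteration count are illustrative.

```python
import numpy as np

def ransac_registration(A, B, solve_rigid, threshold=0.05, iters=500, seed=None):
    """RANSAC-style robust rigid registration with given correspondences A[i] <-> B[i]."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(A), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(A), size=3, replace=False)   # minimal 3-point sample
        R, t = solve_rigid(A[idx], B[idx])                # hypothesis
        residuals = np.linalg.norm(B - (A @ R.T + t), axis=1)
        inliers = residuals < threshold                   # verify consensus
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the largest consensus set found
    return solve_rigid(A[best_inliers], B[best_inliers]), best_inliers
```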
To fill the gap between the fast but inexact RANSAC scheme and the exact but exhaustive BnB optimization, recent research has developed deterministic approximate methods to solve consensus maximization.
Outlier removal
Outlier removal methods seek to pre-process the set of highly corrupted correspondences before estimating the spatial transformation. The motivation of outlier removal is to significantly reduce the number of outlier correspondences, while maintaining inlier correspondences, so that optimization over the transformation becomes easier and more efficient (e.g., RANSAC works poorly when the outlier ratio is above but performs quite well when outlier ratio is below ).
Parra et al. have proposed a method called Guaranteed Outlier Removal (GORE) that uses geometric constraints to prune outlier correspondences while guaranteeing to preserve inlier correspondences. GORE has been shown to be able to drastically reduce the outlier ratio, which can significantly boost the performance of consensus maximization using RANSAC or BnB. Yang and Carlone have proposed to build pairwise translation-and-rotation-invariant measurements (TRIMs) from the original set of measurements and embed TRIMs as the edges of a graph whose nodes are the 3D points. Since inliers are pairwise consistent in terms of the scale, they must form a clique within the graph. Therefore, using efficient algorithms for computing the maximum clique of a graph can find the inliers and effectively prune the outliers. The maximum-clique-based outlier removal method has also been shown to be quite useful in real-world point set registration problems. Similar outlier removal ideas were also proposed by Parra et al.
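A hedged sketch of the clique-based pruning idea is shown below. It assumes the scale is known, so that two correspondences are compatible when the distance between the two source points matches the distance between the two destination points up to a noise bound; the TRIM formulation instead checks consistency of pairwise distance ratios. The sketch enumerates maximal cliques with networkx, which can be expensive; practical systems use specialized maximum-clique solvers.

import numpy as np
import networkx as nx

def prune_outliers_max_clique(src, dst, noise_bound):
    # src, dst: (N, 3) arrays of putative correspondences
    n = src.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            d_src = np.linalg.norm(src[i] - src[j])
            d_dst = np.linalg.norm(dst[i] - dst[j])
            if abs(d_src - d_dst) <= 2 * noise_bound:  # pairwise invariant is consistent
                G.add_edge(i, j)
    # inliers are mutually consistent, so they form a clique; keep the largest one found
    best = max(nx.find_cliques(G), key=len)
    return np.array(sorted(best))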
M-estimation
M-estimation replaces the least squares objective function in () with a robust cost function that is less sensitive to outliers. Formally, M-estimation seeks to solve the following problem:where represents the choice of the robust cost function. Note that choosing recovers the least squares estimation in (). Popular robust cost functions include -norm loss, Huber loss, Geman-McClure loss and truncated least squares loss. M-estimation has been one of the most popular paradigms for robust estimation in robotics and computer vision. Because robust objective functions are typically non-convex (e.g., the truncated least squares loss vs. the least squares loss), algorithms for solving the non-convex M-estimation are typically based on local optimization, where first an initial guess is provided, followed by iterative refinements of the transformation to keep decreasing the objective function. Local optimization tends to work well when the initial guess is close to the global minimum, but it is also prone to get stuck in local minima if provided with poor initialization.
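A common way to optimize such robust costs locally is iteratively reweighted least squares (IRLS), sketched below with a Huber-type weight; the threshold k and the choice of weight function are illustrative and not tied to any particular paper. Each iteration solves a weighted closed-form rigid fit and then down-weights correspondences with large residuals.

import numpy as np

def irls_rigid(src, dst, k=0.1, iters=20):
    # src, dst: (N, 3) arrays of correspondences; k: illustrative Huber threshold
    n = src.shape[0]
    w = np.ones(n)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # weighted closed-form rigid fit (weighted Kabsch/Arun step)
        cs = np.average(src, axis=0, weights=w)
        cd = np.average(dst, axis=0, weights=w)
        U, _, Vt = np.linalg.svd((dst - cd).T @ (w[:, None] * (src - cs)))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = U @ D @ Vt
        t = cd - R @ cs
        # Huber weights: full weight for small residuals, down-weighted for large ones
        r = np.linalg.norm(src @ R.T + t - dst, axis=1)
        w = np.where(r <= k, 1.0, k / np.maximum(r, 1e-12))
    return R, t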
Graduated non-convexity
Graduated non-convexity (GNC) is a general-purpose framework for solving non-convex optimization problems without initialization. It has achieved success in early vision and machine learning applications. The key idea behind GNC is to solve the hard non-convex problem by starting from an easy convex problem. Specifically, for a given robust cost function , one can construct a surrogate function with a hyper-parameter , tuning which can gradually increase the non-convexity of the surrogate function until it converges to the target function . Therefore, at each level of the hyper-parameter , the following optimization is solved:Black and Rangarajan proved that the objective function of each optimization () can be dualized into a sum of weighted least squares and a so-called outlier process function on the weights that determine the confidence of the optimization in each pair of measurements. Using Black-Rangarajan duality and GNC tailored for the Geman-McClure function, Zhou et al. developed the fast global registration algorithm that is robust against about outliers in the correspondences. More recently, Yang et al. showed that the joint use of GNC (tailored to the Geman-McClure function and the truncated least squares function) and Black-Rangarajan duality can lead to a general-purpose solver for robust registration problems, including point clouds and mesh registration.
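The sketch below illustrates the GNC idea with the Geman-McClure surrogate, in the spirit of Zhou et al. and the Black-Rangarajan weight update: a weighted closed-form fit alternates with a closed-form weight update while the continuation parameter mu is changed to sharpen the surrogate toward the target cost. The schedule values (mu_init, mu_floor, decay, inner_iters) are illustrative only.

import numpy as np

def gnc_geman_mcclure(src, dst, mu_init=64.0, mu_floor=0.25, decay=0.5, inner_iters=4):
    # src, dst: (N, 3) arrays of correspondences; returns transform and per-pair weights
    n = src.shape[0]
    w = np.ones(n)
    R, t = np.eye(3), np.zeros(3)
    mu = mu_init
    while mu >= mu_floor:
        for _ in range(inner_iters):
            # weighted closed-form rigid fit, as in IRLS
            cs = np.average(src, axis=0, weights=w)
            cd = np.average(dst, axis=0, weights=w)
            U, _, Vt = np.linalg.svd((dst - cd).T @ (w[:, None] * (src - cs)))
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
            R = U @ D @ Vt
            t = cd - R @ cs
            # closed-form weight update from the Black-Rangarajan outlier process
            r2 = np.sum((src @ R.T + t - dst) ** 2, axis=1)
            w = (mu / (mu + r2)) ** 2
        mu *= decay  # gradually increase the non-convexity of the surrogate
    return R, t, w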
Certifiably robust registration
Almost none of the robust registration algorithms mentioned above (except the BnB algorithm that runs in exponential-time in the worst case) comes with performance guarantees, which means that these algorithms can return completely incorrect estimates without notice. Therefore, these algorithms are undesirable for safety-critical applications like autonomous driving.
Very recently, Yang et al. have developed the first certifiably robust registration algorithm, named Truncated least squares Estimation And SEmidefinite Relaxation (TEASER). For point cloud registration, TEASER not only outputs an estimate of the transformation, but also quantifies the optimality of the given estimate. TEASER adopts the following truncated least squares (TLS) estimator:which is obtained by choosing the TLS robust cost function , where is a pre-defined constant that determines the maximum allowed residuals to be considered inliers. The TLS objective function has the property that for inlier correspondences (), the usual least squares penalty is applied; while for outlier correspondences (), no penalty is applied and the outliers are discarded. If the TLS optimization () is solved to global optimality, then it is equivalent to running Horn's method on only the inlier correspondences.
However, solving () is quite challenging due to its combinatorial nature. TEASER solves () as follows: (i) It builds invariant measurements such that the estimation of scale, rotation and translation can be decoupled and solved separately, a strategy that is inspired by the original Horn's method; (ii) The same TLS estimation is applied for each of the three sub-problems, where the scale TLS problem can be solved exactly using an algorithm called adaptive voting, the rotation TLS problem can be relaxed to a semidefinite program (SDP) where the relaxation is exact in practice, even with a large amount of outliers, and the translation TLS problem can be solved using component-wise adaptive voting. A fast implementation leveraging GNC is open-sourced here. In practice, TEASER can tolerate more than outlier correspondences and runs in milliseconds.
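To give a flavour of the decoupled sub-problems, the sketch below estimates the scale from pairwise TRIMs by scalar consensus voting. It is a simplified illustration of the voting idea, not the exact published adaptive-voting algorithm: s holds the pairwise scale measurements and alpha their noise bounds, both of which the caller must supply.

import numpy as np

def voting_scale(s, alpha):
    # s: (K,) pairwise scale measurements (TRIMs); alpha: (K,) noise bounds
    # every interval [s_k - alpha_k, s_k + alpha_k] votes for the scales it contains;
    # the optimum lies in a sub-interval bounded by these endpoints, so enumerate them
    bounds = np.sort(np.concatenate([s - alpha, s + alpha]))
    mids = 0.5 * (bounds[:-1] + bounds[1:])
    best_scale, best_count = float(s.mean()), -1
    for m in mids:
        consistent = np.abs(s - m) <= alpha  # measurements whose interval covers m
        count = int(consistent.sum())
        if count > best_count:
            best_count = count
            # refit on the consensus set (truncated least squares reduces to a
            # weighted average over the consistent measurements)
            wgt = 1.0 / np.maximum(alpha[consistent] ** 2, 1e-12)
            best_scale = float(np.average(s[consistent], weights=wgt))
    return best_scale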
In addition to developing TEASER, Yang et al. also prove that, under some mild conditions on the point cloud data, TEASER's estimated transformation has bounded errors from the ground-truth transformation.
Simultaneous pose and correspondence registration
Iterative closest point
The iterative closest point (ICP) algorithm was introduced by Besl and McKay.
The algorithm performs rigid registration in an iterative fashion by alternating in (i) given the transformation, finding the closest point in for every point in ; and (ii) given the correspondences, finding the best rigid transformation by solving the least squares problem (). As such, it works best if the initial pose of is sufficiently close to . In pseudocode, the basic algorithm is implemented as follows:
algorithm ICP(M, S)
    θ := θ0
    while not registered:
        X := ∅
        for each point m in T(M, θ):
            ŝ := the closest point in S to m
            X := X ∪ {(m, ŝ)}
        θ := least_squares(X)
    return θ
Here, the function least_squares performs least-squares optimization to minimize the total squared distance over the matched pairs, using the closed-form solutions by Horn and Arun.
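A sketch of such a closed-form solution is given below, following the quaternion formulation usually attributed to Horn (the SVD-based solution of Arun et al. is an equivalent alternative). It is written as a generic reference implementation rather than the exact code of any library.

import numpy as np

def horn_rigid_fit(src, dst):
    # src, dst: (N, 3) paired points; returns R, t so that dst is approximately src @ R.T + t
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    P, Q = src - cs, dst - cd
    S = P.T @ Q  # cross-covariance: S[a, b] = sum_i P[i, a] * Q[i, b]
    # Horn's symmetric 4x4 matrix; its top eigenvector is the optimal unit quaternion
    N = np.array([
        [S[0, 0] + S[1, 1] + S[2, 2], S[1, 2] - S[2, 1], S[2, 0] - S[0, 2], S[0, 1] - S[1, 0]],
        [S[1, 2] - S[2, 1], S[0, 0] - S[1, 1] - S[2, 2], S[0, 1] + S[1, 0], S[2, 0] + S[0, 2]],
        [S[2, 0] - S[0, 2], S[0, 1] + S[1, 0], -S[0, 0] + S[1, 1] - S[2, 2], S[1, 2] + S[2, 1]],
        [S[0, 1] - S[1, 0], S[2, 0] + S[0, 2], S[1, 2] + S[2, 1], -S[0, 0] - S[1, 1] + S[2, 2]],
    ])
    eigvals, eigvecs = np.linalg.eigh(N)
    w, x, y, z = eigvecs[:, np.argmax(eigvals)]  # quaternion of the optimal rotation
    R = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])
    return R, cd - R @ cs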
Because the cost function of registration depends on finding the closest point in to every point in , it can change as the algorithm is running. As such, it is difficult to prove that ICP will in fact converge exactly to the local optimum. In fact, empirically, ICP and EM-ICP do not converge to the local minimum of the cost function. Nonetheless, because ICP is intuitive to understand and straightforward to implement, it remains the most commonly used point set registration algorithm. Many variants of ICP have been proposed, affecting all phases of the algorithm from the selection and matching of points to the minimization strategy.
For example, the expectation maximization algorithm is applied to the ICP algorithm to form the EM-ICP method, and the Levenberg-Marquardt algorithm is applied to the ICP algorithm to form the LM-ICP method.
Robust point matching
Robust point matching (RPM) was introduced by Gold et al. The method performs registration using deterministic annealing and soft assignment of correspondences between point sets. Whereas in ICP the correspondence generated by the nearest-neighbour heuristic is binary, RPM uses a soft correspondence where the correspondence between any two points can be anywhere from 0 to 1, although it ultimately converges to either 0 or 1. The correspondences found in RPM are always one-to-one, which is not always the case in ICP. Let be the th point in and be the th point in . The match matrix is defined as such:
The problem is then defined as: Given two point sets and find the Affine transformation and the match matrix that best relates them. Knowing the optimal transformation makes it easy to determine the match matrix, and vice versa. However, the RPM algorithm determines both simultaneously. The transformation may be decomposed into a translation vector and a transformation matrix:
The matrix in 2D is composed of four separate parameters , which are scale, rotation, and the vertical and horizontal shear components respectively. The cost function is then:
subject to , , . The term biases the objective towards stronger correlation by decreasing the cost if the match matrix has more ones in it. The function serves to regularize the Affine transformation by penalizing large values of the scale and shear components:
for some regularization parameter .
The RPM method optimizes the cost function using the Softassign algorithm. The 1D case will be derived here. Given a set of variables where . A variable is associated with each such that . The goal is to find that maximizes . This can be formulated as a continuous problem by introducing a control parameter . In the deterministic annealing method, the control parameter is slowly increased as the algorithm runs. Let be:
this is known as the softmax function. As increases, it approaches a binary value as desired in Equation (). The problem may now be generalized to the 2D case, where instead of maximizing , the following is maximized:
where
This is straightforward, except that now the constraints on are doubly stochastic matrix constraints: and . As such the denominator from Equation () cannot be expressed for the 2D case simply. To satisfy the constraints, it is possible to use a result due to Sinkhorn, which states that a doubly stochastic matrix is obtained from any square matrix with all positive entries by the iterative process of alternating row and column normalizations. Thus the algorithm is written as such:
while has not converged:
// update correspondence parameters by softassign
// apply Sinkhorn's method
// update pose parameters by coordinate descent
update using analytical solution
update using analytical solution
update using Newton's method
return the estimated transformation and the match matrix
where the deterministic annealing control parameter is initially set to and increases by factor until it reaches the maximum value . The summations in the normalization steps sum to and instead of just and because the constraints on are inequalities. As such the th and th elements are slack variables.
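A compact sketch of one softassign step is shown below: a benefit matrix (for example, the bias term minus the squared distance between each transformed model point and each scene point) is exponentiated at the current annealing temperature, a slack row and column are appended, and Sinkhorn row/column normalizations are alternated. Variable names and the number of Sinkhorn iterations are illustrative.

import numpy as np

def softassign(benefit, beta, n_sinkhorn=30):
    # benefit: (J, K) matrix of match benefits; beta: annealing control parameter
    J, K = benefit.shape
    M = np.zeros((J + 1, K + 1))
    M[:J, :K] = np.exp(beta * benefit)  # soft correspondences
    M[J, :] = 1.0                        # slack row for unmatched points
    M[:, K] = 1.0                        # slack column for unmatched points
    for _ in range(n_sinkhorn):
        M[:J, :] /= M[:J, :].sum(axis=1, keepdims=True)  # normalize the real rows
        M[:, :K] /= M[:, :K].sum(axis=0, keepdims=True)  # normalize the real columns
    return M[:J, :K]  # correspondence block without the slack variables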
The algorithm can also be extended for point sets in 3D or higher dimensions. The constraints on the correspondence matrix are the same in the 3D case as in the 2D case. Hence the structure of the algorithm remains unchanged, with the main difference being how the rotation and translation matrices are solved.
Thin plate spline robust point matching
The thin plate spline robust point matching (TPS-RPM) algorithm by Chui and Rangarajan augments the RPM method to perform non-rigid registration by parametrizing the transformation as a thin plate spline.
However, because the thin plate spline parametrization only exists in three dimensions, the method cannot be extended to problems involving four or more dimensions.
Kernel correlation
The kernel correlation (KC) approach of point set registration was introduced by Tsin and Kanade.
Compared with ICP, the KC algorithm is more robust against noisy data. Unlike ICP, where, for every model point, only the closest scene point is considered, here every scene point affects every model point. As such this is a multiply-linked registration algorithm. For some kernel function , the kernel correlation of two points is defined thus:
The kernel function chosen for point set registration is typically a symmetric and non-negative kernel, similar to the ones used in Parzen window density estimation. The Gaussian kernel is typically used for its simplicity, although other ones like the Epanechnikov kernel and the tricube kernel may be substituted. The kernel correlation of an entire point set is defined as the sum of the kernel correlations of every point in the set to every other point in the set:
The logarithm of KC of a point set is proportional, within a constant factor, to the information entropy. Observe that the KC is a measure of a "compactness" of the point set—trivially, if all points in the point set were at the same location, the KC would evaluate to a large value. The cost function of the point set registration algorithm for some transformation parameter is defined thus:
Some algebraic manipulation yields:
The expression is simplified by observing that is independent of . Furthermore, assuming rigid registration, is invariant when is changed because the Euclidean distance between every pair of points stays the same under rigid transformation. So the above equation may be rewritten as:
The kernel density estimates are defined as:
The cost function can then be shown to be the correlation of the two kernel density estimates:
Having established the cost function, the algorithm simply uses gradient descent to find the optimal transformation. It is computationally expensive to compute the cost function from scratch on every iteration, so a discrete version of the cost function Equation () is used. The kernel density estimates can be evaluated at grid points and stored in a lookup table. Unlike the ICP and related methods, it is not necessary to find the nearest neighbour, which allows the KC algorithm to be comparatively simple in implementation.
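For concreteness, a direct (and deliberately simple) evaluation of the kernel-correlation cost with a Gaussian kernel might look as follows; it computes all pairwise terms explicitly, whereas the discrete lookup-table version described above avoids that expense. The bandwidth sigma and the rigid parametrization are illustrative choices.

import numpy as np

def kc_cost(scene, model, R, t, sigma=1.0):
    # negative kernel correlation between the scene set and the transformed model set
    moved = model @ R.T + t
    d2 = np.sum((scene[:, None, :] - moved[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    # the Gaussian kernel correlation of two points decays with their distance,
    # so minimizing this cost pulls the two densities into alignment
    return -np.sum(np.exp(-d2 / (2.0 * sigma ** 2)))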
Compared to ICP and EM-ICP for noisy 2D and 3D point sets, the KC algorithm is less sensitive to noise and results in correct registration more often.
Gaussian mixture model
The kernel density estimates are sums of Gaussians and may therefore be represented as Gaussian mixture models (GMM). Jian and Vemuri use the GMM version of the KC registration algorithm to perform non-rigid registration parametrized by thin plate splines.
Coherent point drift
Coherent point drift (CPD) was introduced by Myronenko and Song.
The algorithm takes a probabilistic approach to aligning point sets, similar to the GMM KC method. Unlike earlier approaches to non-rigid registration which assume a thin plate spline transformation model, CPD is agnostic with regard to the transformation model used. The point set represents the Gaussian mixture model (GMM) centroids. When the two point sets are optimally aligned, the correspondence is the maximum of the GMM posterior probability for a given data point. To preserve the topological structure of the point sets, the GMM centroids are forced to move coherently as a group. The expectation maximization algorithm is used to optimize the cost function.
Let there be points in and points in . The GMM probability density function for a point is:
where, in dimensions, is the Gaussian distribution centered on point .
The membership probabilities are equal for all GMM components. The weight of the uniform distribution is denoted as . The mixture model is then:
The GMM centroids are re-parametrized by a set of parameters estimated by maximizing the likelihood. This is equivalent to minimizing the negative log-likelihood function:
where it is assumed that the data is independent and identically distributed. The correspondence probability between two points and is defined as the posterior probability of the GMM centroid given the data point:
The expectation maximization (EM) algorithm is used to find and . The EM algorithm consists of two steps. First, in the E-step or estimation step, it guesses the values of parameters ("old" parameter values) and then uses Bayes' theorem to compute the posterior probability distributions of mixture components. Second, in the M-step or maximization step, the "new" parameter values are then found by minimizing the expectation of the complete negative log-likelihood function, i.e. the cost function:
Ignoring constants independent of and , Equation () can be expressed thus:
where
with only if . The posterior probabilities of GMM components, computed using the previous parameter values, are:
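A sketch of this E-step computation, following the standard CPD formulation of Myronenko and Song (with an isotropic variance sigma2 and uniform-outlier weight w), is given below; treat it as a reconstruction of the missing equation rather than the authors' reference code.

import numpy as np

def cpd_posteriors(X, Y, sigma2, w):
    # X: (N, D) data points; Y: (M, D) GMM centroids; returns P of shape (M, N)
    N, D = X.shape
    M = Y.shape[0]
    d2 = np.sum((Y[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # (M, N) squared distances
    num = np.exp(-d2 / (2.0 * sigma2))
    # constant contributed by the uniform outlier component
    c = (2.0 * np.pi * sigma2) ** (D / 2.0) * (w / (1.0 - w)) * (M / N)
    return num / (num.sum(axis=0, keepdims=True) + c)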
Minimizing the cost function in Equation () necessarily decreases the negative log-likelihood function in Equation () unless it is already at a local minimum. Thus, the algorithm can be expressed using the following pseudocode, where the point sets and are represented as and matrices and respectively:
while not registered:
// E-step: compute the posterior probability matrix P
// M-step: solve for the optimal transformation θ
return θ
where the vector is a column vector of ones. The solve function differs by the type of registration performed. For example, in rigid registration, the output is a scale , a rotation matrix , and a translation vector . The parameter can be written as a tuple of these:
which is initialized to one, the identity matrix, and a column vector of zeroes:
The aligned point set is:
The solve_rigid function for rigid registration can then be written as follows, with derivation of the algebra explained in Myronenko's 2010 paper.
// diag(ξ) is the diagonal matrix formed from vector ξ
// tr(·) is the trace of a matrix
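The rigid-case update equations (lost from the text above during extraction) can be sketched as follows, following Myronenko's 2010 derivation; the details should be read as a reconstruction from that paper rather than a verbatim copy of it. The aligned point set is then s * Y @ R.T + t.

import numpy as np

def solve_rigid(X, Y, P):
    # X: (N, D) data; Y: (M, D) centroids; P: (M, N) posteriors from the E-step
    N, D = X.shape
    Np = P.sum()
    mu_x = (X.T @ P.sum(axis=0)) / Np        # data points weighted by P^T 1
    mu_y = (Y.T @ P.sum(axis=1)) / Np        # centroids weighted by P 1
    Xh, Yh = X - mu_x, Y - mu_y
    A = Xh.T @ P.T @ Yh
    U, _, Vt = np.linalg.svd(A)
    C = np.eye(D)
    C[-1, -1] = np.linalg.det(U @ Vt)        # keep a proper rotation (det = +1)
    R = U @ C @ Vt
    s = np.trace(A.T @ R) / np.trace(Yh.T @ (P.sum(axis=1)[:, None] * Yh))
    t = mu_x - s * (R @ mu_y)
    sigma2 = (np.trace(Xh.T @ (P.sum(axis=0)[:, None] * Xh)) - s * np.trace(A.T @ R)) / (Np * D)
    return s, R, t, sigma2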
For affine registration, where the goal is to find an affine transformation instead of a rigid one, the output is an affine transformation matrix and a translation such that the aligned point set is:
The solve_affine function for affine registration can then be written as follows, with derivation of the algebra explained in Myronenko's 2010 paper.
It is also possible to use CPD with non-rigid registration using a parametrization derived using calculus of variations.
Sums of Gaussian distributions can be computed in linear time using the fast Gauss transform (FGT). Consequently, the time complexity of CPD is , which is asymptotically much faster than methods.
Bayesian coherent point drift (BCPD)
A variant of coherent point drift, called Bayesian coherent point drift (BCPD), was derived through a Bayesian formulation of point set registration.
BCPD has several advantages over CPD, e.g., (1) nonrigid and rigid registrations can be performed in a single algorithm, (2) the algorithm can be accelerated regardless of the Gaussianity of a Gram matrix to define motion coherence, (3) the algorithm is more robust against outliers because of a more reasonable definition of an outlier distribution. Additionally, in the Bayesian formulation, motion coherence was introduced through a prior distribution of displacement vectors, providing a clear difference between tuning parameters that control motion coherence. BCPD was further accelerated by a method called BCPD++, which is a three-step procedure composed of (1) downsampling of point sets, (2) registration of downsampled point sets, and (3) interpolation of a deformation field.
The method can register point sets composed of more than 10M points while maintaining its registration accuracy.
Coherent point drift with local surface geometry (LSG-CPD)
A variant of coherent point drift, called CPD with Local Surface Geometry (LSG-CPD), has been proposed for rigid point cloud registration. The method adaptively adds different levels of point-to-plane penalization on top of the point-to-point penalization based on the flatness of the local surface. This results in GMM components with anisotropic covariances, instead of the isotropic covariances in the original CPD. The anisotropic covariance matrix is modeled as:
where
is the anisotropic covariance matrix of the m-th point in the target set; is the normal vector corresponding to the same point; is an identity matrix, serving as a regularizer, pulling the problem away from ill-posedness. is the penalization coefficient (a modified sigmoid function), which is set adaptively to add different levels of point-to-plane penalization depending on how flat the local surface is. This is realized by evaluating the surface variation within the neighborhood of the m-th target point. is the upper bound of the penalization.
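As a rough illustration only, the precision (inverse covariance) implied by this description could be assembled as below: an identity regularizer plus a penalization along the surface normal, with the penalization coefficient given by a modified sigmoid of the local surface variation and capped by an upper bound. The exact functional form and constants used by LSG-CPD may differ; everything here is an assumption consistent with the prose, not the published equation.

import numpy as np

def lsg_precision(normal, surface_variation, sigma2, alpha_max=15.0, steepness=20.0, offset=0.1):
    # assumed form: (1 / sigma2) * (I + alpha * n n^T), with alpha a modified sigmoid
    n = normal / np.linalg.norm(normal)
    # flat local surface (small surface variation) -> alpha close to alpha_max
    alpha = alpha_max / (1.0 + np.exp(steepness * (surface_variation - offset)))
    return (np.eye(3) + alpha * np.outer(n, n)) / sigma2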
The point cloud registration is formulated as a maximum likelihood estimation (MLE) problem and solved with the Expectation-Maximization (EM) algorithm. In the E step, the correspondence computation is recast into simple matrix manipulations and efficiently computed on a GPU. In the M step, an unconstrained optimization on a matrix Lie group is designed to efficiently update the rigid transformation of the registration. Taking advantage of the local geometric covariances, the method shows superior performance in accuracy and robustness to noise and outliers, compared with the baseline CPD. An enhanced runtime performance is expected thanks to the GPU-accelerated correspondence calculation. An implementation of the LSG-CPD is open-sourced here.
Sorting the Correspondence Space (SCS)
This algorithm was introduced in 2013 by H. Assalih to accommodate sonar image registration. These types of images tend to have high amounts of noise, so many outliers are expected in the point sets to be matched. SCS delivers high robustness against outliers and can surpass ICP and CPD performance in the presence of outliers. SCS does not use iterative optimization in high-dimensional space and is neither probabilistic nor spectral. SCS can match rigid and non-rigid transformations, and performs best when the target transformation is between three and six degrees of freedom.
See also
Point feature matching
Point-set triangulation
Normal distributions transform
References
External links
Reference implementation of thin plate spline robust point matching
Reference implementation of kernel correlation point set registration
Reference implementation of coherent point drift
Reference implementation of ICP variants
Reference implementation of Bayesian coherent point drift
Reference implementation of LSG-CPD
Computer vision
Pattern matching
Point (geometry)
Robotics engineering | Point-set registration | Mathematics,Technology,Engineering | 6,487 |
70,762,697 | https://en.wikipedia.org/wiki/Boudabousia | Boudabousia is a genus of bacteria from the family of Actinomycetaceae.
References
Actinomycetales
Bacteria genera
Taxa described in 2018 | Boudabousia | Biology | 35 |
16,796,925 | https://en.wikipedia.org/wiki/HD%20212301%20b | HD 212301 b is an extrasolar planet located approximately 172 light-years (53 parsecs) away in the constellation of Octans, orbiting the star HD 212301. It has an orbital period of 2.25 Earth days. The orbital distance is 0.0341 astronomical units or 5.10 gigameters.
The planet was discovered on August 22, 2005, at the ESO La Silla Observatory in Chile by Lo Curto, who used the HARPS spectrometer.
See also
HD 213240 b
References
External links
Hot Jupiters
Octans
Exoplanets discovered in 2005
Exoplanets detected by radial velocity
de:HD 212301 b | HD 212301 b | Astronomy | 142 |
59,117,156 | https://en.wikipedia.org/wiki/NGC%20720 | NGC 720 is an elliptical galaxy located in the constellation Cetus. It is located at a distance of circa 80 million light years from Earth, which, given its apparent dimensions, means that NGC 720 is about 110,000 light years across. It was discovered by William Herschel on October 3, 1785. The galaxy is included in the Herschel 400 Catalogue. It lies about three and a half degrees south and slightly east from zeta Ceti.
Characteristics
NGC 720 is an elliptical galaxy with elongated shape in the northwest to southeast axis as seen from Earth. Observations by the Hubble Space Telescope of the core of NGC 720 did not reveal the presence of dust, disk, or inner spiral. As observed in X-rays by the Chandra X-ray Observatory in 2000, the galaxy features a slightly flattened, or ellipsoidal triaxial halo of hot gas that has an orientation different from that of the optical image of the galaxy. Its shape cannot be accounted for based on the observed mass, even when using the Modified Newtonian dynamics theory of gravity, which excludes the need for dark matter. The observations by Chandra X-ray Observatory fit predictions of a cold dark matter model. The galaxy lacks emission in radio waves, meaning it does not host an active galactic nucleus. The total mass of the galaxy with its dark matter halo is estimated to be , with the total gas mass exceeding the stellar one. The observations of hot gas fit models that are nearly hydrostatic. 42 X-ray point sources were detected in the galaxy, including a possible central source. Twelve of them are located within 2 arcsec of globular cluster candidates. NGC 720 features nine ultraluminous X-ray sources, the most found in an early type galaxy as of 2003.
Observations made in 1996 suggested the galaxy had 660 ± 190 globular clusters in the central 30 kpc, a number considered small for such a galaxy. The distribution of the clusters resembled the ellipticity, position angle and surface brightness of the galaxy. However, in 2012 it was observed that the blue globular cluster subpopulation had a similar slope to the X-ray surface brightness profile. Further observations by the SLUGGS Survey (2016), with wider field data, raised the number of globular clusters in the galaxy to 1489 ± 96, and their distribution was less elliptical than the surface profile of the galaxy. The clusters have a bimodal distribution as far as color is concerned, being characterised as red or blue, with the blue clusters having a stronger connection with the galactic halo.
Optical long-slit spectrography of the galaxy showed a strong age gradient along the semimajor axis of NGC 720, which has been explained on the grounds of two distinct population components. At the centre of the galaxy, out to 0.73 kpc, lie stars whose age is estimated to be 13 billion years (13 Gyr), beyond which stars of solar metallicity and an age of about 5 Gyr dominate. These older stars form a small bulge-like spheroid. At distances over 1 kpc, stars with an age of about 2.5 Gyr dominate. Based on the Mg2 gradient and its mass, it is proposed that NGC 720 underwent an unequal mass galaxy merger about 4 Gyr ago.
Nearby galaxies
NGC 720 is the foremost galaxy in a small galaxy group, the NGC 720 group, which also includes the galaxy Arp 4. NGC 720 lies at the centre of the group and the rest of the galaxies of the group are dwarf galaxies, which are at least 2 mag fainter than NGC 720. There is extended intragroup X-ray emission. The high fraction of early-type galaxies suggests that NGC 720 may be a fossil group, despite its low mass. Further away lie the edge-on spiral galaxy NGC 681, NGC 701, and NGC 755.
Gallery
References
External links
NGC 720 on SIMBAD
Elliptical galaxies
Cetus
0720
006983
Astronomical objects discovered in 1785
Discoveries by William Herschel
Galaxies discovered in 1785
-02-05-068 | NGC 720 | Astronomy | 821 |
25,367,107 | https://en.wikipedia.org/wiki/Quokkapox%20virus | Quokkapox virus (QPV), also known as quokka poxvirus, marsupial papillomavirus, or marsupialpox virus, is a dsDNA virus that causes quokkapox. It is unclear whether this virus is its own species or a member of another species. It primarily infects the quokka, which is one of only four macropodid marsupials to get pox lesions. The lesions can mainly be seen on the tail, and can be up to in diameter. The biological behavior of this virus has yet to be identified; these lesions seem to be species-specific. The papilloma- like lesion in humans showcase many differences from the marsupial papillomata.
Because the quokka host primarily lives on isolated islands in Western Australia, the range of the virus is limited as well. It was first described in 1972 from samples taken on Rottnest Island.
References
External links
Poxviruses
Chordopoxvirinae
Animal viral diseases
Virus-related cutaneous conditions
Species described in 1972
Infraspecific virus taxa
Marsupial diseases | Quokkapox virus | Biology | 234 |
2,548,300 | https://en.wikipedia.org/wiki/Handle-o-Meter | The Handle-o-Meter is a testing machine developed by Johnson & Johnson and now manufactured by Thwing-Albert that measures the "handle" of sheeted materials: a combination of its surface friction and flexibility. Originally, it was used to test the durability and flexibility of toilet paper and paper towels. It is also used to measure the stiffness of packaging film.
The test sample is placed over an adjustable slot. The resistance encountered by the penetrator blade as it is moved into the slot by a pivoting arm is measured by the machine.
Details
The data collected when nonwovens, tissues, toweling, film and textiles are tested has been shown to correlate well with the actual performance of these materials as finished products.
Materials are placed over a slot that extends across the instrument platform, and the operator then starts the test. A beam then protrudes through the slot and a strain gauge measures the force that the material exerts on the beam. Stiff materials offer greater resistance to the movement of the beam. Machine direction and transverse stiffness are measured separately.
There are three different test modes which can be applied to the material: single, double, and quadruple. The average is automatically calculated for double or quadruple tests.
Restrictions on measuring the friction between the platform and the material limit the instrument's accuracy.
Features
Adjustable slot openings
Interchangeable beams
Auto-ranging
2 x 40 LCD display
Statistical Analysis
RS-232 Output and Serial Port
Industry Standards:
ASTM D2923, D6828-02
TAPPI T498
INDA IST 90.3
References
External links
Manufacturer's datasheet
Textiles
Machines
Quality control
Toilet paper | Handle-o-Meter | Physics,Technology,Engineering | 349 |
7,584,141 | https://en.wikipedia.org/wiki/Developmental%20psychopathology | Developmental psychopathology is the study of the development of psychological disorders (e.g., psychopathy, autism, schizophrenia and depression) with a life course perspective. Researchers who work from this perspective emphasize how psychopathology can be understood as normal development gone awry. Developmental psychopathology focuses on both typical and atypical child development in an effort to identify genetic, environmental, and parenting factors that may influence the longitudinal trajectory of psychological well-being.
Theoretical basis
Developmental psychopathology is a sub-field of developmental psychology and child psychiatry characterized by the following (non-comprehensive) list of assumptions:
Atypical development and typical development are mutually informative. Therefore, developmental psychopathology is not the study of pathological development, but the study of the basic mechanisms that cause developmental pathways to diverge toward pathological or typical outcomes;
Development leads to either adaptive or maladaptive outcomes. However, development that is adaptive in one context may be maladaptive in another context, and vice versa;
Developmental change is influenced by many variables. Research designs in developmental psychopathology should incorporate multivariate designs to examine the mechanisms underlying development;
Development occurs within nested contexts (see Urie Bronfenbrenner);
Development arises from a dynamic interplay of physiological, genetic, social, cognitive, emotional, and cultural influences across time.
Origins of the academic field
In 1974, Thomas M. Achenbach authored a book entitled, "Developmental Psychopathology", which laid the foundations for the discipline of Developmental psychopathology. The book was an outgrowth of his research on relations between development and psychopathology.
Dante Cicchetti is acknowledged to have played a pivotal role in defining and shaping the field of developmental psychopathology. While at Harvard University, Cicchetti began publishing on the development of conditions such as depression and borderline personality disorder, in addition to his own work on child maltreatment and mental retardation. In 1984, Cicchetti edited both a book and a special issue of Child Development on developmental psychopathology. In that special issue he himself wrote, "The emergence of developmental psychopathology".
These efforts launched developmental psychopathology, a subfield of developmental science. In 1989, nine volumes of the Rochester Symposium on Developmental Psychopathology were published, as was the first issue of the journal Development and Psychopathology.
Homotypic and heterotypic continuity
One central concept of developmental psychopathology is homotypic and heterotypic continuity. Some children will develop different symptoms across development (heterotypic continuity), while others will develop similar types of problems (homotypic continuity). While homotypic continuity of emotional and behavioural problems tends to be the norm across development, the transitions between early childhood and late childhood, and between preadolescence and adolescence are associated with higher heterotypic continuity.
Development of conduct problems
Gerald R. Patterson and colleagues take a functionalist view of conduct problems in line with a behavior analysis of child development. They have found considerable evidence that the improper use of reinforcement in childhood can lead to this form of pathology.
See also
Child psychopathology
Psychopathology
Child development for behavioral models of antisocial behavior
Child psychiatry
Management of domestic violence
Social neuroscience
Victimology
References
Psychopathology
Developmental psychology | Developmental psychopathology | Biology | 674 |
4,318,918 | https://en.wikipedia.org/wiki/Doors%20Open%20Days | Doors Open Days (also known as Open House or Open Days in some communities) provide free access to buildings not normally open to the public. The first Doors Open Day took place in France in 1984, and the concept has spread to other places in Europe (see European Heritage Days), North America, Australia and elsewhere.
Doors Open Days promotes architecture and heritage sites to a wider audience within and beyond the country's borders. It is an opportunity to discover hidden architectural gems and to see behind doors that are rarely open to the public for free.
Doors Open Days trace their origin to the 1990 Doors Open Day held as part of Glasgow's year as European City of Culture.
Heritage Open Days in England
Heritage Open Days established in 1994 celebrate English architecture and culture allowing visitors free access to historical landmarks that are either not usually open to the public, or would normally charge an entrance fee.
List of Doors Open events in England
Open House London
Scotland
Doors Open Days is organised by the Scottish Civic Trust. Alongside Scottish Archaeology Month, the open days form Scotland's contribution to European Heritage Days. This joint initiative between the Council of Europe and the European Union aims to give people a greater understanding of each other through sharing and exploring cultural heritage. 49 countries across Europe take part annually, in September.
During Glasgow's year as European City of Culture in 1990, organisers ran an Open Doors event, an event credited with popularizing the Doors Open concept and spreading it to other countries. Its popularity encouraged other areas to take part the following year and were coordinated by the Scottish Civic Trust. Doors Open Days now take place throughout Scotland thanks to a team of area coordinators. These coordinators work for a mixture of organisations: local councils, civic trusts, heritage organisations and archaeological trusts.
Scotland is one of the few participating countries where events take place every weekend in September, with different areas choosing their own dates. More than 900 buildings now take part. In 2008, over 225,000 visits were made generating £2 million for the Scottish economy. It is estimated that 5,000 or more volunteers give their time to run activities and open doors for members of the public.
Doors Open Days was supported in 2009 by Homecoming Scotland 2009, a year-long initiative that marked the 250th anniversary of the birth of Scotland's national poet, Robert Burns. It was funded by the Scottish Government and part financed by the European Union through the European Regional Development Fund. Its aim was to engage Scots at home, as well as motivate people of Scottish descent and those who simply love Scotland, to take part in an inspirational celebration of Scottish culture and heritage.
Open Doors events in Wales
Funded and organised by the Wales conservation organisation, Cadw, an Open Doors festival takes place every September, giving free access to many Cadw and non-Cadw sites. It was claimed to be "the largest annual free celebration of architecture and heritage" in the UK. In 2021 more than 150 historic sites took part in the event.
Open house in Australia
Open House events are organised in Australia in partnership with Open House Worldwide. The first Open House event took place in Melbourne in 2008 (the Melbourne Open House event has since changed into an event primarily focused on modern architecture). This was followed by Brisbane in 2010, and Adelaide and Perth in 2012.
Canada
Doors Open Canada began in 2000.
List of Doors Open events in Canada
Doors Open Newfoundland and Labrador
Doors Open Ottawa
Doors Open Toronto
Doors Open Saskatoon
Doors Open London
United States
List of Doors Open events in the U.S.
Doors Open Baltimore, first weekend in October
Doors Open Buffalo
Open House Chicago
Doors Open Lowell
Open House New York
Doors Open Milwaukee
Doors Open Minneapolis
Doors Open Pittsburgh, first weekend in October
Doors Open Rhode Island
Passport DC, embassy open houses in Washington, D.C., in May.
See also
Brisbane Open House
Open House Brno
Tourism in Scotland
External links
Notes
Events in Scotland
Recurring events established in 1984
Cultural events
Tourist attractions
Architecture festivals
Architectural conservation | Doors Open Days | Engineering | 794 |
2,391,239 | https://en.wikipedia.org/wiki/Revolving%20stage | A revolving stage is a mechanically controlled platform within a theatre that can be rotated in order to speed up the changing of a scene within a show.
Kabuki theatre development
Background
Kabuki theatre began in Japan around 1603 when Okuni, a Shinto priestess of the Izumo shrine, traveled with a group of priestesses to Kyoto to become performers. Okuni and her nuns danced sensualized versions of Buddhist and Shinto ritual dances, using the shows as a shop window for their services at night. They originally performed in the dry river bed of the River Kamo on a makeshift wooden stage, but as Okuni’s shows gained popularity they began to tour, performing at the imperial court at least once. Eventually, they were able to build a permanent theatre in 1604, modeled after Japan's aristocratic Nōh theatre which had dominated the previous era. Kabuki, with its origins in popular entertainment, drew crowds of common folk, along with high-class samurai looking to win their favorite performer for the night. This mixing of social classes troubled the Tokugawa Shogunate, who stressed the strict separation of different classes. When rivalries between Okuni’s samurai clients grew too intense, the shogunate took advantage of the conflict and banned women from performing onstage in 1629. The women were replaced by beautiful teenage boys who took part in the same after-dark activities, leading Kabuki to be banned from the stage completely in 1652. An actor-manager in Kyoto, Murayama Matabei, went to the authorities responsible and staged a hunger strike outside their offices. In 1654 Kabuki was allowed to return with restrictions. The shogunate declared that only adult men with “shaved pates” were allowed to perform, the shows must be fully acted plays and not variety shows, and actors had to remain in their own quarter of the city and refrain from mixing with the general public in their private life. With the dampened sensuality of Kabuki theatre, performers turned to exploiting art and spectacle to keep their audiences engaged.
The Genroku period of 1688 saw the solidification of the aesthetics of Kabuki under the new restrictions placed by the shogunate. Nōh theatre of the previous period was the theatre of aristocrats. After the embarrassment Kabuki brought to upper class society, it needed to develop into a more serious art form in order to survive. However, Kabuki theatre did not lose the influence of its origins as popular entertainment. A majority of the Kabuki repertoire was adapted from Bunraku puppet theatre, another popular entertainment of the same period. New innovations had to be made to adapt small scale puppet theatre into full scale plays, as well as elevate the source material to a higher class of art.
The Mawari-Butai
The revolving stage, called the mawari-butai, was invented by Edo playwright Namiki Shozo in 1729 and solved the issue of moving heavy scenic properties quickly as Kabuki adopted Bunraku into full scale designs. The mawari-butai also served to capture the audience’s interest in the rambunctious theatre atmosphere. The mawari-butai was originally a raised mechanical platform that had to be operated manually by stage hands. The audience would have been able to see the stage hands turning the set as the action of the actors carried on continuously into the next scene. By the 1800s the mawari-butai had evolved to become flush with the stage, and to include an inner revolve and an outer revolve that could be spun simultaneously to achieve certain special effects. Stage hands now moved under the stage, requiring the strength of at least four people to push the revolving stage to its next position. The mawari-butai in Kabuki theatre was always manually operated by stage hands.
The mawari-butai allowed great spectacle and ease of set changes, but it also provided a great opportunity for story and aesthetic choices. No more than two sets were constructed on the revolve. These sets could be entirely different settings or show a change in mood or time within one setting. By walking on the revolve in the opposite direction of its motion, actors could appear to go on long journeys through woods, down city streets, etc. The addition of the inner revolve allowed for set pieces to move in relation to each other. For example, two boats could sail past each other in an epic sea battle like in Chikamatsu Monzaemon’s The Girl From Hakata. The inner revolve sometimes was fitted with a lift that could be used to make set pieces rise from the floor, or to make buildings appear as if they are crashing down. The mawari-butai takes on a filmic effect and “fades the actor in and out of the realm of the performance”. Kabuki does not strive to be realistic, it strives to be a decorated space. Kabuki is first and foremost an actor’s theatre and asks the audience to suspend reality of setting, instead adapting to the conventions of Kabuki.
Japanese influence on the West
Following the Meiji Restoration in 1868, Japan ended a long period of isolation and reopened trade with European countries. After so long in isolation, Japanese art flooded the European market, sparking a great “Japonism” fever. The conventions of Japanese Kabuki theatre developed in isolation from the rest of the world, so the innovations quickly spread to European theatre. Karl Lautenschlager built the first revolving stage in western theatre in 1896 for Mozart’s Don Giovanni at the Residenz Theatre. This revolving stage was raised slightly above the stage level and was electrically powered by motors that turned wheels along a track. With the proscenium arch, only a quarter of the revolve was visible to the audience. Four sets were constructed on Lautenschlager’s revolving stage as opposed to Kabuki’s limit of two. In 1889 Lautenschlager was hired by the Munich court theatre to design an efficient revolving stage for productions of Shakespeare. This marked the greatest role of the revolving stage in its western history, as the new Shakespeare stage. The revolving stage trickled into the designs of Germany and Russia’s Reinhardt and Meyerhold as its popularity grew. Revolving stages are still a fixture of both Kabuki theatre and western theatre today. The automation of the revolving stage and lifts has allowed many more aesthetic possibilities in shows such as Cats and Les Misérables, as well as the automated double revolve, or concentric revolve, in Hamilton, further solidifying these Kabuki innovations into the western mainstream.
Ten years after Lautenschlager’s stage, Max Reinhardt employed it in the premiere of Frühlings Erwachen by Frank Wedekind. Soon this revolving stage was a trend in Berlin. Another adaptation of the Kabuki stage popular among German directors was the Blumensteg, a jutting extension of the stage into the audience. The European acquaintance with Kabuki came either from travels in Japan or from texts, but also from Japanese troupes touring Europe. In 1893, Kawakami Otojiro and his troupe of actors arrived in Paris, returning again in 1900 and playing in Berlin in 1902. Kawakami's troupe performed two pieces, Kesa and Shogun, both of which were westernized and were performed without music and with the majority of the dialogue eliminated. This being the case, these performances tended toward pantomime and dance. Dramatists and critics quickly latched on to what they saw as a "re-theatricalization of the theater." Among the actors in these plays was Sada Yacco, the first Japanese star in Europe, who influenced pioneers of modern dance such as Loie Fuller and Isadora Duncan; she performed for Queen Victoria in 1900 and enjoyed the status of a European star.
Present-day use
Revolving stages are still in use in theater, but benefit from the rise of automation in scenic design. The use of a revolving stage in the original staging of Cats was considered revolutionary at the time, with a section of the stalls mounted onto the revolve as well. The original London staging of Les Misérables is one of the most notable modern uses of a revolving stage, considered "iconic"; it made sixty-three rotations in each performance. Director Trevor Nunn's decision to use the feature was informed by the need for rapid changes of location, especially in light of scenes added to the musical in its adaptation from the original French version. The turntable also provided "cinematic" changes of perspective on a scene, and, crucially, permitted the cast to walk against the revolve for dramatic motion. Double-rotating stages, known as a concentric revolve, have also been used in theater productions such as Hamilton. Having one revolving stage inside of the other allows for more flexibility by allowing each to rotate in different directions or at different speeds. Some combine this technology with stage lifts to allow the concentric rings to not only rotate at different speeds, but at different heights, such as those used in Hadestown.
Other uses
Today revolving stages are primarily used in marketing and trade shows and are constructed in a modular design that can be set up and taken down quickly in different types of venues. Driven from the central core or indirectly from an external hub, these stages take advantage of rotating ring couplers to provide rotating power to the stage deck so there is no twisting of power cords or need to reverse the stage. In many cases the stage is left rotating for days at a time, carrying loads as heavy as an SUV.
The revolving stage is also sometimes used at concerts and music festivals, especially larger ones, to allow one band to set up and check their equipment while another opening band is performing. This allows for a much faster transition between an opening band and the next one on the lineup. One such example was the Goose Lake International Music Festival, held in Michigan in August, 1970.
A notable use of the revolving stage concept is Walt Disney's Carousel of Progress in Tomorrowland at the Magic Kingdom in Walt Disney World Resort in Bay Lake, Florida, just outside of Orlando, Florida, where the stage remains stationary while the auditorium revolves around it.
See also
Stagecraft
References
General references
The American Architect and Building News. Volume 53. Boston: American. Architect and Building News Co, 1896.
Ackermann, Friedrich Adolf. The Oberammergau Passion Play, 1890. Fifth Edition. Munich: Friedrich Adolf Ackermann, 1890.
Fuerst, Walter René and Hume, Samuel J. Twentieth-Century Stage Decoration. Volume 1. New York: Dover Publications, 1967.
Hoffer, Charles. Music Listening Today. Fourth Edition. Boston: Schirmer Cengage Learning, 2009.
Izenour, George C. Theater Technology. New Haven: Yale University Press, 1996.
MacGowan, Kenneth. The Theatre of Tomorrow. New York: Boni and Liveright, 1921. Print.
Ortolano, Benito. The Japanese theatre: from shamanistic ritual to contemporary pluralism. Princeton: Princeton University Press, 1990.
Randl, Chad. Revolving architecture. New York: Princeton Architectural Press, 2008. Print.
Sachs, Edwin. Modern Theatre Stages. New York: Engineering, 1897. Print.
Vermette, Margaret. The Musical World of Boublil and Schönberg: The Creators of Les Misérables, Miss Saigon, Martin Guerre, and The Pirate Queen. New York: Applause Theatre & Cinema Books, 2006.
Williams, Simon. Shakespeare on the German stage: 1586–1914. Volume 1. Cambridge: Cambridge University Press, 1990.
WPG. The Revolving Stage at the Munich Royal Residential and Court Theatre. New York: American Architect and Architecture, 1896. Print.
Stagecraft
Parts of a theatre
History of theatre
Rotation
Japanese inventions | Revolving stage | Physics,Technology | 2,421 |
5,902,060 | https://en.wikipedia.org/wiki/Directionality%20%28molecular%20biology%29 | Directionality, in molecular biology and biochemistry, is the end-to-end chemical orientation of a single strand of nucleic acid. In a single strand of DNA or RNA, the chemical convention of naming carbon atoms in the nucleotide pentose-sugar-ring means that there will be a 5′ end (usually pronounced "five-prime end"), which frequently contains a phosphate group attached to the 5′ carbon of the ribose ring, and a 3′ end (usually pronounced "three-prime end"), which typically is unmodified from the ribose -OH substituent. In a DNA double helix, the strands run in opposite directions to permit base pairing between them, which is essential for replication or transcription of the encoded information.
Nucleic acids can only be synthesized in vivo in the 5′-to-3′ direction, as the polymerases that assemble various types of new strands generally rely on the energy produced by breaking nucleoside triphosphate bonds to attach new nucleoside monophosphates to the 3′-hydroxyl (−OH) group, via a phosphodiester bond. The relative positions of structures along strands of nucleic acid, including genes and various protein binding sites, are usually noted as being either upstream (towards the 5′-end) or downstream (towards the 3′-end). (See also upstream and downstream.)
Directionality is related to, but different from, sense. Transcription of single-stranded RNA from a double-stranded DNA template requires the selection of one strand of the DNA template as the template strand that directly interacts with the nascent RNA due to complementary sequence. The other strand is not copied directly, but necessarily its sequence will be similar to that of the RNA. Transcription initiation sites generally occur on both strands of an organism's DNA, and specify the location, direction, and circumstances under which transcription will occur. If the transcript encodes one or (rarely) more proteins, translation of each protein by the ribosome will proceed in a 5′-to-3′ direction, and will extend the protein from its N-terminus toward its C-terminus. For example, in a typical gene a start codon (5′-ATG-3′) is a DNA sequence within the sense strand. Transcription begins at an upstream site (relative to the sense strand), and as it proceeds through the region it copies the 3′-TAC-5′ from the template strand to produce 5′-AUG-3′ within a messenger RNA (mRNA). The mRNA is scanned by the ribosome from the 5′ end, where the start codon directs the incorporation of a methionine (bacteria, mitochondria, and plastids use N-formylmethionine instead) at the N terminus of the protein. By convention, single strands of DNA and RNA sequences are written in a 5′-to-3′ direction except as needed to illustrate the pattern of base pairing.
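As a small illustration of the antiparallel convention, the following Python snippet returns the complementary strand of a DNA sequence written, like its input, in the 5′-to-3′ direction (i.e., the reverse complement); it handles only the four standard bases.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq_5to3):
    # complement each base, then reverse so the result also reads 5' -> 3'
    return "".join(COMPLEMENT[base] for base in reversed(seq_5to3.upper()))

print(reverse_complement("ATGCGT"))  # prints "ACGCAT"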
5′-end
The 5′-end (pronounced "five prime end") designates the end of the DNA or RNA strand that has the fifth carbon in the sugar-ring of the deoxyribose or ribose at its terminus. A phosphate group attached to the 5′-end permits ligation of two nucleotides, i.e., the covalent binding of a 5′-phosphate to the 3′-hydroxyl group of another nucleotide, to form a phosphodiester bond. Removal of the 5′-phosphate prevents ligation. To prevent unwanted nucleic acid ligation (e.g. self-ligation of a plasmid vector in DNA cloning), molecular biologists commonly remove the 5′-phosphate with a phosphatase.
The 5′-end of nascent messenger RNA is the site at which post-transcriptional capping occurs, a process which is vital to producing mature messenger RNA. Capping increases the stability of the messenger RNA while it undergoes translation, providing resistance to the degradative effects of exonucleases. It consists of a methylated nucleotide (methylguanosine) attached to the messenger RNA in a rare 5′- to 5′-triphosphate linkage.
The 5′-flanking region of a gene often denotes a region of DNA which is not transcribed into RNA. The 5′-flanking region contains the gene promoter, and may also contain enhancers or other protein binding sites.
The 5′-untranslated region (5′-UTR) is a region of a gene which is transcribed into mRNA, and is located at the 5′-end of the mRNA. This region of an mRNA may or may not be translated, but is usually involved in the regulation of translation. The 5′-untranslated region is the portion of the DNA starting from the cap site and extending to the base just before the AUG translation initiation codon of the main coding sequence. This region may have sequences, such as the ribosome binding site and Kozak sequence, which determine the translation efficiency of the mRNA, or which may affect the stability of the mRNA.
3′-end
The 3′-end (three prime end) of a strand is so named due to it terminating at the hydroxyl group of the third carbon in the sugar-ring, and is known as the tail end. The 3′-hydroxyl is necessary in the synthesis of new nucleic acid molecules as it is ligated (joined) to the 5′-phosphate of a separate nucleotide, allowing the formation of strands of linked nucleotides.
Molecular biologists can use nucleotides that lack a 3′-hydroxyl (dideoxyribonucleotides) to interrupt the replication of DNA. This technique is known as the dideoxy chain-termination method or the Sanger method, and is used to determine the order of nucleotides in DNA.
The 3′-end of nascent messenger RNA is the site of post-transcriptional polyadenylation, which attaches a chain of 50 to 250 adenosine residues to produce mature messenger RNA. This chain helps in determining how long the messenger RNA lasts in the cell, influencing how much protein is produced from it.
The 3′-flanking region is a region of DNA that is not copied into the mature mRNA, but which is present adjacent to 3′-end of the gene. It was originally thought that the 3′-flanking DNA was not transcribed at all, but it was discovered to be transcribed into RNA and quickly removed during processing of the primary transcript to form the mature mRNA. The 3′-flanking region often contains sequences that affect the formation of the 3′-end of the message. It may also contain enhancers or other sites to which proteins may bind.
The 3′-untranslated region (3′-UTR) is a region of the DNA which is transcribed into mRNA and becomes the 3′-end of the message, but which does not contain protein coding sequence. Everything between the stop codon and the polyA tail is considered to be 3′-untranslated. The 3′-untranslated region may affect the translation efficiency of the mRNA or the stability of the mRNA. It also has sequences which are required for the addition of the poly(A) tail to the message, including the hexanucleotide AAUAAA.
See also
Sense (molecular biology)
Further reading
External links
A Molecular Biology Glossary
DNA
Molecular genetics
RNA | Directionality (molecular biology) | Chemistry,Biology | 1,574 |
59,814,480 | https://en.wikipedia.org/wiki/Directing%20group | In organic chemistry, a directing group (DG) is a substituent on a molecule or ion that facilitates reactions by interacting with a reagent. The term is usually applied to C–H activation of hydrocarbons, where it is defined as a "coordinating moiety (an 'internal ligand'), which directs a metal catalyst into the proximity of a certain C–H bond." In a well known example, the ketone group () in acetophenone is the DG in the Murai reaction.
The Murai reaction is related to directed ortho metalation, a reaction that is typically applied to the lithiation of substituted aromatic rings.
A wide variety of functional groups can serve as directing groups.
Transient directing groups
Since directing groups are ligands, their effectiveness correlates with their affinity for metals. Common functional groups such as ketones are usually only weak ligands and thus are often poor DGs. This problem is solved by the use of a transient directing group. Transient DGs reversibly convert weak DGs (e.g., ketones) into strong DGs (e.g., imines) via a Schiff base condensation. After the imine has served its role as a DG, it can hydrolyze, regenerating the ketone and the amine.
References
Organic reactions | Directing group | Chemistry | 283 |
2,739,669 | https://en.wikipedia.org/wiki/Surgical%20lubricant | Surgical lubricants, or medical lubricants, are substances used by health care providers to provide lubrication and lessen discomfort to the patient during certain medical and surgical procedures such as vaginal or rectal examinations. Some examples of surgically compatible lubricants are:
Surgilube is a surgical lubricant made of natural water-soluble gums that also contains the antiseptic chlorhexidine gluconate.
K-Y Jelly was initially used as a surgical lubricant before it gained popularity as a personal lubricant.
Lignocaine gel, which contains the local anaesthetic lignocaine, is a prime example of a non-irritating substance used as a surgical lubricant.
Medicinal castor oil was the original vegetable-based surgical lubricant.
Indications for medical lubricants include Sjögren syndrome, specifically for treating vaginal dryness, dyspareunia (painful sexual intercourse), and vulvodynia (vaginal pain).
References
Medical equipment | Surgical lubricant | Biology | 215 |
5,106,007 | https://en.wikipedia.org/wiki/Agonism | Agonism (from Greek 'struggle') is a political and social theory that emphasizes the potentially positive aspects of certain forms of conflict. It accepts a permanent place for such conflict in the political sphere, but seeks to show how individuals might accept and channel this conflict positively. Agonists are especially concerned with debates about democracy, and the role that conflict plays in different conceptions of it. The agonistic tradition to democracy is often referred to as agonistic pluralism. A related political concept is that of countervailing power. Beyond the realm of the political, agonistic frameworks have similarly been utilized in broader cultural critiques of hegemony and domination, as well as in literary and science fiction.
Theory of agonism
There are three elements shared by most theorists of agonism: constitutive pluralism, a tragic view of the world, and a belief in the value of conflict. Constitutive pluralism holds that there is no universal measure for adjudicating between conflicting political values. For example, Chantal Mouffe argues, following Carl Schmitt, that politics is built on the distinction of "us" and "them." Based on this, agonists also believe in "a tragic notion of the world without hope of final redemption from suffering and strife," which cannot find a lasting political solution for all conflicts. Instead, agonists see conflict as a political good. For example, Mouffe argues that "In a democratic polity, conflicts and confrontations, far from being a sign of imperfection, indicate that democracy is alive and inhabited by pluralism."
Agonism is not simply the undifferentiated celebration of antagonism.
Bonnie Honig, an advocate of agonism, writes: "to affirm the perpetuity of the contest is not to celebrate a world without points of stabilization; it is to affirm the reality of perpetual contest, even within an ordered setting, and to identify the affirmative dimension of contestation." In her book Political Theory and the Displacement of Politics, she develops this notion through critiques of consensual conceptions of democracy. Arguing that every political settlement engenders remainders to which it cannot fully do justice, she draws on Nietzsche and Arendt, among others, to bring out the emancipatory potential of political contestation and of the disruption of settled practices. Recognizing, on the other hand, that politics involves the imposition of order and stability, she argues that politics can neither be reduced to consensus, nor to pure contestation, but that these are both essential aspects of politics.
William E. Connolly is one of the founders of this school of thought in political theory. He promotes the possibility of an "agonistic democracy," where he finds positive ways to engage certain aspects of political conflict. Connolly proposes a positive ethos of engagement, which could be used to debate political differences. Agonism is based on contestation, but in a political space where the discourse is one of respect, rather than violence. Unlike toleration, agonistic respect actively engages adversaries in political contests over meaning and power. Unlike antagonism, it shows respect by admitting the ultimate contestability of even one's own deepest held commitments. Agonism is a practice of democratic engagement that destabilizes appeals to authoritative identities and fixed universal principles. Connolly's critical challenges to John Rawls's theory of justice and Jürgen Habermas's theory on deliberative democracy have spawned a host of new literature in this area. His work Identity\Difference (1991) contains an exhaustive look at positive possibilities via democratic contestation.
Liberalism
Agonistic pluralism
Agonistic pluralism, also referred to as "agonistic democracy," is primarily framed as an agonistic alternative to Habermasian models of deliberative democracy. Theorists of agonistic pluralism, including post-modernist thinkers Chantal Mouffe, Ernesto Laclau, and William E. Connolly, reject the Habermasian notion of a rational universal consensus that can be reached through deliberation alone. Reaching a singular rational consensus would require that all parties endorse the same starting ethico-political principles. Yet, in multicultural pluralist societies, agonistic pluralists contend that this will never truly be the case, since divergent social identities will create irreconcilable differences between individuals. It is argued that Habermasian models of deliberative democracy are ill-equipped for pluralist societies, since they simply put forward new paradigms of liberal democratic theory, which rely on the same rationalistic, universalistic, and individualistic theoretical frameworks.
Furthermore, agonistic pluralists argue that power cannot be relegated solely to the private sphere, and power hierarchies will necessarily be replicated in public deliberative processes. This makes it such that any "consensus" relies on forms of social domination and necessitates the exclusion of certain interests. Many of these agonistic thinkers point to the ideological entrenchment of global neoliberalism as evidence of how presumed consensus can reinforce hegemony and preclude opposition. The strong influence of Antonio Gramsci in agonistic theory can be seen here, primarily with his theory of cultural hegemony and his claim that any established consensus or norm is reflective of broader structures of power. Thus, for agonistic pluralists, if reason alone cannot yield a legitimate uniform consensus, and power imbalances can never truly be removed from the public sphere, then one must accept the inevitability of conflict in the political realm.
Rather than attempting to wholly eliminate conflict in the political, which agonistic pluralists maintain is conceptually impossible, agonistic pluralism is the model of democracy which attempts to mobilize these passions "towards the promotion of democratic designs." Agonistic pluralists emphasize how the construction of group identities relies on a continuous "other"; this us/them conflict is inherent to politics, and it should be the role of democratic institutions to mitigate such conflicts. The role of agonistic pluralism is to transform antagonistic sentiments into agonistic ones. As Mouffe writes, "this presupposes that the 'other' is no longer seen as an enemy to be destroyed, but somebody with whose ideas we are going to struggle but whose right to defend those ideas we will not put into question." Agonistic pluralists view this conversion of "enemies" into "adversaries" as being fundamental to well-functioning democracies and the only way to properly limit domination.
Criticisms of agonistic pluralism
One criticism of agonistic pluralism is that, in its rejection of deliberative democracy, it inadvertently relies on the same fundamental presuppositions of rational consensus. Andrew Knops argues that agonistic pluralists, such as Chantal Mouffe, assert a "single, universal characterization of the political" in their depiction of the political as a realm of ineradicable antagonism and conflict. For Knops, this universalistic description of the political undermines agonistic pluralists' post-structuralist critiques of rational argumentation. Others build on this criticism, arguing that agonists' focus on passions, power, and the limits of reason ultimately reduces the persuasive capacity of their political and social theories, which remain largely reliant on the process of rationalization.
Another criticism of agonistic pluralism is its failure to provide a real avenue through which antagonism can be transformed into agonism, or enemies into adversaries. Agonistic pluralists maintain that, in order to mediate antagonism, all parties must share some ethico-political principles. For instance, a successful agonistic pluralism requires that all parties share commitments to democratic ideals such as "equality" and "liberty," although the contents of these normative conceptions can vary greatly across groups. Yet, it is argued by critics of agonistic pluralism that, on the one hand, if parties share the same ethico-political principles, then consensus need not be precluded by ineradicable conflict. On the other hand, if individuals do not share the ethico-political principles needed to reach a consensus, then critics argue there is little reason to believe that antagonism can be transformed into anything less hostile. Under a framework in which there are no shared ethico-political commitments, there is also no normative basis for prohibiting the use of political violence. Finally, critics contend that this lack of common understanding not only problematizes the transformation of antagonism into something else, but it further contradicts the essence of antagonism itself. It is argued that deliberation is constitutive of conflict, insofar as antagonism requires a certain degree of understanding of the "other" and an ability to use shared speech acts to explain points of divergence with opposing parties; this becomes difficult to do under an agonistic framework.
Critical conceptions
Other works have invoked conceptions of agonism and the agon in a more critical sense beyond that of political counter-hegemony. This usage of agonism has been explored at some length by Claudio Colaguori in his book Agon Culture: Competition, Conflict and the Problem of Domination. According to Colaguori, "the agon is literally the arena of competition, the scene of contest, and the locus of adversarial conflict." He continues, writing "The philosophy of agonism affirms the idea that transcendence, truth, and growth are generated from the outcome of the contest...the concept of agonism is often understood in an affirmative sense as the generative principle of economy, society and even natural ecology and personal growth... The ambivalent character of agonism is that it is often seen as a mode of transcendence, while its instrumental relation to the mode of destruction is rarely acknowledged."
For Theodor Adorno, agonism is also about the "theodicy of conflict" where opponents "want to annihilate one another... to enter the agon, each the mortal enemy of each." Agonism forms part of the instituted social order where society "produces and reproduces itself precisely from the interconnection of the antagonistic interests of its members." Adorno also sees agonism as the underlying principle in Hegel's dialectic of history where "dialectics" (i.e., growth through conflict) is the ontology of the wrong state of things. The right state of things would be free of them: "neither a system nor a contradiction." Colaguori reconstructs the concept of the agon to invoke this critical, destructive aspect as a way of extending Adorno's critique of modern domination and to identify how the normalization and naturalization of conflict is used as an ideology to justify various forms of domination and subjugation. The agonistic ideology that has been appropriated by popular culture for example makes use of agonistic themes to celebrate competition as the wellspring of life in such a way as to normalize "a military definition of reality."
The critical conception of agonism developed by Colaguori and Adorno emphasizes how aspects of competition can be utilized to reinforce the project of domination that is evident in the geopolitics of modernity. Colaguori suggests that a critical conception of agonism can be applied to the study of "numerous forms of social conflict in gender, class and race relations where the competitive mode of interaction prevails in the formation of social hierarchies based on competition as a form of exclusion." Colaguori further states that, "after 100 years of technological progress, human societies are trapped in a perpetual dynamic of conflict and crisis, with modernization at a standstill. While this dialectic of development and destruction has been analysed from political and economic perspectives, Agon Culture offers an analysis of the human condition through an examination of the way in which the cultural ideology of competition operates as a mode of rationality that underpins the order of domination."
Agonism in fiction
The science fiction novel Lady of Mazes by Karl Schroeder depicts a post-human future where "agonistics" is the ruling principle of the solar system. The story explains agonistics by writing, "You can compete, and you can win, but you can never win once-and-for-all." A character gives two examples of agonism: a presidency with term limits, and laws aimed at preventing corporate monopolies.
See also
Moral relativism
Pluralism
Value pluralism
Radical democracy
Gaetano Mosca
References
Political theories
Conflict (process)
Pluralism (philosophy) | Agonism | Biology | 2,647 |