British high-tech architecture

British high-tech architecture is a form of high-tech architecture, also known as structural expressionism, a type of late modern architectural style that emerged in the 1970s, incorporating elements of high-tech industry and technology into building design. High-tech architecture grew from the modernist style, using new advances in technology and building materials.
Clarification
British high-tech architecture is a term applied principally to the work of a group of London-based architects who, following the teachings of the Architectural Association's futuristic programmes, created an architectural style characterised by cultural and design ideals of component-based, lightweight, easily transportable, factory-finished buildings using standardised, interchangeable, highly engineered parts: fun, popular and spontaneous pop-up buildings.
Within the Architectural Association were a number of overlapping spheres of influence – the most notable being Archigram, a loosely arranged group including Peter Cook (responsible for Plug-in City and Instant City), Michael Webb (Sin Centre) and Ron Herron (Walking City). Alongside Archigram were the mechanistic schemes of Cedric Price, who, with engineer Frank Newby, designed a number of unbuilt projects, most notably the Fun Palace, a community theatre to a brief by Joan Littlewood, and the Potteries Thinkbelt, a scheme which would re-use decommissioned railway routes to create a university on wheels. Price also promoted the idea of architecture having a fourth dimension: time. In addition to the aforementioned was the Independent Group, an art movement which influenced the British side of pop art through architectural luminaries such as Peter Smithson, a head of the Architectural Association, and Colin St John Wilson.
The British high-tech movement remained in the ascendancy from the 1960s until 1984, when an intervention by HRH Charles, Prince of Wales over a competition-winning design by ABK Architects (previously Ahrends, Burton and Koralek) for an extension to the National Gallery in London signalled an end to High Tech architecture in the UK. Moreover, from that date, the leading proponents of British High Tech distanced themselves from the High Tech style to endear themselves to sponsors; by doing so, they could continue to design buildings of national and international significance. In satisfying the demands of conservative clients, planners, conservationists and funding organisations, the essence of High Tech was lost.
This article, British high-tech architecture, traces the development of technological advances and industrial innovations that went hand-in-hand with the emergence of the High Tech style, and without which British high-tech architecture would have remained where it started – as the pop art imagery of Archigram, the most influential of the Architectural Association visionary groups.
Background
The history of lightweight, mass-produced, component-based dry construction, which as a means of assembly differentiates system building from traditional building methods, dates back to the 19th century. It started in the UK with Sir Joseph Paxton's newly created building methods for the conservatory at Chatsworth House, completed in 1840, and later at The Great Exhibition of 1851, when he used steam-powered woodworking machines to manufacture batches of identical components. Earlier, in 1829, Henry Robinson Palmer had patented corrugated iron, using his invention to construct a shed roof for the London Dock Company the following year.
Progress continued in another industry entirely: the lattice-framed trusses required for airships, developed by Barnes Wallis at Howden, Yorkshire during his work in the 1920s on the R100 airship, resulted in lightweight tubes made from helically wound duralumin strip.
Later, solutions to the housing shortage and the replacement of other war-decimated facilities required fresh thinking about factory-based rather than site-based building. The post-war construction of Arcon prefabs in the United Kingdom in large numbers, and of system-built schools such as those of the Consortium of Local Authorities Special Programme (CLASP), filtered through to building design in the form of High Tech system building. Generally, it has been engineering innovation that has given rise to architectural opportunity.
Between 1961 and 1967 in California, the SCSD (School Construction Systems Development) project offered architects and educationalists more options than had been available previously, providing greater column-free floor space by using longer spans, with flexible room layouts below. A deep structural zone, into which power, heating and ventilation, lighting and concertina partition tracks could be accommodated, reduced the need for the rigid, restrictive planning grids that had hampered earlier systems.
Further innovations – space-frame roof structures derived from WWII aircraft hangar roofs; Rectangular Hollow Section (RHS) steel (including Square Hollow Section), known in the US as Hollow Structural Section (HSS) and developed in the UK by Stewarts & Lloyds Ltd in the late 1950s and early 1960s; and advances in patent glazing during the same period, which allowed greater freedom in both wall and roof glazing – presented architects and their clients with near-unlimited flexibility in a building's planning, layout of accommodation and patterns of use.
The trend for lightweight dry construction also had its roots in fast-response military use, when administration, storage or workshop buildings might be required at short notice. The Nissen hut of WWI, and later the Quonset hut (a derivative of the Nissen design) developed during WWII, were both produced in large quantities. Notwithstanding these military origins, lightweight design principles were seized upon by the American architect and philosopher Richard Buckminster Fuller, who advocated the use of slender or tensile structural components as being less wasteful of Earth's scarce resources than their bulkier traditional counterparts. His message became something of a creed for the generation of High Tech architects. Fuller used well-engineered, batch-produced components in designs for his renowned geodesic domes, although use of the term is also attributed to Barnes Wallis for his geodetic fuselage design for the WWII Wellington bomber aircraft. German-born Konrad Wachsmann also taught the principles of this type of component-based building design at the USC School of Architecture.
Proponents of British high-tech architecture
Most architects associated with British high tech emerged from the Architectural Association; others worked in London at the offices of those who had. Some, like-minded, had come through the offices of modernist engineers such as Ove Arup and Felix Samuely, who believed in 'total design', an earlier term for 'multi-disciplinary' design. In addition, a small group of sympathetic structural engineers, including Frank Newby, Anthony Hunt, Ted Happold, Mark Whitby and Peter Rice, became essential to the development of the movement. As a result of this symbiotic association between architects and engineers, a freedom of design evolved away from the constraints of the everyday. Aside from the architectural and engineering impetus, there was a wider cultural involvement, as the principal proponents shared friendships centred upon art, writing and industrial design. Most operated as freelancers working in small studio home-offices, which became calling-cards identifying them with the High Tech style.
Michael Aukett (1938-2020)
Reyner Banham (1922-1988) Writer and critic
John Batchelor (illustrator) (1936–2019) Technical Illustrator – aircraft and other – Subjects include work by Foster
Misha Black (1910–1977) Contributor to patronage of 1951 Festival of Britain and to Design Research Unit (DRU)
Hugh Broughton (architect) (b. 1965) Formed Hugh Broughton Architects in 1995
Cuno Brullmann (b. 1945) Worked in association with Piano + Rogers and Ove Arup and Partners
Marcus Brumwell (1901–1977), a founder of Design Research Unit (DRU)
Richard Buckminster Fuller (1895–1983)
Hugh Casson (1910–1999) Director of Architecture for the 1951 Festival of Britain
Warren Chalk (1927–1988) Founding member of Archigram
Peter Cook (architect) (b. 1936) founding member of Archigram
Dennis Crompton (b. 1935) founding member and archivist of Archigram
Charles and Ray Eames (1907–1978, 1912–1988)
Ezra Ehrenkrantz (1932–2001) architect of the SCSD (School Construction Systems Development) project
Norman Foster (b. 1935) co-founder (1963) of Team 4
Wendy Foster (1937–1989) co-founder (1963) of Team 4
David Greene (architect) (b. 1937) Founding member of Archigram
Nicholas Grimshaw (b. 1939) Grimshaw Architects founded in 1980
Fritz Haller, designer of USM Modular Furniture
Ted Happold (1930–1996) Founded Buro Happold in 1976
Ron Herron (1930–1994) Founding member of Archigram
Andrew Holmes (b. 1947)
Michael Hopkins (architect) (b. 1935) Former partner at Foster Associates, set up Michael Hopkins Architects in 1976
Patty Hopkins (b. 1942) Cofounder of Michael Hopkins Architects in 1976, completed Hopkins House, Hampstead in the same year
Richard Horden (1944–2018)
John Howard (architect)
Anthony Hunt (b. 1932) Formed Anthony Hunt Associates in 1962
Ben Johnson (artist) (b. 1946) Subjects include architectural works by Foster and Rogers
Jan Kaplický (1937–2009) Drawings of Neo futuristic Architecture
Ian Liddell (b. 1938)
Syd Mead (1933–2019) Artist specialising in Neo futuristic imagery – subjects include concept work for the 1982 movie Blade Runner
Max Mengeringhausen, founder (1948) of Mero Structures, now named Mero-Schmidlin
John Miller (b. c. 1930) Formed partnership with Alan Colquhoun in 1961
Hidalgo Moya (1920–1994) Formed partnership with Philip Powell (architect) in 1948
Edric Neel (1914–1952) Through Arcon sought better links between architects and industry
Brendan Neiland (artist) (b. 1941) Subjects include architectural works by Grimshaw and Rogers
Frank Newby (1926–2001)
Constant Nieuwenhuys (1920–2005)
David Nixon (architect) (b. 1947) Future Systems founded in 1979 by Kaplický and Nixon while working at Foster Associates
Frei Otto (1925–2015)
Renzo Piano (b. 1937) Formed the partnership Piano + Rogers in 1971
Jean Prouvé (1901–1984)
Cedric Price (1934–2003) "Unconventional and visionary architect best-known for buildings which never saw the light of day"
Peter Rice (1935–1992) Joined Ove Arup & Partners in 1956
Ian Ritchie (architect) (b. 1947) Worked for Foster Associates and with Hopkins/Hunt on SSSALU (short span structures in aluminium)
Richard Rogers (1933–2021) Co-founder (1963) of Team 4; partnership with Piano before founding the Richard Rogers Partnership
Su Rogers (b. 1939) Co-founder (1963) of Team 4; partner in Miller & Colquhoun Architects, later John Miller & Partners
Walter Segal (1907–1985) Pioneer of self-build housing to the Segal self-build method
Rod Sheard (b. 1951) In 1998 Sheard's firm LOBB Sports Architecture (formerly Howard V Lobb & Partners) merged with HOK Sport.
Alison and Peter Smithson (1928–1993 and 1923–2003) Pioneers of the industrial aesthetic
Basil Spence (1907–1976) Designer of a bolt-together pavilion for the Festival of Britain
Colin Stansfield Smith (1932–2013) Hampshire County Architect and patron
Ralph Tubbs (1912–1996) Designer of a bolt-together pavilion for the Festival of Britain
Konrad Wachsmann (1901–1980)
Derek Walker (1929–2015) Architect and Patron for Milton Keynes Development Corporation
Michael Webb (architect) (b. 1937) Co-founder of Archigram
Mark Whitby (b. 1950) Worked, early in his career, for Anthony Hunt Associates and Buro Happold
John Winter (architect) (1930–2012) Writer and critic
Georgina Wolton (d. 2021)
Noteworthy architectural practices
Powell & Moya (architectural practice formed 1948)
Howard V Lobb & Partners (architectural practice formed 1950), merged with HOK (architectural practice founded 1955); the combined sports practice was renamed Populous in 2009
Building Design Partnership (BDP) (architectural practice founded 1961)
Williamson Faulkner Brown (architectural practice), now named FaulknerBrowns Architects (from 2013)
Gillinson Barnett & Partners (architectural practice formed 1970), now named Barnett & Partners
Contemporary imagery
In austere post-World War II Britain, illustrations associated with comic-book heroes, science-fiction writing, the aircraft and aerospace industries, and military hardware such as the Bailey bridge provided inspirational imagery for the British High Tech architects.
Furthermore, in 1951 the Festival of Britain, intended to lift the spirits of the nation following the austerity of WWII, brought together under the architectural directorship of Hugh Casson a group of leading architects and engineers to create a series of mainly temporary exhibition buildings, located primarily on the South Bank in London.
Most of all, in 1969, Apollo 11 and its Lunar Module pointed the way towards lightweight, exoskeletal, transient structures free from conventional building limitations. Science-fiction images from Paolo Soleri, Georgii Krutikov, Buckminster Fuller, Robert McCall, Syd Mead and, significantly, the British author Arthur C. Clarke (whose 1948 short story, first published in 1951 as "Sentinel of Eternity", was used as a starting point for the 1968 novel and film 2001: A Space Odyssey) provided a rich source of inspiration for the High Tech movement.
High Tech Buildings for leisure
Wide-span, column-free dome, cuboid and pyramid-shaped building envelopes provided flexibility for internal layout and use patterns. Dutch architect Constant Nieuwenhuys, in New Babylon, his long-running work of drawings and writings from 1959–1974 (not yet called High Tech), foresaw a fictitious world in which the pursuit of pleasure and play, rather than work, had become the mainstay of everyday life for society's élite.
In the 1970s, UK local authorities, both at seaside locations and as part of urban-regeneration initiatives, sought to recreate the fun attractions of sunbathing and swimming in artificially created waves. Out-of-London architects Gillinson Barnett & Partners (Leeds) and Williamson Faulkner Brown Architects (Newcastle upon Tyne) were leaders in this form of design, with schemes including Summerland on the Isle of Man (destroyed by fire two years after opening), the Sun Centre, Rhyl, North Wales (now demolished), the Oasis Leisure Centre, Swindon, and Bletchley Leisure Centre in Milton Keynes (now demolished). Only the Oasis Leisure Centre remains as an example of this building type, and it is itself presently under threat of demolition.
Industrial aesthetic
Factory-finished components, brought to site and bolted together, provided uniformity in appearance and standardisation which would allow components to be replaced or reconfigured. Typical of this design trend was the use of a Braithwaite water tank by the Smithsons in their designs for Hunstanton Secondary Modern School in Norfolk UK.
Industrialisation
Industrial components, batch-produced in factories using newly invented materials or new manufacturing processes, allowed the construction and assembly of High Tech buildings to move forward.
Technology transfer
Using component-based, lightweight, factory-finished construction with standardised, interchangeable, highly engineered parts as a template for High Tech building, technologies developed in allied industries such as boatbuilding, vehicle manufacture and cold storage were in due course transferred to British High Tech architecture.
Selected works and projects
Use of computer-aided design
The use of computer-aided design (CAD) for 3D modelling, and therefore as a basic tool for architectural design, emerged during the 1990s. Prior to that date, CAD had been used to a limited extent in structural analysis and as a means of managing and recording traditional drawings. 1983 saw the first 2D AutoCAD software designed for PC use. Earlier (c. 1975), "the architects (Gillinson Barnett & Partners) had to devise a computer programme to deal with the large number of components (in the Oasis Leisure Centre dome roof, Swindon), and the 'frame analysis' was reportedly handled by a NASA computer at Houston". In c. 1984, Ove Arup and Partners produced computer-generated 3D modelling of the roof membrane of the Schlumberger Gould Research Centre, Cambridge.
With the widespread advance of IT (the use of computers to store or retrieve data), CAD quickly became the essential tool of architectural and engineering design. Anthony Hunt is on record as saying "... that it was only possible to design and construct the huge biodomes of the Eden Project... because of advances made in computer modelling techniques".
Equality of opportunity
With the Sex Discrimination Act 1975, which led the way to the establishment of the Equal Opportunities Commission, parity between men and women in pay and opportunity became enshrined in law. This coincided with a group of women, such as Alison Smithson, Wendy Foster, Su Rogers, Georgina Wolton and Patty Hopkins, establishing themselves as equals in what had until then been a predominantly male profession.
The post-mid-1980s reversion to technological modernism
The mid-1980s saw not only the damning "monstrous carbuncle on the face of a much-loved and elegant friend" speech by HRH Charles, Prince of Wales, but also the deaths of several key proponents of British High Tech architecture, among them Buckminster Fuller (1983), Jean Prouvé (1984), Walter Segal (1985) and Reyner Banham (1988), each of whom was significant for his teachings as well as for his building designs.
Use of high-tech methodology for sports stadia
Following the Taylor Report, the Home Office report of the public inquiry into the Hillsborough stadium disaster of 15 April 1989, a new generation of all-seater football stadia became the norm for top-division football clubs in the UK. The architects the Lobb Partnership (formerly Howard V Lobb & Partners), in conjunction with The Sports Council, promoted designs for "A Stadium for the Nineties", giving rise to a new generation of UK football grounds, the first of which was the Kirklees Stadium, Huddersfield. Rod Sheard, principal of the Lobb Partnership (later known as Lobb Sports Architecture), designed a series of sports venues using High Tech methodology, such as retractable roofs and flamboyant exposed steel structures.
Sustainability
Richard Buckminster Fuller's question "How much does your building weigh?" expressed his philosophy of lightweight building, which in turn reduced wastefulness and therefore conserved Earth's precious resources. He backed up this concept with his Dymaxion Map, launched as "World Game: a unique experiment to develop a computer coordinated model of planet earth to research world resources and develop ways of running the future for the benefit of mankind".
The Legacy of British High Tech
In the worlds of science fiction and space travel, and in areas of extreme climatic conditions on Earth, the imagery of British High Tech architecture endures in real projects as well as imagined ones. A series of buildings and design-competition entries for the Halley Research Station at Halley Bay, Antarctica, and the Ski Haus by Richard Horden/Anthony Hunt, derive solutions for extremes of climate from High Tech imagery. David Nixon promotes similar interests in the "Design, Construction and Operation of Buildings and Habitats in Extreme Environments" and in "a book entitled 'Architecture of the International Space Station' – the first book to examine the Station from an architectural viewpoint". Hugh Broughton, one of the world's leading designers of polar research facilities, including Halley VI, takes the High Tech concept further with designs for 'Building a Martian House', an exhibition in Bristol led by local artists Ella Good and Nicki Kent.
In 2015 Foster + Partners were shortlisted finalists in the 3D Printed Habitat Challenge, organized by America Makes and NASA, submitting designs for a Mars settlement. Steve Burg's concept art for The Martian (2015) supposes accommodation modules on supporting legs (stilts), reminiscent of the lightweight, component-based, bolt-together designs of the 1970s and 1980s, such as Rogers' Zip-Up House, designed between 1967 and 1969 for The House of Today competition, and the aforementioned Hugh Broughton polar research stations.
Archigram were awarded the RIBA Royal Gold Medal in 2002. Other recipients of this prestigious award relevant to this article are (in reverse date order): Sir Nicholas Grimshaw (2019), Frei Otto (2005), Michael and Patricia Hopkins (1994), Peter Rice (1992), Colin Stansfield Smith (1991), Renzo Piano (1989), Sir Richard Rogers (1985), Sir Norman Foster (1983), Charles and Ray Eames (1979), Powell and Moya (1974) and Buckminster Fuller (1968), demonstrating that the legacy of the proponents of British high-tech architecture has remained at the forefront of architectural pioneering well into the twenty-first century.
References
High-tech architecture
Prefabricated buildings
Dogeza

Dogeza is an element of traditional Japanese etiquette which involves kneeling directly on the ground and bowing to prostrate oneself while touching one's head to the floor. It is used to show deference to a person of higher status, as a deep apology, or to express the desire for a favor from that person.
The term is also used in Japanese politics, in an expression translated as "kowtow diplomacy" or "kowtow foreign policy". In general, dogeza is translated into English as "prostration" or "kowtow".
The meaning of performing dogeza
In the Japanese social consciousness, the act of kneeling on the ground and prostrating oneself (dogeza) is an uncommon form of deference, used only when one has deviated greatly from expected behavior. It is seen as part of etiquette and as an expression of remorse for troubling the other person. By performing dogeza and apologizing, one usually inclines the other person to forgive.
History
In the Gishiwajinden (魏志倭人伝), the oldest Chinese record of encounters with the Japanese, it is mentioned that commoners of the ancient Yamataikoku would, upon meeting noblemen along the road, fall prostrate on the spot, clapping their hands as in prayer (kashiwade, 柏手); this is believed to have been an old Japanese custom.
The haniwa of the Kofun period can be seen prostrating themselves in dogeza.
It is popularly believed that in the early modern period commoners were required to perform dogeza as a daimyō's procession passed by, but this is incorrect. It was, however, normal for common people to perform dogeza when being interviewed by their superiors.
Even now, the sense of shame attached to dogeza remains firmly rooted, even when it is performed as a method of self-protection or apology in which damage to one's own image is disregarded.
See also
Kowtow
Sujud
Japanese culture
Genuflection
Prostration
Prostration (Buddhism)
Bowing in Eastern Orthodox Church tradition
References
Culture of Japan
Human positions
Gestures of respect
Kneeling
ECOSTRESS

ECOSTRESS (Ecosystem Spaceborne Thermal Radiometer Experiment on Space Station) is an ongoing scientific experiment in which a radiometer mounted on the International Space Station (ISS) measures the temperature of plants growing in specific locations on Earth over the course of a solar year. These measurements give scientists insight into the effects of events like heat waves and droughts on crops.
ECOSTRESS radiometer
The instrument that collects this data is a multispectral thermal infrared radiometer. It measures temperatures on the surface of the Earth, rather than surface air temperature. Dr. Simon Hook is the principal investigator of the ECOSTRESS mission and Dr. Joshua Fisher is the science lead; both are located at NASA's Jet Propulsion Laboratory (JPL). ECOSTRESS data is archived at the Land Processes Distributed Active Archive Center (LP DAAC), a data center managed by the United States Geological Survey (USGS). ECOSTRESS data is discoverable through various platforms, including LP DAAC's AppEEARS (Application for Extracting and Exploring Analysis Ready Samples) tool, which allows users to quickly subset and reproject data into a geographic lat/long format. The data collected is also published via the open-access TERN Data Discovery Portal in Australia.
The ECOSTRESS radiometer was built at JPL and consists of five spectral bands in the thermal infrared (8–12 microns) and one band in the shortwave infrared, which is used for geolocation. ECOSTRESS was delivered to the ISS by a SpaceX Dragon after a launch from Cape Canaveral, Florida on 29 June 2018. The Dragon arrived at the space station on 3 July 2018, and the radiometer was mounted on the station's Kibo module. The radiometer made up a sizeable portion of the cargo on board the Dragon, which also included spare parts for the Canadarm2 robotic arm, as well as other equipment and supplies.
The high-resolution images have a pixel size of 70 meters by 38 meters (225 feet by 125 feet).
Key science questions
The key science questions that ECOSTRESS is addressing include:
How is the terrestrial biosphere responding to changes in water availability?
How do changes in diurnal vegetation water stress impact the global carbon cycle?
Can agricultural vulnerability be reduced through advanced monitoring of agricultural water consumptive use and improved drought estimation?
Other uses
Image data helps capture and quantify the temperature differences between man-made and natural surfaces. JPL released a report highlighting a 10 June 2022 record high air temperature in Las Vegas, NV of 43 °C (109 °F) and the corresponding ground temperatures. For instance, asphalt surfaces reached 50 °C (122 °F), while suburban neighborhood surfaces reached 42 °C (108 °F) and green spaces measured 37 °C (99 °F).
Team Members
The original ECOSTRESS Science Team included Dr. Glynn Hulley (JPL) and scientists at the U.S. Department of Agriculture, including Dr. Andrew French and Dr. Martha Anderson. Other science team members include Drs. Eric Wood (Princeton), Rick Allen (University of Idaho), and Chris Hain (NASA Marshall Space Flight Center). ECOSTRESS is the first Earth Venture mission to establish an Early Adopters Program, which provided its members with early access to provisional data and opportunities to collaborate with other ECOSTRESS users in a Slack channel. As of August 2019, the Early Adopters Program has transitioned to the ECOSTRESS Community of Practice, with over 250 members.
Science data products
Science data products produced by ECOSTRESS include estimates of land-surface temperature and emissivity, evapotranspiration, evaporative stress and water-use efficiency.
See also
Effects of climate change on plant biodiversity
Effects of global warming
Hardiness (plants)
Scientific research on the International Space Station
Water scarcity
External links
JPL ECOSTRESS
References
Biology experiments
Electromagnetic radiation meters
International Space Station experiments
Radiometry
Lunar phase

A lunar phase or Moon phase is the apparent shape of the Moon's directly sunlit portion as viewed from the Earth. Because the Moon is tidally locked with the Earth, the same hemisphere is always facing the Earth. In common usage, the four major phases are the new moon, the first quarter, the full moon and the last quarter; the four minor phases are waxing crescent, waxing gibbous, waning gibbous, and waning crescent. A lunar month is the time between successive recurrences of the same phase: due to the eccentricity of the Moon's orbit, this duration is not perfectly constant but averages about 29.5 days.
The appearance of the Moon (its phase) gradually changes over a lunar month as the relative orbital positions of the Moon around Earth, and Earth around the Sun, shift. The visible side of the Moon is sunlit to varying extents, depending on the position of the Moon in its orbit, with the sunlit portion varying from 0% (at new moon) to nearly 100% (at full moon).
Phases of the Moon
There are four principal (primary, or major) lunar phases: the new moon, first quarter, full moon, and last quarter (also known as third or final quarter), when the Moon's ecliptic longitude is at an angle to the Sun (as viewed from the center of the Earth) of 0°, 90°, 180°, and 270° respectively. Each of these phases appears at slightly different times at different locations on Earth, and tabulated times are therefore always geocentric (calculated for the Earth's center).
Between the principal phases are intermediate phases, during which the apparent shape of the illuminated Moon is either crescent or gibbous. On average, the intermediate phases last one-quarter of a synodic month, or 7.38 days.
The term waxing is used for an intermediate phase when the Moon's apparent shape is thickening, from new to full moon, and waning when the shape is thinning. The duration from full moon to new moon (or new moon to full moon) varies from one lunation to the next, around a mean of about 14.77 days.
Due to lunar motion relative to the meridian and the ecliptic, in Earth's northern hemisphere:
A new moon appears highest at the summer solstice and lowest at the winter solstice.
A first-quarter moon appears highest at the spring equinox and lowest at the autumn equinox.
A full moon appears highest at the winter solstice and lowest at the summer solstice.
A last-quarter moon appears highest at the autumn equinox and lowest at the spring equinox.
Non-Western cultures may use a different number of lunar phases; for example, traditional Hawaiian culture has a total of 30 phases (one per day).
Lunar libration
As seen from Earth, the Moon's eccentric orbit causes it both to change slightly in apparent size and to be seen from slightly different angles. The effect is subtle to the naked eye from night to night, yet somewhat obvious in time-lapse photography.
Lunar libration causes part of the back side of the Moon to be visible to a terrestrial observer some of the time. Because of this, around 59% of the Moon's surface has been imaged from the ground.
Principal and intermediate phases of the Moon
Waxing and waning
When the Sun and Moon are aligned on the same side of the Earth (conjunct), the Moon is "new", and the side of the Moon facing Earth is not illuminated by the Sun. As the Moon waxes (the amount of illuminated surface as seen from Earth increases), the lunar phases progress through the new moon, crescent moon, first-quarter moon, gibbous moon, and full moon phases. The Moon then wanes as it passes through the gibbous moon, third-quarter moon, and crescent moon phases, before returning back to new moon.
The terms old moon and new moon are not interchangeable. The "old moon" is a waning sliver (which eventually becomes undetectable to the naked eye) until the moment it aligns with the Sun and begins to wax, at which point it becomes new again. Half moon is often used to mean the first- and third-quarter moons, while the term quarter refers to the extent of the Moon's cycle around the Earth, not its shape.
When an illuminated hemisphere is viewed from a certain angle, the portion of the illuminated area that is visible will have a two-dimensional shape as defined by the intersection of an ellipse and circle (in which the ellipse's major axis coincides with the circle's diameter). If the half-ellipse is convex with respect to the half-circle, then the shape will be gibbous (bulging outwards), whereas if the half-ellipse is concave with respect to the half-circle, then the shape will be a crescent. When a crescent moon occurs, the phenomenon of earthshine may be apparent, where the night side of the Moon dimly reflects indirect sunlight reflected from Earth.
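The geometry described above is commonly summarised by the standard illuminated-fraction formula; this is an added illustration, not taken from the article. If α is the phase angle (the Sun–Moon–Earth angle), the sunlit fraction k of the disc seen from Earth is:

```latex
k = \frac{1 + \cos\alpha}{2}
```

This gives k = 0 at new moon (α = 180°), k = 1/2 at the quarters (α = 90°), and k = 1 at full moon (α = 0°), matching the range from 0% to nearly 100% quoted earlier.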
Orientation by latitude
In the Northern Hemisphere, if the left side of the Moon is dark, then the bright part is thickening, and the Moon is described as waxing (shifting toward full moon). If the right side of the Moon is dark, then the bright part is thinning, and the Moon is described as waning (past full and shifting toward new moon). Assuming that the viewer is in the Northern Hemisphere, the right side of the Moon is the part that is always waxing. (That is, if the right side is dark, the Moon is becoming darker; if the right side is lit, the Moon is getting brighter.)
In the Southern Hemisphere, the Moon is observed from a perspective inverted, or rotated 180°, to that of the Northern and to all of the images in this article, so that the opposite sides appear to wax or wane.
Closer to the Equator, the lunar terminator will appear horizontal during the morning and evening. Since the above descriptions of the lunar phases only apply at middle or high latitudes, observers moving towards the tropics from northern or southern latitudes will see the Moon rotated anti-clockwise or clockwise with respect to the images in this article.
The lunar crescent can open upward or downward, with the "horns" of the crescent pointing up or down, respectively. When the Sun appears above the Moon in the sky, the crescent opens downward; when the Moon is above the Sun, the crescent opens upward. The crescent Moon is most clearly and brightly visible when the Sun is below the horizon, which implies that the Moon must be above the Sun, and the crescent must open upward. This is therefore the orientation in which the crescent Moon is most often seen from the tropics. The waxing and waning crescents look very similar. The waxing crescent appears in the western sky in the evening, and the waning crescent in the eastern sky in the morning.
Earthshine
When the Moon (seen from Earth) is a thin crescent, Earth (as viewed from the Moon) is almost fully lit by the Sun. Often, the dark side of the Moon is dimly illuminated by indirect sunlight reflected from Earth, but is bright enough to be easily visible from Earth. This phenomenon is called earthshine, sometimes picturesquely described as "the old moon in the new moon's arms" or "the new moon in the old moon's arms".
Timekeeping
Archaeologists have reconstructed methods of timekeeping that go back to prehistoric times, at least as old as the Neolithic. The natural units for timekeeping used by most historical societies are the day, the solar year and the lunation. The first crescent of the new moon provides a clear and regular marker in time, and pure lunar calendars (such as the Islamic Hijri calendar) rely completely on this metric. The fact, however, that a year of twelve lunar months is ten or eleven days shorter than the solar year means that a lunar calendar drifts out of step with the seasons. Lunisolar calendars resolve this issue with a year of thirteen lunar months every few years, or by restarting the count at the first new (or full) moon after the winter solstice. The Sumerian calendar is the first recorded to have used the former method; the Chinese calendar uses the latter, despite delaying its start until the second or even third new moon after the solstice. The Hindu calendar, also a lunisolar calendar, further divides the month into two fourteen-day periods that mark the waxing moon and the waning moon.
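The ten-or-eleven-day shortfall mentioned above follows directly from the mean month length (an added arithmetic check, using the mean synodic month quoted later in this article):

```latex
12 \times 29.53\ \text{days} \approx 354.4\ \text{days},
\qquad
365.25\ \text{days} - 354.4\ \text{days} \approx 10.9\ \text{days}.
```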
The ancient Roman calendar was broadly a lunisolar one; on the decree of Julius Caesar in the first century BCE, Rome changed to a solar calendar of twelve months, each of a fixed number of days except in a leap year. This, the Julian calendar (slightly revised in 1582 to correct the leap year rule), is the basis for the Gregorian calendar that is almost exclusively the civil calendar in use worldwide today.
Calculating phase
Each of the four intermediate phases lasts approximately seven days (7.38 days on average), but varies ±11.25% due to lunar apogee and perigee.
The number of days counted from the time of the new moon is the Moon's "age". Each complete cycle of phases is called a "lunation".
The approximate age of the Moon, and hence the approximate phase, can be calculated for any date by calculating the number of days since a known new moon (such as 1 January 1900 or 11 August 1999) and reducing this modulo 29.53059 days (the mean length of a synodic month). The difference between two dates can be calculated by subtracting the Julian day number of one from that of the other, or there are simpler formulae giving (for instance) the number of days since 31 December 1899. However, this calculation assumes a perfectly circular orbit and makes no allowance for the time of day at which the new moon occurred, and therefore may be incorrect by several hours. (It also becomes less accurate the larger the difference between the required date and the reference date.) It is accurate enough to use in a novelty clock application showing lunar phase, but specialist usage taking account of lunar apogee and perigee requires a more elaborate calculation. Also, due to lunar libration, it is possible over time to see somewhat more than half of the lunar surface, including small portions of the far side.
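As a rough illustration of the calculation described above, the following Python sketch computes the Moon's approximate age from a known new-moon epoch (here 11 August 1999, one of the reference dates mentioned). The epoch time of day is an assumption, and the phase-name bucketing is an illustrative convention, so the result carries the several-hour uncertainty noted in the text.

```python
from datetime import datetime

SYNODIC_MONTH = 29.53059  # mean length of a synodic month, in days

# Reference new moon: 11 August 1999 (time of day approximated to midnight;
# as noted above, this introduces an error of up to several hours).
EPOCH = datetime(1999, 8, 11)

def moon_age(date: datetime) -> float:
    """Approximate age of the Moon in days (0 = new moon)."""
    days_since_epoch = (date - EPOCH).total_seconds() / 86400.0
    return days_since_epoch % SYNODIC_MONTH

def phase_name(age: float) -> str:
    """Map the Moon's age to one of the eight common phase names."""
    # Each of the eight phases spans 1/8 of the synodic month,
    # centred on the principal phases at 0, 1/4, 1/2 and 3/4 of the cycle.
    index = int((age / SYNODIC_MONTH) * 8 + 0.5) % 8
    return ["new moon", "waxing crescent", "first quarter",
            "waxing gibbous", "full moon", "waning gibbous",
            "last quarter", "waning crescent"][index]

age = moon_age(datetime(2024, 1, 11))
print(f"age = {age:.1f} days -> {phase_name(age)}")
```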
Effect of parallax
The Earth subtends an angle of about two degrees when seen from the Moon. This means that an observer on Earth who sees the Moon when it is close to the eastern horizon sees it from an angle that is about 2 degrees different from the line of sight of an observer who sees the Moon on the western horizon. The Moon moves about 12 degrees around its orbit per day, so, if these observers were stationary, they would see the phases of the Moon at times that differ by about one-sixth of a day, or 4 hours. But in reality, the observers are on the surface of the rotating Earth, so someone who sees the Moon on the eastern horizon at one moment sees it on the western horizon about 12 hours later. This adds an oscillation to the apparent progression of the lunar phases. They appear to occur more slowly when the Moon is high in the sky than when it is below the horizon. The Moon appears to move jerkily, and the phases do the same. The amplitude of this oscillation is never more than about four hours, which is a small fraction of a month. It does not have any obvious effect on the appearance of the Moon. It does however affect accurate calculations of the times of lunar phases.
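The four-hour figure follows from the numbers given above (an added arithmetic check): the Moon moves about 12 degrees around its orbit per day, so a 2-degree parallax shift corresponds to

```latex
\frac{2^{\circ}}{12^{\circ}/\text{day}} = \frac{1}{6}\,\text{day} \approx 4\ \text{hours.}
```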
Misconceptions
Orbital period
It can be confusing that the Moon's orbital sidereal period is 27.3 days while the phases complete a cycle once every 29.5 days (synodic period). This is due to the Earth's orbit around the Sun. The Moon orbits the Earth 13.4 times a year, but only passes between the Earth and Sun 12.4 times.
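The two periods are linked by a standard relation, added here for clarity and consistent with the figures above: the synodic rate is the sidereal rate minus the Earth's orbital rate,

```latex
\frac{1}{T_{\text{syn}}} = \frac{1}{T_{\text{sid}}} - \frac{1}{T_{\text{yr}}}
\quad\Longrightarrow\quad
\frac{1}{T_{\text{syn}}} = \frac{1}{27.32\,\text{d}} - \frac{1}{365.25\,\text{d}} \approx \frac{1}{29.53\,\text{d}}.
```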
Eclipses
It might be expected that once every month, when the Moon passes between Earth and the Sun during a new moon, its shadow would fall on Earth causing a solar eclipse, but this does not happen every month. Nor is it true that during every full moon, the Earth's shadow falls on the Moon, causing a lunar eclipse. Solar and lunar eclipses are not observed every month because the plane of the Moon's orbit around the Earth is tilted by about 5° with respect to the plane of Earth's orbit around the Sun (the plane of the ecliptic). Thus, when new and full moons occur, the Moon usually lies to the north or south of a direct line through the Earth and Sun. Although an eclipse can only occur when the Moon is either new (solar) or full (lunar), it must also be positioned very near the intersection of Earth's orbital plane about the Sun and the Moon's orbital plane about the Earth (that is, at one of its nodes). This happens about twice per year, and so there are between four and seven eclipses in a calendar year. Most of these eclipses are partial; total eclipses of the Moon or Sun are less frequent.
Mechanism
The phases are not caused by the Earth's shadow falling on the moon, as some people believe.
See also
Lunar month (also known as a "lunation")
, who tried to explain lunar phases
Footnotes
References
Citations
Sources
External links
Six Millennium Catalog of Phases of the Moon: Moon Phases from -1999 to +4000 (2000 BCE to 4000 CE).
Observational astronomy
Technical factors of astrology
Articles containing video clips
Muscodor albus

Muscodor albus (frequently spelled "muscador albus") is a plant-dwelling fungus in the family Xylariaceae. It was first discovered in the bark of a cinnamon tree in Honduras. It has the ability to produce a mixture of volatile compounds, including alcohols and esters, which can kill pathogens like molds and bacteria such as listeria and salmonella, and many plant pathogens. It also acts as an insecticide, killing potato tuber moths, codling moths and their larvae.
Researchers at the Agricultural Research Service investigated the antimicrobial effects of Muscodor albus on Botrytis cinerea, which causes the common grey mold found on table grapes. They found that Muscodor albus reduces the occurrence of Botrytis cinerea by up to 85% on table grapes. Utilizing the antimicrobial effects of Muscodor albus is ideal for organic farmers who suffer a loss in yield due to the grey mold, which is usually treated with sulfur dioxide.
Other isolates considered to be varieties of M. albus have been identified in Thailand, on Myristica fragrans, and in Australia's Northern Territory, on plants such as Grevillea pterifolia (fern-leafed grevillea), Kennedia nigriscans (snakevine) and Terminalia prostrata (nanka bakarra).
References
General
"AgraQuest asks EPA to OK mold-killer", Sacramento Business Journal, Monday, August 11, 2003
Scientists Pit Fungus Against Potato Pest by Jan Suszkiw, United States Department of Agriculture Agricultural Research Service, May 15, 2007
Muscodor albus QST 20799 (006503) Fact Sheet; United States Environmental Protection Agency factsheet, issued 09/17/05
New endophytic isolates of Muscodor albus, a volatile-antibiotic-producing fungus
Specific
Xylariales
Fungi of North America
Fungus species
Methyl nitrite

Methyl nitrite is an organic compound with the chemical formula CH3ONO. It is a gas, and is the simplest alkyl nitrite.
Structure
At room temperature, methyl nitrite exists as a mixture of cis and trans conformers. The cis conformer is 3.13 kJ mol−1 more stable than the trans form, with an energy barrier to rotation of 45.3 kJ mol−1. The cis and trans structures have also been determined by microwave spectroscopy (see external links).
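As an added rough illustration (treating the 3.13 kJ mol−1 energy difference as a free-energy difference and neglecting entropic contributions), a Boltzmann estimate of the room-temperature cis:trans population ratio is:

```latex
\frac{N_{\mathrm{cis}}}{N_{\mathrm{trans}}}
= \exp\!\left(\frac{\Delta E}{RT}\right)
= \exp\!\left(\frac{3130\ \mathrm{J\,mol^{-1}}}{8.314\ \mathrm{J\,mol^{-1}\,K^{-1}} \times 298\ \mathrm{K}}\right)
\approx 3.5,
```

i.e. roughly three to four cis molecules for every trans molecule at equilibrium under these assumptions.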
Synthesis
Methyl nitrite can be prepared by the reaction of silver nitrite with iodomethane:

AgNO2 + CH3I → CH3ONO + AgI

Silver nitrite (AgNO2) exists in solution as the silver ion, Ag+, and the nitrite ion, NO2−. One of the lone pairs on an oxygen of the nitrite ion attacks the methyl group (—CH3), releasing the iodide ion into solution. Unlike silver nitrite, silver iodide is highly insoluble in water and thus forms a solid. Note that nitrogen is a better nucleophile than oxygen, and most nitrites would react via an SN2-like mechanism, in which case the major product would be nitromethane. For example, sodium and potassium nitrite reacting with iodomethane produce mostly nitromethane, with methyl nitrite as the minor product. However, the presence of the silver ion in solution has a stabilizing effect on the formation of carbocation intermediates, increasing the percent yield of methyl nitrite. In either case, some nitromethane and some methyl nitrite are both formed.
Methyl nitrite free of nitromethane can be made by reacting iodomethane with nitrogen dioxide:

2 CH3I + 2 NO2 → 2 CH3ONO + I2
Properties and uses
Methyl nitrite is a precursor and intermediate, e.g. during production of phenylpropanolamine.
Methyl nitrite is also present in aged cigarette smoke. Here it is presumably formed from nitrogen dioxide (itself formed by oxidation of nitric oxide) and methanol.
Environmental impact
As one product of the combustion of unleaded petrol in air, methyl nitrite has been proposed as a cause of the decline of insects, and hence that of songbirds in Europe.
Safety
Methyl nitrite is a toxic asphyxiating gas, a potent cyanotic agent. Exposure may result in methemoglobinemia.
Methyl nitrite is an oxidizing agent and a heat-sensitive explosive; its sensitivity increases in the presence of metal oxides. With inorganic bases it forms explosive salts, and it forms explosive mixtures with air. It is used as a rocket propellant, as a monopropellant. It explodes more violently than ethyl nitrite. Lower alkyl nitrites may decompose and burst their container even when stored under refrigeration.
See also
Nitromethane
Organic chemistry
Nucleophilic substitution
References
Cited sources
External links
WebBook page for CH3NO2
Determination of cis and trans structures of methyl nitrite by microwave spectroscopy.
Antianginals
Antidotes
Methyl esters
Alkyl nitrites
Explosive gases
Explosive chemicals
Sylvia Fedoruk Canadian Centre for Nuclear Innovation

The Sylvia Fedoruk Canadian Centre for Nuclear Innovation (Fedoruk Centre) is an institute located in Saskatoon, Saskatchewan, Canada that was established by the University of Saskatchewan in 2011 as the Canadian Centre for Nuclear Innovation (CCNI). The Fedoruk Centre does not have a mandate to conduct research itself. Instead, it acts as a conduit to fund nuclear-related research projects in Saskatchewan and to oversee the operation of nuclear facilities on the university campus, such as the university's cyclotron facility. The Fedoruk Centre is involved in funding research in nuclear medicine, materials science, nuclear energy systems (including small reactor design), and environmental and social topics related to nuclear technology. On October 3, 2012, the name of the organization was changed from the Canadian Centre for Nuclear Innovation to the Sylvia Fedoruk Canadian Centre for Nuclear Innovation in honour of Sylvia Fedoruk, who did pioneering work in the treatment of cancer using cobalt-60 radiation therapy in the 1950s.
The centre builds on other nuclear and accelerator related facilities already on the university campus that include the Saskatchewan Accelerator Laboratory, Canadian Light Source, SLOWPOKE reactor operated by the Saskatchewan Research Council, and the STOR-M tokamak.
The centre received an initial $30 million (CDN) in funding to advance research, innovation and training in four areas:
Advance nuclear medicine and knowledge,
Develop better materials for widespread applications (energy, health, environment, manufacturing, etc.),
Improve safety and other engineering of nuclear energy systems, and
Managing the risks and benefits of nuclear technology for society and our environment.
The Fedoruk Centre will be responsible for the operations of a $25 million cyclotron facility being installed in a renovated building between the Canadian Light Source and the Western College of Veterinary Medicine, to be completed in 2014. The 24 MeV cyclotron will produce radioisotopes for medical imaging research and clinical use, including for the province's PET-CT scanners.
When the centre was formed, some controversy existed over the governance and independence of the organization, with only two board members appointed by the university while the other members had strong ties to the nuclear industry. In fact, all members of the Fedoruk Centre's Board of Directors are appointed by the University of Saskatchewan's Board of Governors: two members are nominated by the Province of Saskatchewan, two by the University of Saskatchewan, and the remainder are sought out by the Fedoruk Centre's Board and elected by the Board of Governors.
External links
Sylvia Fedoruk Canadian Centre for Nuclear Innovation
In 2017 World Energy TV produced a film about the Sylvia Fedoruk Canadian Centre for Nuclear Innovation
References
Research institutes in Canada
University of Saskatchewan
2011 establishments in Saskatchewan
Particle physics facilities
Nuclear research institutes
Nuclear technology in Canada
Organizations established in 2011
Calvin Blignault

Calvin Blignault (4 September 1979 – 21 August 2010) was a South African mechanical engineer.
Life and work
Blignault attended the Kabega Park Primary and Hoërskool Framesby Secondary schools. He earned his NDip, BTech, MTech and DTech qualifications as a mechanical engineer. He completed both his master's and doctorate degrees in mechanical engineering at the Nelson Mandela Metropolitan University (NMMU) in Port Elizabeth, South Africa.
At Port Elizabeth Technikon (PE Technikon) he conducted world-class research as a master's student. In 2002, together with Grant Kruger, he made the first friction stir weld on South African soil, in 6 mm-thick aluminium alloy plate, at the PE Technikon using a milling machine. He was scientifically supervised by Danie Hattingh and Theo van Niekerk, who were National Research Foundation grant holders. He obtained his PhD as a result of these studies.
He subsequently worked at The Welding Institute (TWI) in the UK, the birthplace of friction stir welding, where he conducted groundbreaking R&D on the process from January 2006 to July 2008. He was a project leader in the friction and forge process group at TWI in Cambridge, UK, conducting high-level research in friction welding, linear friction welding and related processes for national and international aerospace companies such as Boeing (US), Rolls-Royce (UK) and Embraer (Brazil). He also undertook work for German-based companies in the automotive sector. During his research and professional career he authored and co-authored a number of journal and conference publications.
Starting in March 2007 he launched a group sponsored project at TWI on the development of a new variant of friction stir welding for high temperature, low conductivity materials including titanium alloys.
He developed procedures for stationary shoulder friction stir welding (SSFSW). Titanium alloys are particularly difficult to join by friction stir welding due to their high strength at high temperatures and their low thermal conductivity. A previous group sponsored project at TWI on conventional friction stir welding of these alloys had concluded that the approach was feasible, but there were still problems to be solved. An internal TWI project had previously been carried out to address some of the issues and led to the invention of SSFSW.
In 2008 he designed and built an advanced process-monitoring system to assist with process investigation and quality control in friction stir welding. This allowed a practical assessment of friction stir welding quality control by means of in-process monitoring of the temperature and the three-dimensional forces within the rotating tool. He recommended that FSW users and researchers consider the use of dedicated friction stir welding monitoring equipment for in-process verification of weld quality. Researchers can also analyse process response data to reduce the empiricism associated with initial tool and parameter development.
In May 2008 he published an article on "Friction Stir Welding for the Fabrication of Aluminium Rolling Stock". Friction stir welded structures are revolutionising the way in which trains, metro cars and trams are built. Friction stir welding has been widely recognised for its ability to provide high weld quality and low distortion in a wide variety of aluminium structures. The technical and economic benefits of the FSW process have led to rapid development and international use of the technology in many industrial applications. New standards are being implemented in Europe, and the Welding Fabricator Certification Scheme is designed to allow welding fabricators to demonstrate compliance with ISO 3834 on quality requirements for fusion welding of metallic materials. In July 2008 he moved back to Port Elizabeth, where he worked as a senior lecturer at NMMU.
Fatal motorcycle accident
Blignault died in a hit-and-run motorcycle accident on 21 August 2010. He was killed when a car crashed into him from behind while he was stationary at a red traffic light in Port Elizabeth, on his return from a motorcycle event. Captain Sandra Janse van Rensburg, a police spokesperson, said information received by the police suggested that two vehicles had been involved in the accident. She said witnesses reported that two cars were racing along Cape Road when the accident took place. "It is alleged that one of the vehicles hit the stationary motorbike at the intersection", she said. "The other vehicle then allegedly rode over the biker when he fell to the ground".
On 6 October 2010, police constable Ziyaad Domingo was arrested for allegedly stealing the credit card of the hit-and-run accident victim and buying jewelry, clothes and petrol. His internal disciplinary hearing had to be postponed, because he had been admitted to Hunterscraig Psychiatric Hospital in Port Elizabeth. Domingo pleaded guilty to stealing a credit card and fraud, relating to purchases made with the card. He was convicted in Port Elizabeth Regional Court in February 2011.
References
1979 births
2010 deaths
Friction stir welding experts
Mechanical engineers
South African engineers
Motorcycle road incident deaths
Nelson Mandela University alumni
People from Gqeberha
South African people of German descent
Road incident deaths in South Africa
Value-freedom

Value-freedom is a methodological position, offered by the sociologist Max Weber, which requires the researcher to become aware of their own values during scientific work, so as to reduce as far as possible the biases that their own value-judgements could cause.
This demand, developed by Max Weber, is part of the criteria of scientific neutrality.
The aim of the researcher in the social sciences is to conduct research on subjects structured by values, while offering an analysis that is not itself based on a value-judgement. According to this concept, the researcher should treat these values as an "object" of study, without passing a prescriptive judgement on them.
In this way, Weber developed a distinction between the "value-judgement" and the "link to the values". The "link to the values" describes the work of analysis by the researcher who, by respecting the principle of value-freedom, treats cultural values as facts to analyse, without venturing a prescriptive judgement on them, i.e. without passing a value-judgement.
The original term comes from the German werturteilsfreie Wissenschaft, and was introduced by Max Weber.
Bibliography
Max Weber, Max Weber on the Methodology of the Social Sciences, 1949
See also
Fact-value distinction
Empirical research
Epistemology
Ethnology
Ethnocentrism
Scientific method
Objectivity (philosophy)
Philosophy of science
References
Max Weber
Social science methodology
Research ethics
1949 introductions
C20H31NO

The molecular formula C20H31NO (molar mass: 301.46 g/mol, exact mass: 301.2406 u) may refer to:
Trihexyphenidyl, also known as benzhexol
Deramciclane (EGIS-3886)
Molecular formulas
Iron oxychloride

Iron oxychloride is the inorganic compound with the formula FeOCl. This purple solid adopts a layered structure, akin to that of cadmium chloride. The material slowly hydrolyses in moist air. The solid intercalates electron donors such as tetrathiafulvalene and even pyridine to give mixed-valence charge-transfer salts. Intercalation is accompanied by a marked increase in electrical conductivity and a color change to black.
Production
FeOCl is prepared by heating iron(III) oxide with ferric chloride over the course of several days:
Fe2O3 + FeCl3 → 3 FeOCl
Alternatively, FeOCl may be prepared by the thermal decomposition of FeCl3·6H2O over the course of one hour:
FeCl3·6H2O → FeOCl + 5 H2O + 2 HCl
References
Chlorides
Iron(III) compounds
Metal halides
Oxychlorides | Iron oxychloride | Chemistry | 201 |
31,585,265 | https://en.wikipedia.org/wiki/Clock%20position | A clock position, or clock bearing, is the direction of an object observed from a vehicle, typically a vessel or an aircraft, relative to the orientation of the vehicle to the observer. The vehicle must be considered to have a front, a back, a left side and a right side. These quarters may have specialized names, such as bow and stern for a vessel, or nose and tail for an aircraft. The observer then measures or observes the angle made by the intersection of the line of sight to the longitudinal axis, the dimension of length, of the vessel, using the clock analogy.
In this analogy, the observer imagines the vessel located on a horizontal clock face with the front at 12:00. Neglecting the length of the vessel, and presuming that he is at the bow, he observes the time number lying on the line of sight. For example, 12 o'clock means directly ahead, 3 o'clock means directly to the right, 6 o'clock means directly behind, and 9 o'clock means directly to the left.
The clock system is not confined to transportation. It has general application to circumstances in which the location of one object with respect to another must be systematized.
Uses
As a relative bearing
This is a system of denoting impromptu relative bearing widely used in practical navigation to give the position of an observed object readily and comprehensibly. "Relative" means that it does not state or imply any compass directions whatsoever. The vessel can be pointed in any direction. The clock numbers are relative to the direction in which the vessel points. The angular distance between adjacent clock numbers is 30 degrees, a round unit that simplifies mathematical juggling. A quick clock number can be shouted by a lookout, whereas after a calculation and comparison of compass points, which might be unknown anyway, it might be too late for the vessel to avoid danger.
As an example of a standard use, the clock position of every approaching vessel is monitored. If the clock number for the observed vessel does not change, it is on a collision course for the observer vessel, as vessels that pass by must change relative bearing. In warfare the clock system is especially useful in drawing attention to enemy locations.
The clock system is easily converted into a 360 degree system for more precise denotation. One bearing, or point, is termed an azimuth. The convention is that of analytic geometry: the y-axis at zero degrees is the longitudinal axis of the vehicle. Angles grow larger in the clockwise direction. Thus, directly to port is at 270 degrees. Negative angles are not used. In navigational contexts, the bearing must be stated as 3 digits: 010 (not so in other contexts). These circles are not to be confused with latitude and longitude, or with any sort of compass reading, which are not relative to the vehicle, but to the magnetic and spin axes of the Earth.
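For illustration, the 30-degrees-per-number rule and the three-digit bearing convention can be captured in a few lines (the function name and the 1-12 interface are illustrative choices, not a standard API):

```python
def clock_to_bearing(clock_number: int) -> str:
    """Convert a clock position (1-12) to a three-digit relative bearing.

    12 o'clock is dead ahead; each clock number spans 30 degrees and
    angles grow clockwise, per the convention described above.
    """
    if not 1 <= clock_number <= 12:
        raise ValueError("clock number must be 1-12")
    degrees = (clock_number % 12) * 30  # 12 maps to 000, dead ahead
    return f"{degrees:03d}"             # navigational three-digit form

print(clock_to_bearing(3))   # 090 -- directly to starboard
print(clock_to_bearing(9))   # 270 -- directly to port
print(clock_to_bearing(12))  # 000 -- dead ahead
```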
As a true bearing
For maritime and aviation applications, the clock bearing is almost always a relative bearing; i.e., the angle stated or implied is angular distance from the longitudinal axis of the vessel or imaginary vessel to the bearing. However, if the 12:00 position is associated with a true bearing, then the observed position is also.
For example, clock position on a 12-hour analog watch can be used to find the approximate bearing of true north or south on a day clear enough for the sun to cast a shadow. The technique takes a line of sight (LOS) on the visible sun, or on the direction pointed to by a shadow stick, through the hour hand of the watch. It exploits the one true bearing of the sun in its course across the sky: the LOS from the observer to the zenith of its course. There the sun is seen mid-way between sunrise and sunset. A vertical plane including sun and observer is perpendicular to the plane of the sun's course. Its intersection with the surface of the earth is a meridian, a line passing through a geographical pole. If the sun is in the southern half of the sky, the zenith bearing points true south; if northern, north. The time at that moment is 12:00 P.M., solar time. The clock position to the observer is 12.
If the watch is set to uncorrected solar time, both hands point to the sun. In a 12-hour watch, the sun and the hour hand both advance, but not at the same rate; the sun covers 15 degrees per hour, and the watch 30. To keep the hour hand on the sun, 12:00 must recede from the zenith at the same rate the hour hand advances. Thus when the observer takes an arbitrary LOS, the zenith LOS – true north or south – is to be found at half the angle between 12 and the LOS. On a 24-hour watch, the sun and the hour hand advance at the same rate. There is no need to halve the angle.
The zenith LOS is only an approximation due to changes in the time kept by the watch. That time is based on mean solar time rather than observed solar time. Also, time changes with longitude, and the institution of daylight saving time. The time generally available for watch settings in the observer's region is called civil time. It can be corrected to solar time, but LOS on a watch is generally too imprecise to make the trouble worth the effort.
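A minimal sketch of the arithmetic behind the half-angle rule, assuming a watch set to local solar time and ignoring the civil-time corrections just discussed:

```python
def sun_to_meridian_angle(solar_hour: float) -> float:
    """Degrees between the line of sight to the sun and the local
    meridian, for solar time given in hours (0-24).

    The sun advances 15 degrees per hour; a 12-hour watch's hour hand
    advances 30, which is why the method halves the angle between the
    hour hand (pointed at the sun) and the 12 mark.
    """
    return 15.0 * (solar_hour - 12.0)

# At 3 P.M. solar time the hour hand sits 90 degrees past 12; halving
# that angle locates the meridian 45 degrees from the hand:
hand_angle_12h = 30.0 * (15.0 % 12)   # 90.0 degrees
print(hand_angle_12h / 2)             # 45.0
print(sun_to_meridian_angle(15.0))    # 45.0 -- the same result
```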
Examples
From aviation
In World War II aircraft pilots needed a quick method of communicating the relative position of threats, for which the clock system was ideal. The gunners of a bomber, or the other aircraft in the squadron, had to be kept informed for purposes of immediate response. However, in aviation, a clock position refers to a horizontal direction. The pilots needed a vertical dimension, so they supplemented the clock position with the word high or low to describe the vertical direction; e.g., 6 o'clock high means behind and above the horizon, while 12 o'clock low means ahead and below the horizon.
The horizon line was only visible in clear weather in daylight, and was only useful as a reference line in straight and level flight, when it appeared on the nose of the aircraft. The vocabulary therefore was only of use during daylight patrols or missions. The reference line and reference clock positions did not exist during combat aerobatics, at night, or during cloudy weather, when other means had to be found for locating the combatants, such as radar.
For airplanes in rapid maneuvers, air traffic controllers will issue the eight cardinal compass points instead.
From community planning
In 1916, J.B. Plato devised a clock system to identify farms around reference points in rural areas. A clock face was imagined centered on a rural community with 12:00 pointing true north. The circle was divided into concentric numbered bands at each mile of radius. The bands were divided into 12 segments at each position of the clock numbered after the clock hour. Within a segment, every building was assigned a letter. For instance, Alton 3-0 L meant house L in segment 3 of the central circle of 1 mile radius at Alton, where 3 was at 3:00.
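A small sketch reproducing the Alton example; the separator and argument names are assumptions drawn from that single recorded example:

```python
def plato_address(community: str, clock_segment: int, mile_band: int,
                  house_letter: str) -> str:
    """Compose a rural address in J.B. Plato's clock system.

    clock_segment: 1-12, the 30-degree wedge named after the clock
                   hour, with 12:00 pointing true north.
    mile_band:     0 for the central 1-mile circle, 1 for the next
                   mile-wide ring, and so on outward.
    house_letter:  the letter assigned to a building in the segment.
    """
    if not 1 <= clock_segment <= 12:
        raise ValueError("segment must be a clock hour, 1-12")
    return f"{community} {clock_segment}-{mile_band} {house_letter}"

print(plato_address("Alton", 3, 0, "L"))  # Alton 3-0 L
```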
From medicine
Medical pathology uses the clock system to describe the location of breast tumors. A clock face is considered imposed over each breast, left and right, centered on the areolar region, with the positions shown around it. Tumors are located at one or more subsites, or clock positions, identified by one or more clock numbers. In addition the numbers are arranged in quadrants: Upper Outer Quadrant (UOQ), Lower Inner Quadrant (LIQ), and so on. Codes are assigned to the quadrants, the areolar region, and the whole breast.
From golf
Golf players use the clock system to study the course of the ball in putting situations. For holes that are on a slope, the hole is imagined to be the center of a clock face with 12:00 at the high point and 6:00 at the low point. The ball will only run true when hit from the high or low points; otherwise, its course will break, or bend on the slope. Some golfers practice clock drill – hitting the ball from all the positions of the clock – to learn how it breaks.
From microscopy
An article in the Journal of Applied Microscopy for 1898 recommends the use of a polar coordinate system in the form of a clockface for recording the positions of microscopic objects on a slide. The face is conceived centered on the circle visible under the lens. The pole is the center. Angle is given as a clock number, and distance as a decimal percentage of the radius through the object. For example, “3,9” means 3:00 o'clock at 9 tenths of the radius.
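The 1898 notation converts readily to Cartesian coordinates centred on the visible circle; a sketch (the function interface is an illustrative choice):

```python
import math

def slide_position(clock_number: float, radius_tenths: float,
                   field_radius: float = 1.0):
    """Convert the clock-face slide notation (e.g. "3,9") to (x, y).

    12 o'clock points up (+y) and angles grow clockwise, so the
    standard math angle is 90 degrees minus 30 per clock hour;
    radius_tenths is the distance in tenths of the field radius.
    """
    theta = math.radians(90.0 - 30.0 * (clock_number % 12))
    r = (radius_tenths / 10.0) * field_radius
    return (r * math.cos(theta), r * math.sin(theta))

x, y = slide_position(3, 9)      # "3,9": 3 o'clock at 9/10 radius
print(round(x, 3), round(y, 3))  # 0.9 0.0 -- on the +x axis
```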
Errors
Air traffic controllers can only infer the aircraft heading from the ground track of an aircraft, which may not reflect the aircraft's actual heading due to drift angle caused by the wind. As a result, pilots should give due consideration and apply drift correction when an air traffic controller provides traffic advisories. Additionally, errors can also occur when radar traffic information is issued while the aircraft is changing course.
Instrumentation
Although the raw clock position is invaluable or indispensable in many circumstances requiring rapid response, for ordinary careful navigation it is not sufficiently precise. It can be made precise by various methods requiring the use of instruments.
Origin of the clock positions
The clock face with its clock positions is a heritage of Roman civilization, as is suggested by the survival of Roman numerals on old clocks and their cultural predecessors, sundials. The mechanical clock supplanted the sundial as the major timekeeper, while the Hindu–Arabic numeral system replaced the Roman as the number system in Europe in the High Middle Ages. The Romans, however, had adapted their timekeeping system from the Ancient Greek. The historical trail leads from there to ancient Mesopotamia through the ancient Greek colonies placed on the coast of Anatolia in the 1st millennium BC. The first known historian, Herodotus of Halicarnassus, who was a native of that border region, made the identification:
"…the sunclock (polon) and the sundial (gnomon), and the twelve divisions of the day, came to Hellas not from Egypt but from Babylonia."
The polos (“pole”) was a sundial of a concave face resembling the concavity of the universe (named a “pole” in this case). The gnomon was the pointer.
The Mesopotamian system
The Babylonian time system is documented by thousands of Mesopotamian cuneiform tablets. The Babylonians inherited the better part of their system from the Sumerians, whose culture they absorbed. Tablets of different periods reveal the development of a sexagesimal numbering system from decimal and duodecimal systems, which shows itself in the construction of unique symbols for numerals 1-59 from natural finger decimals (ten fingers, ten symbols). Why they developed this system is a matter for academic debate, but it has multiple advantages, including divisibility by several factors, offering several possible subdivisions, one of which is by 12's. Classical civilization adopted and adapted the Mesopotamian time system, and modern civilization adapted it still further. The modern system retains much of the sexagesimalism of the Sumerians, but typically not with the same detail.
Time today and generally in ancient Mesopotamia is given mainly in three digits. Today's digits state the hours, minutes, and seconds. In a strict sexagesimal system these three would be expressed as a single, three-digit sexagesimal number h,m,s, with each of the three digits taking values 0-59; that is, hours up to 60, minutes up to 60, and seconds up to 60. Because integer numbers are expressed as sums, in this case
h × 60² + m × 60 + s
for the number of seconds, h, m, and s can be broken out and treated as separate numbers. Each number, however, implies the other two; e.g., a minute implies 60 seconds. m and s are straightforward, but h is different. There are no explicit 60 hours; the number instead is 24, and yet they are part of an implied sexagesimal system. 60 minutes is implied by one of the 24 hours, not one of the 60. The system is not strictly sexagesimal but is based on the sexagesimal.
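For illustration, treating h, m, s as the digits of one sexagesimal number gives the familiar conversions (a sketch; the variable names are arbitrary):

```python
def to_seconds(h: int, m: int, s: int) -> int:
    """Value of the three-digit sexagesimal number h,m,s in seconds:
    h*60**2 + m*60 + s."""
    return h * 60**2 + m * 60 + s

def from_seconds(total: int):
    """Break a count of seconds back into h, m, s digits; m and s are
    proper base-60 digits (0-59), while h runs 0-23 in the modern day."""
    h, rem = divmod(total, 60**2)
    m, s = divmod(rem, 60)
    return h, m, s

print(to_seconds(1, 30, 15))  # 5415
print(from_seconds(5415))     # (1, 30, 15)
```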
A full Babylonian time determination also had three digits. Zeros were blank spaces, causing some difficulty of discerning them from character separators. For reasons that are not clear, the Mesopotamians adopted a standard of 12 hours per day for their first-order digit. Their day, however, was designed for measurement on their most ancient and widely used timepiece, the sundial, which showed only daylight hours. Daylight was the time between sunrise and sunset, each of those being defined as the appearance or disappearance of the top rim of the sun on the horizon. Daylight hours problematically were seasonal; that is, due to the variation of the length of the day with time of year, hour length was variable also. The Mesopotamians had discovered, however, that if the darkness was divided into 12 hours also, and each run of 12 was matched number for number: 1st to 1st, 2nd to 2nd, etc., the sum of each match was constant.
The 12-hour, seasonal day was one of many metrological arrangements that had developed during the 3rd millennium BC. It was in use in the Ur III period, at the end of the 3rd millennium. The vocabulary of time was not yet set. For example, the 60-hour day existed as the time-shekel, 1/60 of a working day, presumably so named from the labor cost of one hexagesimal hour. This was a time of strong kings and continuing administrations that took responsibility for weights and standards. Englund distinguishes two main types of system: the cultic, in which the events of the seasonal calendar assume religious significance, and are perpetuated for religious reasons, and a second, new type, the state, defined by an administration that needed to standardize its time units.
The state system came to predominate in the subsequent Old Babylonian period. The state administrators had perceived that the sun advances at a uniform rate no matter what the season. One sun cycle is always the same. Moreover, it matches the cycle of rotation of the stars around the pole star, the real reason being that the Earth rotates at a constant angular velocity. If hours were to represent divisions of the uniform rotation, they must also be uniform, and not be variable. There were two days of the year when all 24 hours were of the same length: the two equinoxes. The standard double hour (beru), of equinoctial length, representing two modern hours, of which there were 12 in the standard day (umu), was not conceived as being one of day and one of night, but as being just two consecutive equal-length hours. One standard day thus went on to become two consecutive equal 12-hour clockfaces in modern clock time. 30 standard days were a standard month, and 12 of those a standard year of 360 days. Some juggling of month lengths to make the 12 months fit the year was still required.
Within a day, single hours were unreliable. They came in all sizes. The double hour, however, originally the sum of a daylight hour and the corresponding night hour, was always the same. The statists therefore chose to use double units in definition. The 12-hour daytime had been divided into three seasonal watches. These were matched to three seasonal night watches, 1st to 1st, 2nd to 2nd, etc. One double watch (8 hours) was four double hours. One single watch (four hours) was two double hours.
To produce a second-order digit of a Babylonian time, the statists changed from solar to stellar time. The stars moved in visible circles at a fixed rate, which could be measured by the constant escape of water from a water clock. The single standard watch of 4 hours (two double hours) was divided into 60 time-degrees (ush). One double hour had 30, and one complete stellar day, 360 (12 times 30). This assignment was the creation of the 360-degree circle, as the degree went from being a time division to an angular distance of rotation. Time-degrees were all the same (one is about 4 minutes of modern time). The second-order digit counted the degrees that had gone by in the hour, notwithstanding the fact that its number of degrees was seasonal.
The third and last order digit divided the time-degree into 60 parts (the gar), which appears to be sexagesimal. In modern time it is 4 seconds. There are not 60 time-degrees in an hour, nor 60 hours in a day. The Babylonian time was thus three different numbers, only one of which was sexagesimal. Only its general features are modern: the 12-hour day followed by a 12-hour night, the 60-division 3rd-order digit, and the 360-degree circle.
In media and culture
The 1949 movie Twelve O'Clock High takes its title from the system. In this case, the position would be ahead and above the horizon, an advantageous position for the attacker.
The phrase "on your six" refers to the six o'clock or the adjacent positions; that is, the expression cautions that someone is behind you or on your tail. Likewise, "check your six" or "check six" means "watch your back" or "watch out for yourself".
See also
Body relative direction
Clock angle problem
Port and starboard
Hour angle
Relative bearing
References
Reference bibliography
External links
Clocks
Orientation (geometry)
Units of plane angle
Encodings | Clock position | Physics,Mathematics,Technology,Engineering | 3,651 |
2,454,472 | https://en.wikipedia.org/wiki/Gypsum%20concrete | Gypsum concrete is a building material used as a floor underlayment used in wood-frame and concrete construction for fire ratings, sound reduction, radiant heating, and floor leveling. It is a mixture of gypsum plaster, Portland cement, and sand.
Gypsum concrete is sometimes called gypcrete by construction professionals, as a generic name in common usage (but not in law), but that is an alteration of Gyp-Crete, a Maxxon trademark for its brand of gypsum concrete. Other common brands of gypsum concrete include Levelrock (from US Gypsum).
Composition
US patent 4,444,925 lists the components of Gyp-Crete as atmospheric calcined gypsum, sand, water, and small amounts of various additives. Additives listed include polyvinyl alcohol, an extender such as sodium citrate or fly ash, a surfactant such as Colloid defoamer 1513 DD made by Colloids, Inc., and a fluidizer based on sodium or potassium derivatives of naphthalene sulfonate formaldehyde condensate. One example mix is given in the patent.
The purpose of the polyvinyl alcohol is to prevent the surface of the concrete from becoming dusty. While the exact mechanism is not known, it is thought that as the concrete sets, water migrates to the surface, bringing with it fine, dusty particles. When the water evaporates, the dusty particles are deposited on the surface. It is thought that the polyvinyl alcohol prevents the dusty particles from migrating upwards with the water.
The mix is prepared on site using a specialized truck. The truck contains a tank for water, a mixing tank, a holding tank, a pump, and a conveyor for the sand and calcined gypsum. A hopper for the sand and gypsum is mounted externally on the vehicle.
To prepare the mix, the sand and calcined gypsum are added to the hopper and mixed. Most of the required water is added to the mixing tank, then the sand and calcined gypsum are mixed in. Once all the sand and calcined gypsum have been mixed in, the rest of the water is added until the proper consistency is attained. Finally, the additives are mixed in and the whole batch of concrete is moved to the holding tank to be pumped out into the required area via long hoses. A small sample is taken from the batch and set aside so that the set-up time can be observed and adjustments can be made to the amount of additives so that the timing is correct.
Once the mix has been poured, little leveling, if any, is needed. The mix should be smoothed gently with a flat board, such as a 40” 1x4. This helps to concentrate the calcined gypsum at the surface.
Previous formulations
US patent 4,075,374 lists the by-weight formulation as 10 parts pressure calcined gypsum, 38-48 parts sand, and 4-10 parts water. 0.03 to 0.1 parts of a latex emulsion, such as Dow Latex 460, were also added. To prevent foaming, a defoamer such as WEX was added to the latex at a concentration of 0.2%. It was stated that gypsum calcined at atmospheric pressure produced poor results due to it having flaky particles, and that gypsum calcined under a pressure of 15-17 psi produced better results because it had denser, crystalline particles.
Later it was found that this original formulation expanded too much and in some instances floors cracked. US patent 4,159,912 describes changes made so that the expansion was greatly reduced. In that formulation, 5-8% of Portland cement was added to reduce the expansion. The latex emulsion and antifoaming agent were no longer necessary as the concrete was strengthened by the Portland cement. It was found that atmospheric calcined gypsum could be used for the majority of the calcined gypsum if it was ball milled to change the texture. The proportion of sand was also changed, so that it was in a 1:1.3 to 1:3 ratio with the calcined gypsum. This resulted in a runnier mix, but the set-up time was not changed.
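As a rough planning sketch only (not a field procedure), the by-weight proportions from US patent 4,075,374 can be scaled to a target batch; the mid-range defaults below are illustrative choices within the patent's stated ranges:

```python
def batch_weights(total_lb: float, gypsum: float = 10.0,
                  sand: float = 43.0, water: float = 7.0) -> dict:
    """Scale by-weight parts (10 gypsum : 38-48 sand : 4-10 water in
    the patent; mid-range defaults here) to a target batch weight."""
    parts = gypsum + sand + water
    scale = total_lb / parts
    return {"gypsum": round(gypsum * scale, 1),
            "sand": round(sand * scale, 1),
            "water": round(water * scale, 1)}

print(batch_weights(1000))
# {'gypsum': 166.7, 'sand': 716.7, 'water': 116.7}
```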
Advantages and disadvantages
Gypsum concrete is lightweight and fire-resistant. A 1.5-inch slab of gypsum concrete weighs 13 pounds per square foot versus 18 pounds per square foot for regular concrete.
Even though gypsum concrete weighs less, it still has the same compressive strength as regular concrete in its applications as underlayment or top-coat flooring.
A 7-man work crew can lay 4–6 times as much gypsum concrete in a work day as regular poured Portland cement. This is due to the ease of leveling the very runny gypsum concrete versus normal concrete. In addition, if the wooden subfloor is first coated in a film of latex, the adhesion between the subfloor and the concrete is much better than the adhesion obtained with “normal” concrete. A further benefit is that nails can be driven through the cement into the subfloor without it chipping.
The cost of gypsum concrete is comparable to that of regular concrete, ranging from $1.75 to $6.00 per square foot; regular concrete ranges from $2.50 to $4.50 per square foot.
History
In the late 1940s, copper tubing and Portland concrete were used to install radiant heat flooring. The copper tubes would be laid out on the ground and the Portland concrete then poured to cover the tubing and make an even base for the floor. However, this practice fell out of use in the United States within 15–20 years because the Portland concrete was too corrosive on the copper tubing. In the 1980s, gypsum concrete became widely used in the United States for radiant heat flooring, as cross-linked polyethylene (PEX) tubing could be used with gypsum concrete without concern for corrosion of the tubing.
Notes
1. The table in the patent lists the PVA content as 0.45 grains (0.00002%). Later on, it is stated that the PVA should be in a 1:0.005625 ratio with the calcined gypsum. This yields a PVA content of 0.45 lbs (0.16%).
References
Concrete
Soil-based building materials | Gypsum concrete | Engineering | 1,374 |
35,419,067 | https://en.wikipedia.org/wiki/Pita%20skate | The pita skate (Raja pita) is a medium-sized skate in the family Rajidae. The holotype and only known specimen was found in the northern Persian Gulf, in Iraqi waters. It was collected at a depth of less than .
It is around in length, and the coloration pattern (described from a specimen preserved in alcohol) is light brown dorsal body with irregular darker brown blotches, and a light brown ventral body. The tail is slightly darker than the body. The name derives from the coloration pattern resembling pita bread.
Description
The species is known from only one female specimen, so variation and sex differences are not known. The holotype specimen is in total length, about half being body and half tail. The disc is at the widest point.
The disc forms an approximate quadrangle. The dorsal surface and basal portions of pelvic fins have brown spots on a light brown background. Three brown streaks are noted on the posterior distal pectoral fins. Ventral surface is pale brown, with no variation noted.
Dorsal areas are covered in dermal denticles except for two symmetric patches on the disc and along the posterior margin of the pectoral and pelvic fins. Ventral areas smooth except for a few patches of denticles of varying density. The dorsal surface has a row of 31 thorns along the midline, and a parallel row of three on either side of the midline approximately mid-disc. It has additional thorns around the eyes and snout. The tail has an additional 29 thorns on the midline, and other thorns in regular and irregular rows.
Biology and life history
Information is very limited for this species. It likely lives in muddy bottoms, where it was collected, but the distribution and specifics of habitat preferences are unknown. The eggs of this species are unknown, but like other skates, it is likely oviparous, laying eggs in pairs in open waters, and not guarding the eggs.
Okamejei pita was originally classified into the genus Raja and subgenus Raja (Okamejei). This subgenus was promoted to full genus in 1999. The taxonomic validity of this species was reconfirmed by Compagno in 2007. This species is placed as incertae sedis in the genus Raja.
Status
Despite being known from only one specimen, the species was initially classified as critically endangered by the IUCN. However, it was reevaluated and reclassified as data deficient. It is not the target of any fishery, but it may be unrecognized bycatch in several other fisheries, including those using longlines, fish traps, stake nets, and trawls. Like other elasmobranchs, this species is not likely consumed by the Shia population who fish in the area.
In addition to fishing threats, pollution and loss of water quality are possible concerns for this species, including hydrocarbon contamination and biotic threats.
Multiple fisheries surveys of this region failed to find this species, which is taken to indicate this species is rare. No surveys were conducted after 1980, due to the start of the Iran–Iraq War and subsequent military conflicts.
References
Pita skate
Fish of the Persian Gulf
Endemic fauna of Iraq
Pita skate
Species known from a single specimen
Enigmatic fish taxa | Pita skate | Biology | 663 |
43,507,260 | https://en.wikipedia.org/wiki/Quantifier%20%28logic%29 | In logic, a quantifier is an operator that specifies how many individuals in the domain of discourse satisfy an open formula. For instance, the universal quantifier in the first order formula expresses that everything in the domain satisfies the property denoted by . On the other hand, the existential quantifier in the formula expresses that there exists something in the domain which satisfies that property. A formula where a quantifier takes widest scope is called a quantified formula. A quantified formula must contain a bound variable and a subformula specifying a property of the referent of that variable.
The most commonly used quantifiers are and . These quantifiers are standardly defined as duals; in classical logic, they are interdefinable using negation. They can also be used to define more complex quantifiers, as in the formula which expresses that nothing has the property . Other quantifiers are only definable within second order logic or higher order logics. Quantifiers have been generalized beginning with the work of Mostowski and Lindström.
In a first-order logic statement, quantifications of the same type (either universal quantifications or existential quantifications) can be exchanged without changing the meaning of the statement, while exchanging quantifications of different types changes the meaning. As an example, the only difference in the definition of uniform continuity and (ordinary) continuity is the order of quantifications.
First order quantifiers approximate the meanings of some natural language quantifiers such as "some" and "all". However, many natural language quantifiers can only be analyzed in terms of generalized quantifiers.
Relations to logical conjunction and disjunction
For a finite domain of discourse , the universally quantified formula is equivalent to the logical conjunction .
Dually, the existentially quantified formula is equivalent to the logical disjunction .
For example, if is the set of binary digits, the formula abbreviates , which evaluates to true.
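This equivalence is easy to see computationally: for a finite domain, Python's all() and any() compute exactly the conjunction and the disjunction of the instances (the property P below is an arbitrary illustrative choice):

```python
domain = [0, 1]                      # the binary digits

def P(x):
    return x * x == x                # a property true of both 0 and 1

forall = all(P(x) for x in domain)   # the conjunction P(0) and P(1)
exists = any(P(x) for x in domain)   # the disjunction P(0) or P(1)
print(forall, exists)                # True True
```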
Infinite domain of discourse
Consider the following statement (using dot notation for multiplication):
1 · 2 = 1 + 1, and 2 · 2 = 2 + 2, and 3 · 2 = 3 + 3, ..., and 100 · 2 = 100 + 100, and ..., etc.
This has the appearance of an infinite conjunction of propositions. From the point of view of formal languages, this is immediately a problem, since syntax rules are expected to generate finite words.
The example above is fortunate in that there is a procedure to generate all the conjuncts. However, if an assertion were to be made about every irrational number, there would be no way to enumerate all the conjuncts, since irrationals cannot be enumerated. A succinct, equivalent formulation which avoids these problems uses universal quantification:
For each natural number n, n · 2 = n + n.
A similar analysis applies to the disjunction,
1 is equal to 5 + 5, or 2 is equal to 5 + 5, or 3 is equal to 5 + 5, ... , or 100 is equal to 5 + 5, or ..., etc.
which can be rephrased using existential quantification:
For some natural number n, n is equal to 5+5.
Algebraic approaches to quantification
It is possible to devise abstract algebras whose models include formal languages with quantification, but progress has been slow and interest in such algebra has been limited. Three approaches have been devised to date:
Relation algebra, invented by Augustus De Morgan, and developed by Charles Sanders Peirce, Ernst Schröder, Alfred Tarski, and Tarski's students. Relation algebra cannot represent any formula with quantifiers nested more than three deep. Surprisingly, the models of relation algebra include the axiomatic set theory ZFC and Peano arithmetic;
Cylindric algebra, devised by Alfred Tarski, Leon Henkin, and others;
The polyadic algebra of Paul Halmos.
Notation
The two most common quantifiers are the universal quantifier and the existential quantifier. The traditional symbol for the universal quantifier is "∀", a rotated letter "A", which stands for "for all" or "all". The corresponding symbol for the existential quantifier is "∃", a rotated letter "E", which stands for "there exists" or "exists".
An example of translating a quantified statement in a natural language such as English would be as follows. Given the statement, "Each of Peter's friends either likes to dance or likes to go to the beach (or both)", key aspects can be identified and rewritten using symbols including quantifiers. So, let X be the set of all Peter's friends, P(x) the predicate "x likes to dance", and Q(x) the predicate "x likes to go to the beach". Then the above sentence can be written in formal notation as , which is read, "for every x that is a member of X, P applies to x or Q applies to x".
Some other quantified expressions are constructed as follows,
for a formula P. These two expressions (using the definitions above) are read as "there exists a friend of Peter who likes to dance" and "all friends of Peter like to dance", respectively.
Variant notations include, for set X and set members x:
All of these variations also apply to universal quantification.
Other variations for the universal quantifier are
Some versions of the notation explicitly mention the range of quantification. The range of quantification must always be specified; for a given mathematical theory, this can be done in several ways:
Assume a fixed domain of discourse for every quantification, as is done in Zermelo–Fraenkel set theory,
Fix several domains of discourse in advance and require that each variable have a declared domain, which is the type of that variable. This is analogous to the situation in statically typed computer programming languages, where variables have declared types.
Mention explicitly the range of quantification, perhaps using a symbol for the set of all objects in that domain (or the type of the objects in that domain).
One can use any variable as a quantified variable in place of any other, under certain restrictions in which variable capture does not occur. Even if the notation uses typed variables, variables of that type may be used.
Informally or in natural language, the "∀x" or "∃x" might appear after or in the middle of P(x). Formally, however, the phrase that introduces the dummy variable is placed in front.
Mathematical formulas mix symbolic expressions for quantifiers with natural language quantifiers such as,
For every natural number x, ...
There exists an x such that ...
For at least one x, ....
Keywords for uniqueness quantification include:
For exactly one natural number x, ...
There is one and only one x such that ....
Further, x may be replaced by a pronoun. For example,
For every natural number, its product with 2 equals its sum with itself.
Some natural number is prime.
Order of quantifiers (nesting)
The order of quantifiers is critical to meaning, as is illustrated by the following two propositions:
For every natural number n, there exists a natural number s such that s = n².
This is clearly true; it just asserts that every natural number has a square. The meaning of the assertion in which the order of quantifiers is reversed is different:
There exists a natural number s such that for every natural number n, s = n².
This is clearly false; it asserts that there is a single natural number s that is the square of every natural number. This is because the syntax dictates that a variable cannot depend on variables introduced later in the formula.
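The difference can be checked mechanically over a finite stand-in for the natural numbers (the bounds below are chosen only to keep the search finite):

```python
N = range(10)   # a finite stand-in for the natural numbers
S = range(100)  # large enough to contain every square of N

# "For every n there exists s with s = n*n" -- true: pick s = n*n.
forall_exists = all(any(s == n * n for s in S) for n in N)

# "There exists s such that for every n, s = n*n" -- false: no single
# s is the square of every n.
exists_forall = any(all(s == n * n for n in N) for s in S)

print(forall_exists, exists_forall)  # True False
```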
A less trivial example from mathematical analysis regards the concepts of uniform and pointwise continuity, whose definitions differ only by an exchange in the positions of two quantifiers. A function f from R to R is called
Pointwise continuous if
Uniformly continuous if
In the former case, the particular value chosen for δ can be a function of both ε and x, the variables that precede it.
In the latter case, δ can be a function only of ε (i.e., it has to be chosen independently of x). For example, f(x) = x² satisfies pointwise, but not uniform, continuity (its slope is unbounded).
In contrast, interchanging the two initial universal quantifiers in the definition of pointwise continuity does not change the meaning.
As a general rule, swapping two adjacent universal quantifiers with the same scope (or swapping two adjacent existential quantifiers with the same scope) doesn't change the meaning of the formula (see Example here), but swapping an existential quantifier and an adjacent universal quantifier may change its meaning.
The maximum depth of nesting of quantifiers in a formula is called its "quantifier rank".
Equivalent expressions
If D is a domain of x and P(x) is a predicate dependent on object variable x, then the universal proposition can be expressed as
This notation is known as restricted or relativized or bounded quantification. Equivalently one can write,
The existential proposition can be expressed with bounded quantification as
or equivalently
Together with negation, only one of either the universal or existential quantifier is needed to perform both tasks:
which shows that to disprove a "for all x" proposition, one needs no more than to find an x for which the predicate is false. Similarly,
to disprove a "there exists an x" proposition, one needs to show that the predicate is false for all x.
In classical logic, every formula is logically equivalent to a formula in prenex normal form, that is, a string of quantifiers and bound variables followed by a quantifier-free formula.
Quantifier elimination
Range of quantification
Every quantification involves one specific variable and a domain of discourse or range of quantification of that variable. The range of quantification specifies the set of values that the variable takes. In the examples above, the range of quantification is the set of natural numbers. Specification of the range of quantification allows us to express the difference between, say, asserting that a predicate holds for some natural number or for some real number. Expository conventions often reserve some variable names such as "n" for natural numbers, and "x" for real numbers, although relying exclusively on naming conventions cannot work in general, since ranges of variables can change in the course of a mathematical argument.
A universally quantified formula over an empty range (like ) is always vacuously true. Conversely, an existentially quantified formula over an empty range (like ) is always false.
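Python's built-ins follow the same convention for an empty range:

```python
print(all(x > x for x in []))  # True  -- vacuously true: no counterexample
print(any(x > x for x in []))  # False -- no witness exists
```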
A more natural way to restrict the domain of discourse uses guarded quantification. For example, the guarded quantification
For some natural number n, n is even and n is prime
means
For some even number n, n is prime.
In some mathematical theories, a single domain of discourse fixed in advance is assumed. For example, in Zermelo–Fraenkel set theory, variables range over all sets. In this case, guarded quantifiers can be used to mimic a smaller range of quantification. Thus in the example above, to express
For every natural number n, n·2 = n + n
in Zermelo–Fraenkel set theory, one would write
For every n, if n belongs to N, then n·2 = n + n,
where N is the set of all natural numbers.
Formal semantics
Mathematical semantics is the application of mathematics to study the meaning of expressions in a formal language. It has three elements: a mathematical specification of a class of objects via syntax, a mathematical specification of various semantic domains and the relation between the two, which is usually expressed as a function from syntactic objects to semantic ones. This article only addresses the issue of how quantifier elements are interpreted.
The syntax of a formula can be given by a syntax tree. A quantifier has a scope, and an occurrence of a variable x is free if it is not within the scope of a quantification for that variable. Thus in
the occurrence of both x and y in C(y, x) is free, while the occurrence of x and y in B(y, x) is bound (i.e. non-free).
An interpretation for first-order predicate calculus assumes as given a domain of individuals X. A formula A whose free variables are x1, ..., xn is interpreted as a Boolean-valued function F(v1, ..., vn) of n arguments, where each argument ranges over the domain X. Boolean-valued means that the function assumes one of the values T (interpreted as truth) or F (interpreted as falsehood). The interpretation of the formula
is the function G of n-1 arguments such that G(v1, ..., vn-1) = T if and only if F(v1, ..., vn-1, w) = T for every w in X. If F(v1, ..., vn-1, w) = F for at least one value of w, then G(v1, ..., vn-1) = F. Similarly the interpretation of the formula
is the function H of n-1 arguments such that H(v1, ..., vn-1) = T if and only if F(v1, ..., vn-1, w) = T for at least one w and H(v1, ..., vn-1) = F otherwise.
The semantics for uniqueness quantification requires first-order predicate calculus with equality. This means there is given a distinguished two-placed predicate "="; the semantics is also modified accordingly so that "=" is always interpreted as the two-place equality relation on X. The interpretation of
then is the function of n-1 arguments, which is the logical and of the interpretations of
Each kind of quantification defines a corresponding closure operator on the set of formulas, by adding, for each free variable x, a quantifier to bind x. For example, the existential closure of the open formula n>2 ∧ xⁿ+yⁿ=zⁿ is the closed formula ∃n ∃x ∃y ∃z (n>2 ∧ xⁿ+yⁿ=zⁿ); the latter formula, when interpreted over the positive integers, is known to be false by Fermat's Last Theorem. As another example, equational axioms, like x+y=y+x, are usually meant to denote their universal closure, like ∀x ∀y (x+y=y+x) to express commutativity.
Paucal, multal and other degree quantifiers
None of the quantifiers previously discussed apply to a quantification such as
There are many integers n < 100, such that n is divisible by 2 or 3 or 5.
One possible interpretation mechanism can be obtained as follows: Suppose that in addition to a semantic domain X, we have given a probability measure P defined on X and cutoff numbers 0 < a ≤ b ≤ 1. If A is a formula with free variables x1,...,xn whose interpretation is
the function F of variables v1,...,vn
then the interpretation of
is the function of v1,...,vn-1 which is T if and only if
and F otherwise. Similarly, the interpretation of
is the function of v1,...,vn-1 which is F if and only if
and T otherwise.
Other quantifiers
A few other quantifiers have been proposed over time. In particular, the solution quantifier, noted § (section sign) and read "those". For example,
is read "those n in N such that n2 ≤ 4 are in {0,1,2}." The same construct is expressible in set-builder notation as
Contrary to the other quantifiers, § yields a set rather than a formula.
Some other quantifiers sometimes used in mathematics include:
There are infinitely many elements such that...
For all but finitely many elements... (sometimes expressed as "for almost all elements...").
There are uncountably many elements such that...
For all but countably many elements...
For all elements in a set of positive measure...
For all elements except those in a set of measure zero...
History
Term logic, also called Aristotelian logic, treats quantification in a manner that is closer to natural language, and also less suited to formal analysis. Term logic treated All, Some and No in the 4th century BC, in an account also touching on the alethic modalities.
In 1827, George Bentham published his Outline of a New System of Logic: With a Critical Examination of Dr. Whately's Elements of Logic, describing the principle of the quantifier, but the book was not widely circulated.
William Hamilton claimed to have coined the terms "quantify" and "quantification", most likely in his Edinburgh lectures c. 1840. Augustus De Morgan confirmed this in 1847, but modern usage began with De Morgan in 1862 where he makes statements such as "We are to take in both all and some-not-all as quantifiers".
Gottlob Frege, in his 1879 Begriffsschrift, was the first to employ a quantifier to bind a variable ranging over a domain of discourse and appearing in predicates. He would universally quantify a variable (or relation) by writing the variable over a dimple in an otherwise straight line appearing in his diagrammatic formulas. Frege did not devise an explicit notation for existential quantification, instead employing his equivalent of ~∀x~, or contraposition. Frege's treatment of quantification went largely unremarked until Bertrand Russell's 1903 Principles of Mathematics.
In work that culminated in Peirce (1885), Charles Sanders Peirce and his student Oscar Howard Mitchell independently invented universal and existential quantifiers, and bound variables. Peirce and Mitchell wrote Πx and Σx where we now write ∀x and ∃x. Peirce's notation can be found in the writings of Ernst Schröder, Leopold Loewenheim, Thoralf Skolem, and Polish logicians into the 1950s. Most notably, it is the notation of Kurt Gödel's landmark 1930 paper on the completeness of first-order logic, and 1931 paper on the incompleteness of Peano arithmetic.
Peirce's approach to quantification also influenced William Ernest Johnson and Giuseppe Peano, who invented yet another notation, namely (x) for the universal quantification of x and (in 1897) ∃x for the existential quantification of x. Hence for decades, the canonical notation in philosophy and mathematical logic was (x)P to express "all individuals in the domain of discourse have the property P," and "(∃x)P" for "there exists at least one individual in the domain of discourse having the property P." Peano, who was much better known than Peirce, in effect diffused the latter's thinking throughout Europe. Peano's notation was adopted by the Principia Mathematica of Whitehead and Russell, Quine, and Alonzo Church. In 1935, Gentzen introduced the ∀ symbol, by analogy with Peano's ∃ symbol. ∀ did not become canonical until the 1960s.
Around 1895, Peirce began developing his existential graphs, whose variables can be seen as tacitly quantified. Whether the shallowest instance of a variable is even or odd determines whether that variable's quantification is universal or existential. (Shallowness is the contrary of depth, which is determined by the nesting of negations.) Peirce's graphical logic has attracted some attention in recent years by those researching heterogeneous reasoning and diagrammatic inference.
See also
Absolute generality
Almost all
Branching quantifier
Conditional quantifier
Counting quantification
Eventually (mathematics)
Generalized quantifier — a higher-order property used as standard semantics of quantified noun phrases
Lindström quantifier — a generalized polyadic quantifier
Quantifier shift
References
Bibliography
Barwise, Jon; and Etchemendy, John, 2000. Language Proof and Logic. CSLI (University of Chicago Press) and New York: Seven Bridges Press. A gentle introduction to first-order logic by two first-rate logicians.
Frege, Gottlob, 1879. Begriffsschrift. Translated in Jean van Heijenoort, 1967. From Frege to Gödel: A Source Book on Mathematical Logic, 1879-1931. Harvard University Press. The first appearance of quantification.
Hilbert, David; and Ackermann, Wilhelm, 1950 (1928). Principles of Mathematical Logic. Chelsea. Translation of Grundzüge der theoretischen Logik. Springer-Verlag. The 1928 first edition is the first time quantification was consciously employed in the now-standard manner, namely as binding variables ranging over some fixed domain of discourse. This is the defining aspect of first-order logic.
Peirce, C. S., 1885, "On the Algebra of Logic: A Contribution to the Philosophy of Notation, American Journal of Mathematics, Vol. 7, pp. 180–202. Reprinted in Kloesel, N. et al., eds., 1993. Writings of C. S. Peirce, Vol. 5. Indiana University Press. The first appearance of quantification in anything like its present form.
Reichenbach, Hans, 1975 (1947). Elements of Symbolic Logic, Dover Publications. The quantifiers are discussed in chapters §18 "Binding of variables" through §30 "Derivations from Synthetic Premises".
Westerståhl, Dag, 2001, "Quantifiers," in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic. Blackwell.
Wiese, Heike, 2003. Numbers, language, and the human mind. Cambridge University Press. .
External links
. From College of Natural Sciences, University of Hawaii at Manoa.
Stanford Encyclopedia of Philosophy:
Shapiro, Stewart (2000). "Classical Logic" (Covers syntax, model theory, and metatheory for first order logic in the natural deduction style.)
Westerståhl, Dag (2005). "Generalized quantifiers"
Peters, Stanley; Westerståhl, Dag (2002). "Quantifiers"
Logic
Predicate logic
Quantifier (logic)
Philosophical logic
Semantics | Quantifier (logic) | Mathematics | 4,873 |
71,390,999 | https://en.wikipedia.org/wiki/Leucocoprinus%20austrofragilis | Leucocoprinus austrofragilis is a species of mushroom producing fungus in the family Agaricaceae.
Taxonomy
It was first described in 1992 by the Australian mycologist John Errol Chandos Aberdeen who classified it as Leucocoprinus austrofragilis.
Description
Leucocoprinus austrofragilis is a cream or very pale brown dapperling mushroom known from Australia.
Cap: 1-2.5cm wide, convex and flattening with membranous flesh. The surface is cream or very light brown with a dark brown umbo at the centre and minute brown scales across the entire surface which quickly vanish. The cap edges have striations.
Stem: 3-3.5cm long and 1.5-2mm thick, tapering upwards from the 2-3mm thick base. It is smooth and whitish with a slightly brown tint. The membranous stem ring is located below the middle of the stem (inferior) but is not persistent and may vanish.
Gills: Free, crowded and white but discolouring slightly when dry.
Spore print: Pale whitish, nearly white.
Spores: Elliptical with a pore. Dextrinoid. 7-9 x 5.5-6 μm.
Habitat and distribution
L. austrofragilis is scarcely recorded and little known. The specimens studied by Aberdeen were collected by A.B. Cribb in March 1963 who found them growing in grass during wet weather in Brisbane, Queensland, Australia. The GBIF and the Atlas of Living Australia only have the single record submitted by Aberdeen as well as some unconfirmed observations from iNaturalist.
References
Leucocoprinus
Fungi of Australia
Fungi described in 1992
Fungus species | Leucocoprinus austrofragilis | Biology | 355 |
74,612,310 | https://en.wikipedia.org/wiki/1st%20Guards%20Fortified%20Region | The 1st Guards Fortified Region (, also translated as 1st Guards Fortified District) was a field fortified region of the Red Army during World War II. It was formed in early 1942 as the 76th Fortified Region and became the only fortified region to receive elite Guards status for its performance in the Rostov Offensive of 1943.
History
The 76th Fortified Region was formed at Kuznetsk in the Volga Military District between 24 April and 5 May 1942. Colonel Pyotr Sakseyev, who had previously held logistics posts, was selected to command the new unit. It included the 42nd, 45th, 46th, 47th, 48th, and 49th Separate Machine Gun Artillery Battalions. The region remained in the district until relocating to the Southeastern Front between 31 July and 12 August. It was assigned to the front's 57th Army on arrival. During the Battle of Stalingrad, the region fought in defensive battles to the south of the city. The army was shifted to the Stalingrad Front on 30 September. The region was transferred to the front's 51st Army on 6 November. It took part in the Kotelnikovo Offensive during the Soviet counteroffensive at Stalingrad. The 51st Army was shifted to the Southern Front on 1 January 1943, taking part in the Rostov Offensive in the first months of the new year. On 14 March the region was withdrawn to the front reserve. On 25 March it was assigned to the 2nd Guards Mechanized Corps of the front's 2nd Guards Army. On 13 April the region was placed under direct front control.
On 4 May 1943, the 76th was reorganized into the elite 1st Guards Fortified Region for its performance in the Rostov Offensive. On 13 May it was assigned to the front's 28th Army. On 10 July it shifted to the front's 55th Rifle Corps, and then to the 44th Army on 11 August. The region took part in the Donbass Strategic Offensive during August and September, being involved in the liberation of Taganrog and Osipenko. The region returned to the 28th Army on 10 September. Colonel Sergey Ivanovich Nikitin, relieved of command of the 4th Guards Rifle Division, became acting fortified region commander on 25 September, after Sakseyev was promoted to command the 24th Guards Rifle Division.
The fortified region took part in the Melitopol Offensive, breaking through German defenses on the Molochnaya in the region of Melitopol. When the Southern Front became the 4th Ukrainian Front, the region was placed under the control of the 28th Army's 130th Rifle Division on 20 October. It returned to army control on 26 November, and shifted to the 51st Army on 5 November. It was transferred to the 2nd Guards Army on 15 November, and subordinated to the army's 13th Guards Rifle Corps on 20 November. On 12 December the fortified region returned to 2nd Guards Army direct control.
The fortified region was transferred to the front's 28th Army on 23 February 1944, and on 28 February the army shifted to the 3rd Ukrainian Front, with which the fortified region spent the rest of the war. The fortified region took part in the Bereznegovatoye–Snigirevka offensive and then the Odessa Offensive that began in late March. During these operations, the fortified region forced a crossing of the Dniester-Southern Bug estuary and took part in the liberation of Nikolayev and Ochakov. The fortified region was transferred to the 5th Shock Army on 29 March. The fortified region received the Nikolayev honorific on 1 April 1944 for its performance in the liberation of the city of Nikolayev. On 3 April it received the Order of the Red Banner for its performance in the liberation of Ochakov. On 6 April, much of its headquarters was killed and the unit Guards Banner lost when the trawler moving them forward was blown up by a mine. 23 officers and 30 sergeants and privates were killed with only two men saved.
On 19 April it was shifted to the front's 8th Guards Army. On 29 April it was shifted to the front's 46th Army. On 20 August it was subordinated to the army's Operational Group Bakhtin for the Second Jassy–Kishinev Offensive. On 29 August it was shifted to the front's 57th Army. It was relocated between 14 and 27 September before rejoining the 57th Army, taking part in the Belgrade Offensive. On 11 October it was subordinated to the army's 64th Rifle Corps, and to its 68th Rifle Corps on 3 November. The fortified region took part in the Budapest Offensive.
On 24 December it was transferred to the front's 4th Guards Army, and on 26 December subordinated to the army's 21st Guards Rifle Corps. The fortified region was shifted to the army's 135th Rifle Corps on 6 January 1945. On 17 January, the region was at 90 percent strength in personnel with 3,122 officers and men. Early on the morning of 18 January, the positions of the fortified region were broken through by German tanks in Operation Konrad III, and its men encircled. That day the fortified region was placed under direct army control. The fortified region reported its losses in these actions as 2,232 men by the end of 22 January. Among the losses were four battalion commanders, the chiefs of its operations department and artillery, and deputy chief for political affairs. The 1st Guards Fortified Region reported the loss of 38 76 mm guns, 31 45 mm guns, seven 120 mm mortars, 33 82 mm mortars, 128 heavy machine guns, 114 light machine guns, 1,093 rifles and carbines, and 237 PPSh submachine guns, as well as all of its communications and engineer equipment. By 23 January 751 men from the unit who made it out of the encirclement concentrated at Tamási. The 1st Guards Fortified Region was transferred to the front's 7th Mechanized Corps on 29 January. The fortified region was withdrawn to the front reserve on 3 February, and on 12 February assigned to the front's 26th Army. It was transferred to the 4th Guards Army on 23 February, and on 7 March to the 35th Guards Rifle Corps of the 27th Army. It took part in the Balaton Defensive Operation.
On 28 March it was transferred to the 27th Army's 37th Rifle Corps. The fortified region was subordinated to the corps' 316th Rifle Division on 1 April and then the 35th Guards Rifle Corps' 163rd Rifle Division on 2 April. On 3 April the fortified region came under the direct control of the corps, and on 4 April it was shifted to the control of the 57th Army. Nikitin was moved up to chief of the front's Combat and Physical Training Department and replaced by 113th Rifle Division commander Colonel Stepan Kiryan on 17 April. On 21 April it was shifted to the 4th Guards Army, and on 26 April subordinated to the army's 31st Guards Rifle Corps.
Postwar
After the end of the war, the fortified region was transferred to the direct control of the 3rd Ukrainian Front on 3 June, and transferred to the Southern Group of Forces on 15 June. The 1st Guards Fortified Region was withdrawn to the Odessa Military District on 16 August. The fortified region was disbanded in the Odessa Military District between 17 May and 4 July 1946.
Commanders
The following officers, holding the title of commandant, commanded the fortified region:
Colonel Pyotr Ivanovich Sakseyev (24 April 1942 – 24 September 1943)
Colonel Sergey Ivanovich Nikitin (25 September 1943–c. 15 April 1945)
Colonel Stepan Vasilyevich Kiryan (17 April–5 November 1945)
General-mayor Yeremey Zakharovich Karamanov (5 November 1945 – 30 June 1946)
The following officers served as chiefs of staff of the fortified region:
Major Nikolay Fyodorovich Likholetov (24 April–20 November 1942)
Colonel Vasily Ivanovich Argunov (20 November 1942 – 6 June 1946)
Order of battle
The following machine gun artillery battalions were assigned to the fortified region during its existence:
42nd Separate Machine Gun Artillery Battalion (5 May–5 November 1942)
45th Separate Machine Gun Artillery Battalion (5 May–20 November 1942)
46th Separate Machine Gun Artillery Battalion (5 May–30 July 1943)
47th Separate Machine Gun Artillery Battalion (5 May–5 November 1942)
48th Separate Machine Gun Artillery Battalion (5 May–5 November 1942)
49th Separate Machine Gun Artillery Battalion (5 May–5 November 1942)
51st Separate Machine Gun Artillery Battalion (5 November 1942 – 30 July 1943)
36th Separate Machine Gun Artillery Battalion (9 May–30 July 1943)
168th Separate Machine Gun Artillery Battalion (9 May–30 July 1943)
170th Separate Machine Gun Artillery Battalion (9 May–30 July 1943)
148th Separate Machine Gun Artillery Battalion (10 August–1 September 1943)
2nd Guards Separate Machine Gun Artillery Battalion (10 August 1943 – 20 June 1946), converted from the 46th Separate Machine Gun Artillery Battalion on 1 July 1943
8th Guards Separate Machine Gun Artillery Battalion (10 August 1943 – 20 June 1946), converted from the 51st Separate Machine Gun Artillery Battalion on 4 May 1943
9th Guards Separate Machine Gun Artillery Battalion (10 August 1943 – 20 June 1946), converted from the 36th Separate Machine Gun Artillery Battalion on 4 May 1943
10th Guards Separate Machine Gun Artillery Battalion (10 August 1943 – 20 June 1946), converted from the 170th Separate Machine Gun Artillery Battalion on 23 May 1943
11th Guards Separate Machine Gun Artillery Battalion (10 August 1943 – 20 June 1946), converted from the 168th Separate Machine Gun Artillery Battalion on 4 May 1943
Support units included:
161st Separate Trench Flamethrower Company (25 April 1942 – 30 July 1943)
376th Separate Signals Company (25 April 1942 – 30 July 1943)
33rd Guards Separate Signals Company (30 July 1943 – 20 June 1946)
2356th Field Postal Station (20 December 1942 – 20 February 1943)
40202nd Field Postal Station (20 April 1943–Unknown)
156th Separate Motor Rifle Battalion (1 August 1942 – 20 February 1943)
Separate Training Machine Gun Artillery Battalion (1 January–20 June 1946)
41st Separate Engineer Company (1 January–20 June 1946)
867th Auto-Transport Company (1 January–20 June 1946)
670th Medical-Sanitary Company (1 January–20 June 1946)
835th Field Bakery (1 January–20 June 1946)
References
Citations
Bibliography
Fortified regions of the Soviet Union
Red Army units and formations of World War II
Military units and formations disestablished in 1946
Military units and formations awarded the Order of the Red Banner | 1st Guards Fortified Region | Engineering | 2,140 |
11,422,295 | https://en.wikipedia.org/wiki/SroH%20RNA | The bacterial sroH RNA is a non-coding RNA that is 160 nucleotides in length. The function of this family is unknown. An SroH gene deletion strain was shown to be sensitive to cell wall stress.
SroE and SroD were identified in the same bioinformatics search.
References
External links
Non-coding RNA | SroH RNA | Chemistry | 72 |
40,591,522 | https://en.wikipedia.org/wiki/Afzal%20Husain | Afzal Husain was born in 1975 and received B.E. and M.Tech. degrees in mechanical engineering, with a specialization in thermal sciences, from Aligarh Muslim University, Aligarh, India, in 2003 and 2005, respectively. He received a PhD in thermodynamics and fluid mechanics from Inha University, Incheon, South Korea, in 2010. He was a lecturer in mechanical engineering at Inha University from March 2010 to August 2012. Since October 2012, he has been an Assistant Professor in the Mechanical and Industrial Engineering Department. He has published more than 30 articles in peer-reviewed international journals and conference proceedings, apart from a number of papers in domestic journals, conferences, and workshops. His research interests are computational fluid engineering (CFE), heat transfer, optimization techniques, optimization of fluid and heat transfer systems using CFD and surrogate models, genetic algorithms, development of heat transfer augmentation techniques for conventional- and micro-systems, fluid flow and thermal analysis of microelectromechanical systems (MEMS), and electronic cooling. Husain received the Best Researcher Award at Inha University in 2009, and his profile was included in Marquis Who's Who 2011 as an engineering educator.
References
1975 births
Living people
Mechanical engineers
Aligarh Muslim University alumni | Afzal Husain | Engineering | 258 |
41,787,575 | https://en.wikipedia.org/wiki/Haruko%20Obokata | Haruko Obokata is a former stem-cell biologist who served as a research unit leader at the Laboratory for Cellular Reprogramming of Japan's Riken Center for Developmental Biology. She claimed in 2014 to have developed a radical and remarkably easy way to generate stimulus-triggered acquisition of pluripotency (STAP) cells that could be grown into tissue for use anywhere in the body. In response to allegations of irregularities in Obokata's research publications involving STAP cells, Riken launched an investigation that discovered examples of scientific misconduct on the part of Obokata. Attempts to replicate Obokata's STAP cell results failed. The ensuing STAP cell scandal gained worldwide attention.
Early life, education and career
Obokata was born in Matsudo, Chiba, Japan, in 1983. She attended Toho Senior High School, which is attached to Toho University, and graduated from Waseda University with a B.S. degree in 2006, and an M.S. degree in applied chemistry in 2008. Obokata later joined the laboratory of Charles Vacanti at Harvard Medical School, where she was described as "a lab director’s dream" with "fanatical devotion". In 2011, Obokata completed her Ph.D. in Engineering at the Graduate School of Advanced Engineering and Science at Waseda University. Obokata became a guest researcher at the Riken Center for Developmental Biology in 2011, and in 2013 became head of the Lab for Cellular Reprogramming.
According to an Asahi Shimbun news report, Obokata offered to retract her doctoral dissertation following allegations that she plagiarized segments of it from publicly available documents on the U.S. National Institutes of Health website. In October 2014, an investigative panel appointed by Waseda University gave Obokata one year to revise her dissertation or lose her degree. In 2015, Waseda University announced that it was revoking Obokata's doctoral degree.
STAP cell reports
At Riken, Obokata studied stem cells in collaboration with Vacanti, Teruhiko Wakayama, and Yoshiki Sasai, with two of her research papers accepted for publication in Nature in 2013. In a note to Vacanti, Sasai wrote that Obokata had discovered "a magic spell" that led to their experimental success, described later in The Guardian as "a surprisingly simple way of turning ordinary body cells…into something very much like embryonic stem cells" by soaking them in "a weak bath of citric acid." This procedure was reported to "wash away [the cells'] developmental past," transforming them into "cellular infants, able to multiply abundantly and grow into any type of cell in the body, a superpower known as pluripotency." Upon publication of the papers, Obokata "was hailed as a bright new star in the scientific firmament and a national hero."
STAP cell controversy
Within days of publication of the Nature articles, "disturbing allegations emerged [...] images looked doctored, and chunks of [...] text were lifted from other papers." Critics noted that images in the published articles were similar to those published in Obokata's doctoral thesis, the latter involving different experiments than those presented in the Nature publications.
In 2014 Riken launched an investigation into the issue, announcing on April 1 that Obokata was guilty of scientific misconduct on two of the six charges initially brought against her.
Obokata apologised for her "insufficient efforts, ill-preparedness and unskillfulness", and claimed she had only made "benevolent mistakes"; she denied the charge that she had fabricated results, and denied that she lacked ethics, integrity, and humility. Obokata also maintained that her STAP cells existed. The Guardian reported that although Obokata's collaborators initially supported her, "one by one they relented and asked Nature to retract the articles." In June 2014, Obokata agreed to retract both papers.
Near the time of retraction, "genetic analysis showed that the Stap cells didn’t match the mice from which they supposedly came." Although Obokata claimed not to know how this was possible, "the obvious, and rather depressing, explanation is that her so-called Stap cells were just regular embryonic stem cells that someone had taken from a freezer and relabelled." In July 2014, Obokata participated, with monitoring by a third party, in Riken's effort to experimentally reproduce the original STAP cell findings. Those efforts failed to replicate the results originally reported.
Although cleared of misconduct, Sasai was criticized for inadequate supervision of Obokata, and he described himself as "overwhelmed with shame". After spending a month in hospital, Sasai took his own life on August 5, 2014.
Obokata resigned from Riken in December 2014.
In a February 2015 article, The Guardian reported that Obokata was guilty of "unbelievable carelessness", having "manipulated images and plagiarised text." Obokata was also described as exhibiting hubris: "If Obokata hadn’t tried to be a world-beater, chances are her sleights of hand would have gone unnoticed and she would still be looking forward to a long and happy career in science. [...] By stepping into the limelight, she exposed her work to greater scrutiny than it could bear."
In 2016, Obokata's book Ano hi (あの日, That Day) was published by Kodansha. In her account of the controversy, Obokata relates her association with Wakayama, writing that "crucial parts of the STAP experiments were handled only by Wakayama", that she received the STAP cells from Wakayama, and that Wakayama "changed his accounts of how the STAP cells were produced." Obokata later wrote "I feel a strong sense of responsibility for the STAP papers", "I never wrote those papers to deceive anyone," and "STAP was real."
A short essay by Obokata appeared in the May 17, 2018, issue of Shukan Bunshun magazine, in which she described herself as "a person who has been hounded".
See also
Academic dishonesty
Scientific misconduct
Masayuki Yamato
List of scientific misconduct incidents
References
External links
STAP HOPE PAGE by Haruko Obokata, March 25, 2016
Center for Developmental Biology at RIKEN
Laboratory for Cellular Reprogramming at RIKEN
1983 births
2014 hoaxes
21st-century Japanese biologists
Academic scandals
Hoaxes in Japan
Hoaxes in science
Japanese women biologists
Japanese women scientists
Living people
People from Matsudo
People involved in scientific misconduct incidents
Riken personnel
Stem cell researchers
Waseda University alumni
Women biologists
21st-century women scientists | Haruko Obokata | Biology | 1,431 |
73,291,755 | https://en.wikipedia.org/wiki/Generative%20artificial%20intelligence | Generative artificial intelligence (generative AI, GenAI, or GAI) is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts.
Improvements in transformer-based deep neural networks, particularly large language models (LLMs), enabled an AI boom of generative AI systems in the early 2020s. These include chatbots such as ChatGPT, Copilot, Gemini, and LLaMA; text-to-image artificial intelligence image generation systems such as Stable Diffusion, Midjourney, and DALL-E; and text-to-video AI generators such as Sora. Companies such as OpenAI, Anthropic, Microsoft, Google, and Baidu as well as numerous smaller firms have developed generative AI models.
Generative AI has uses across a wide range of industries, including software development, healthcare, finance, entertainment, customer service, sales and marketing, art, writing, fashion, and product design. However, concerns have been raised about the potential misuse of generative AI such as cybercrime, the use of fake news or deepfakes to deceive or manipulate people, and the mass replacement of human jobs. Intellectual property law concerns also exist around generative models that are trained on and emulate copyrighted works of art.
History
Early history
Since its inception, researchers in the field have raised philosophical and ethical arguments about the nature of the human mind and the consequences of creating artificial beings with human-like intelligence; these issues have previously been explored by myth, fiction and philosophy since antiquity. The concept of automated art dates back at least to the automata of ancient Greek civilization, where inventors such as Daedalus and Hero of Alexandria were described as having designed machines capable of writing text, generating sounds, and playing music. The tradition of creative automatons has flourished throughout history, exemplified by Maillardet's automaton, created in the early 1800s. Markov chains have long been used to model natural languages since their development by Russian mathematician Andrey Markov in the early 20th century. Markov published his first paper on the topic in 1906 and analyzed the pattern of vowels and consonants in the novel Eugene Onegin using Markov chains. Once a Markov chain has been learned on a text corpus, it can be used as a probabilistic text generator.
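To make this concrete, the sketch below (in Python; the corpus, order, and function names are illustrative, not drawn from any particular source) records which words follow each fixed-length context and then samples from those observed continuations:

```python
import random
from collections import defaultdict

def train_markov(text, order=1):
    """Map each context of `order` words to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, order=1, length=20):
    """Random-walk the chain, sampling each next word from observed counts."""
    out = list(random.choice(list(model.keys())))
    for _ in range(length):
        followers = model.get(tuple(out[-order:]))
        if not followers:  # context never seen in training: stop the walk
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the cat"
print(generate(train_markov(corpus)))
```

Storing duplicate followers in a list makes sampling proportional to observed frequency, which is the whole probabilistic content of the model; larger orders produce more fluent but less novel text.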
Academic artificial intelligence
The academic discipline of artificial intelligence was established at a research workshop held at Dartmouth College in 1956 and has experienced several waves of advancement and optimism in the decades since. Artificial Intelligence research began in the 1950s with works like Computing Machinery and Intelligence (1950) and the 1956 Dartmouth Summer Research Project on AI. Since the 1950s, artists and researchers have used artificial intelligence to create artistic works. By the early 1970s, Harold Cohen was creating and exhibiting generative AI works created by AARON, the computer program Cohen created to generate paintings.
The terms generative AI planning or generative planning were used in the 1980s and 1990s to refer to AI planning systems, especially computer-aided process planning, used to generate sequences of actions to reach a specified goal. Generative AI planning systems used symbolic AI methods such as state space search and constraint satisfaction and were a "relatively mature" technology by the early 1990s. They were used to generate crisis action plans for military use, process plans for manufacturing and decision plans such as in prototype autonomous spacecraft.
Generative neural nets (2014-2019)
Since its inception, the field of machine learning used both discriminative models and generative models, to model and predict data. Beginning in the late 2000s, the emergence of deep learning drove progress and research in image classification, speech recognition, natural language processing and other tasks. Neural networks in this era were typically trained as discriminative models, due to the difficulty of generative modeling.
In 2014, advancements such as the variational autoencoder and generative adversarial network produced the first practical deep neural networks capable of learning generative models, as opposed to discriminative ones, for complex data such as images. These deep generative models were the first to output not only class labels for images but also entire images.
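As an illustration of the adversarial idea, here is a minimal sketch in PyTorch (all architectures and hyperparameters are illustrative) in which a generator learns to imitate a one-dimensional Gaussian by trying to fool a discriminator:

```python
import torch
import torch.nn as nn

def real_batch(n):
    # "Real" data: samples from N(2.0, 0.5) the generator must learn to mimic.
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: label real samples 1 and generated samples 0.
    real, fake = real_batch(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: adjust G so that D labels its output as real.
    loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

with torch.no_grad():
    s = G(torch.randn(5000, 8))
print(f"generated mean {s.mean():.2f}, std {s.std():.2f}")  # approaches 2.0, 0.5
```

The same two-player objective, scaled up to convolutional networks and image data, is what allowed these models to output entire images rather than class labels.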
In 2017, the Transformer network enabled advancements in generative models compared to older long short-term memory (LSTM) models, leading to the first generative pre-trained transformer (GPT), known as GPT-1, in 2018. This was followed in 2019 by GPT-2, which demonstrated the ability to generalize unsupervised to many different tasks as a foundation model.
The new generative models introduced during this period allowed for large neural networks to be trained using unsupervised learning or semi-supervised learning, rather than the supervised learning typical of discriminative models. Unsupervised learning removed the need for humans to manually label data, allowing for larger networks to be trained.
Generative AI boom (2020-)
15.ai, a free web application created by an anonymous MIT researcher and launched in March 2020, could generate convincing character voices using minimal training data. The platform is credited as the first mainstream service to popularize AI voice cloning (audio deepfakes) in memes and content creation, influencing subsequent developments in voice AI technology.
In 2021, the emergence of DALL-E, a transformer-based pixel generative model, marked an advance in AI-generated imagery. This was followed by the releases of Midjourney and Stable Diffusion in 2022, which further democratized access to high-quality artificial intelligence art creation from natural language prompts. These systems demonstrated unprecedented capabilities in generating photorealistic images, artwork, and designs based on text descriptions, leading to widespread adoption among artists, designers, and the general public.
In late 2022, the public release of ChatGPT revolutionized the accessibility and application of generative AI for general-purpose text-based tasks. The system's ability to engage in natural conversations, generate creative content, assist with coding, and perform various analytical tasks captured global attention and sparked widespread discussion about AI's potential impact on work, education, and creativity.
In March 2023, GPT-4's release represented another jump in generative AI capabilities. A team from Microsoft Research controversially argued that it "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." However, this assessment was contested by other scholars who maintained that generative AI remained "still far from reaching the benchmark of 'general human intelligence'" as of 2023. Later in 2023, Meta released ImageBind, an AI model combining multiple modalities including text, images, video, thermal data, 3D data, audio, and motion, paving the way for more immersive generative AI applications.
In December 2023, Google unveiled Gemini, a multimodal AI model available in four versions: Ultra, Pro, Flash, and Nano. The company integrated Gemini Pro into its Bard chatbot and announced plans for "Bard Advanced" powered by the larger Gemini Ultra model. In February 2024, Google unified Bard and Duet AI under the Gemini brand, launching a mobile app on Android and integrating the service into the Google app on iOS.
In March 2024, Anthropic released the Claude 3 family of large language models, including Claude 3 Haiku, Sonnet, and Opus. The models demonstrated significant improvements in capabilities across various benchmarks, with Claude 3 Opus notably outperforming leading models from OpenAI and Google. In June 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated improved performance compared to the larger Claude 3 Opus, particularly in areas such as coding, multistep workflows, and image analysis.
According to a survey by SAS and Coleman Parkes Research, China has emerged as a global leader in generative AI adoption, with 83% of Chinese respondents using the technology, exceeding both the global average of 54% and the U.S. rate of 65%. This leadership is further evidenced by China's intellectual property developments in the field, with a UN report revealing that Chinese entities filed over 38,000 generative AI patents from 2014 to 2023, substantially surpassing the United States in patent applications.
Modalities
A generative AI system is constructed by applying unsupervised machine learning (invoking, for instance, neural network architectures such as generative adversarial networks (GANs), variational autoencoders (VAEs), or transformers) or self-supervised machine learning, trained on a dataset. The capabilities of a generative AI system depend on the modality or type of the data set used. Generative AI can be either unimodal or multimodal; unimodal systems take only one type of input, whereas multimodal systems can take more than one type of input. For example, one version of OpenAI's GPT-4 accepts both text and image inputs.
Text
Generative AI systems trained on words or word tokens include GPT-3, GPT-4, GPT-4o, LaMDA, LLaMA, BLOOM, Gemini and others (see List of large language models). They are capable of natural language processing, machine translation, and natural language generation and can be used as foundation models for other tasks. Data sets include BookCorpus, Wikipedia, and others (see List of text corpora).
Code
In addition to natural language text, large language models can be trained on programming language text, allowing them to generate source code for new computer programs. Examples include OpenAI Codex and the VS Code fork Cursor.
Images
Producing high-quality visual art is a prominent application of generative AI. Generative AI systems trained on sets of images with text captions include Imagen, DALL-E, Midjourney, Adobe Firefly, FLUX.1, Stable Diffusion and others (see Artificial intelligence art, Generative art, and Synthetic media). They are commonly used for text-to-image generation and neural style transfer. Datasets include LAION-5B and others (see List of datasets in computer vision and image processing).
Audio
Generative AI can also be trained extensively on audio clips to produce natural-sounding speech synthesis and text-to-speech capabilities. An early pioneer in this field was 15.ai, launched in March 2020, which demonstrated the ability to clone character voices using as little as 15 seconds of training data. The website gained widespread attention for its ability to generate emotionally expressive speech for various fictional characters, though it was later taken offline in 2022 due to copyright concerns. Commercial alternatives subsequently emerged, including ElevenLabs' context-aware synthesis tools and Meta Platforms' Voicebox.
Generative AI systems such as MusicLM and MusicGen can also be trained on the audio waveforms of recorded music along with text annotations, in order to generate new musical samples based on text descriptions such as "a calming violin melody backed by a distorted guitar riff".
Music
Audio deepfakes of lyrics have been generated, like the song Savages, which used AI to mimic rapper Jay-Z's vocals. Music artists' instrumentals and lyrics are copyrighted, but their voices are not yet protected from generative AI, raising a debate about whether artists should get royalties from audio deepfakes.
Many AI music generators have been created that can produce music from a text phrase, genre options, and looped libraries of bars and riffs.
Video
Generative AI trained on annotated video can generate temporally-coherent, detailed and photorealistic video clips. Examples include Sora by OpenAI, Gen-1 and Gen-2 by Runway, and Make-A-Video by Meta Platforms.
Actions
Generative AI can also be trained on the motions of a robotic system to generate new trajectories for motion planning or navigation. For example, UniPi from Google Research uses prompts like "pick up blue bowl" or "wipe plate with yellow sponge" to control movements of a robot arm. Multimodal "vision-language-action" models such as Google's RT-2 can perform rudimentary reasoning in response to user prompts and visual input, such as picking up a toy dinosaur when given the prompt "pick up the extinct animal" at a table filled with toy animals and other objects.
3D modeling
Artificially intelligent computer-aided design (CAD) can use text-to-3D, image-to-3D, and video-to-3D to automate 3D modeling. AI-based CAD libraries could also be developed using linked open data of schematics and diagrams. AI CAD assistants are used as tools to help streamline workflow.
Software and hardware
Generative AI models are used to power chatbot products such as ChatGPT, programming tools such as GitHub Copilot, text-to-image products such as Midjourney, and text-to-video products such as Runway Gen-2. Generative AI features have been integrated into a variety of existing commercially available products such as Microsoft Office (Microsoft Copilot), Google Photos, and the Adobe Suite (Adobe Firefly). Many generative AI models are also available as open-source software, including Stable Diffusion and the LLaMA language model.
Smaller generative AI models with up to a few billion parameters can run on smartphones, embedded devices, and personal computers. For example, LLaMA-7B (a version with 7 billion parameters) can run on a Raspberry Pi 4 and one version of Stable Diffusion can run on an iPhone 11.
Larger models with tens of billions of parameters can run on laptop or desktop computers. To achieve an acceptable speed, models of this size may require accelerators such as the GPU chips produced by NVIDIA and AMD or the Neural Engine included in Apple silicon products. For example, the 65 billion parameter version of LLaMA can be configured to run on a desktop PC.
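The arithmetic behind these size classes is simple: weight memory is roughly the parameter count times the bytes per parameter, which is why quantizing to fewer bits brings large models within reach of consumer hardware. A quick sketch (the model sizes are the LLaMA variants mentioned above; figures ignore activations and runtime overhead):

```python
def weight_memory_gib(n_params: float, bits_per_param: int) -> float:
    """Approximate memory (GiB) needed just to hold the model weights."""
    return n_params * bits_per_param / 8 / 1024**3

for name, n in [("7B", 7e9), ("65B", 65e9)]:
    for bits in (16, 8, 4):
        print(f"{name} at {bits}-bit: {weight_memory_gib(n, bits):6.1f} GiB")
# 7B needs ~13 GiB at 16-bit but only ~3.3 GiB at 4-bit; 65B drops from
# ~121 GiB to ~30 GiB, which is why quantized versions fit desktop PCs.
```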
The advantages of running generative AI locally include protection of privacy and intellectual property, and avoidance of rate limiting and censorship. The subreddit r/LocalLLaMA in particular focuses on using consumer-grade gaming graphics cards through such techniques as compression. That forum is one of only two sources Andrej Karpathy trusts for language model benchmarks. Yann LeCun has advocated open-source models for their value to vertical applications and for improving AI safety.
Language models with hundreds of billions of parameters, such as GPT-4 or PaLM, typically run on datacenter computers equipped with arrays of GPUs (such as NVIDIA's H100) or AI accelerator chips (such as Google's TPU). These very large models are typically accessed as cloud services over the Internet.
In 2022, the United States New Export Controls on Advanced Computing and Semiconductors to China imposed restrictions on exports to China of GPU and AI accelerator chips used for generative AI. Chips such as the NVIDIA A800 and the Biren Technology BR104 were developed to meet the requirements of the sanctions.
Free software exists that attempts to recognize text generated by generative artificial intelligence (such as GPTZero), as well as AI-generated images, audio, and video. Potential mitigation strategies for detecting generative AI content include digital watermarking, content authentication, information retrieval, and machine learning classifier models. Despite claims of accuracy, both free and paid AI text detectors have frequently produced false positives, mistakenly accusing students of submitting AI-generated work.
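To illustrate the watermarking idea, the toy sketch below follows, in heavily simplified form, the "green list" scheme proposed by Kirchenbauer et al. (2023); the vocabulary, token handling, and threshold are illustrative. A watermarking generator prefers tokens from a pseudorandom vocabulary subset seeded by the preceding token, so a detector can test whether that subset is hit more often than chance:

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list, fraction: float = 0.5) -> set:
    """Deterministic pseudorandom subset of the vocabulary, seeded by prev_token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    shuffled = sorted(vocab)
    random.Random(seed).shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(tokens: list, vocab: list) -> float:
    """Fraction of tokens in their context's green list: unwatermarked text
    should sit near `fraction` (0.5 here), watermarked text well above it."""
    hits = sum(tokens[i + 1] in green_list(tokens[i], vocab)
               for i in range(len(tokens) - 1))
    return hits / max(1, len(tokens) - 1)

vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran", "far"]
print(green_fraction("the cat sat on the mat".split(), vocab))
```

A statistical test on the green fraction (rather than a single threshold) is what makes such detection robust over longer passages.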
Law and regulation
In the United States, a group of companies including OpenAI, Alphabet, and Meta signed a voluntary agreement with the Biden administration in July 2023 to watermark AI-generated content. In October 2023, Executive Order 14110 applied the Defense Production Act to require all US companies to report information to the federal government when training certain high-impact AI models.
In the European Union, the proposed Artificial Intelligence Act includes requirements to disclose copyrighted material used to train generative AI systems, and to label any AI-generated output as such.
In China, the Interim Measures for the Management of Generative AI Services introduced by the Cyberspace Administration of China regulates any public-facing generative AI. It includes requirements to watermark generated images or videos, regulations on training data and label quality, restrictions on personal data collection, and a guideline that generative AI must "adhere to socialist core values".
Copyright
Training with copyrighted content
Generative AI systems such as ChatGPT and Midjourney are trained on large, publicly available datasets that include copyrighted works. AI developers have argued that such training is protected under fair use, while copyright holders have argued that it infringes their rights.
Proponents of fair use training have argued that it is a transformative use and does not involve making copies of copyrighted works available to the public. Critics have argued that image generators such as Midjourney can create nearly-identical copies of some copyrighted images, and that generative AI programs compete with the content they are trained on.
As of 2024, several lawsuits related to the use of copyrighted material in training are ongoing.
Getty Images has sued Stability AI over the use of its images to train Stable Diffusion. Both the Authors Guild and The New York Times have sued Microsoft and OpenAI over the use of their works to train ChatGPT.
Copyright of AI-generated content
A separate question is whether AI-generated works can qualify for copyright protection. The United States Copyright Office has ruled that works created by artificial intelligence without any human input cannot be copyrighted, because they lack human authorship. However, the office has also begun taking public input to determine if these rules need to be refined for generative AI.
Concerns
The development of generative AI has raised concerns from governments, businesses, and individuals, resulting in protests, legal actions, calls to pause AI experiments, and actions by multiple governments. In a July 2023 briefing of the United Nations Security Council, Secretary-General António Guterres stated "Generative AI has enormous potential for good and evil at scale", that AI may "turbocharge global development" and contribute between $10 and $15 trillion to the global economy by 2030, but that its malicious use "could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale".
Job losses
From the early days of the development of AI, there have been arguments put forward by ELIZA creator Joseph Weizenbaum and others about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculations and qualitative, value-based judgements. In April 2023, it was reported that image generation AI has resulted in 70% of the jobs for video game illustrators in China being lost. In July 2023, developments in generative AI contributed to the 2023 Hollywood labor disputes. Fran Drescher, president of the Screen Actors Guild, declared that "artificial intelligence poses an existential threat to creative professions" during the 2023 SAG-AFTRA strike. Voice generation AI has been seen as a potential challenge to the voice acting sector.
The intersection of AI and employment concerns among underrepresented groups globally remains a critical facet. While AI promises efficiency enhancements and skill acquisition, concerns about job displacement and biased recruiting processes persist among these groups, as outlined in surveys by Fast Company. To leverage AI for a more equitable society, proactive steps encompass mitigating biases, advocating transparency, respecting privacy and consent, and embracing diverse teams and ethical considerations. Strategies involve redirecting policy emphasis on regulation, inclusive design, and education's potential for personalized teaching to maximize benefits while minimizing harms.
Racial and gender bias
Generative AI models can reflect and amplify any cultural bias present in the underlying data. For example, a language model might assume that doctors and judges are male, and that secretaries or nurses are female, if those biases are common in the training data. Similarly, an image model prompted with the text "a photo of a CEO" might disproportionately generate images of white male CEOs, if trained on a racially biased data set. A number of methods for mitigating bias have been attempted, such as altering input prompts and reweighting training data.
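As a sketch of the reweighting idea (toy data; the group labels and counts are illustrative), inverse-frequency weights can be computed so that each group contributes equally to the training loss in aggregate:

```python
from collections import Counter

# Toy training set: (example, group) pairs drawn from a skewed corpus.
examples = [("doctor bio", "male")] * 80 + [("doctor bio", "female")] * 20

counts = Counter(group for _, group in examples)
total, n_groups = len(examples), len(counts)

# Weight each example by total / (n_groups * group_count): each group's
# weights then sum to total / n_groups, equalizing their influence.
weights = [total / (n_groups * counts[group]) for _, group in examples]

print(counts)                    # Counter({'male': 80, 'female': 20})
print(weights[0], weights[-1])   # 0.625 for 'male', 2.5 for 'female'
```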
Deepfakes
Deepfakes (a portmanteau of "deep learning" and "fake") are AI-generated media that take a person in an existing image or video and replace them with someone else's likeness using artificial neural networks. Deepfakes have garnered widespread attention and concerns for their uses in deepfake celebrity pornographic videos, revenge porn, fake news, hoaxes, health disinformation, financial fraud, and covert foreign election interference. This has elicited responses from both industry and government to detect and limit their use.
In July 2023, the fact-checking company Logically found that the popular generative AI models Midjourney, DALL-E 2 and Stable Diffusion would produce plausible disinformation images when prompted to do so, such as images of electoral fraud in the United States and Muslim women supporting India's Hindu nationalist Bharatiya Janata Party.
In April 2024, a paper proposed to use blockchain (distributed ledger technology) to promote "transparency, verifiability, and decentralization in AI development and usage".
Audio deepfakes
Instances of users abusing software to generate controversial statements in the vocal style of celebrities, public officials, and other famous individuals have raised ethical concerns over voice generation AI. In response, companies such as ElevenLabs have stated that they would work on mitigating potential abuse through safeguards and identity verification.
Concerns and fandoms have spawned from AI-generated music. The same software used to clone voices has been used on famous musicians' voices to create songs that mimic their voices, gaining both tremendous popularity and criticism. Similar techniques have also been used to create improved quality or full-length versions of songs that have been leaked or have yet to be released.
Generative AI has also been used to create new digital artist personalities, with some of these receiving enough attention to receive record deals at major labels. The developers of these virtual artists have also faced their fair share of criticism for their personified programs, including backlash for "dehumanizing" an artform, and also creating artists which create unrealistic or immoral appeals to their audiences.
Cybercrime
Generative AI's ability to create realistic fake content has been exploited in numerous types of cybercrime, including phishing scams. Deepfake video and audio have been used to create disinformation and fraud. In 2020, former Google click fraud czar Shuman Ghosemajumder argued that once deepfake videos become perfectly realistic, they would stop appearing remarkable to viewers, potentially leading to uncritical acceptance of false information. Additionally, large language models and other forms of text-generation AI have been used to create fake reviews of e-commerce websites to boost ratings. Cybercriminals have created large language models focused on fraud, including WormGPT and FraudGPT.
A 2023 study showed that generative AI can be vulnerable to jailbreaks, reverse psychology and prompt injection attacks, enabling attackers to obtain help with harmful requests, such as for crafting social engineering and phishing attacks. Additionally, other researchers have demonstrated that open-source models can be fine-tuned to remove their safety restrictions at low cost.
Reliance on industry giants
Training frontier AI models requires an enormous amount of computing power. Usually only Big Tech companies have the financial resources to make such investments. Smaller start-ups such as Cohere and OpenAI end up buying access to data centers from Google and Microsoft respectively.
Energy and environment
Scientists and journalists have expressed concerns about the environmental impact that the development and deployment of generative models are having: high CO2 emissions, large amounts of freshwater used for data centers, and high amounts of electricity usage. There is also concern that these impacts may increase as these models are incorporated into widely used search engines such as Google Search and Bing; as chatbots and other applications become more popular; and as models need to be retrained.
Proposed mitigation strategies include factoring potential environmental costs prior to model development or data collection, increasing efficiency of data centers to reduce electricity/energy usage, building more efficient machine learning models, minimizing the number of times that models need to be retrained, developing a government-directed framework for auditing the environmental impact of these models, regulating for transparency of these models, regulating their energy and water usage, encouraging researchers to publish data on their models' carbon footprint, and increasing the number of subject matter experts who understand both machine learning and climate science.
Content quality
The New York Times defines slop as analogous to spam: "shoddy or unwanted A.I. content in social media, art, books and ... in search results." Journalists have expressed concerns about the scale of low-quality generated content with respect to social media content moderation, the monetary incentives from social media companies to spread such content, false political messaging, spamming of scientific research paper submissions, increased time and effort to find higher quality or desired content on the Internet, the indexing of generated content by search engines, and on journalism itself.
A paper published by researchers at Amazon Web Services AI Labs found that over 57% of sentences from a sample of over 6 billion sentences from Common Crawl, a snapshot of web pages, were machine translated. Many of these automated translations were seen as lower quality, especially for sentences that were translated across at least three languages. Many lower-resource languages (e.g., Wolof, Xhosa) were translated across more languages than higher-resource languages (e.g., English, French).
In September 2024, Robyn Speer, the author of wordfreq, an open source database that calculated word frequencies based on text from the Internet, announced that she had stopped updating the data for several reasons: high costs for obtaining data from Reddit and Twitter, excessive focus on generative AI compared to other methods in the natural language processing community, and that "generative AI has polluted the data".
The adoption of generative AI tools led to an explosion of AI-generated content across multiple domains. A study from University College London estimated that in 2023, more than 60,000 scholarly articles—over 1% of all publications—were likely written with LLM assistance. According to Stanford University's Institute for Human-Centered AI, approximately 17.5% of newly published computer science papers and 16.9% of peer review text now incorporate content generated by LLMs.
Visual content follows a similar trend. Since the launch of DALL-E 2 in 2022, it is estimated that an average of 34 million images have been created daily. As of August 2023, more than 15 billion images had been generated using text-to-image algorithms, with 80% of these created by models based on Stable Diffusion.
If AI-generated content is included in new data crawls from the Internet for additional training of AI models, defects in the resulting models may occur. Training an AI model exclusively on the output of another AI model produces a lower-quality model. Repeating this process, where each new model is trained on the previous model's output, leads to progressive degradation and eventually results in "model collapse" after multiple iterations. Tests have been conducted with pattern recognition of handwritten letters and with pictures of human faces. As a consequence, data collected from genuine human interactions with systems may become increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
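A toy simulation of the effect, assuming the "model" is just a Gaussian refit each generation on samples from its predecessor, shows the distribution progressively narrowing:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=200)        # generation 0: "human" data

for gen in range(1, 21):
    mu, sigma = data.mean(), data.std()      # "train" on the current corpus
    data = rng.normal(mu, sigma, size=200)   # next corpus is pure model output
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mean {mu:+.3f}, std {sigma:.3f}")
# The fitted std tends to drift downward: each finite sample underestimates
# the tails of the last, and the loss compounds across generations.
```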
On the other side, synthetic data is often used as an alternative to data produced by real-world events. Such data can be deployed to validate mathematical models and to train machine learning models while preserving user privacy, including for structured data. The approach is not limited to text generation; image generation has been employed to train computer vision models.
Misuse in journalism
In January 2023, Futurism.com broke the story that CNET had been using an undisclosed internal AI tool to write at least 77 of its stories; after the news broke, CNET posted corrections to 41 of the stories.
In April 2023, the German tabloid Die Aktuelle published a fake AI-generated interview with former racing driver Michael Schumacher, who had not made any public appearances since 2013 after sustaining a brain injury in a skiing accident. The story included two possible disclosures: the cover included the line "deceptively real", and the interview included an acknowledgment at the end that it was AI-generated. The editor-in-chief was fired shortly thereafter amid the controversy.
Other outlets that have published articles whose content and/or byline have been confirmed or suspected to be created by generative AI models – often with false content, errors, and/or non-disclosure of generative AI use - include:
NewsBreak
outlets owned by Arena Group
Sports Illustrated
TheStreet
Men's Journal
B&H Photo
outlets owned by Gannett
The Columbus Dispatch
Reviewed
USA Today
MSN
News Corp
outlets owned by G/O Media
Gizmodo
Jalopnik
A.V. Club
The Irish Times
outlets owned by Red Ventures
Bankrate
BuzzFeed
Newsweek
Hoodline
outlets owned by Outside Inc.
Yoga Journal
Backpacker
Clean Eating
Hollywood Life
Us Weekly
The Los Angeles Times
Cody Enterprise
Cosmos
outlets owned by McClatchy
Miami Herald
Sacramento Bee
Tacoma News Tribune
The Rock Hill Herald
The Modesto Bee
Fort Worth Star-Telegram
Merced Sun-Star
Ledger-Enquirer
The Kansas City Star
Raleigh News & Observer
outlets owned by Ziff Davis
PC Magazine
Mashable
AskMen
outlets owned by Hearst
Good Housekeeping
outlets owned by IAC Inc.
People
Parents
Food & Wine
InStyle
Real Simple
Travel + Leisure
Better Homes & Gardens
Southern Living
outlets owned by Street Media
LA Weekly
The Village Voice
Riverfront Times
Apple Intelligence
In May 2024, Futurism noted that a content management system video by AdVon Commerce, which had used generative AI to produce articles for many of the aforementioned outlets, appeared to show that it "had produced tens of thousands of articles for more than 150 publishers."
News broadcasters in Kuwait, Greece, South Korea, India, China and Taiwan have presented news with anchors based on generative AI models, prompting concerns about job losses for human anchors and audience trust in news that has historically been influenced by parasocial relationships with broadcasters, content creators or social media influencers. Algorithmically generated anchors have also been used by allies of ISIS for their broadcasts.
In 2023, Google reportedly pitched a tool to news outlets that claimed to "produce news stories" based on input data provided, such as "details of current events". Some news company executives who viewed the pitch described it as "[taking] for granted the effort that went into producing accurate and artful news stories."
In February 2024, Google launched a program to pay small publishers to write three articles per day using a beta generative AI model. The program does not require the knowledge or consent of the websites that the publishers are using as sources, nor does it require the published articles to be labeled as being created or assisted by these models.
Many defunct news sites (The Hairpin, The Frisky, Apple Daily, Ashland Daily Tidings, Clayton County Register, Southwest Journal) and blogs (The Unofficial Apple Weblog, iLounge) have undergone cybersquatting, with articles created by generative AI.
United States Senators Richard Blumenthal and Amy Klobuchar have expressed concern that generative AI could have a harmful impact on local news. In July 2023, OpenAI partnered with the American Journalism Project to fund local news outlets for experimenting with generative AI, with Axios noting the possibility of generative AI companies creating a dependency for these news outlets.
Meta AI, a chatbot based on Llama 3 which summarizes news stories, was noted by The Washington Post to copy sentences from those stories without direct attribution and to potentially further decrease the traffic of online news outlets.
In response to potential pitfalls around the use and misuse of generative AI in journalism and worries about declining audience trust, outlets around the world, including publications such as Wired, Associated Press, The Quint, Rappler or The Guardian have published guidelines around how they plan to use and not use AI and generative AI in their work.
In June 2024, the Reuters Institute published their Digital News Report for 2024. In a survey of people in America and Europe, the Reuters Institute reports that 52% and 47% respectively are uncomfortable with news produced by "mostly AI with some human oversight", and 23% and 15% respectively report being comfortable. 42% of Americans and 33% of Europeans reported that they were comfortable with news produced by "mainly human with some help from AI". Global surveys reported that people were more uncomfortable with AI-produced news on topics including politics (46%), crime (43%), and local news (37%) than on other topics.
See also
References
Artificial neural networks
Deep learning
Machine learning
2020s in computing
2023 in computing
2024 in computing
2025 in computing | Generative artificial intelligence | Engineering | 6,880 |
65,266,916 | https://en.wikipedia.org/wiki/Lark%20Health | Lark Health is an American digital health company based in Mountain View, California. It provides a 24/7 nursing platform for chronic conditions, powered by artificial intelligence (AI), with a text-messaging-style interface. Lark also provides AI nurses for type 2 diabetes care, hypertension care, tobacco cessation, stress management, obesity, and more, serving 1.5 million patients.
Lark is notable for being preloaded on all Samsung Galaxy S5 phones by 2014.
History
Lark was founded by Julia Hu and Jeff Zira. It first produced a sleep health monitor worn on a person's wrist. It was designed to wake up the individual wearing the device without disturbing anyone else who might be sleeping nearby. The product was soon sold in all Apple stores globally.
Lark eventually focused more on artificial intelligence and less on hardware. By 2014, Lark was preloaded on all Samsung Galaxy S5 phones.
The Lark apps focus on common chronic conditions such as obesity, diabetes prevention, diabetes, and hypertension. Lark Diabetes Prevention Program (DPP) is officially recognized by the Centers for Disease Control and Prevention (CDC) as an online DPP.
Lark's efficacy has been evaluated in a study published in the Journal of Medical Internet Research Diabetes.
Products and services
Lark has specialized health plans focusing on patients with diabetes, hypertension, or prediabetes, those at high risk for type 2 diabetes, and overall health. Lark's services automatically sync with certain Bluetooth-enabled health monitoring devices such as home blood pressure monitors, glucometers, activity trackers, and body weight scales. Some programs allow for both the app and one or more connected devices to be used.
References
External links
Companies based in Mountain View, California
Health informatics
Telehealth | Lark Health | Biology | 360 |
424,410 | https://en.wikipedia.org/wiki/Rudolph%20A.%20Marcus | Rudolph Arthur Marcus (born July 21, 1923) is a Canadian-born American chemist who received the 1992 Nobel Prize in Chemistry "for his contributions to the theory of electron transfer reactions in chemical systems". Marcus theory, named after him, provides a thermodynamic and kinetic framework for describing one electron outer-sphere electron transfer. He is a professor at Caltech, Nanyang Technological University, Singapore and a member of the International Academy of Quantum Molecular Science.
Education and early life
Marcus was born in Montreal, Quebec, the son of Esther (born Cohen) and Myer Marcus. His father was born in New York and his mother was born in England. His family background is from Ukmergė (Lithuania). He is Jewish and grew up mostly in a Jewish neighborhood in Montreal but also spent some of his childhood in Detroit, United States. His interest in the sciences began at a young age. He excelled at mathematics at Baron Byng High School. He then studied at McGill University under Carl A. Winkler, who had studied under Cyril Hinshelwood at the University of Oxford. At McGill, Marcus took more math courses than an average chemistry student, which would later aid him in creating his theory on electron transfer.
Marcus earned a B.Sc. in 1943 and a Ph.D. in 1946, both from McGill University. In 1958, he became a naturalized citizen of the United States.
Career and research
After graduating, in 1946, he took postdoctoral positions first at the National Research Council (Canada), followed by the University of North Carolina. He received his first faculty appointment at the Polytechnic Institute of Brooklyn. In 1952, at the University of North Carolina, he developed Rice–Ramsperger–Kassel–Marcus (RRKM) theory by combining the former RRK theory with the transition state theory. In 1964, he taught at the University of Illinois. His approach to solving a problem is to "go full tilt." Marcus moved to the California Institute of Technology in 1978.
Marcus theory of electron transfer
Electron transfer is one of the simplest forms of a chemical reaction. It consists of an outer-sphere electron transfer between substances of essentially unchanged atomic structure, as in Marcus's studies of the exchange between divalent and trivalent iron ions. Electron transfer may be one of the most basic forms of chemical reaction, but without it life could not exist: electron transfer underlies all respiratory functions as well as photosynthesis. In the process of oxidizing food molecules, two hydrogen ions, two electrons, and half an oxygen molecule react exothermically to form a water molecule:
2 H+ + 2 e− + ½ O2 → H2O + heat
Because electron transfer is such a broad, common, and essential reaction within nature, Marcus's theory has become vital within the field of chemistry and biochemistry.
A type of chemical reaction linked to his many studies of electron transfer is the transfer of an electron between metal ions in different states of oxidation, for example between a divalent and a trivalent iron ion in an aqueous solution. In Marcus's time chemists were astonished at the slow rate at which this specific reaction took place. The problem attracted many chemists in the 1950s and is also what sparked Marcus's interest in electron transfer. Marcus made many studies based on the principles found within this chemical reaction, and through these studies was able to elaborate his electron transfer theory. His approach gave way to new experimental programs that contributed to all branches of chemistry and biochemistry.
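The central quantitative statement of the theory, in its classical form, relates the electron-transfer rate constant to the reaction free energy and the reorganization energy (this is the standard textbook form of the Marcus equation, not a result specific to the iron system above):

```latex
k_{\mathrm{ET}} \;=\; A \,\exp\!\left[-\,\frac{\left(\lambda + \Delta G^{\circ}\right)^{2}}{4\,\lambda\, k_{\mathrm{B}} T}\right]
```

Here \Delta G^{\circ} is the standard free energy of the reaction, \lambda the reorganization energy of the solvent and reactants, and A a prefactor reflecting the electronic coupling. The quadratic dependence predicts the counterintuitive "inverted region", in which rates decrease as the driving force grows very large; its experimental confirmation in the 1980s was a major vindication of the theory.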
As of his 100th birthday in July 2023, he remained active in research.
Honors and awards
Marcus was awarded honorary degrees from the University of Chicago in 1983, the University of Goteborg in 1986, the Polytechnic Institute of Brooklyn in 1987, McGill University in 1988, Queen's University in 1993, the University of New Brunswick in 1993, the University of Oxford in 1995, the University of North Carolina at Chapel Hill in 1996, the Yokohama National University in 1996, the University of Illinois at Urbana–Champaign in 1997, the Technion – Israel Institute of Technology in 1998, the Technical University of Valencia in 1999, Northwestern University in 2000, the University of Waterloo in 2002, the Nanyang Technological University in 2010, the Tumkur University in 2012, the University of Hyderabad in 2012, and the University of Calgary in 2013. In addition, he was awarded an honorary doctorate from the University of Santiago, Chile in 2018.
Among the awards he received before the Nobel Prize in Chemistry in 1992, Marcus received the Irving Langmuir Prize in Chemical Physics in 1978, the Robinson Award of the Faraday Division of the Royal Society of Chemistry in 1982, Columbia University's Chandler Award in 1983, the Wolf Prize in Chemistry in 1984-1985, the Centenary Prize, the Willard Gibbs Award and the Peter Debye Award in 1988, the National Medal of Science in 1989, Ohio State's William Lloyd Evans Award in 1990, the Theodore William Richards Award (NESACS) in 1990, the Pauling Medal, the Remsen Award and the Edgar Fahs Smith Lecturer in 1991, the Golden Plate Award of the American Academy of Achievement and the Hirschfelder Prize in Theoretical Chemistry in 1993.
He also received a professorial fellowship at University College, Oxford, from 1975 to 1976.
He was elected to the National Academy of Sciences in 1970, the American Academy of Arts and Sciences in 1973, the American Philosophical Society in 1990, received honorary membership in the Royal Society of Chemistry in 1991, and in the Royal Society of Canada in 1993. He was elected a Foreign Member of the Royal Society (ForMemRS) in 1987.
In 2019, he was awarded with the Fray International Sustainability award at SIPS 2019 by FLOGEN Star Outreach.
See also
Henry Taube, who was awarded the 1983 Nobel Prize in Chemistry for "his work on the mechanisms of electron-transfer reactions, especially in metal complexes"
List of Jewish Nobel laureates
References
External links
Marcus Rudolph, Nobel Luminaries Project, The Museum of the Jewish People at Beit Hatfutsot
1923 births
American men centenarians
Canadian men centenarians
Living people
Nobel laureates in Chemistry
American Nobel laureates
Canadian Nobel laureates
Members of the United States National Academy of Sciences
Canadian emigrants to the United States
Canadian chemists
Theoretical chemists
Jewish Canadian scientists
Canadian expatriate academics in the United States
McGill University alumni
Wolf Prize in Chemistry laureates
California Institute of Technology faculty
Academics from Montreal
Scientists from Montreal
University of Illinois Urbana-Champaign faculty
Members of the International Academy of Quantum Molecular Science
Anglophone Quebec people
National Medal of Science laureates
Foreign members of the Royal Society
Foreign members of the Chinese Academy of Sciences
Jewish American scientists
Jewish chemists
Jewish Nobel laureates
Polytechnic Institute of New York University faculty
American agnostics
Canadian agnostics
Canadian people of Lithuanian-Jewish descent
Canadian expatriates in the United Kingdom
Fellows of the American Physical Society
New York University Tandon School of Engineering alumni
Jewish centenarians | Rudolph A. Marcus | Chemistry | 1,414 |
5,972,029 | https://en.wikipedia.org/wiki/Microsoft%20Customer%20Care%20Framework | Microsoft Customer Care Framework (CCF) was a desktop-based framework used to address issues that service providers faced with multiple line-of-business (LOB) systems while interacting with their customers. It was discontinued, though many of its core functions were moved to an add-in for the Microsoft Dynamics CRM product named the Unified Service Desk.
The Customer Care Framework provided a core set of functions for customer support channels, including voice calls handled by call center agents and Internet portals. The framework used other Microsoft server products, including BizTalk Server and SharePoint. CCF required Microsoft SQL Server and Microsoft IIS on the server side, which it used to provide a base set of web services.
CCF was targeted at medium to large enterprises. It was originally developed to meet the large call center requirements of the telecommunications industry.
CCF differed from most Microsoft products in that it was not an 'out of the box' solution but required development and configuration to build a working customer solution. The framework allowed for an SOA methodology in development on both the server and agent desktop sides, but this was not mandatory; non-SOA development could be done and was normally the case.
CCF Components
Agent Desktop
The primary user interface for CCF is the agent desktop, a desktop-based user interface (UI) that aggregates data from various line-of-business (LOB) and OSS/BSS application front ends and presents it in a unified view. CCF does not include a finished Agent Desktop application; rather, samples including source code are provided as part of the framework.
Application Integration Framework (AIF)
The AIF manages the loading of the applications, integration and event brokering. Through the use of adapters (see HAT below) applications can have custom integrations to account for both the technology of the hosted application as well as business processing.
Hosted Application Toolkit (HAT)
HAT allows for the separation of the business rules from the method used to integrate with the application. HAT uses Microsoft Windows Workflow Foundation (WF) to manage the business rules, Data Driven Adapters (DDAs) to manage the application directly, and bindings written in XML to connect the two. CCF 2009 SP1 shipped with three DDAs: Win32, Web, and Java (JDK 1.6). DDAs can be customized or extended for additional application types as needed.
Releases
Customer Care Framework 1.0: released early 2003
Customer Care Framework 1.1: released 2004; uses .NET Framework 1.1
Customer Care Framework 1.2: released 2004; uses .NET Framework 1.1
Customer Care Framework 2.0: released 2005; uses .NET Framework 1.1
Customer Care Framework 2005 (version 2.5.0): released Jan 2006, uses .NET Framework 1.1
Customer Care Framework 2005 (QFE 1, version 2.5.1): released April 2006, uses .NET Framework 1.1
Customer Care Framework 2005 (QFE 2, version 2.5.2): released 2006, uses .NET Framework 1.1
Customer Care Framework 2005 (QFE 3, version 2.5.3): released August 2006, uses .NET Framework 1.1
Customer Care Framework 2005 for .NET Framework 2.0 (version 2.6): built on a modified 2.5.3 base; requires .NET Framework 2.0. Contains significant bug fixes in the base areas of CCF for which source code is not available.
Customer Care Framework 2008: released 21 September 2007, uses .NET Framework 3.0
Customer Care Framework 2009: released 28 October 2008.
Customer Care Framework 2009 Service Pack 1: released April 2009.
Customer Care Framework 2009 Service Pack 1 QFE: released August 2009, adds support for .NET Framework 3.5 SP1, IE8, dynamic positioning. Adds a shell API, which (amongst others) brings improved CTI support and possibility to develop WPF shells.
Any version of Customer Care Framework before CCF 2009 SP1 QFE will break when upgrading to .NET Framework 3.5 SP1.
Similar Products
Microsoft's Composite UI Application Block can be used to build composite applications within CCF, and a number of products similar to CCF are offered by other companies as well.
References
External links
Microsoft's Unified Service Desk documentation
Microsoft's CCF discussion board
Information technology management
Service-oriented architecture-related products
Microsoft software factories | Microsoft Customer Care Framework | Technology | 904 |
59,562,660 | https://en.wikipedia.org/wiki/Bota%C5%9F%20D%C3%B6rtyol%20LNG%20Storage%20Facility | Botaş Dörtyol LNG Storage Facility () is a floating storage and regasification unit (FSRU) for liquefied natural gas (LNG) in Hatay Province, southern Turkey. It is the country's second floating LNG storage facility after the Egegaz Aliağa LNG Storage Facility.
The floating LNG storage facility is served by MT MOL FSRU Challenger, the world's largest FSRU vessel, which was chartered by the Turkish state-owned crude oil and natural gas pipeline and trading company BOTAŞ. The FSRU was delivered to its owner, Mitsui O.S.K. Lines (MOL) LNG Transport (Europe) Ltd., in October 2017, and then sailed to Turkey, arriving at the Mediterranean seaport of Dörtyol in November of the same year. The FSRU terminal went into service on 7 February 2018.
The special vessel has a reported LNG storage capacity of about 263,000 m³ together with a high regasification discharge capacity. The use of the FSRU as an import terminal is intended to minimize the investment costs for transmission and distribution lines as well as transportation costs.
The chartered MT MOL FSRU Challenger was replaced by the new Turkish FSRU MT Botaş FSRU Ertuğrul Gazi commissioned on 25 June 2021.
See also
Egegaz Aliağa LNG Storage Facility
Lake Tuz Natural Gas Storage
Northern Marmara and Değirmenköy (Silivri) Depleted Gas Reservoir
Marmara Ereğlisi LNG Storage Facility
References
Natural gas storage
Floating production storage and offloading vessels
Energy infrastructure in Turkey
Natural gas in Turkey
2018 establishments in Turkey
Energy infrastructure completed in 2018
Buildings and structures in Hatay Province
Dörtyol District
Botaş
21st-century architecture in Turkey | Botaş Dörtyol LNG Storage Facility | Chemistry | 376 |
10,118,279 | https://en.wikipedia.org/wiki/List%20of%203D%20graphics%20libraries | 3D graphics have become so popular, particularly in video games, that specialized APIs (application programming interfaces) have been created to ease the processes in all stages of computer graphics generation. These APIs have also proved vital to computer graphics hardware manufacturers, as they provide a way for programmers to access the hardware in an abstract way, while still taking advantage of the special hardware of any specific graphics card.
The first 3D graphics framework was probably Core, published by the ACM in 1977.
Low-level 3D API
These APIs for 3D computer graphics are particularly popular:
ANGLE, a graphics engine used by web browsers; a cross-platform translator of OpenGL ES calls to DirectX, OpenGL, or Vulkan API calls.
Direct3D (a subset of DirectX)
Glide, a defunct 3D graphics API developed by 3dfx Interactive.
Mantle, developed by AMD.
Metal, developed by Apple.
OpenGL and the OpenGL Shading Language
OpenGL ES, a 3D API for embedded devices.
OptiX (7.0 and later), developed by NVIDIA.
LibGCM
QuickDraw 3D, developed by Apple Computer starting in 1995 and abandoned in 1998.
Vulkan
Web-based API
WebGL is a JavaScript interface for the OpenGL ES API, promoted by Khronos.
WebGPU an under-development web standard and JavaScript API for accelerated graphics and compute.
High-level 3D API
There are also higher-level 3D scene-graph APIs which provide additional functionality on top of the lower-level rendering API. Such libraries under active development include:
BGFX
ClanLib
Crystal Space
HOOPS 3D Graphics System
Horde3D
Irrlicht Engine
Java 3D
JavaFX
JMonkey Engine
JT Open from Siemens Digital Industries Software
LibGDX
magnum
Mobile 3D Graphics API (M3G; JSR-184)
OGRE
OpenGL Performer
OpenSceneGraph (and the now-obsolete OSG.JS for web platforms)
OpenSG
QSDK
RAMSES
RenderWare
Panda3D
Zea Engine
Unigine
VTK
Phoenix Engine
ArkGraphics 3D
JavaScript-based engines
There is growing interest in web-browser-based high-level APIs for 3D graphics engines. Some are:
A-Frame
Blend4Web
CopperLicht
O3D
Three.js
Babylon.js
Verge3D
X3DOM
Zea Engine
Flash-based engines
Stage3D, the 3D library in Flash version 11 and later
Papervision3D and its fork Away3D for Flash
See also
Graphics library
Game engine
3D computer graphics software
Computing-related lists
Lists of software | List of 3D graphics libraries | Technology | 511 |
11,422,149 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20Z101 | In molecular biology, Small nucleolar RNA Z101 is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
snoRNA Z101 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs.
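As a minimal illustration of locating such motifs, the following Python sketch scans an invented RNA string for the C and D boxes (the sequence is hypothetical, for demonstration only, and is not the Z101 sequence):

    # Find the conserved C box (UGAUGA) and D box (CUGA) in an RNA string.
    import re

    rna = "GGUGAUGAUACCGUAAGCGAUCGUAACUGACC"  # hypothetical sequence
    for name, motif in (("C box", "UGAUGA"), ("D box", "CUGA")):
        for m in re.finditer(motif, rna):
            print(f"{name} at position {m.start()}")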
Plant snoRNA Z101 was identified in a screen of Oryza sativa.
References
External links
Small nuclear RNA | Small nucleolar RNA Z101 | Chemistry | 197 |
38,145 | https://en.wikipedia.org/wiki/Los%20Alamos%20National%20Laboratory | Los Alamos National Laboratory (often shortened as Los Alamos and LANL) is one of the sixteen research and development laboratories of the United States Department of Energy (DOE), located a short distance northwest of Santa Fe, New Mexico, in the American southwest. Best known for its central role in helping develop the first atomic bomb, LANL is one of the world's largest and most advanced scientific institutions.
Los Alamos was established in 1943 as Project Y, a top-secret site for designing nuclear weapons under the Manhattan Project during World War II. Chosen for its remote yet relatively accessible location, it served as the main hub for conducting and coordinating nuclear research, bringing together some of the world's most famous scientists, among them numerous Nobel Prize winners. The town of Los Alamos, directly north of the lab, grew extensively through this period.
After the war ended in 1945, Project Y's existence was made public, and it became known universally as Los Alamos. In 1952, the Atomic Energy Commission formed a second design lab under the direction of the University of California, Berkeley, which became the Lawrence Livermore National Laboratory (LLNL). The two labs competed on a wide variety of bomb designs, but with the end of the Cold War, have focused increasingly on civilian missions. Today, Los Alamos conducts multidisciplinary research in fields such as national security, space exploration, nuclear fusion, renewable energy, medicine, nanotechnology, and supercomputing.
While owned by the federal government, LANL is privately managed and operated by Triad National Security, LLC.
History
The Manhattan Project
The laboratory was founded during World War II as a secret, centralized facility to coordinate the scientific research of the Manhattan Project, the Allied project to develop the first nuclear weapons. In September 1942, the difficulties encountered in conducting preliminary studies on nuclear weapons at universities scattered across the country indicated the need for a laboratory dedicated solely to that purpose.
General Leslie Groves wanted a central laboratory at an isolated location for safety, and to keep the scientists away from the populace. It should be at least 200 miles from international boundaries and west of the Mississippi. Major John Dudley suggested Oak City, Utah, or Jemez Springs, New Mexico, but both were rejected. Jemez Springs was only a short distance from the current site. Project Y director J. Robert Oppenheimer had spent much time in his youth in the New Mexico area and suggested the Los Alamos Ranch School on the mesa. Dudley had rejected the school as not meeting Groves' criteria, but as soon as Groves saw it he said, in effect, "This is the place". Oppenheimer became the laboratory's first director, serving from 19 October 1942.
During the Manhattan Project, Los Alamos hosted thousands of employees, including many Nobel Prize-winning scientists. The location was a total secret. Its only mailing address was a post office box, number 1663, in Santa Fe, New Mexico. Eventually two other post office boxes were used, 180 and 1539, also in Santa Fe. Though its contract with the University of California was initially intended to be temporary, the relationship was maintained long after the war. Until the atomic bombings of Hiroshima and Nagasaki, Japan, University of California president Robert Sproul did not know what the purpose of the laboratory was and thought it might be producing a "death ray". The only member of the UC administration who knew its true purpose—indeed, the only one who knew its exact physical location—was the Secretary-Treasurer Robert Underhill, who was in charge of wartime contracts and liabilities.
The work of the laboratory culminated in several atomic devices, one of which was used in the first nuclear test near Alamogordo, New Mexico, codenamed "Trinity", on July 16, 1945. The other two were weapons, "Little Boy" and "Fat Man", which were used in the attacks on Hiroshima and Nagasaki. The Laboratory received the Army-Navy "E" Award for Excellence in production on October 16, 1945.
Post-war
After the war, Oppenheimer retired from the directorship, and it was taken over by Norris Bradbury, whose initial mission was to make the previously hand-assembled atomic bombs "G.I. proof" so that they could be mass-produced and used without the assistance of highly trained scientists. Other founding members of Los Alamos left the laboratory and became outspoken opponents to the further development of nuclear weapons.
The name officially changed to the Los Alamos Scientific Laboratory (LASL) on January 1, 1947. By this time, Argonne had already been made the first National Laboratory the previous year. Los Alamos would not become a National Laboratory in name until 1981.
In the years since the 1940s, Los Alamos was responsible for the development of the hydrogen bomb, and many other variants of nuclear weapons. In 1952, Lawrence Livermore National Laboratory was founded to act as Los Alamos' "competitor", with the hope that two laboratories for the design of nuclear weapons would spur innovation. Los Alamos and Livermore served as the primary classified laboratories in the U.S. national laboratory system, designing all the country's nuclear arsenal. Additional work included basic scientific research, particle accelerator development, health physics, and fusion power research as part of Project Sherwood. Many nuclear tests were undertaken in the Marshall Islands and at the Nevada Test Site. During the late-1950s, a number of scientists including Dr. J. Robert "Bob" Beyster left Los Alamos to work for General Atomics (GA) in San Diego.
Three major nuclear-related accidents have occurred at LANL. Criticality accidents occurred in August 1945 and May 1946, and a third accident occurred during an annual physical inventory in December 1958.
Several buildings associated with the Manhattan Project at Los Alamos were declared a National Historic Landmark in 1965.
Post-Cold War
At the end of the Cold War, both labs went through a process of intense scientific diversification in their research programs to adapt to the changing political conditions that no longer required as much research towards developing new nuclear weapons and has led the lab to increase research for "non-war" science and technology. Los Alamos' nuclear work is currently thought to relate primarily to computer simulations and stockpile stewardship. The development of the Dual-Axis Radiographic Hydrodynamic Test Facility will allow complex simulations of nuclear tests to take place without full explosive yields.
The laboratory contributed to the early development of the flow cytometry technology. In the 1950s, researcher Mack Fulwyler developed a technique for sorting erythrocytes that combined the Coulter Principle of Coulter counter technologies, which measures the presence of cells and their size, with ink jet technology, which produces a laminar flow of liquid that breaks up into separate, fine drops. In 1969, Los Alamos reported the first fluorescence detector apparatus, which accurately measured the number and size of ovarian cells and blood cells.
As of 2017, other research performed at the lab included developing cheaper, cleaner biofuels and advancing scientific understanding around renewable energy.
Non-nuclear national security and defense development is also a priority at the lab. This includes preventing outbreaks of deadly diseases by improving detection tools and monitoring the effectiveness of the United States' vaccine distribution infrastructure. Additional advancements include the ASPECT airplane, which can detect bio threats from the sky.
Medical work
In 2008, scientists Lianjie Huang and Kenneth M. Hanson and their collaborators were developing a safer, more comfortable, and more accurate test for breast cancer. The new technique, called ultrasound-computed tomography (ultrasound CT), uses sound waves to accurately detect small tumors that traditional mammography cannot.
The lab has made intense efforts for humanitarian causes through its scientific research in medicine. In 2010, three vaccines for the Human Immunodeficiency Virus were being tested by lab scientist Bette Korber and her team. "These vaccines might finally deal a lethal blow to the AIDS virus", says Chang-Shung Tung, leader of the Lab's Theoretical Biology and Biophysics group.
Negative publicity
The laboratory has attracted negative publicity from a number of events. In 1999, Los Alamos scientist Wen Ho Lee was accused of 59 counts of mishandling classified information by downloading nuclear secrets—"weapons codes" used for computer simulations of nuclear weapons tests—to data tapes and removing them from the lab. After ten months in jail, Lee pleaded guilty to a single count and the other 58 were dismissed with an apology from U.S. District Judge James Parker for his incarceration. Lee had been suspected for having shared U.S. nuclear secrets with China, but investigators were never able to establish what Lee did with the downloaded data. In 2000, two computer hard drives containing classified data were announced to have gone missing from a secure area within the laboratory, but were later found behind a photocopier.
Science mission
Los Alamos National Laboratory's mission is to "solve national security challenges through simultaneous excellence". The laboratory's strategic plan reflects U.S. priorities spanning nuclear security, intelligence, defense, emergency response, nonproliferation, counterterrorism, energy security, emerging threats, and environmental management. This strategy is aligned with priorities set by the Department of Energy (DOE), the National Nuclear Security Administration (NNSA), and national strategy guidance documents, such as the Nuclear Posture Review, the National Security Strategy, and the Blueprint for a Secure Energy Future.
Los Alamos is the senior laboratory in the DOE system, and executes work in all areas of the DOE mission: national security, science, energy, and environmental management. The laboratory also performs work for the Department of Defense (DoD), Intelligence Community (IC), and Department of Homeland Security (DHS), among others. The laboratory's multidisciplinary scientific capabilities and activities are organized into six Capability Pillars:
Information, Science and Technology (IS&T)
Materials for the Future seeks to optimize materials for national security applications by predicting and controlling their performance and functionality through discovery science and engineering.
Nuclear and Particle Futures integrates nuclear experiments, theory, and simulation to understand and engineer complex nuclear phenomena.
Science of Signatures (SoS) applies science and technology to intransigent problems of system identification and characterization in areas of global security, nuclear defense, energy, and health.
Complex Natural and Engineered Systems (CNES)
Weapons Systems (WS)
Los Alamos operates three main user facilities:
The Center for Integrated Nanotechnologies: The Center for Integrated Nanotechnologies is a DOE/Office of Science National User Facility operated jointly by Sandia and Los Alamos National Laboratories with facilities at both Laboratories. CINT is dedicated to establishing the scientific principles that govern the design, performance, and integration of nanoscale materials into microscale and macroscale systems and devices.
Los Alamos Neutron Science Center (LANSCE): The Los Alamos Neutron Science Center is one of the world's most powerful linear accelerators. LANSCE provides the scientific community with intense sources of neutrons with the capability of performing experiments supporting civilian and national security research. This facility is sponsored by the Department of Energy, the National Nuclear Security Administration, Office of Science and Office of Nuclear Energy, Science and Technology.
The National High Magnetic Field Laboratory (NHMFL), Pulsed Field Facility: The Pulsed Field Facility at Los Alamos National Laboratory in Los Alamos, New Mexico, is one of three campuses of the National High Magnetic Field Laboratory (NHMFL), the other two being at Florida State University, Tallahassee and the University of Florida. The Pulsed Field Facility at Los Alamos National Laboratory operates an international user program for research in high magnetic fields.
As of 2017, the Los Alamos National Laboratory is using data and algorithms to possibly protect public health by tracking the growth of infectious diseases. Digital epidemiologists at the lab's Information Systems and Modeling group are using clinical surveillance data, Google search queries, census data, Wikipedia, and even tweets to create a system that could predict epidemics. The team is using data from Brazil as its model; Brazil was notably threatened by the Zika virus as it prepared to host the Summer Olympics in 2016.
Laboratory management and operations
Within LANL's 43-square-mile property are approximately 2,000 dumpsites which have contaminated the environment. It also contributed to thousands of dumpsites at 108 locations in 29 US states.
Contract changes
Continuing efforts to make the laboratory more efficient led the Department of Energy to open its contract with the University of California to bids from other vendors in 2003. Though the university and the laboratory had difficult relations many times since their first World War II contract, this was the first time that the university ever had to compete for management of the laboratory. The University of California decided to create a private company with the Bechtel Corporation, Washington Group International, and the BWX Technologies to bid on the contract to operate the laboratory. The UC/Bechtel led corporation—Los Alamos National Security, LLC (LANS)—was pitted against a team formed by the University of Texas System partnered with Lockheed-Martin. In December 2005, the Department of Energy announced that LANS had won the next seven-year contract to manage and operate the laboratory.
On June 1, 2006, the University of California ended its sixty years of direct involvement in operating Los Alamos National Laboratory, and management control of the laboratory was taken over by Los Alamos National Security, LLC with effect October 1, 2007. Approximately 95% of the former 10,000 plus UC employees at LANL were rehired by LANS to continue working at LANL. Other than UC appointing three members to the eleven member board of directors that oversees LANS, UC now has virtually no responsibility or direct involvement in LANL. UC policies and regulations that apply to UC campuses and its two national laboratories in California (Lawrence Berkeley and Lawrence Livermore) no longer apply to LANL, and the LANL director no longer reports to the UC Regents or UC Office of the President.
On June 8, 2018, the NNSA announced that Triad National Security, LLC, a joint venture between Battelle Memorial Institute, the University of California, and Texas A&M University, would assume operation and management of LANL beginning November 1, 2018.
Safety management
In August 2011, the close placement of eight plutonium rods for a photo nearly led to a criticality accident. The photo shoot, which was directed by the laboratory's management, was one of several factors relating to unsafe management practices that led to the departure of 12 of the lab's 14 safety staff. This near miss was one of several incidents that led the Department of Energy to seek alternative bids to manage the laboratory after the 2018 expiration of the LANS contract.
The lab was penalized with a $57 million reduction in its 2014 budget over the February 14, 2014, accident at the Waste Isolation Pilot Plant for which it was partly responsible.
In August 2017, the improper storage of plutonium metal could have triggered a criticality accident, and subsequently staff failed to declare the failure as required by procedure.
Extended operations
With support of the National Science Foundation, LANL operates one of the three sites of the National High Magnetic Field Laboratory, in conjunction with the two other sites at Florida State University in Tallahassee, Florida, and the University of Florida in Gainesville, Florida.
Los Alamos National Laboratory is a partner in the Joint Genome Institute (JGI) located in Walnut Creek, California. JGI was founded in 1997 to unite the expertise and resources in genome mapping, DNA sequencing, technology development, and information sciences pioneered at the three genome centers at University of California's Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), and LANL.
The Integrated Computing Network (ICN) is a multi-security level network at the LANL integrating large host supercomputers, a file server, a batch server, a printer and graphics output server and numerous other general purpose and specialized systems. IBM Roadrunner, which was part of this network, was the first supercomputer to hit petaflop speeds.
Until 1999, the Los Alamos National Laboratory hosted the arXiv e-print archive. The arXiv is currently operated and funded by Cornell University.
The coreboot project was initially developed at LANL.
In recent years, the Laboratory has developed a major research program in systems biology modeling, known at LANL under the name q-bio.
Several serials are published by LANL:
National Security Science
1663
Community Connections
Actinide Research Quarterly
@theBradbury
Physical Sciences Vistas
LANL also published Los Alamos Science from 1980 to 2005, as well as the Nuclear Weapons Journal, which was replaced by National Security Science after two issues in 2009.
Controversy and criticism
In 2005, Congress held new hearings on lingering security issues at Los Alamos National Weapons Laboratory in New Mexico; documented problems continued to be ignored.
In November 2008, a drum containing nuclear waste ruptured in a "deflagration", according to a Department of Energy inspector general report; a similar drum rupture caused by lab mistakes occurred in 2014 at the Carlsbad plant, with significant disruptions and costs across the industry.
In 2009, 69 computers which did not contain classified information were lost. The same year also saw a scare in which 1 kg (2.2 lb) of missing plutonium prompted a Department of Energy investigation into the laboratory. The investigation found that the "missing plutonium" was a result of miscalculation by LANL's statisticians and did not actually exist; but the investigation did lead to heavy criticism of the laboratory by the DOE for security flaws and weaknesses that the DOE claimed to have found.
Institutional statistics
LANL is northern New Mexico's largest institution and the largest employer with approximately 8,762 direct employees, 277 guard force, 505 contractors, 1,613 students, 1,143 unionized craft workers, and 452 post-doctoral researchers. Additionally, there are roughly 120 DOE employees stationed at the laboratory to provide federal oversight of LANL's work and operations. Approximately one-third of the laboratory's technical staff members are physicists, one-quarter are engineers, one-sixth are chemists and materials scientists, and the remainder work in mathematics and computational science, biology, geoscience, and other disciplines. Professional scientists and students also come to Los Alamos as visitors to participate in scientific projects. The staff collaborates with universities and industry in both basic and applied research to develop resources for the future. The annual budget is approximately US$2.2 billion.
Directors
J. Robert Oppenheimer (1942–1945)
Norris Bradbury (1945–1970)
Harold Agnew (1970–1979)
Donald Kerr (1979–1986)
Siegfried S. Hecker (1986–1997)
John C. Browne (1997–2003)
George Peter Nanos (2003–2005)
Robert W. Kuckuck (2005–2006)
Michael R. Anastasio (2006–2011)
Charles F. McMillan (2011–2017)
Terry Wallace (2018)
Thomas Mason (2018–present)
Notable scientists
Stirling Colgate (1925–2013)
George Cowan (1920–2012), American physical chemist, businessman, and philanthropist
Mitchell Feigenbaum (1944–2019)
Richard Feynman (1918–1988)
Bette Korber
Tom Lehrer
Maria Goeppert Mayer (1906–1972)
Howard O. McMahon (1914–1990), Canadian-born American electrical engineer, inventor of the Gifford-McMahon cryocooler, and the Science Director, Vice President, Head of the Research and Development Division, and then President of Arthur D. Little, Inc; lived and worked partially in Los Alamos during development of the first Hydrogen bomb
Emily Willbanks (1930–2007)
See also
Anti-nuclear movement in the United States
Association of Los Alamos Scientists
Bradbury Science Museum
Chalk River Laboratories
Federation of American Scientists
Clarence Max Fowler
David Greenglass
Ed Grothus
Theodore Hall
History of nuclear weapons
Hydrogen-moderated self-regulating nuclear power module
National Historic Landmarks in New Mexico
National Register of Historic Places listings in Los Alamos County, New Mexico
Julius and Ethel Rosenberg
Timeline of Cox Report controversy
Timeline of nuclear weapons development
Venona project
Notes
References
Further reading
External links
Los Alamos: Overview of Historical Operations
Annotated bibliography on Los Alamos from the Alsos Digital Library
University of California Office of Laboratory Management (official website)
Los Alamos Neutron Science Center "LANSCE"
Los Alamos Weather Machine
LANL: The Real Story (LANL community blog)
LANL: The Corporate Story (follow-up blog to "LANL: The Real Story")
LANL: Technology Transfer, an example
LANL: The Rest of the Story (ongoing blog for LANL employees)
Protecting the Nation's Nuclear Materials. Government Calls Arms Complexes Secure; Critics Disagree NPR.
Los Alamos Study Group, an Albuquerque-based group opposed to nuclear weapons
Site Y: Los Alamos, a map of Manhattan Project-era Site Y (Los Alamos, New Mexico).
Los Alamos National Laboratory Nuclear Facilities, 1997
Machinists who assembled the atomic bomb.
Archival collections
Los Alamos University notebooks, 1945-1946, Niels Bohr Library & Archives
Los Alamos, New Mexico
United States Department of Energy national laboratories
Buildings and structures in Los Alamos County, New Mexico
Federally Funded Research and Development Centers
Government buildings in New Mexico
Manhattan Project sites
Nuclear research institutes
Nuclear weapons infrastructure of the United States
Supercomputer sites
History of Los Alamos County, New Mexico
Government buildings on the National Register of Historic Places in New Mexico
Historic districts on the National Register of Historic Places in New Mexico
National Historic Landmarks in New Mexico
National Register of Historic Places in Los Alamos County, New Mexico
World War II on the National Register of Historic Places
Bechtel
University of California
Military research of the United States
Physics research institutes
Theoretical physics institutes
1943 establishments in New Mexico
Research institutes in New Mexico | Los Alamos National Laboratory | Physics,Engineering | 4,500 |
10,924,883 | https://en.wikipedia.org/wiki/Ethylmethylthiambutene | Ethylmethylthiambutene (; Emethibutin) is an opioid analgesic drug from the thiambutene family, around 1.3x the potency of morphine. It is under international control under Schedule I of the UN Single Convention On Narcotic Drugs 1961, presumably due to high abuse potential.
It is a Schedule I controlled substance in the United States with a DEA ACSCN of 9623 and zero annual manufacturing quota as of 2013.
References
Synthetic opioids
Thiophenes
Amines
Mu-opioid receptor agonists | Ethylmethylthiambutene | Chemistry | 125 |
2,571,292 | https://en.wikipedia.org/wiki/Z-variant | In Unicode, two glyphs are said to be Z-variants (often spelled zVariants) if they share the same etymology but have slightly different appearances and different Unicode code points. For example, the Unicode characters 說 and U+8AAC 説 are Z-variants. The notion of Z-variance is only applicable to the "CJKV scripts"—Chinese, Japanese, Korean and Vietnamese—and is a subtopic of Han unification.
Differences on the Z-axis
The Unicode philosophy of code point allocation for CJK languages is organized along three "axes." The X-axis represents differences in semantics; for example, the Latin capital A (U+0041 A) and the Greek capital alpha (U+0391 Α) are represented by two distinct code points in Unicode, and might be termed "X-variants" (though this term is not common). The Y-axis represents significant differences in appearance though not in semantics; for example, the traditional Chinese character māo "cat" (U+8C93 貓) and the simplified Chinese character (U+732B 猫) are Y-variants.
The Z-axis represents minor typographical differences. For example, the Chinese characters 莊 (U+838A) and 荘 (U+8358) are Z-variants, as are 說 (U+8AAA) and 説 (U+8AAC). The glossary at Unicode.org defines "Z-variant" as "Two CJK unified ideographs with identical semantics and unifiable shapes," where "unifiable" is taken in the sense of Han unification.
Thus, were Han unification perfectly successful, Z-variants would not exist. They exist in Unicode because it was deemed useful to be able to "round-trip" documents between Unicode and other CJK encodings such as Big5 and CCCII. For example, the character 莊 has CCCII encoding 21552D, while its Z-variant 荘 has CCCII encoding 2D552D. Therefore, these two variants were given distinct Unicode code points, so that converting a CCCII document to Unicode and back would be a lossless operation.
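The distinct code points behind visually near-identical glyphs can be inspected directly; a minimal Python illustration using the pair quoted above:

    # Z-variants look nearly identical but occupy separate code points,
    # which is what makes the lossless CCCII round trip possible.
    for ch in "說説":
        print(f"{ch}  U+{ord(ch):04X}")   # prints U+8AAA and U+8AAC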
Confusion
There is some confusion over the exact definition of "Z-variant." For example, in an Internet Draft dated 2002, one finds "no" (不) and its variant form (不︀) described as "font variants," the term "Z-variant" being apparently reserved for interlanguage pairs such as the Mandarin Chinese "rabbit" (兔, U+5154) and the Japanese "rabbit" (兎, U+514E). However, the Unicode Consortium's Unihan database treats both pairs as Z-variants.
See also
Backward compatibility
References
Character encoding
Unicode
Computer-related introductions in 1991 | Z-variant | Technology | 535 |
69,634,673 | https://en.wikipedia.org/wiki/Laurie%20Marhoefer | Laurie Marhoefer is a historian of queer and trans politics who is employed as the Jon Bridgman Endowed Professor of History at the University of Washington. In January 2021, together with Jennifer V. Evans, they facilitated the Jack and Anita Hess Research Seminar at the United States Holocaust Memorial Museum on LGBTQ+ histories of the Holocaust.
Works
References
Living people
University of Washington faculty
Historians of Germany
Historians of sexuality
Year of birth missing (living people)
Historians of LGBTQ topics
LGBTQ studies academics | Laurie Marhoefer | Biology | 100 |
615,385 | https://en.wikipedia.org/wiki/Gate%20valve | A gate valve, also known as a sluice valve, is a valve that opens by lifting a barrier (gate) out of the path of the fluid. Gate valves require very little space along the pipe axis and hardly restrict the flow of fluid when the gate is fully opened. The gate faces can be parallel but are most commonly wedge-shaped (in order to be able to apply pressure on the sealing surface).
Typical use
Gate valves are used to shut off the flow of liquids rather than for flow regulation, which is frequently done with a globe valve. When fully open, the typical gate valve has no obstruction in the flow path, resulting in very low flow resistance. The size of the open flow path generally varies in a nonlinear manner as the gate is moved. This means that the flow rate does not change evenly with stem travel. Depending on the construction, a partially open gate can vibrate from the fluid flow.
Gate valves are mostly used with larger pipe diameters (from 2" to the largest pipelines) since they are less complex to construct than other types of valves in large sizes.
At high pressures, friction can become a problem. As the gate is pushed against its guiding rail by the pressure of the medium, it becomes harder to operate the valve. Large gate valves are sometimes fitted with a bypass controlled by a smaller valve to be able to reduce the pressure before operating the gate valve itself.
Gate valves without an extra sealing ring on the gate or the seat are used in applications where minor leaking of the valve is not an issue, such as heating circuits or sewer pipes.
Valve construction
Common gate valves are actuated by a threaded stem that connects the actuator (e.g. handwheel or motor) to the gate. They are characterised as having either a rising or a nonrising stem, depending on which end of the stem is threaded. Rising stems are fixed to the gate and rise and lower together as the valve is operated, providing a visual indication of valve position. The actuator is attached to a nut that is rotated around the threaded stem to move it. Nonrising stem valves are fixed to, and rotate with, the actuator, and are threaded into the gate. They may have a pointer threaded onto the stem to indicate valve position, since the gate's motion is concealed inside the valve. Nonrising stems are used where vertical space is limited.
Gate valves may have flanged ends drilled according to pipeline-compatible flange dimensional standards.
Gate valves are typically constructed from cast iron, cast carbon steel, ductile iron, gunmetal, stainless steel, alloy steels, and forged steels.
All-metal gate valves are used in ultra-high vacuum chambers to isolate regions of the chamber.
Bonnet
Bonnets provide leakproof closure for the valve body. Gate valves may have a screw-in, union, or bolted bonnet. A screw-in bonnet is the simplest, offering a durable, pressure-tight seal. A union bonnet is suitable for applications requiring frequent inspection and cleaning. It also gives the body added strength. A bolted bonnet is used for larger valves and higher pressure applications.
Pressure seal bonnet
Another type of bonnet construction in a gate valve is the pressure seal bonnet. This construction is adopted for valves for high pressure service, typically in excess of 2250 psi (15 MPa). The unique feature of the pressure seal bonnet is that the bonnet ends in a downward-facing cup that fits inside the body of the valve. As the internal pressure in the valve increases, the sides of the cup are forced outward, improving the body-bonnet seal. Other constructions, where the seal is provided by external clamping pressure, tend to create leaks in the body-bonnet joint.
Knife gate valve
For plastic solids and high-viscosity slurries such as paper pulp, a specialty valve known as a knife gate valve is used to cut through the material to stop the flow. A knife gate valve is usually not wedge shaped and has a tapered knife-like edge on its lower surface.
Images
See also
Ball valve
Blast gate
Butterfly valve
Control valve
Diaphragm valve
Globe valve
Needle valve
Process flow diagram
Piping and instrumentation diagram
References
Plumbing valves
Valves
Articles containing video clips | Gate valve | Physics,Chemistry | 857 |
3,509,706 | https://en.wikipedia.org/wiki/Home%20network | A home network or home area network (HAN) is a type of computer network, specifically a type of local area network (LAN), that facilitates communication among devices within the close vicinity of a home. Devices capable of participating in this network, for example, smart devices such as network printers and handheld mobile computers, often gain enhanced emergent capabilities through their ability to interact. These additional capabilities can be used to increase the quality of life inside the home in a variety of ways, such as automation of repetitive tasks, increased personal productivity, enhanced home security, and easier access to entertainment.
Infrastructure devices
Certain devices on a home network are primarily concerned with enabling or supporting the communications of the kinds of end devices home-dwellers more directly interact with. Unlike their data center counterparts, these "networking" devices are compact and passively cooled, aiming to be as hands-off and non-obtrusive as possible.
A router is central to a typical home network and performs the key function of network address translation (NAT), giving independent private addresses to each device. These devices often come with an integrated wireless access point and a 4-port Ethernet switch. The switch allows devices on the home network to talk to one another via Ethernet. While the needs of most home networks are satisfied with the built-in wireless and/or switching capabilities of the router, some situations require the addition of a separate switch with advanced capabilities: for example, a typical home router has only four to six Ethernet LAN ports, so its switching capacity can be exceeded, or a network device might require a non-standard port feature such as power over Ethernet (PoE), as used by IP cameras and IP phones. A wireless access point is required for connecting wireless devices to a network; when a router includes this device, it is referred to as a wireless router, which is predominantly the case nowadays.
A gateway establishes physical and data link layer connectivity to a WAN such as the Internet. Home routers provided by Internet service providers (ISPs) usually have the modem integrated within the unit. The router is effectively a client of the external DHCP servers owned by the ISP.
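As an aside, whether an address belongs to the private ranges that NAT relies on can be checked with Python's standard ipaddress module; the sample addresses below are arbitrary examples:

    # Classify sample addresses as private (NAT-translated) or public.
    import ipaddress

    for addr in ("192.168.1.10", "10.0.0.5", "8.8.8.8"):
        kind = "private" if ipaddress.ip_address(addr).is_private else "public"
        print(addr, kind)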
Controllers for home automation or smart home hubs act as a gateway and router for low-power wireless networks of simple, non-data-intensive devices such as light bulbs and locks.
Connectivity and protocols
Home networks may use either wired or wireless connectivity methods that are found and standardized on local area networks or personal area networks. One of the most common ways of creating a home network is by using wireless radio signal technology: the 802.11 standard defined by the IEEE. Most wireless-capable residential devices operate at a frequency of 2.4 GHz under 802.11b and 802.11g or 5 GHz under 802.11a. Some home networking devices operate in both radio bands and fall within the 802.11n or 802.11ac standards. Wi-Fi is a marketing and compliance certification for IEEE 802.11 technologies. The Wi-Fi Alliance tests compliant products and certifies them for interoperability.
Low power, close range communication based on IEEE 802.15 standards has a strong presence in homes. Bluetooth continues to be the technology of choice for most wireless accessories such as keyboards, mice, headsets, and game controllers. These connections are often established in a transient, ad-hoc manner and are not thought of as permanent residents of a home network. A "low-rate" version of the original WPAN protocol was used as the basis of Zigbee.
Endpoint devices and services
Home networks may consist of a variety of devices and services. Personal computers such as desktops and mobile computers like tablets and smartphones are commonly used on home networks to communicate with other devices. A network attached storage (NAS) device may be part of the network, for general storage or backup purposes. A print server can be used to share any directly connected printers with other computers on the network.
Smart speakers may be used on a network for streaming media. DLNA is a common protocol used for interoperability between networked media-centric devices in the home, allowing devices like stereo systems on the network to access the music library from a PC on the same network, for example. Using an additional Internet connection, TVs for instance may stream online video content, while video game consoles can use online multiplayer.
Traditionally, data-centric equipment such as computers and media players have been the primary tenants of a home network. However, due to the lowering cost of computing and the ubiquity of smartphone usage, many traditionally non-networked home equipment categories now include new variants capable of control or remote monitoring through an app on a smartphone. Newer startups and established home equipment manufacturers alike have begun to offer these products as part of a "Smart" or "Intelligent" or "Connected Home" portfolio. Examples of such may include "connected" light bulbs (see also Li-Fi), home security alarms and smoke detectors. These often run over the Internet so that they can be accessed remotely.
Individuals may opt to subscribe to managed cloud computing services that provide such services instead of maintaining similar facilities within their home network. In such situations, local services along with the devices maintaining them are replaced by those in an external data center and made accessible to the home-dweller's computing devices via a WAN Internet connection.
Network management
Apple devices aim to make networking as hidden and automatic as possible, utilizing a zero-configuration networking protocol called Bonjour embedded within their otherwise proprietary line of software and hardware products.
Microsoft offers simple access control features built into their Windows operating system. HomeGroup is a feature that allows shared disk access, shared printer access and shared scanner access among all computers and users (typically family members) in a home, in a similar fashion as in a small office workgroup, e.g., by means of distributed peer-to-peer networking (without a central server). Additionally, a home server may be added for increased functionality. The Windows HomeGroup feature was introduced with Microsoft Windows 7 in order to simplify file sharing in residences. All users (typically all family members), except guest accounts, may access any shared library on any computer that is connected to the home group. Passwords are not required from the family members during logon. Instead, secure file sharing is possible by means of a temporary password that is used when adding a computer to the HomeGroup.
See also
Access control
Computer security software
Data backup
Encryption
Firewall (computing)
Home automation
Home server
Indoor positioning system (IPS)
Matter
Network security
Smart, connected products
Software update
Virtual assistant
References
External links
WikiBooks:Transferring Data between Standard Dial-Up Modems
Home Net WG of the IETF
Computer networking
Wi-Fi
Network | Home network | Technology,Engineering | 1,389 |
42,777,911 | https://en.wikipedia.org/wiki/H%C3%A9non%E2%80%93Heiles%20system | While at Princeton University in 1962, Michel Hénon and Carl Heiles worked on the non-linear motion of a star around a galactic center with the motion restricted to a plane. In 1964 they published an article titled "The applicability of the third integral of motion: Some numerical experiments". Their original idea was to find a third integral of motion in a galactic dynamics. For that purpose they took a simplified two-dimensional nonlinear rotational symmetric potential and found that the third integral existed only for a limited number of initial conditions.
In the modern perspective the initial conditions that do not have the third integral of motion are called chaotic orbits.
Introduction
The Hénon–Heiles potential can be expressed as
$V(x,y) = \frac{1}{2}\left(x^2 + y^2\right) + \lambda\left(x^2 y - \frac{y^3}{3}\right).$
The Hénon–Heiles Hamiltonian can be written as
$H = \frac{1}{2}\left(p_x^2 + p_y^2\right) + \frac{1}{2}\left(x^2 + y^2\right) + \lambda\left(x^2 y - \frac{y^3}{3}\right).$
The Hénon–Heiles system (HHS) is defined by the following four equations:
$\dot{x} = p_x, \qquad \dot{y} = p_y, \qquad \dot{p}_x = -x - 2\lambda x y, \qquad \dot{p}_y = -y - \lambda\left(x^2 - y^2\right).$
In the classical chaos community, the value of the parameter $\lambda$ is usually taken as unity.
Since the HHS is specified in $\mathbb{R}^4$ (the four-dimensional phase space with coordinates $x, y, p_x, p_y$), we need a Hamiltonian with 2 degrees of freedom to model it.
It can be solved for some cases using Painlevé analysis.
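In general, though, orbits must be followed numerically. A minimal sketch in Python, assuming $\lambda = 1$ and an arbitrary bound initial condition (neither taken from the original paper); the conserved energy is monitored as a sanity check on the integration:

    # Integrate the Henon-Heiles equations of motion (lambda = 1).
    from scipy.integrate import solve_ivp

    def rhs(t, s):
        x, y, px, py = s
        return [px, py, -x - 2.0 * x * y, -y - x**2 + y**2]

    # Illustrative initial condition with energy below the escape energy 1/6.
    sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.1, 0.5, 0.0],
                    rtol=1e-9, atol=1e-9)
    x, y, px, py = sol.y
    E = 0.5 * (px**2 + py**2) + 0.5 * (x**2 + y**2) + x**2 * y - y**3 / 3.0
    print(f"energy drift over the run: {E.max() - E.min():.2e}")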
Quantum Hénon–Heiles Hamiltonian
In the quantum case the Hénon–Heiles Hamiltonian can be written as a two-dimensional Schrödinger equation.
The corresponding two-dimensional Schrödinger equation is given by
$-\frac{\hbar^2}{2m}\left(\frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2}\right) + V(x,y)\,\psi = E\,\psi,$
where $V(x,y)$ is the Hénon–Heiles potential given above.
Wada property of the exit basins
The Hénon–Heiles system shows rich dynamical behavior. Usually the Wada property cannot be seen in Hamiltonian systems, but the Hénon–Heiles exit basins show an interesting Wada property. When the energy is greater than the critical energy, the Hénon–Heiles system has three exit basins. In 2001, M. A. F. Sanjuán et al. showed that in the Hénon–Heiles system the exit basins have the Wada property.
References
External links
http://mathworld.wolfram.com/Henon-HeilesEquation.html
Stellar astronomy
Chaotic maps | Hénon–Heiles system | Astronomy,Mathematics | 418 |
55,904,412 | https://en.wikipedia.org/wiki/Nuclear%20star%20cluster | A nuclear star cluster (NSC) or compact stellar nucleus (sometimes called young stellar nucleus) is a star cluster with high density and high luminosity near the center of mass of most galaxies.
NSCs are the central massive objects of fainter, low-mass galaxies where supermassive black holes (SMBHs) are not present or are of negligible mass. In the most massive galaxies, NSCs are entirely absent. Some galaxies, including the Milky Way, are known to contain both a NSC and a SMBH of comparable mass.
Properties
Nuclear star clusters are found in most galaxies that can be resolved sufficiently:
at least 50% of all early spiral galaxies (types Sa-Sc)
at least 75% of all late spiral galaxies (types Scd-Sm)
at least 70% of all spheroidal galaxies (types S0 and E).
NSCs are the densest known star clusters in the Universe. With absolute magnitudes between -14 and -10 mag in the infrared, they are on average 40 times brighter than globular clusters, although their effective radii are no larger than 2 to 5 parsecs. With dynamical masses of 10^6 to 10^8 solar masses, they are at the upper end of the range reached by globular clusters.
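The quoted brightness factor is consistent with the magnitude scale: a magnitude difference $\Delta m$ corresponds to a luminosity ratio of $10^{0.4\,\Delta m}$, so a representative 4 mag gap between an NSC and a typical globular cluster (a figure chosen here purely for illustration) gives $10^{0.4 \times 4} \approx 40$.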
The majority of nuclear star clusters contain a mix of old (at least one billion years old) and young stellar populations and show signs of star formation within the last 100 million years.
Formation
Although the mechanisms behind their formation are not entirely known, hypotheses provide four possibilities:
Nuclear star clusters originate somewhere else and are captured by a central black hole.
Nuclear star clusters form from gas that falls in from some distance toward the center of the galaxy.
A combination of the above possibilities, whereby the gravitational potential of a trapped object, such as the nucleus of a dwarf galaxy, triggers new star formation from infalling gas near the galactic center.
Nuclear star clusters are created by merging star clusters with subsequent migration to the galactic center due to dynamical friction with background stars.
Relationship with globular clusters
Because nuclear star clusters occur in most galaxy types, they should still be present in the halo of the resulting galaxy after a galaxy merger. This is one hypothesis for the formation of globular clusters. Thus, globular clusters could be the remains of nuclear star clusters that are cut off from gas infall and in which no new star formation occurs.
According to other hypotheses, however, nuclear star clusters could be the result of the merger of globular clusters that were captured by a supermassive black hole at the center of the galaxy and dynamically disrupted.
References
Star clusters | Nuclear star cluster | Astronomy | 539 |
15,210,932 | https://en.wikipedia.org/wiki/Lists%20of%20star%20names | In astronomy, star names, in contrast to star designations, are proper names of stars that have emerged from usage in pre-modern astronomical traditions. Lists of these names appear in the following articles:
List of Arabic star names
List of Chinese star names
List of proper names of stars: traditional proper names in modern usage around astronomy
Stars named after people
Names | Lists of star names | Astronomy | 71 |
35,171,726 | https://en.wikipedia.org/wiki/Field%20effect%20%28semiconductor%29 | In physics, the field effect refers to the modulation of the electrical conductivity of a material by the application of an external electric field.
In a metal, the electron density that responds to applied fields is so large that an external electric field can penetrate only a very short distance into the material. However, in a semiconductor the lower density of electrons (and possibly holes) that can respond to an applied field is sufficiently small that the field can penetrate quite far into the material. This field penetration alters the conductivity of the semiconductor near its surface, and is called the field effect. The field effect underlies the operation of the Schottky diode and of field-effect transistors, notably the MOSFET, the JFET and the MESFET.
Surface conductance and band bending
The change in surface conductance occurs because the applied field alters the energy levels available to electrons to considerable depths from the surface, and that in turn changes the occupancy of the energy levels in the surface region. A typical treatment of such effects is based upon a band-bending diagram showing the positions in energy of the band edges as a function of depth into the material.
An example band-bending diagram is shown in the figure. For convenience, energy is expressed in eV and voltage is expressed in volts, avoiding the need for a factor q for the elementary charge. In the figure, a two-layer structure is shown, consisting of an insulator as the left-hand layer and a semiconductor as the right-hand layer. An example of such a structure is the MOS capacitor, a two-terminal structure made up of a metal gate contact, a semiconductor body (such as silicon) with a body contact, and an intervening insulating layer (such as silicon dioxide, hence the designation O). The left panels show the lowest energy level of the conduction band and the highest energy level of the valence band. These levels are "bent" by the application of a positive voltage V. By convention, the energy of electrons is shown, so a positive voltage penetrating the surface lowers the conduction band edge. A dashed line depicts the occupancy level: states below this Fermi level are more likely to be occupied. Near the insulator the conduction band moves closer to the Fermi level, indicating that more electrons are in the conduction band there.
Bulk region
The example in the figure shows the Fermi level in the bulk material beyond the range of the applied field as lying close to the valence band edge. This position for the occupancy level is arranged by introducing impurities into the semiconductor. In this case the impurities are so-called acceptors which soak up electrons from the valence band becoming negatively charged, immobile ions embedded in the semiconductor material. The removed electrons are drawn from the valence band levels, leaving vacancies or holes in the valence band. Charge neutrality prevails in the field-free region because a negative acceptor ion creates a positive deficiency in the host material: a hole is the absence of an electron, it behaves like a positive charge. Where no field is present, neutrality is achieved because the negative acceptor ions exactly balance the positive holes.
Surface region
Next the band bending is described. A positive charge is placed on the left face of the insulator (for example using a metal "gate" electrode). In the insulator there are no charges so the electric field is constant, leading to a linear change of voltage in this material. As a result, the insulator conduction and valence bands are therefore straight lines in the figure, separated by the large insulator energy gap.
In the semiconductor at the smaller voltage shown in the top panel, the positive charge placed on the left face of the insulator lowers the energy of the valence band edge. Consequently, these states are fully occupied out to a so-called depletion depth where the bulk occupancy reestablishes itself because the field cannot penetrate further. Because the valence band levels near the surface are fully occupied due to the lowering of these levels, only the immobile negative acceptor-ion charges are present near the surface, which becomes an electrically insulating region without holes (the depletion layer). Thus, field penetration is arrested when the exposed negative acceptor ion charge balances the positive charge placed on the insulator surface: the depletion layer adjusts its depth enough to make the net negative acceptor ion charge balance the positive charge on the gate.
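In the standard depletion approximation (a textbook relation, not specific to the figure discussed here), the depletion depth $W$ and the exposed acceptor charge per unit area follow
$W = \sqrt{\frac{2\varepsilon_s \psi_s}{q N_A}}, \qquad Q_{\mathrm{dep}} = q N_A W,$
where $\varepsilon_s$ is the semiconductor permittivity, $\psi_s$ the surface band bending, and $N_A$ the acceptor density.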
Inversion
The conduction band edge also is lowered, increasing electron occupancy of these states, but at low voltages this increase is not significant. At larger applied voltages, however, as in the bottom panel, the conduction band edge is lowered sufficiently to cause significant population of these levels in a narrow surface layer, called an inversion layer because the electrons are opposite in polarity to the holes originally populating the semiconductor. This onset of electron charge in the inversion layer becomes very significant at an applied threshold voltage, and once the applied voltage exceeds this value charge neutrality is achieved almost entirely by addition of electrons to the inversion layer rather than by an increase in acceptor ion charge by expansion of the depletion layer. Further field penetration into the semiconductor is arrested at this point, as the electron density increases exponentially with band-bending beyond the threshold voltage, effectively pinning the depletion layer depth at its value at threshold voltages.
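In the standard textbook treatment (symbols follow the usual MOS conventions rather than anything specific to this article), the surface electron density grows exponentially with the surface band bending $\psi_s$,
$n_s = n_{p0}\, e^{q \psi_s / kT},$
and the threshold, or strong-inversion, condition is conventionally
$\psi_s = 2\phi_B, \qquad \phi_B = \frac{kT}{q} \ln\frac{N_A}{n_i},$
where $n_{p0}$ is the equilibrium electron density in the p-type bulk, $N_A$ the acceptor density, and $n_i$ the intrinsic carrier density.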
References
Semiconductors
Semiconductor technology
Semiconductor structures
Electronic band structures
Physical phenomena
MOSFETs | Field effect (semiconductor) | Physics,Chemistry,Materials_science,Engineering | 1,130 |
1,010,745 | https://en.wikipedia.org/wiki/Giemsa%20stain | Giemsa stain (), named after German chemist and bacteriologist Gustav Giemsa, is a nucleic acid stain used in cytogenetics and for the histopathological diagnosis of malaria and other parasites.
Uses
It is specific for the phosphate groups of DNA and attaches itself to regions of DNA where there are high amounts of adenine-thymine bonding. Giemsa stain is used in Giemsa banding, commonly called G-banding, to stain chromosomes and often used to create a karyogram (chromosome map). It can identify chromosomal aberrations such as translocations and rearrangements.
It stains the trophozoite of Trichomonas vaginalis, an infection which presents with greenish discharge and motile cells on wet prep.
Giemsa stain is also a differential stain, such as when it is combined with Wright stain to form Wright-Giemsa stain. It can be used to study the adherence of pathogenic bacteria to human cells. It differentially stains human and bacterial cells purple and pink respectively. It can be used for histopathological diagnosis of the Plasmodium species that cause malaria and some other spirochete and protozoan blood parasites. It is also used to stain Wolbachia cells in host tissue.
Giemsa stain is a classic blood film stain for peripheral blood smears and bone marrow specimens. Erythrocytes stain pink, platelets show a light pale pink, lymphocyte cytoplasm stains sky blue, monocyte cytoplasm stains pale blue, and leukocyte nuclear chromatin stains magenta. It is also used to visualize the classic "safety pin" shape in Yersinia pestis.
Giemsa stain is also used to visualize chromosomes. This is particularly relevant for detection of Cytomegalovirus infection, where the classical finding would be an "owl-eye" viral inclusion.
Giemsa stains the fungus Histoplasma, Chlamydia bacteria, and can be used to identify mast cells.
Generation
Giemsa's solution is a mixture of methylene blue, eosin, and Azure B. The stain is usually prepared from commercially available Giemsa powder.
A thin film of the specimen on a microscope slide is fixed in pure methanol for 30 seconds, by immersing it or by putting a few drops of methanol on the slide. The slide is immersed in a freshly prepared 5% Giemsa stain solution for 20–30 minutes (in emergencies 5–10 minutes in 10% solution can be used), then flushed with tap water and left to dry.
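As a rough illustration of the dilution arithmetic behind this protocol, here is a minimal sketch; the 50 ml working volume is an assumed example, and "stock" stands for the commercially prepared Giemsa solution mentioned above.

def giemsa_working_solution(final_volume_ml, stain_percent):
    """Volumes of Giemsa stock and diluent for a stain_percent (v/v)
    working solution of total volume final_volume_ml."""
    stock = final_volume_ml * stain_percent / 100.0
    return stock, final_volume_ml - stock

# Routine staining: 5% solution, immersed 20-30 minutes.
stock, diluent = giemsa_working_solution(50, 5)
print(f"5% solution: {stock:.1f} ml stock + {diluent:.1f} ml diluent")

# Emergency staining: 10% solution, immersed 5-10 minutes.
stock, diluent = giemsa_working_solution(50, 10)
print(f"10% solution: {stock:.1f} ml stock + {diluent:.1f} ml diluent")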
See also
Biological stains and staining protocols
Histology
Leishman stain
Microscopy
Romanowsky stain
Wright's stain
References
Histopathology
Histotechnology
Staining dyes | Giemsa stain | Chemistry | 593 |
72,420,474 | https://en.wikipedia.org/wiki/Nano-ARPES | Nano Angle-Resolved Photoemission Spectroscopy (Nano-ARPES) is a variant of the experimental technique ARPES (Angle-Resolved Photoemission Spectroscopy). It can precisely determine the electronic band structure of materials in momentum space with submicron lateral resolution. Because of its demanding experimental setup, this technique is much less widespread than ARPES, which is widely used in condensed matter physics to determine experimentally the electronic properties of a broad range of crystalline materials. Nano-ARPES can access the electronic structure of well-ordered monocrystalline solids with high energy, momentum, and lateral resolution, even in nanometric or heterogeneous mesoscopic samples. Like ARPES, Nano-ARPES is based on Einstein's photoelectric effect, being a photon-in, electron-out spectroscopy, and it has become an essential tool in studying the electronic structure of nanomaterials, such as quantum and low-dimensional materials.
Nano-ARPES allows the experimental determination of the relationship between the binding energies and wave momenta of the electrons in the occupied electronic states of bands lying close to the Fermi level and down to approximately 10-15 eV below it. These electrons are ejected from a solid when it is illuminated by monochromatic photons with sufficient energy to emit photoelectrons from the surface of the material. The photoelectrons are detected by an electron analyzer placed close to the sample's surface in vacuum, both to preserve the uncontaminated surface and to avoid collisions with particles that could modify the energy and trajectory of the photoelectrons on their way to the spectrometer. Because momentum is conserved in the photoemission process, the angular distribution of photoelectrons from a monocrystal, even one of nanometric size, directly reveals the momentum distribution of the initial electronic states in that crystal. As in the ARPES technique, Nano-ARPES results are traditionally shown as energy-momentum dispersion relations along the high-symmetry directions of the irreducible Brillouin zone, displaying the band dispersions of the investigated materials. When the emitted photoelectrons are shown as constant-energy surfaces throughout large portions of reciprocal space, Nano-ARPES can also precisely determine the Fermi surface of the investigated materials. Owing to its unique ability to spatially map the electronic dispersion of electrons in the sample, Nano-ARPES can also generate electronic images of nanomaterials with high binding-energy and momentum resolution. As Nano-ARPES is a scanning technique, it can use state-of-the-art ARPES spectrometers without requiring them to discriminate spatially the origin of the analysed photoelectrons. Consequently, Nano-ARPES instrumentation can profit from the most advanced spectrometers developed for ARPES setups, particularly the latest generation of electron spectrometers with bidimensional detection and high energy and momentum resolution.
Background
An understanding of the electronic band structure of solids is applied in many fields of condensed matter physics, contributing to the microscopic understanding of many phenomenological trends and guiding the interpretation of experimental spectra in photoemission, optics, inelastic neutron scattering, and specific heat, among others, including the effect of spin polarisation. Most modern theoretical band-structure methods employ Density Functional Theory to solve the full many-body Schrödinger equation for electrons in a solid. This consolidated experimental and theoretical approach to describing the electronic structure of solids allows straightforward visualization of the difference between conductors, insulators, and semiconductors according to the presence of permitted and forbidden electronic states of particular energy and momentum, which can be calculated by quantum mechanics and measured using ARPES.
The ARPES technique has the unique ability to determine the band structure directly. It thus helps in understanding the degree and type of electron interaction in solids, corroborating or contesting band-structure results calculated using different theoretical approaches. However, the technique's lateral resolution is rather limited, as is its capacity for manipulating and orienting submicrometric or heterogeneous samples. That is because the electrons measured in ARPES are all those ejected by the photo-absorption process prompted by the incident photons. If the illuminated area of the sample is large enough to cover non-homogeneous regions, the detected signal is the sum of all the photoelectrons emitted by the different illuminated patches. If each area has a distinctive electronic band structure, the ARPES spectra will show the average of all of them, weighted according to the size of each patch present in the illuminated area.
In fact, many complex materials are constituted of disoriented small monocrystals or composed of several nanometric monocrystals. Traditional ARPES can only provide their average electronic structure if the patch size is smaller than the spot size of the ARPES setup, typically 200 μm. This limitation is also present in samples with micrometric and submicrometric zones of distinctive chemical composition, created by undesired side chemical reactions, for example contamination or oxidation of the original sample. Hence, with the spot size of the monochromatic photon beam typically over 200 μm across for conventional ARPES, only homogeneous samples of this size or larger can be studied.
Consequently, sub-micrometric lateral resolution must be added to ARPES to determine experimentally the electronic structure of small crystalline materials and of large samples with heterogeneities. Nano-ARPES implements this lateral discrimination by focusing the incident photon beam down to the nanometric scale. Similarly to ARPES, the electronic band structure of nanomaterials can be directly measured using Nano-ARPES by measuring the ejected electrons' kinetic energy, velocity, and absolute momentum.
Focusing the photon beam to a spot size down to the nanometric scale has been routinely achieved in a few well-known X-ray-based methods, such as scanning transmission X-ray microscopy (STXM) and scanning photoemission microscopy (SPEM). However, these techniques are much less demanding because they typically use incident photon energies higher than 150 eV and do not require angle-resolved measurements, recording only integrated signals proportional to the X-ray absorption coefficient and core-level photoelectrons, respectively. In both cases, the performance of the Fresnel zone plates (FZPs) is the essential factor determining the lateral resolution, which ranges from micrometric to nanometric. Nowadays, several companies provide FZPs with a resolution better than 30 nm, which has facilitated the construction and operation of several X-ray-based microscopes, such as STXM and SPEM instruments, at synchrotron radiation facilities including Elettra, ALS, CLS, and MAX-lab, among others. The Nano-ARPES technique, however, requires much lower incident photon energies (typically from 6 eV to 100 eV) to detect the photoelectrons emitted by the electronic states below and close to the Fermi level, whose cross-section increases as the incident photon energy decreases. An alternative k-space imaging approach is based on energy-filtered photoemission electron microscopes (PEEMs), in which the lateral resolution is achieved using an electron-optical column instead of focusing the incident photon beam. This full-field k-space version of PEEM is available commercially; however, achieving high energy and momentum resolution with it is challenging.
Instrumentation
Typically, high energy- and momentum-resolution ARPES experiments are performed at synchrotrons, which can provide bright and tunable high-energy photon sources to record the electronic band structure of ordered materials directly. That yields sharp and precise E vs k dispersions and constant-energy surfaces, including those corresponding to the Fermi surface of the studied materials.
Conventional ARPES systems consist of a monochromatic light source delivering a narrow beam of photons and a sample holder connected to a manipulator, used to position the sample angularly and translationally with respect to the electron spectrometer (detector) and the focus of the incident light beam. The equipment is contained within an ultra-high-vacuum (UHV) environment, which protects the sample from undesired contamination and prevents scattering of the emitted electrons. After being dispersed along two perpendicular directions according to kinetic energy and emission angle, the electrons are directed to the detector and counted to provide ARPES spectra: slices of the band structure along one momentum direction.
The main difference between a typical Nano-ARPES setup and conventional ARPES apparatus is that the soft X-ray beam is focused to a submicrometric spot using Fresnel zone plate (FZP) lenses. The specimens can be mounted on a high-precision manipulator that ensures nanoscale sample positioning in the x, y, and z directions, and the polar angle (Θ) and the azimuthal angle (Ψ) can also be scanned automatically.
This basic instrumentation allows two operating modes: a Nano-ARPES punctual mode (operating mode type 1), in which the nano-spot maps the band structure of nanometric crystalline solids to study quasiparticle dynamics in highly correlated and non-correlated materials, as in conventional ARPES; and a Nano-ARPES imaging mode (operating mode type 2), which measures the real-space spatial distribution of photoelectrons within a selected range of binding energy and momentum values.
State-of-the-art Nano-ARPES microscopes are equipped with continuous interferometric control of the position of the sample relative to the FZPs, which avoids thermal and mechanical drifts. This is required to prevent undesirable distortions of the recorded Nano-ARPES images (operating mode type 2) and to ensure the precision and reproducibility of E vs. k dispersion curves along specific directions of reciprocal space.
Energy constant surfaces in the reciprocal space & Fermi surface mapping
In Nano-ARPES setups, the analyzers used are the hemispherical electron-energy analyzers typically installed in high energy- and angular-resolution conventional ARPES apparatus. They use a slit to prevent the mixing of momentum and energy channels and, consequently, can only take angular maps in one direction. To record maps over energy and two-dimensional momentum space, as in conventional ARPES, either the sample needs to be rotated, or the collected photoelectron beam must be discriminated inside the spectrometer with the electrostatic lens, keeping the sample fixed. The energy-angle-angle maps are converted to binding energy-k//x-k//y maps. These images display constant-energy surfaces as a function of the k//x and k//y wave vectors of reciprocal space. The most remarkable constant-energy surface is the Fermi surface map, obtained by detecting those photoelectrons with binding energy right at the Fermi level.
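The angle-to-momentum conversion behind these maps follows from energy and in-plane momentum conservation for free-electron-like final states. A minimal sketch of the conversion, where the prefactor 0.5123 inverse angstroms per sqrt(eV) is sqrt(2*m_e)/hbar in these units and the angular window is an arbitrary example:

import numpy as np

def k_parallel(e_kin_ev, theta_deg, phi_deg=0.0):
    """In-plane momentum components (inverse angstroms) from the
    photoelectron kinetic energy (eV) and the two emission angles
    (degrees): k// = sqrt(2*m_e*E_kin)/hbar * sin(theta)."""
    k = 0.5123 * np.sqrt(e_kin_ev)   # sqrt(2 m_e)/hbar, A^-1 per sqrt(eV)
    theta, phi = np.radians(theta_deg), np.radians(phi_deg)
    return k * np.sin(theta) * np.cos(phi), k * np.sin(theta) * np.sin(phi)

# Map a +/-15 degree analyzer window at 30 eV kinetic energy onto k-space.
for t in np.linspace(-15, 15, 7):
    kx, _ = k_parallel(30.0, t)
    print(f"theta = {t:6.1f} deg -> k//x = {kx:+.3f} A^-1")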
Applications
The Nano-ARPES technique is an essential tool for resolving the electronic band structure of mesoscopic or heterogeneous materials in diverse condensed matter fields, such as quantum materials, high-temperature superconductors, topological materials, semiconductors, metals, insulators with a not-too-large band gap, and a wide variety of low-dimensional materials and heterostructures exhibiting effects of confinement, different stackings, and hybridization. Electronic-structure changes associated with all types of phase transitions, charge density waves, band hybridization, phase separation, charge transfer, and in-operando devices can also be revealed by combining nano-lateral resolution with high energy and momentum resolution.
References
Laboratory techniques in condensed matter physics
Emission spectroscopy
Electron spectroscopy | Nano-ARPES | Physics,Chemistry,Materials_science | 2,377 |
5,036,959 | https://en.wikipedia.org/wiki/List%20of%20exoplanet%20extremes | The following are lists of extremes among the known exoplanets. The properties listed here are those for which values are known reliably. The study of exoplanets is one of the most dynamic emerging fields of science, and these values may change as new discoveries are made.
Extremes from Earth's viewpoint
Planetary characteristics
Orbital characteristics
Stellar characteristics
System characteristics
See also
Extremes on Earth
Lists of exoplanets
List of exoplanet firsts
List of stars with proplyds
Methods of detecting exoplanets
List of potentially habitable exoplanets
Notes and references
External links
WiredScience, Top 5 Most Extreme Exoplanets, Clara Moskowitz, 21 January 2009
Planetary extremes
Extrasolar planet extremes
Extremes
exoplanets | List of exoplanet extremes | Astronomy | 157 |
23,892,026 | https://en.wikipedia.org/wiki/Sharq%20El%20Owainat | Sharq El Owainat, or East Oweinat is a 110,000 acre desert land reclamation project that started in 1991, in the New Valley Governorate, Egypt. It is in a remote location in the Western Desert in the extreme south-west of the country, east of Oweinat Mountain, delimiting Egypt's south western border with Libya and Sudan. The project is operated by the Egyptian Military's National Company for Reclamation and Agriculture in East Oweinat, and in 2021 a further 1.4 million acres were added to its concession.
Water management
The Sharq El Owainat project depends on “fossil water” from the Nubian Sandstone Aquifer, which recharges slowly and is considered a non-renewable resource. The water is pumped from underground and delivered to sprinklers that rotate around a central pivot point, creating green crop circles.
Operators
The initial phase of the project resulted in 27,000 feddans of barren desert land being converted to fertile land. There are about 400 water wells in the area, with a further 250 under construction. There is also a nursery that includes 26 greenhouses.
The National Company for Reclamation and Agriculture in East Oweinat has undertaken a large part of the land cultivation, in addition to selling vast plots to other government agencies. The Awkaf Agency owns 48,000 acres of which it has cultivated 20,000.
In addition to Egyptian government companies, a number of private and foreign companies operate in Oweinat. For example, the United Arab Emirates' Jenaan owns 50,000 acres and Al Dahra 23,500 acres. Jenaan also signed an agreement with the national airline of Egypt, EgyptAir Express (a subsidiary of EgyptAir), to operate a weekly flight from Cairo International Airport to Sharq El Owainat Airport in order to serve the movement of workers and investors and to encourage agricultural investment in the region. The flights began on 1 November 2009 for an initial one-year period.
Their cultivation works are seen by some researchers as part of a UAE policy of consolidating a pivotal role as a food re-export hub, intensifying the industrialisation and commodification of agriculture in the region.
See also
Sharq El Owainat Airport
New Valley Governorate
Toshka
New Valley Project
References
New Valley Governorate
Geography of Egypt
Agriculture in Egypt
Interbasin transfer | Sharq El Owainat | Environmental_science | 489 |
10,085,323 | https://en.wikipedia.org/wiki/Geographical%20centre%20of%20Scotland | There is some debate as to the location of the geographical centre of Scotland. This is due to different methods of calculating the centre, and whether surrounding islands are included.
Centre of gravity method
In 2002, the Ordnance Survey calculated the centre using a mathematical centre of gravity method. This is the mathematical equivalent of calculating the point at which a cardboard cut-out of Scotland could be perfectly balanced on the tip of a pin. It becomes complicated when the islands are included, so one simplification is just to ignore them.
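A minimal sketch of such a centre-of-gravity calculation for a coastline approximated by a closed polygon, using the standard shoelace-based centroid formula; the toy triangle stands in for real digitised coastline data, and this is of course not the Ordnance Survey's actual procedure or dataset.

def polygon_centroid(points):
    """Centroid (centre of gravity) of a simple closed polygon given
    as a list of (x, y) vertices, via the shoelace formula."""
    a = cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6.0 * a), cy / (6.0 * a)

# Toy outline (eastings/northings in km); islands would be handled by
# combining the centroids of several polygons, weighted by their areas.
print(polygon_centroid([(0, 0), (100, 0), (50, 120)]))  # -> (50.0, 40.0)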
The Ordnance Survey calculated that the centre of mainland Scotland lies 5 km east of the mountain of Schiehallion, which is sometimes claimed to be at the centre of Scotland.
Including islands
The centre point including islands was found to lie on a hillside in Glen Garry, near the Pass of Drumochter.
Nearby, it is claimed that the centre lies a few miles from the village of Newtonmore, Badenoch. It is marked by a stone set into a wall.
Latitude and longitude
Another cruder method is to take the intersection between the line of latitude midway between the most northerly and southerly points on the Scottish mainland, and the line of longitude midway between the most easterly and westerly points. In the days when Corrachadh Mòr in Ardnamurchan was undisputedly the most westerly point, this also produced 56 degrees 39 minutes N, 4 degrees 0 minutes W, very near the summit of Schiehallion.
However, the construction of the Skye Bridge, arguably turning Skye into part of the Scottish mainland, may have upset some of these calculations.
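A sketch of this cruder bounding-box method; the extreme coordinates below are rounded illustrative values for the Scottish mainland, not surveyed figures.

def bounding_box_centre(north, south, east, west):
    """Midpoint of the extreme latitudes and longitudes (degrees)."""
    return (north + south) / 2.0, (east + west) / 2.0

# Approximate mainland extremes in degrees (west longitudes negative).
lat, lon = bounding_box_centre(north=58.67, south=54.63,
                               east=-1.77, west=-6.23)
print(f"{lat:.2f} N, {abs(lon):.2f} W")  # roughly 56.65 N, 4.00 W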
Megalithic centre
Less credible candidates for the centre of Scotland also exist. The Society of Antiquaries of Scotland in 1908 suggested the megalithic Faskally Cottages Standing Stones. The Society were aware of other contenders for the centre of Scotland: "Various spots have been so designated: a site at Struan, several miles to the N.W. of Faskally; also a house on the Killiecrankie road, being the most talked of besides a house in the Fair City of Perth itself."
Historic centre
Matthew Paris's map of 1247 shows a clear north–south divide to Scotland. Proverbially, Stirling is the strategically important "Gateway to the Highlands". It has been said that "Stirling, like a huge brooch clasps Highlands and Lowlands together". There is also an east–west divide, as told in the story recorded by Boece, who relates that in 855 Scotland was invaded by two Northumbrian princes, Osbrecht and Ella. They united their Northumbrian Anglian forces with the Lowland Strathclyde Britons in order to defeat the Highland Pictish Scots. Having secured Stirling Castle, they built the first stone bridge over the Forth. On the top they reportedly raised a crucifix with the inscription: "Anglos, a Scotis separat, crux ista remotis; Arma hic stant Bruti; stant Scoti hac sub cruce tuti." It may be that the stone cross was a tripoint for the three kingdoms' borders or marches. In this way the stone cross in the centre of Stirling Bridge was the heart of Scotland.
Central Belt and Watershed
The centre of the Central Belt may also be a point of interest. The Heart of Scotland services, known as Harthill, are close to the centre of the M8 motorway, Scotland's main road linking east with west. Cumbernauld, also in the Central Belt, sits on a watershed, with one of its rivers (from which its name is derived) flowing to the east and the other flowing west. This watershed test could also apply to other sites, such as the summit of Ben Lomond, which lies on the line of the Scottish watershed, but Cumbernauld arguably has this property in its very name. A map of Scotland's watershed has been produced for walkers.
Furthest from the sea
There have been other centres suggested, such as the point furthest from salt water (including sea lochs). The point furthest from the Mean High Water mark is in Glen Quoich, near Braemar, in Aberdeenshire, 67.6 km from the sea.
As with other topics, such as defining the location of the North Pole, the answer largely depends on which criteria are chosen.
Other contenders
Some have also claimed Gartincaber Tower for the title. Even some Stirlingshire residents consider it ahead of Stirling Bridge.
See also
Extreme points of Scotland
Geographical centre of Europe
Centre points of the United Kingdom
References
External links
Heart of Scotland using QGIS
Centre
Scotland | Geographical centre of Scotland | Physics,Mathematics | 948 |
65,785,668 | https://en.wikipedia.org/wiki/Fully%20Automatic%20Installation | Fully Automated Installation (FAI) is a group of shell and Perl scripts that install and configure a complete Linux distribution quickly on a large number of computers.
It is the oldest automated deployment system for Debian.
Automation
FAI allows for installing Debian and Ubuntu distributions, but it also supports CentOS, Rocky Linux and SUSE Linux.
In the past it supported Scientific Linux CERN.
By default a network installation is performed, but it is easy to create an installation ISO for booting from CD or USB stick.
There is a web service for FAI called FAI.me, which allows users to create customized installation images without setting up their own FAI server. The service also creates cloud images and live images, and it supports Debian and Ubuntu.
Debian's cloud team uses FAI for creating their official cloud images.
Similar software exists for Red Hat (Kickstart), SUSE (AutoYaST, YaST and alice), Solaris (Jumpstart) and likely other operating systems.
References
External links
Official website
FAI.me web service
System administration
Network management | Fully Automatic Installation | Technology,Engineering | 229 |
1,370,289 | https://en.wikipedia.org/wiki/Epizootiology | Epizootiology, epizoology, or veterinary epidemiology is the study of disease patterns within animal populations.
See also
Epizootic
Epidemiology
References
Epidemiology
Veterinary medicine | Epizootiology | Environmental_science | 45 |
32,170,042 | https://en.wikipedia.org/wiki/Mobile%20technology%20in%20Africa | Mobile technology in Africa is a fast-growing market. Nowhere is the effect more dramatic than in Africa, where mobile technology often represents the first modern infrastructure of any kind. Over 10% of Internet users are in Africa. However, 50% of Africans have mobile phones, and their penetration is expanding rapidly. This means that mobile technology is the largest platform in Africa and can reach a wide range of income groups. AppsAfrica reports that mobile app downloads have surpassed 98 billion, a considerable benefit for mobile app developers in Africa.
As a consequence of the wider availability of mobile telephony compared with fixed telephony, in many African countries most Internet traffic goes through the mobile network. An example is Seychelles, the African country with the largest percentage of Internet subscribers, where most Internet users access the net through the mobile network.
Growth of mobile telephony in the 2000s
Several factors contributed to the "boom" of mobile telephony in Africa in the 2000s.
Limitations of African PSTNs
A major success factor of mobile telephony in Africa is the scarce diffusion of PSTNs (fixed-line networks). In 2000, Sub-Saharan Africa as a whole had fewer telephone lines than Manhattan alone. Fixed-line networks hardly reach the remote rural areas where a relevant percentage of the African population lives. Of about 400,000 rural settlements that are estimated to exist in Africa, less than 3% have PSTN access. Mobile telephony providers have taken advantage of this situation, implementing a very aggressive diffusion strategy for mobile networks. In 2006, 45% of rural settlements in Africa had GSM coverage. More recently, coverage has reached 90% of the territory in several countries, including Comoros, Kenya, Malawi, Mauritius, Seychelles, South Africa, and Uganda. Other countries that in 2007 reached above 50% GSM coverage are Botswana, Burkina Faso, Burundi, Cape Verde, Guinea, Namibia, Rwanda, Senegal, Swaziland, and Togo. As a consequence of the larger diffusion of GSM networks over fixed-line networks, "mobile-telephone booths" are common in some areas of Africa.
The fixed line market in Africa is generally based on monopoly (often state monopoly), with a few number of incumbent operators who did not invest in spreading their networks much farther than the larger urban areas. While this situation is changing (for example, both Telkom Kenya and Botswana Telecommunications Corporation have recently been privatized, and a market liberalization strategy has been initiated in several countries), the mobile telephony market is generally more competitive and dynamic.
The table below outlines the percentage of African countries where telecommunications markets (fixed line telephony, mobile telephony, Internet) are fully competitive, partially competitive, or monopolistic, either de iure or de facto (data refer to 2007).
Market strategies
Mobile telephony providers that introduced mobile telephony in Africa in the 2000s adopted business models explicitly designed to reach the poorest (and largest) section of the population, with low-priced mobile phones and small denomination prepaid cards.
Another key success factor in the providers' strategy in Africa has been the cutting down of roaming costs. This is especially relevant in Africa since strong relationships often hold between neighbouring communities that happen to be separated by national borders. Celtel was the first operator to provide free roaming with the 2006 One Network campaign, whereby roaming became free between Uganda, Kenya, and Tanzania. In 2007 this has been extended to Gabon, DR Congo, Congo-Brazzaville, Burkina Faso, Chad, Malawi, Niger, Nigeria, and Sudan. After Celtel, other providers operating in African markets have announced their intent to gradually reduce and eventually abolish roaming costs for certain areas.
Orange Guinée builds off-grid sites that improve the cell network using masts powered by photovoltaic panels, expanding coverage in rural areas and strengthening coverage in urban areas. This is being financed with a $30 million loan from the European Investment Bank. These solar-powered cell telecommunications antennae will slash grid fuel usage by more than 80%.
Non-profit mobile technology
Mobile technology can be used, not only to generate profit from high income groups, but to provide information and create social change for low income groups. For example, mobile technology is used to provide information on health, education, finances or to access specific groups such as the youth.
However, people who are very poor have very basic phones. Thus non-profit mobile technology is not aimed at advanced smartphones, but ranges from sending out bulk SMS messages to USSD, mobi-sites and mobile communities. AppsAfrica writes that the next one billion phone users will come from rural areas.
The ultimate aim of non-profit mobile technology is to make it free, or as near to free, for the end user. This means enlisting donors and getting mobile networks on board. Internationally, companies such as TextToChange, FrontlineSMS, RapidSMS, Ushahidi all work with mobiles in health, disaster relief and aid management.
Promote health
mHealth is using mobile technology to provide groups with health information. It was pioneered in part by the UN Foundation and Vodafone Foundation through partnerships with the World Health Organization (WHO) and the social enterprise DataDyne, who then joined with other partners in forging the mHealth Alliance.
mHealth activities come in the form of appointment reminders, community mobilization and health promotion, emergency toll-free telephone services, health call centres, health surveys, information initiatives and patient monitoring among others.
In June 2011, the first African mobile health summit was held in Cape Town. At the summit, the WHO released a report stating that eighty-three per cent of governments surveyed had at least one mHealth project in their country. However, the majority of mHealth activities were limited in size and scope.
The most commonly reported mHealth initiatives were health call centres (59%), emergency toll-free telephone services (55%), managing emergencies and disasters (54%), and mobile telemedicine (49%).
In South Africa, companies like Cell-Life and GeoMed and HealthSMS use mobile technology for health.
Fight HIV/Aids
The Praekelt Foundation is a South African example of a non profit organisation that is using mobile technology to create social change. Their programmes have currently reached 50 million people across 15 countries in sub-Saharan Africa.
The founders saw that the technology they were creating for corporate clients could be useful for NGOs to provide information to their target markets. “Full profit want to reach people for different reasons, but people should not be charged for having access to life saving information,” says Marcha Neethling, head of operations at Praekelt Foundation.
One of the mobile technologies developed by Praekelt Foundation is a mobile community called YoungAfricaLive (YAL). Users do not need to have airtime or data bundles on their phones to use it. The aim of the mobile community was to create a space that would be interactive and fun where young people could talk candidly and learn about love, relationships and sex and HIV/AIDS.
The mobile community is unique to the Vodacom network. At the end of 2010, Vodacom’s mobile platform, Vodafone Live, was receiving 3.2 million unique users monthly. As (young) people were already using mobile technology to surf the net and download songs etc. it seemed the perfect place to engage with this target group.
The community is aimed at users between 16 and 24 and users receive daily news and celeb stories. All with a social call to action at the end, they participate in polls, watch videos that link to stories and can engage in anonymous chat rooms. Experts come on to the chat rooms to discuss sexual topics and allow users to ask personal questions anonymously. For example, well known South African sexologist Dr Eve hosts live chats once a week.
Users have engaged with the community and many of the updated features of the community have come directly from user suggestions. Users have commented saying YoungAfricaLive creates a platform for them to express their ideas, making them proud of their status and encouraging them to be responsible around sex.
The ongoing challenge with free mobile communities and technology is continuing to engage the service provider to allow the community to be entirely free. “With YoungAfricaLive South Africa, Vodacom is sponsoring the bandwidth, which is a massive investment... (thus) sustainability is always a question.”
Community crime fighting
In 2011 Vodacom pioneered a project in South Africa to fight crime using mobile phones. They partnered with The Khulisa's Youth out of School Ubuntu Club in Tembisa, Johannesburg and donated a computer and seven mobile phones to the Club. These are used by the young patrollers in the community to keep in touch and to report all crime incidents, as well as update the community on current events.
The project is based in the Phomolong area of Tembisa, which is notorious for high levels of criminal activity. Each mobile phone donated has internet capabilities and the members of the Club will be allocated a mobile phone that they will use to capture events, interview members of the community and create video clips. These will be uploaded to their Facebook page and website all in an effort to report on criminal activity in the community.
The South African Police Service also runs a national crime line which they encourage citizens to SMS in and report crimes in their communities.
See also
Internet in Africa
Africa Digital Awards
References
Bibliography
Darren Waters (2007), Africa waiting for net revolution. «BBC News» October 29,
ITU (2007), Telecommunications/ICT Markets and Trends in Africa,
Reuters (2008), Celtel Expands Free Roaming Network to 12 African Nations,
Mobile technology
Economy of Africa
Telecommunications in Africa
Science and technology in Africa | Mobile technology in Africa | Technology | 2,008 |
60,675,751 | https://en.wikipedia.org/wiki/Rossbeevera%20griseobrunnea | Rossbeevera griseobrunnea is a species of the fungal family Boletaceae. This species was first described in April 2019 from southern China.
References
Fungi of China
Boletaceae
Fungus species | Rossbeevera griseobrunnea | Biology | 42 |
27,040,743 | https://en.wikipedia.org/wiki/Stockfish%20%28chess%29 | Stockfish is a free and open-source chess engine, available for various desktop and mobile platforms. It can be used in chess software through the Universal Chess Interface.
Stockfish has been one of the strongest chess engines in the world for several years; it has won all main events of the Top Chess Engine Championship (TCEC) and the Chess.com Computer Chess Championship (CCC) since 2020 and is currently the strongest CPU chess engine in the world, with an estimated Elo rating of 3642 at a time control of 40/15 (15 minutes to make 40 moves), according to CCRL.
The Stockfish engine was developed by Tord Romstad, Marco Costalba, and Joona Kiiski, and was derived from Glaurung, an open-source engine by Tord Romstad released in 2004. It is now being developed and maintained by the Stockfish community.
Stockfish historically used only a classical hand-crafted function to evaluate board positions, but with the introduction of the efficiently updatable neural network (NNUE) in August 2020, it adopted a hybrid evaluation system that primarily used the neural network and occasionally relied on the hand-crafted evaluation. In July 2023, Stockfish removed the hand-crafted evaluation and transitioned to a fully neural network-based approach.
Features
Stockfish uses a tree-search algorithm based on alpha–beta search with several hand-designed heuristics, and since Stockfish 12 (2020) uses an efficiently updatable neural network as its evaluation function. It represents positions using bitboards.
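A minimal, self-contained sketch of the alpha-beta idea, run here on a toy two-ply game tree; the values table plays the role that the evaluation function (now the NNUE) plays in Stockfish, and nothing below reproduces Stockfish's actual C++ implementation.

def alpha_beta(node, alpha, beta, tree, values):
    """Negamax alpha-beta: best achievable score for the side to move,
    skipping branches that cannot change the result."""
    children = tree.get(node, [])
    if not children:
        return values[node]            # leaf evaluation (the NNUE's job)
    best = -float("inf")
    for child in children:
        score = -alpha_beta(child, -beta, -alpha, tree, values)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                      # beta cutoff: refutation found
    return best

# Tiny explicit game tree standing in for chess move generation; leaf
# values are from the perspective of the side to move at each leaf.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
values = {"a1": -1, "a2": 3, "b1": -5, "b2": 2}
print(alpha_beta("root", -float("inf"), float("inf"), tree, values))  # -1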
Stockfish supports Chess960, a feature it inherited from Glaurung. Support for Syzygy tablebases, previously available in a fork maintained by Ronald de Man, was integrated into Stockfish in 2014. In 2018, support for 7-man Syzygy tablebases was added, shortly after they were made available. Stockfish supports up to 1024 CPU threads in multiprocessor systems, with a maximum transposition table size of 32 TB.
Stockfish has been a very popular engine on various platforms. On desktop, it is the default chess engine bundled with the Internet Chess Club interface programs BlitzIn and Dasher. On mobile, it has been bundled with the Stockfish app, SmallFish and Droidfish. Other Stockfish-compatible graphical user interfaces (GUIs) include Fritz, Arena, Stockfish for Mac, and PyChess. Stockfish can be compiled to WebAssembly or JavaScript, allowing it to run in the browser. Both Chess.com and Lichess provide Stockfish in this form in addition to a server-side program. Release versions and development versions are available as C++ source code and as precompiled versions for Microsoft Windows, macOS, Linux 32-bit/64-bit and Android.
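As a concrete example of the Universal Chess Interface in use, the following sketch drives a Stockfish binary from Python; it assumes the third-party python-chess package is installed and that an executable named "stockfish" is on the PATH.

import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("stockfish")
board = chess.Board()  # standard starting position

# Ask the engine for an evaluation at a fixed search depth.
info = engine.analyse(board, chess.engine.Limit(depth=18))
print("score:", info["score"].white())

# Ask the engine to pick a move with a 100 ms time budget.
result = engine.play(board, chess.engine.Limit(time=0.1))
print("best move:", result.move)

engine.quit()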
History
The program originated from Glaurung, an open-source chess engine created by Tord Romstad and first released in 2004. Four years later, Marco Costalba forked the project, naming it Stockfish because it was "produced in Norway and cooked in Italy" (Romstad is Norwegian and Costalba is Italian). The first version, Stockfish 1.0, was released in November 2008. For a while, new ideas and code changes were transferred between the two programs in both directions, until Romstad decided to discontinue Glaurung in favor of Stockfish, which was the stronger engine at the time. The last Glaurung version (2.2) was released in December 2008.
Around 2011, Romstad decided to abandon his involvement with Stockfish in order to spend more time on his new iOS chess app. On 18 June 2014 Marco Costalba announced that he had "decided to step down as Stockfish maintainer" and asked that the community create a fork of the current version and continue its development. An official repository, managed by a volunteer group of core Stockfish developers, was created soon after and currently manages the development of the project.
Fishtest
Since 2013, Stockfish has been developed using a distributed testing framework named Fishtest, where volunteers can donate CPU time for testing improvements to the program.
Changes to game-playing code are accepted or rejected based on the results of playing tens of thousands of games on the framework against an older "reference" version of the program, using sequential probability ratio testing. Tests on the framework are verified using the chi-squared test, and only if the results are statistically significant are they deemed reliable and used to revise the software code.
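A simplified sketch of such a sequential test, reduced to a Bernoulli win/loss stream for clarity; Fishtest's real implementation works with Elo-parameterised hypotheses over game pairs, which this toy version does not reproduce.

import math

def sprt(results, p0=0.50, p1=0.52, alpha=0.05, beta=0.05):
    """Sequential probability ratio test on a stream of game results
    (1 = win for the patch, 0 = loss), testing H0: p = p0 vs H1: p = p1."""
    lower = math.log(beta / (1.0 - alpha))    # accept H0: reject the patch
    upper = math.log((1.0 - beta) / alpha)    # accept H1: accept the patch
    llr = 0.0
    for n, won in enumerate(results, start=1):
        llr += math.log(p1 / p0) if won else math.log((1 - p1) / (1 - p0))
        if llr <= lower:
            return "reject patch", n
        if llr >= upper:
            return "accept patch", n
    return "continue testing", len(results)

# A patch winning ~75% of decisive games is accepted after ~150 games here.
print(sprt([1, 0, 1, 1, 0, 1, 1, 1] * 200))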
After the inception of Fishtest, Stockfish experienced an explosive growth of 120 Elo points in just 12 months, propelling it to the top of all major rating lists.
To date, the framework has used a total of more than 16,300 years of CPU time to play over 8.6 billion chess games.
NNUE
In June 2020, Stockfish introduced the efficiently updatable neural network (NNUE) approach, based on earlier work by computer shogi programmers. Instead of using manually designed heuristics to evaluate the board, this approach introduced a neural network trained on millions of positions which could be evaluated quickly on CPU. On 2 September 2020, the twelfth version of Stockfish was released, incorporating NNUE, and reportedly winning ten times more game pairs than it loses when matched against version eleven. In July 2023, the classical evaluation was completely removed in favor of the NNUE evaluation.
Competition results
Top Chess Engine Championship
Stockfish is a TCEC multiple-time champion and the current leader in trophy count. Ever since TCEC restarted in 2013, Stockfish has finished first or second in every season except one. Stockfish finished second in TCEC Season 4 and 5, with scores of 23–25 first against Houdini 3 and later against Komodo 1142 in the Superfinal event. Season 5 was notable for the winning Komodo team as they accepted the award posthumously for the program's creator Don Dailey, who succumbed to an illness during the final stage of the event. In his honor, the version of Stockfish that was released shortly after that season was named "Stockfish DD".
On 30 May 2014, Stockfish 170514 (a development version of Stockfish 5 with tablebase support) convincingly won TCEC Season 6, scoring 35.5–28.5 against Komodo 7x in the Superfinal. Stockfish 5 was released the following day. In TCEC Season 7, Stockfish again made the Superfinal, but lost to Komodo with a score of 30.5–33.5. In TCEC Season 8, despite losses on time caused by buggy code, Stockfish nevertheless qualified once more for the Superfinal, but lost 46.5–53.5 to Komodo. In Season 9, Stockfish defeated Houdini 5 with a score of 54.5–45.5.
Stockfish finished third during season 10 of TCEC, the only season since 2013 in which Stockfish had failed to qualify for the superfinal. It did not lose a game but was still eliminated because it was unable to score enough wins against lower-rated engines. After this technical elimination, Stockfish went on a long winning streak, winning seasons 11 (59–41 against Houdini 6.03), 12 (60–40 against Komodo 12.1.1), and 13 (55–45 against Komodo 2155.00) convincingly. In Season 14, Stockfish faced a new challenger in Leela Chess Zero, eking out a win by one point (50.5–49.5). Its winning streak was finally ended in Season 15, when Leela qualified again and won 53.5–46.5, but Stockfish promptly won Season 16, defeating AllieStein 54.5–45.5, after Leela failed to qualify for the Superfinal. In Season 17, Stockfish faced Leela again in the superfinal, losing 52.5–47.5. However, Stockfish has won every Superfinal since: beating Leela 53.5–46.5 in Season 18, 54.5–45.5 in Season 19, 53–47 in Season 20, and 56–44 in Season 21. In Season 22, Komodo Dragon beat out Leela to qualify for the Superfinal, losing to Stockfish by a large margin, 59.5–40.5. Stockfish did not lose an opening pair in this match. Leela made the Superfinal in Seasons 23 and 24, but was crushed by Stockfish both times (58.5–41.5 and 58–42). In Season 25, Stockfish once again defeated Leela, but this time by a narrower margin of 52–48.
Stockfish also took part in the TCEC cup, winning the first edition, but was surprisingly upset by Houdini in the semifinals of the second edition. Stockfish recovered to beat Komodo in the third-place playoff. In the third edition, Stockfish made it to the finals, but was defeated by Leela Chess Zero after blundering in a 7-man endgame tablebase draw. It turned this result around in the fourth edition, defeating Leela in the final 4.5–3.5. In TCEC Cup 6, Stockfish finished third after losing to AllieStein in the semifinals, the first time it had failed to make the finals. Since then, Stockfish has consistently won the tournament, with the exception of the 11th edition, which Leela won 8.5–7.5.
Chess.com Computer Chess Championship
Ever since Chess.com hosted its first Chess.com Computer Chess Championship in 2018, Stockfish has been the most successful engine. It dominated the earlier championships, winning six consecutive titles before finishing second in CCC7. Since then, its dominance has come under threat from the neural-network engines Leelenstein and Leela Chess Zero, but it has continued to perform well, reaching at least the superfinal in every edition up to CCC11. CCC12 had for the first time a knockout format, with seeding placing CCC11 finalists Stockfish and Leela in the same half. Leela eliminated Stockfish in the semi-finals. However, a post-tournament match against the loser of the final, Leelenstein, saw Stockfish winning in the same format as the main event. After finishing second again to Leela in CCC13, and an uncharacteristic fourth in CCC14, Stockfish went on a long winning streak, taking first place in every championship since.
Other matches
Stockfish 5 versus Nakamura
Stockfish's strength relative to the best human chess players was most apparent in a handicap match with grandmaster Hikaru Nakamura (2798-rated) in August 2014. In the first two games of the match, Nakamura had the assistance of an older version of Rybka, and in the next two games, he received White with pawn odds but no assistance. Nakamura was the world's fifth highest rated human chess player at the time of the match, while Stockfish 5 was denied use of its opening book and endgame tablebase. Stockfish won each half of the match 1.5–0.5. Both of Stockfish's wins arose from positions in which Nakamura, as is typical for his playing style, pressed for a win instead of acquiescing to a draw.
Stockfish 8 versus AlphaZero
In December 2017, Stockfish 8 was used as a benchmark to test Google division DeepMind's AlphaZero, with Stockfish running on CPU and AlphaZero running on Google's proprietary Tensor Processing Units. AlphaZero was trained through self-play for a total of nine hours, and reached Stockfish's level after just four. In 100 games from the starting position, AlphaZero won 25 games as White, won 3 as Black, and drew the remaining 72, with 0 losses. AlphaZero also played twelve 100-game matches against Stockfish starting from twelve popular openings for a final score of 290 wins, 886 draws and 24 losses, for a point score of 733:467.
AlphaZero's victory over Stockfish sparked a flurry of activity in the computer chess community, leading to a new open-source engine aimed at replicating AlphaZero, known as Leela Chess Zero. By January 2019, Leela was able to defeat the version of Stockfish that played AlphaZero (Stockfish 8) in a 100-game match. An updated version of Stockfish narrowly defeated Leela Chess Zero in the superfinal of the 14th TCEC season, 50.5–49.5 (+10 =81 −9), but lost the Superfinal of the next season to Leela 53.5–46.5 (+14 =79 -7). The two engines remained close in strength for a while, but Stockfish has pulled away since the introduction of NNUE, winning every TCEC season since Season 18.
Derivatives
YaneuraOu, a strong shogi engine and the origin of NNUE. Speaks USI, a variant of UCI for shogi.
Fairy Stockfish, a version modified to play fairy chess. Runs with regional variants (chess, shogi, makruk, etc.) as well as other variants like antichess.
Lichess Stockfish, a version for playing variants without fairy pieces.
Crystal, which seeks to address common issues with chess engines, such as positional or tactical blindness due to over-reductions or over-pruning, draw blindness due to the move horizon, and the reliability of the displayed principal variation.
Brainfish, which contains a reduced version of Cerebellum, a chess opening library.
BrainLearn, a derivative of Brainfish but with a persisted learning algorithm.
ShashChess, a derivative with the goal to apply Alexander Shashin theory from the book Best Play: a New Method for Discovering the Strongest Move.
Pikafish, a free, open source, and strong UCI Xiangqi engine derived from Stockfish that analyzes xiangqi positions and computes the optimal moves.
Houdini 6, a Stockfish derivative that did not comply with the terms of the GPL license.
Fat Fritz 2, a Stockfish derivative that did not comply with the terms of the GPL license.
Notes
References
Further reading
Interview with Tord Romstad (Norway), Joona Kiiski (Finland) and Marco Costalba (Italy), programmers of Stockfish
External links
WebAssembly port of Stockfish
Development versions built for Linux and Windows
Developers forum
Stockfish Testing Framework
2008 software
Chess engines
Free software programmed in C++
Distributed computing projects
Software using the GNU General Public License
Applied machine learning
Cross-platform free software
Free software for Linux
Free software for Windows
Free software for macOS
Free and open-source Android software
Volunteer computing projects | Stockfish (chess) | Engineering | 3,078 |
35,360,069 | https://en.wikipedia.org/wiki/Wave%20surface | In mathematics, Fresnel's wave surface, found by Augustin-Jean Fresnel in 1822, is a quartic surface describing the propagation of light in an optically biaxial crystal. Wave surfaces are special cases of tetrahedroids which are in turn special cases of Kummer surfaces.
In projective coordinates (w:x:y:z) the wave surface is given by the quartic

$(x^2+y^2+z^2)(a^2x^2+b^2y^2+c^2z^2) - w^2\big(a^2(b^2+c^2)x^2 + b^2(c^2+a^2)y^2 + c^2(a^2+b^2)z^2\big) + a^2b^2c^2\,w^4 = 0,$

where a, b, c are the principal velocities of the medium. Setting w = 1 recovers the affine form, which follows from Fresnel's relation $\tfrac{a^2x^2}{r^2-a^2}+\tfrac{b^2y^2}{r^2-b^2}+\tfrac{c^2z^2}{r^2-c^2}=0$ with $r^2=x^2+y^2+z^2$.
They are used in the treatment of conical refraction.
References
Fresnel, A. (1822), "Second supplément au mémoire sur la double réfraction" (signed 31 March 1822, submitted 1 April 1822), in H. de Sénarmont, É. Verdet, and L. Fresnel (eds.), Oeuvres complètes d'Augustin Fresnel, Paris: Imprimerie Impériale (3 vols., 1866–70), vol.2 (1868), pp.369–442, especially pp. 369 (date présenté), 386–8 (eq.4), 442 (signature and date).
External links
Fresnel wave surface
Algebraic surfaces
Complex surfaces
Waves | Wave surface | Physics | 241 |
3,236,606 | https://en.wikipedia.org/wiki/Barrer | The barrer is a non-SI unit of permeability of gases used in the membrane technology and contact lens industry. It is named after the New Zealand-born chemist Richard Barrer.
Definition
The barrer is defined as follows:

$1\ \text{barrer} = 10^{-10}\ \frac{\text{cm}^3_\text{STP}\cdot\text{cm}}{\text{cm}^2\cdot\text{s}\cdot\text{cmHg}}$
Confusingly, the centimetre notation is used in four different ways.
To denote an amount of substance, the 'cm3STP' is standard cubic centimeter, which is a unit of amount of substance rather than a unit of volume. It represents the number of gas molecules or moles that would occupy one cubic centimeter at standard temperature and pressure, as calculated via the ideal gas law.
To denote a pressure differential, the notation 'cmHg' is used; a 'centimetre of mercury', which is ten times the more familiar 'millimetre of mercury'.
And finally, the centimetre and square centimetre are used in the normal way to measure thickness and area.
The cm corresponds in the permeability equations to the thickness of the material whose permeability is being evaluated, the cm3STPcm−2s−1 to the flux of gas through the material, and the cmHg to the pressure drop across the material. That is, it measures the rate of fluid flow passing through an area of material with a thickness driven by a given pressure. See Darcy's Law.
In SI units, the barrer can be expressed as:

$1\ \text{barrer} = 3.35\times 10^{-16}\ \frac{\text{mol}\cdot\text{m}}{\text{m}^2\cdot\text{s}\cdot\text{Pa}}$
To convert to the CGS mass-based permeability unit, g·cm/(cm^2·s·cmHg), one must use the following relation, which follows from the fact that 1 cm^3(STP) of an ideal gas corresponds to M/22,414 g:

$P\left[\frac{\text{g}\cdot\text{cm}}{\text{cm}^2\cdot\text{s}\cdot\text{cmHg}}\right] = P[\text{barrer}]\times 10^{-10}\times\frac{M}{22414}$

Where M is the molecular weight of the penetrant gas (g/mol).
Another commonly used unit is the Gas Permeance Unit (GPU), used in the measurement of gas permeance. Permeance is the ratio of the permeability to the thickness of the membrane, and the GPU is defined as

$1\ \text{GPU} = 10^{-6}\ \frac{\text{cm}^3_\text{STP}}{\text{cm}^2\cdot\text{s}\cdot\text{cmHg}}$

or in SI units:

$1\ \text{GPU} = 3.35\times 10^{-10}\ \frac{\text{mol}}{\text{m}^2\cdot\text{s}\cdot\text{Pa}}$
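As a cross-check of these figures, a minimal conversion sketch; the molar volume 22,414 cm^3(STP)/mol and 1 cmHg = 1333.22 Pa are the standard values assumed above.

CM3_STP_PER_MOL = 22414.0   # molar volume at STP, cm^3/mol
PA_PER_CMHG = 1333.22       # pascals per centimetre of mercury

def barrer_to_si(p_barrer):
    """Permeability: barrer -> mol*m/(m^2*s*Pa)."""
    mol_per_cm3 = 1.0 / CM3_STP_PER_MOL   # cm^3(STP) -> mol
    geometry = 1e-2 / 1e-4                # cm/cm^2 -> m/m^2
    return p_barrer * 1e-10 * mol_per_cm3 * geometry / PA_PER_CMHG

def gpu_to_si(p_gpu):
    """Permeance: GPU -> mol/(m^2*s*Pa)."""
    mol_per_cm3 = 1.0 / CM3_STP_PER_MOL
    per_area = 1.0 / 1e-4                 # 1/cm^2 -> 1/m^2
    return p_gpu * 1e-6 * mol_per_cm3 * per_area / PA_PER_CMHG

print(barrer_to_si(1.0))  # ~3.35e-16
print(gpu_to_si(1.0))     # ~3.35e-10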
References
Units of measurement | Barrer | Mathematics | 382 |
23,124,520 | https://en.wikipedia.org/wiki/I-spline | In the mathematical subfield of numerical analysis, an I-spline is a monotone spline function.
Definition
A family of I-spline functions of degree k with n free parameters is defined in terms of the M-splines Mi(x|k, t) by

$I_i(x|k,t) = \int_L^x M_i(u|k,t)\, du,$

where L is the lower limit of the domain of the splines.
Since M-splines are non-negative, I-splines are monotonically non-decreasing.
Computation
Let j be the index such that tj ≤ x < tj+1. Then Ii(x|k, t) is zero if i > j, and equals one if j − k + 1 > i. Otherwise,

$I_i(x|k,t) = \sum_{m=i}^{j} (t_{m+k+1} - t_m)\, \frac{M_m(x|k+1,t)}{k+1}.$
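A minimal numerical sketch that evaluates an I-spline directly from the integral definition above, using the standard M-spline recurrence; the knot vector and degree are toy choices, and the closed-form sum over M-splines is the faster route in practice.

import numpy as np

def m_spline(i, k, t, x):
    """M-spline M_i(x|k,t) from the standard recurrence."""
    if t[i + k] <= t[i]:                  # degenerate span (repeated knots)
        return 0.0
    if k == 1:
        return 1.0 / (t[i + 1] - t[i]) if t[i] <= x < t[i + 1] else 0.0
    if not (t[i] <= x < t[i + k]):
        return 0.0
    num = (x - t[i]) * m_spline(i, k - 1, t, x) \
        + (t[i + k] - x) * m_spline(i + 1, k - 1, t, x)
    return k * num / ((k - 1) * (t[i + k] - t[i]))

def i_spline(i, k, t, x, grid=2000):
    """I_i(x|k,t) as the running integral of M_i from L = t[0] to x,
    computed with the trapezoidal rule."""
    u = np.linspace(t[0], x, grid)
    v = np.array([m_spline(i, k, t, ui) for ui in u])
    return float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(u)))

# Toy knots on [0, 1]; each I-spline rises monotonically from 0 to 1.
t = [0.0, 0.0, 0.0, 0.3, 0.5, 0.7, 1.0, 1.0, 1.0]
for x in (0.2, 0.5, 0.8):
    print(x, [round(i_spline(i, 3, t, x), 3) for i in range(len(t) - 3)])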
Applications
I-splines can be used as basis splines for regression analysis and data transformation when monotonicity is desired (constraining the regression coefficients to be non-negative for a non-decreasing fit, and non-positive for a non-increasing fit).
References
Splines (mathematics) | I-spline | Mathematics | 203 |
23,577 | https://en.wikipedia.org/wiki/Partial%20function | In mathematics, a partial function f from a set X to a set Y is a function from a subset S of X (possibly the whole X itself) to Y. The subset S, that is, the domain of f viewed as a function, is called the domain of definition or natural domain of f. If S equals X, that is, if f is defined on every element in X, then f is said to be a total function.
In other words, a partial function is a binary relation over two sets that associates to every element of the first set at most one element of the second set; it is thus a univalent relation. This generalizes the concept of a (total) function by not requiring every element of the first set to be associated to an element of the second set.
A partial function is often used when its exact domain of definition is not known, or is difficult to specify. However, even when the exact domain of definition is known, partial functions are often used for simplicity or brevity. This is the case in calculus, where, for example, the quotient of two functions is a partial function whose domain of definition cannot contain the zeros of the denominator; in this context, a partial function is generally simply called a .
In computability theory, a general recursive function is a partial function from the integers to the integers; no algorithm can exist for deciding whether an arbitrary such function is in fact total.
When arrow notation is used for functions, a partial function f from X to Y is sometimes written as $f : X \rightharpoonup Y$ or $f : X \hookrightarrow Y$. However, there is no general convention, and the latter notation is more commonly used for inclusion maps or embeddings.
Specifically, for a partial function $f : X \rightharpoonup Y$ and any $x \in X$, one has either:
$f(x) = y$ (a single element y in Y), or
$f(x)$ is undefined.
For example, if $f$ is the square root function restricted to the integers,
defined by:
$f(n) = m$ if, and only if, $m^2 = n$, with $m$ and $n$ natural numbers,
then $f(n)$ is only defined if $n$ is a perfect square (that is, $0, 1, 4, 9, 16, \ldots$). So $f(25) = 5$, but $f(26)$ is undefined.
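A small sketch of this example in code, modelling undefinedness by returning None, which is one of several conventions a program might adopt.

from math import isqrt
from typing import Optional

def partial_sqrt(n: int) -> Optional[int]:
    """Square root restricted to the integers: defined only when n
    is a perfect square, otherwise undefined (None)."""
    if n < 0:
        return None
    m = isqrt(n)
    return m if m * m == n else None

print(partial_sqrt(25))  # 5
print(partial_sqrt(26))  # None: 26 lies outside the domain of definition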
Basic concepts
A partial function arises from the consideration of maps between two sets X and Y that may not be defined on the entire set X. A common example is the square root operation on the real numbers R: because negative real numbers do not have real square roots, the operation can be viewed as a partial function from R to R. The domain of definition of a partial function is the subset S of X on which the partial function is defined; in this case, the partial function may also be viewed as a function from S to Y. In the example of the square root operation, the set S consists of the nonnegative real numbers $[0, +\infty)$.
The notion of partial function is particularly convenient when the exact domain of definition is unknown or even unknowable. For a computer-science example of the latter, see Halting problem.
In case the domain of definition S is equal to the whole set X, the partial function is said to be total. Thus, total partial functions from X to Y coincide with functions from X to Y.
Many properties of functions can be extended in an appropriate sense to partial functions. A partial function is said to be injective, surjective, or bijective when the function given by the restriction of the partial function to its domain of definition is injective, surjective, or bijective, respectively.
Because a function is trivially surjective when restricted to its image, the term partial bijection denotes a partial function which is injective.
An injective partial function may be inverted to an injective partial function, and a partial function which is both injective and surjective has an injective function as inverse. Furthermore, a function which is injective may be inverted to a bijective partial function.
The notion of transformation can be generalized to partial functions as well. A partial transformation is a function $f : A \rightharpoonup B$, where both A and B are subsets of some set X.
Function spaces
For convenience, denote the set of all partial functions from a set X to a set Y by $[X \rightharpoonup Y]$. This set is the union of the sets of functions defined on subsets of X with the same codomain Y:

$[X \rightharpoonup Y] = \bigcup_{D \subseteq X} [D \to Y],$

the latter also written as $\bigcup_{D \subseteq X} Y^D$. In the finite case, its cardinality is

$|[X \rightharpoonup Y]| = (|Y| + 1)^{|X|},$

because any partial function can be extended to a function by any fixed value c not contained in Y, so that the codomain is Y ∪ {c}, an operation which is injective (unique and invertible by restriction).
Discussion and examples
The first diagram at the top of the article represents a partial function that is not a function, since the element 1 in the left-hand set is not associated with anything in the right-hand set. The second diagram represents a function, since every element of the left-hand set is associated with exactly one element of the right-hand set.
Natural logarithm
Consider the natural logarithm function mapping the real numbers to themselves. The logarithm of a non-positive real is not a real number, so the natural logarithm function does not associate any real number in the codomain with any non-positive real number in the domain. Therefore, the natural logarithm function is not a function when viewed as a function from the reals to themselves, but it is a partial function. If the domain is restricted to only include the positive reals (that is, if the natural logarithm function is viewed as a function from the positive reals to the reals), then the natural logarithm is a function.
Subtraction of natural numbers
Subtraction of natural numbers (in which $\mathbb{N}$ is the set of non-negative integers) is a partial function:
$f : \mathbb{N} \times \mathbb{N} \rightharpoonup \mathbb{N}$
$f(x, y) = x - y.$
It is defined only when $x \geq y.$
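A one-line sketch of this partial operation in Python (illustrative, not from the article):

```python
from typing import Optional

def nat_sub(x: int, y: int) -> Optional[int]:
    """Subtraction on the natural numbers: defined only when x >= y."""
    return x - y if x >= y else None

assert nat_sub(5, 3) == 2
assert nat_sub(3, 5) is None  # undefined: the result would leave the naturals
```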
Bottom element
In denotational semantics a partial function is considered as returning the bottom element when it is undefined.
In computer science a partial function corresponds to a subroutine that raises an exception or loops forever. The IEEE floating point standard defines a not-a-number value which is returned when a floating point operation is undefined and exceptions are suppressed, e.g. when the square root of a negative number is requested.
In a programming language where function parameters are statically typed, a function may be defined as a partial function because the language's type system cannot express the exact domain of the function, so the programmer instead gives it the smallest domain which is expressible as a type and contains the domain of definition of the function.
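Both conventions described above, raising an exception and returning a special not-a-number value, can be observed in standard Python; a small sketch:

```python
import math

# Exception convention: the subroutine signals an argument outside
# the domain of definition.
try:
    math.sqrt(-1.0)
except ValueError:
    print("sqrt is undefined for negative reals")

# IEEE not-a-number convention: the undefined result is absorbed into
# a special value that propagates through later operations.
nan = float("nan")
print(math.isnan(nan * 2.0))  # True
```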
In category theory
In category theory, when considering the operation of morphism composition in concrete categories, the composition operation $\circ : \operatorname{hom}(C) \times \operatorname{hom}(C) \to \operatorname{hom}(C)$ is a total function if and only if $\operatorname{ob}(C)$ has one element. The reason for this is that two morphisms $f : X \to Y$ and $g : U \to V$ can only be composed as $g \circ f$ if $Y = U,$ that is, the codomain of $f$ must equal the domain of $g.$
The category of sets and partial functions is equivalent to but not isomorphic with the category of pointed sets and point-preserving maps. One textbook notes that "This formal completion of sets and partial maps by adding “improper,” “infinite” elements was reinvented many times, in particular, in topology (one-point compactification) and in theoretical computer science."
The category of sets and partial bijections is equivalent to its dual. It is the prototypical inverse category.
In abstract algebra
Partial algebra generalizes the notion of universal algebra to partial operations. An example would be a field, in which the multiplicative inversion is the only proper partial operation (because division by zero is not defined).
The set of all partial functions (partial transformations) on a given base set $X$ forms a regular semigroup called the semigroup of all partial transformations (or the partial transformation semigroup on $X$), typically denoted by $\mathcal{PT}_X.$ The set of all partial bijections on $X$ forms the symmetric inverse semigroup.
Charts and atlases for manifolds and fiber bundles
Charts in the atlases which specify the structure of manifolds and fiber bundles are partial functions. In the case of manifolds, the domain is the point set of the manifold. In the case of fiber bundles, the domain is the space of the fiber bundle. In these applications, the most important construction is the transition map, which is the composite of one chart with the inverse of another. The initial classification of manifolds and fiber bundles is largely expressed in terms of constraints on these transition maps.
The reason for the use of partial functions instead of functions is to permit general global topologies to be represented by stitching together local patches to describe the global structure. The "patches" are the domains where the charts are defined.
See also
References
Martin Davis (1958), Computability and Unsolvability, McGraw–Hill Book Company, Inc., New York. Republished by Dover in 1982.
Stephen Kleene (1952), Introduction to Meta-Mathematics, North-Holland Publishing Company, Amsterdam, Netherlands, 10th printing with corrections added on 7th printing (1974).
Harold S. Stone (1972), Introduction to Computer Organization and Data Structures, McGraw–Hill Book Company, New York.
Notes
Mathematical relations
Functions and mappings
Properties of binary relations | Partial function | Mathematics | 1,800 |
3,170,341 | https://en.wikipedia.org/wiki/Peptostreptococcus | Peptostreptococcus is a genus of anaerobic, Gram-positive, non-spore forming bacteria. The cells are small, spherical, and can occur in short chains, in pairs or individually. They typically move using cilia. Peptostreptococcus are slow-growing bacteria with increasing resistance to antimicrobial drugs. Peptostreptococcus is a normal inhabitant of the healthy lower reproductive tract of women.
Pathogenesis
Peptostreptococcus species are commensal organisms in humans, living predominantly in the mouth, on the skin, and in the gastrointestinal, vaginal, and urinary tracts, and are members of the gut microbiota. Under immunosuppressed or traumatic conditions these organisms can become pathogenic, as well as septicemic, harming their host. Peptostreptococcus can cause brain, liver, breast, and lung abscesses, as well as generalized necrotizing soft tissue infections. They participate in mixed anaerobic infections, a term used to describe infections caused by multiple bacteria that do not require, or may even be harmed by, oxygen.
Peptostreptococcus species are susceptible to beta-lactam antibiotics.
They are isolated with high frequency from all specimen sources. Anaerobic gram-positive cocci such as Peptostreptococcus are the second most frequently recovered anaerobes and account for approximately one quarter of anaerobic isolates found. Anaerobic gram-positive cocci are most often recovered mixed with other anaerobic or aerobic bacteria from various infections at different sites of the human body. This contributes to the difficulty of isolating Peptostreptococcus organisms.
Infections
Peptostreptococcus species that are found in clinical infections were once part of the genus formerly known as Peptococcus. Peptostreptococcus is the only genus among anaerobic gram-positive cocci that is encountered in clinical infections. As such, Peptostreptococcus species are viewed as being clinically significant anaerobic cocci. Other similar clinically significant anaerobic cocci include Veillonella species (gram-negative cocci), and microaerophilic streptococci (aerotolerant). Anaerobic gram-positive cocci include various clinically significant species of the genus Peptostreptococcus.
The species of anaerobic gram-positive cocci isolated most commonly include Peptostreptococcus magnus, Peptostreptococcus asaccharolyticus, Peptostreptococcus anaerobius, Peptostreptococcus prevotii, and Peptostreptococcus micros.
Anaerobic gram-positive cocci that produce large amounts of lactic acid during the process of carbohydrate fermentation were reclassified as Streptococcus parvulus and Streptococcus morbillorum from Peptococcus or Peptostreptococcus. Most of these organisms are anaerobic, but some are microaerophilic.
Due to a large amount of new research on the human microbiome and more information on bacteria, many species of bacteria have been renamed and re-classified. Based on DNA homology and whole-cell polypeptide-pattern study findings supported by phenotypic characteristics, the DNA homology group of microaerobic streptococci that was formerly known as Streptococcus anginosus or Streptococcus milleri is now composed of three distinct species: S. anginosus, S. constellatus, and S. intermedius. The microaerobic species S. morbillorum was transferred into the genus Gemella. A new species within the genus Peptostreptococcus is Peptostreptococcus hydrogenalis; it contains the indole-positive, saccharolytic strains of the genus.
Peptostreptococcus infections occur in/on all body sites, including the CNS, head, neck, chest, abdomen, pelvis, skin, bone, joint, and soft tissues. Adequate therapy must be given against these infections, or clinical failure may result.
Peptostreptococci are often overlooked because they are very difficult to isolate; appropriate specimen collection is required. Peptostreptococci grow slowly, which contributes to their increasing resistance to antimicrobials.
The most common Peptostreptococcus species found in infections are P. magnus (18% of all anaerobic gram-positive cocci and microaerophilic streptococci), P. asaccharolyticus (17%), P. anaerobius (16%), P. prevotii (13%), P. micros (4%), P. saccharolyticus (3%), and P. intermedius (2%).
P. magnus was highly recovered in bone and chest infections. P. asaccharolyticus and P. anaerobius had the highest recovery rate in obstetrical/gynecological and respiratory tract infections and wounds. When anaerobic and facultative cocci were recovered, most of the infections were polymicrobial. Most patients from whom microaerophilic streptococci were recovered in pure culture had abscesses (e.g., dental, intracranial, pulmonary), bacteremia, meningitis, or conjunctivitis. P. magnus is the most commonly isolated anaerobic coccus and is often recovered in pure culture. Other common Peptostreptococci in the different infectious sites are P. anaerobius, which occurs in oral infections; P. micros in respiratory tract infections; P. magnus, P. micros, P. asaccharolyticus, P. vaginalis, and P. anaerobius in skin and soft tissue infections; P. magnus and P. micros in deep organ abscesses; P. magnus, P. micros, and P. anaerobius in gastrointestinal tract-associated infections; P. magnus, P. micros, P. asaccharolyticus, P. vaginalis, P. tetradius, and P. anaerobius in female genitourinary infections; and P. magnus, P. asaccharolyticus, P. vaginalis, and P. anaerobius in bone and joint infections and leg and foot ulcers.
Many infections caused by Peptostreptococcus bacteria are synergistic. Bacterial synergy, the presence of which is determined by mutual induction of sepsis enhancement, increased mortality, increased abscess inducement, and enhancement of the growth of the bacterial components in mixed infections, is found between anaerobic gram-positive cocci and their aerobic and anaerobic counterparts. The ability of anaerobic gram-positive cocci and microaerophilic streptococci to produce capsular material is an important virulence mechanism, but other factors may also influence the interaction of these organisms in mixed infections.
Although anaerobic cocci can be isolated from infections at all body sites, a predisposition for certain sites has been observed. In general, Peptostreptococcus species, particularly P. magnus, have been recovered more often from subcutaneous and soft tissue abscesses and diabetes-related foot ulcers than from intra-abdominal infections. Peptostreptococcus infections occur more often in chronic infections.
Frequency of infections
It is difficult to determine the exact frequency of Peptostreptococcus infections because of inappropriate collection methods, transportation, and specimen cultivation. Peptostreptococcus infections are most commonly found in patients who have, or have had, chronic infections. Patients with predisposing conditions have been shown to have a 5% higher recovery rate of the bacteria in blood cultures.
Of all anaerobic bacteria recovered at hospitals from 1973 to 1985, anaerobic gram-positive cocci accounted for 26%. The infected sites where these organisms were found in the greatest abundance were obstetrical and gynecological sites (35%), bones (39%), cysts (40%), and ears (53%). They were occasionally found at other sites, such as the abdomen, lymph nodes, bile, and eyes.
The frequency of infection is greater in developing countries, where adequate treatment is often slow or unavailable, but mortality due to Peptostreptococcus infections has decreased over the last 30 years and will continue to do so due to better treatment.
All ages are susceptible to Peptostreptococcus infections; however, children are more likely to get head and neck infections.
Infection types
Skin and soft tissue infections
Anaerobic gram-positive cocci and microaerophilic streptococci are often recovered in polymicrobial skin and soft tissue infections, such as gangrene, fasciitis, ulcers, diabetes-related foot infections, burns, human or animal bites, infected cysts, abscesses of the breast, rectum, and anus. Anaerobic gram-positive cocci and microaerophilic streptococci are generally found mixed with other aerobic and anaerobic bacteria that originate from the mucosal surface adjacent to the infected site or that have been inoculated into the infected site.
Peptostreptococcus spp. can cause infections such as gluteal decubitus ulcers, diabetes-related foot infections, and rectal abscesses.
Anaerobic gram-positive cocci and microaerophilic streptococci are part of the normal skin microbiota, so it is hard to avoid contamination by these bacteria when obtaining specimens.
CNS infections
Peptostreptococci can be isolated from CNS infections such as subdural empyema and brain abscesses, which are a result of chronic infections; they are also isolated from the sinuses, teeth, and mastoid. In one study, 46% of 39 brain abscesses yielded anaerobic gram-positive cocci and microaerophilic streptococci.
Upper respiratory tract and dental infections
There is a high rate of anaerobic cocci colonization, which accounts for the organisms' significance in these infections. Anaerobic gram-positive cocci and microaerophilic streptococci are often recovered in these infections; they have been recovered in 15% of patients with chronic mastoiditis.
90% of the time, other organisms were mixed in with the anaerobic gram-positive cocci and microaerophilic streptococci, including Streptococcus species and Staphylococcus aureus. Peptostreptococcus micros has a moderate association with periodontal disease.
Bacteremia and endocarditis
Peptostreptococci can cause fatal endocarditis, paravalvular abscess, and pericarditis. The most frequent sources of bacteremia due to Peptostreptococcus are infections of the oropharynx, lower respiratory tract, female genital tract, abdomen, skin, and soft tissues. Predisposing factors for such bacteremia include recent gynecological or gastrointestinal surgery, dental procedures, immunosuppression, and infections of the female genital tract, abdomen, and soft tissue.
Microaerophilic streptococci typically account for 5-10% of cases of endocarditis; however, Peptostreptococci have only rarely been isolated.
Anaerobic pleuropulmonary infections
Anaerobic gram-positive cocci and microaerophilic streptococci are most frequently found in aspiration pneumonia, empyema, lung abscesses, and mediastinitis. These bacteria account for 10-20% of anaerobic isolates recovered from pulmonary infections.
It is difficult to obtain appropriate culture specimens, requiring direct lung puncture or the use of trans-tracheal aspiration.
Abdominal infections
Anaerobic gram-positive cocci are part of the normal gastrointestinal microbiota. They are isolated in approximately 20% of specimens from intra-abdominal infections, such as peritonitis, and are found in abscesses of the liver, spleen, and abdomen. As in upper respiratory tract and dental infections, anaerobic gram-positive cocci are recovered mixed with other bacteria, in this case with organisms of intestinal origin such as Escherichia coli, the Bacteroides fragilis group, and Clostridium species.
Female pelvic infections
Peptostreptococcus species are part of the microbiota of the lower reproductive tract of women.
Bone and joint infections
Anaerobic gram-positive cocci are frequently isolated from anaerobically infected bones and joints; they accounted for 40% of anaerobic isolates of osteomyelitis caused by anaerobic bacteria and 20% of anaerobic isolates of arthritis caused by anaerobic bacteria. P. magnus and P. prevotii are the predominant bone and joint isolates. Management of these infections requires prolonged courses of antimicrobials and is enhanced by removal of any foreign material.
Causes of infection
Infections with anaerobic gram-positive cocci and microaerophilic streptococci are often caused by:
Trauma
Immunodeficiency
Steroid therapy
Vascular disease
Malignancy
Reduced blood supply
Previous surgery
Presence of a foreign body
Sickle cell anemia
Diabetes
Treatment
When Peptostreptococci and other anaerobes predominate, aggressive treatment of acute infection can prevent chronic infection. When the risk of anaerobic infection is high, as with intra-abdominal and post-surgical infections, proper antimicrobial prophylaxis may reduce the risk.
Therapy with antimicrobials (e.g., aminoglycosides, trimethoprim-sulfamethoxazole, older quinolones) often does not eradicate anaerobes.
Taxonomy
As of 2022, there are 5 species validly published in the genus Peptostreptococcus; several formerly described species have been moved to more appropriate genera.
Not validly published species
Peptostreptococcus faecalis
Peptostreptococcus glycinophilus
Candidatus Peptostreptococcus massiliensis
Species formerly described in Peptostreptococcus
Order Coriobacteriales
Family Atopobiaceae
Genus Lancefieldella: P. parvulus, first moved to Atopobium in 1993, reassigned in 2018.
Order Eggerthellales
Family Eggerthellaceae
Genus Slackia: P. heliotrinreducens, reassigned in 1999.
Order Eubacteriales
Family Lachnospiraceae
Genus Blautia: P. productus, first moved to Ruminococcus (Oscillospiraceae) in 1994, reassigned in 2008.
Family Peptoniphilaceae
Genus Anaerococcus: P. hydrogenalis, P. lactolyticus, P. octavius, P. prevotii, P. tetradius, P. vaginalis, reassigned in 2001.
Genus Finegoldia: P. magnus, reassigned in 2000.
Genus Gallicola: P. barnesae, reassigned in 2001.
Genus Parvimonas: P. micros, reassigned in 2006.
Genus Peptoniphilus: P. asaccharolyticus, P. harei, P. indolicus, P. ivorii, P. lacrimalis, reassigned in 2001.
See also
List of bacterial vaginosis microbiota
References
External links
Peptostreptococcus infections from eMedicine.
Peptostreptococcaceae
Gut flora bacteria
Bacterial vaginosis
Bacteria genera
Taxa described in 1936 | Peptostreptococcus | Biology | 3,394 |
67,513 | https://en.wikipedia.org/wiki/Systematic%20element%20name | A systematic element name is the temporary name assigned to an unknown or recently synthesized chemical element. A systematic symbol is also derived from this name.
In chemistry, a transuranic element receives a permanent name and symbol only after its synthesis has been confirmed. In some cases, such as the Transfermium Wars, controversies over the formal name and symbol have been protracted and highly political. In order to discuss such elements without ambiguity, the International Union of Pure and Applied Chemistry (IUPAC) uses a set of rules, adopted in 1978, to assign a temporary systematic name and symbol to each such element. This approach to naming originated in the successful development of regular rules for the naming of organic compounds.
IUPAC rules
The temporary names derive systematically from the element's atomic number, and apply only to 101 ≤ Z ≤ 999. Each digit is translated into a "numerical root" according to the table. The roots are concatenated, and the name is completed by the suffix -ium. Some of the roots are Latin and others are Greek, to avoid two digits starting with the same letter (for example, the Greek-derived pent is used instead of the Latin-derived quint to avoid confusion with quad for 4). There are two elision rules designed to prevent odd-looking names.
Traditionally the suffix -ium was used only for metals (or at least elements that were expected to be metallic), and other elements used different suffixes: halogens used -ine and noble gases used -on instead. However, the systematic names use -ium for all elements regardless of group. Thus, elements 117 and 118 were ununseptium and ununoctium, not ununseptine and ununocton. This does not apply to the trivial names these elements receive once confirmed; thus, elements 117 and 118 are now tennessine and oganesson, respectively. For these trivial names, all elements receive the suffix -ium except those in group 17, which receive -ine (like the halogens), and those in group 18, which receive -on (like the noble gases). (That being said, tennessine and oganesson are expected to behave quite differently from their lighter congeners.)
The systematic symbol is formed by taking the first letter of each root, converting the first to uppercase. This results in three-letter symbols instead of the one- or two-letter symbols used for named elements. The rationale is that any scheme producing two-letter symbols will have to deviate from full systematicity to avoid collisions with the symbols of the permanently named elements.
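These rules are mechanical enough to be expressed as a short program. The following Python sketch (the function and constant names are mine, not IUPAC's) derives the name and symbol from the atomic number; the two elision rules, which the article mentions but does not spell out, are supplied here as commonly stated: enn before nil drops one n, and bi or tri before -ium drops one i.

```python
ROOTS = ["nil", "un", "bi", "tri", "quad", "pent", "hex", "sept", "oct", "enn"]

def systematic_name(z: int) -> tuple[str, str]:
    """Return the (name, symbol) for atomic number z, 101 <= z <= 999,
    following the 1978 IUPAC rules as described above."""
    if not 101 <= z <= 999:
        raise ValueError("systematic names apply only to 101 <= Z <= 999")
    digits = [int(d) for d in str(z)]
    name = "".join(ROOTS[d] for d in digits) + "ium"
    # Elision rule 1: 'enn' before 'nil' drops one 'n' (ennnil -> ennil).
    name = name.replace("nnn", "nn")
    # Elision rule 2: 'bi' or 'tri' before '-ium' drops one 'i' (biium -> bium).
    name = name.replace("iium", "ium")
    # Symbol: the first letter of each root, with the first capitalized.
    symbol = "".join(ROOTS[d][0] for d in digits).capitalize()
    return name, symbol

assert systematic_name(118) == ("ununoctium", "Uuo")
assert systematic_name(190) == ("unennilium", "Uen")
assert systematic_name(123) == ("unbitrium", "Ubt")
```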
These rules are set out in IUPAC's Recommendations for the Naming of Elements of Atomic Numbers Greater than 100.
All 118 discovered elements have received individual permanent names and symbols. Therefore, systematic names and symbols are now used only for the undiscovered elements beyond element 118, oganesson. When such an element is discovered, it will keep its systematic name and symbol until its discovery meets the criteria of and is accepted by the IUPAC/IUPAP Joint Working Party, upon which the discoverers are invited to propose a permanent name and symbol. Once this name and symbol are proposed, there is still a comment period before they become official and replace the systematic name and symbol.
At the time the systematic names were recommended (1978), names had already been officially given to all elements up to atomic number 103, lawrencium. While systematic names were given for elements 101 (mendelevium), 102 (nobelium), and 103 (lawrencium), these were intended only as "minor alternatives to the trivial names already approved by IUPAC". The following elements for some time had only systematic names as approved names, until their eventual replacement with trivial names after their discoveries were accepted.
See also
Mendeleev's predicted elements – a much earlier (1869) system of naming undiscovered elements
References
External links
Naming of chemical elements
Chemical nomenclature
Periodic table | Systematic element name | Chemistry | 818 |
14,306,107 | https://en.wikipedia.org/wiki/Comparison%20of%20Nvidia%20nForce%20chipsets | This is a comparison of chipsets designed by Nvidia. Nvidia stopped producing chipsets in 2009. Nvidia codenames its chipsets MCPs (Media and Communications Processors).
nForce
nForce
nForce Southbridges
nForce2
nForce2
nForce2 Southbridges
nForce3
The memory controller is integrated into the CPU, the supported memory types depend on the CPU and socket used.
nForce4
For AMD processors
The memory controller is integrated into the CPU, the supported memory types depend on the CPU and socket used.
For Intel processors (LGA 775)
nForce 400 (GeForce 6000)
The memory controller is integrated into the CPU, the supported memory types depend on the CPU and socket used. The list is incomplete because multiple variants of the 410 and 430 exist.
nForce 500 Series
For AMD processors
The memory controller is integrated into the CPU, the supported memory types depend on the CPU and socket used.
For Intel processors
nForce 600 (GeForce 7000) Series
For AMD processors
The memory controller is integrated into the CPU, the supported memory types depend on the CPU and socket used.
HT1.0 = 2000 MT/s
HT3.0 = 5200 MT/s
For Intel processors
nForce 700 Series
For AMD processors
The memory controller is integrated into the CPU; the supported memory is DDR2 in dual channel.
For Intel processors
GeForce 8000/9000 Series
For AMD processors
In GeForce 8000/9000-series chipsets the memory controller is integrated into the CPU and the supported memory is DDR2 in dual channel.
For Intel processors
nForce 900
The memory controller is integrated into the CPU, the supported memory types depend on the CPU and socket used.
nForce Professional
The memory controller is integrated into the CPU, the supported memory types depend on the CPU and socket used.
Mobile Chipsets
For AMD processors
The memory controller is integrated into the CPU, the supported memory types depend on the CPU and socket used.
For Intel processors
See also
Comparison of Nvidia graphics processing units
Comparison of AMD chipsets
Comparison of ATI chipsets
List of Intel chipsets
List of VIA chipsets
Larrabee
References
External links
Product Comparison Chart - Nvidia nForce for AMD - Desktop
Product Comparison Chart - Nvidia nForce for Intel - Desktop (dated Aug 2007 - nForce6, Core2, LGA 775)
NVIDIA based motherboards for Intel - Desktop (dated Mar 2008 - nForce7, Core2, LGA 775)
NVIDIA based motherboards for AMD - Desktop
Nvidia nForce chipsets | Comparison of Nvidia nForce chipsets | Technology | 545 |
315,927 | https://en.wikipedia.org/wiki/Modern%20architecture | Modern architecture, also called modernist architecture, was an architectural movement and style that was prominent in the 20th century, between the earlier Art Deco and later postmodern movements. Modern architecture was based upon new and innovative technologies of construction (particularly the use of glass, steel, and concrete); the principle functionalism (i.e. that form should follow function); an embrace of minimalism; and a rejection of ornament.
According to Le Corbusier, the roots of the movement were to be found in the works of Eugène Viollet-le-Duc, while Mies van der Rohe was heavily inspired by Karl Friedrich Schinkel. The movement emerged in the first half of the 20th century and became dominant after World War II until the 1980s, when it was gradually replaced as the principal style for institutional and corporate buildings by postmodern architecture.
Origins
Modern architecture emerged at the end of the 19th century from revolutions in technology, engineering, and building materials, and from a desire to break away from historical architectural styles and invent something that was purely functional and new.
The revolution in materials came first, with the use of cast iron, drywall, plate glass, and reinforced concrete, to build structures that were stronger, lighter, and taller. The cast plate glass process was invented in 1848, allowing the manufacture of very large windows. The Crystal Palace by Joseph Paxton at the Great Exhibition of 1851 was an early example of iron and plate glass construction, followed in 1864 by the first glass and metal curtain wall. These developments together led to the first steel-framed skyscraper, the ten-story Home Insurance Building in Chicago, built in 1884 by William Le Baron Jenney and based on the works of Viollet-le-Duc.
French industrialist François Coignet was the first to use iron-reinforced concrete, that is, concrete strengthened with iron bars, as a technique for constructing buildings. In 1853 Coignet built the first iron reinforced concrete structure, a four-storey house in the suburbs of Paris. A further important step forward was the invention of the safety elevator by Elisha Otis, first demonstrated at the New York Crystal Palace exposition in 1854, which made tall office and apartment buildings practical. Another important technology for the new architecture was electric light, which greatly reduced the inherent danger of fires caused by gas in the 19th century.
The debut of new materials and techniques inspired architects to break away from the neoclassical and eclectic models that dominated European and American architecture in the late 19th century, most notably eclecticism, Victorian and Edwardian architecture, and the Beaux-Arts architectural style. This break with the past was particularly urged by the architectural theorist and historian Eugène Viollet-le-Duc. In his 1872 book Entretiens sur L'Architecture, he urged: "use the means and knowledge given to us by our times, without the intervening traditions which are no longer viable today, and in that way we can inaugurate a new architecture. For each function its material; for each material its form and its ornament." This book influenced a generation of architects, including Louis Sullivan, Victor Horta, Hector Guimard, and Antoni Gaudí.
Early modernism in Europe (1900–1914)
At the end of the 19th century, a few architects began to challenge the traditional Beaux Arts and Neoclassical styles that dominated architecture in Europe and the United States. The Glasgow School of Art (1896–99), designed by Charles Rennie Mackintosh, had a façade dominated by large vertical bays of windows. The Art Nouveau style was launched in the 1890s by Victor Horta in Belgium and Hector Guimard in France; it introduced new styles of decoration, based on vegetal and floral forms. In Barcelona, Antoni Gaudí conceived architecture as a form of sculpture; the façade of the Casa Batlló (1904–1907) had no straight lines; it was encrusted with colorful mosaics of stone and ceramic tiles.
Architects also began to experiment with new materials and techniques, which gave them greater freedom to create new forms. In 1903–1904 in Paris Auguste Perret and Henri Sauvage began to use reinforced concrete, previously only used for industrial structures, to build apartment buildings. Reinforced concrete, which could be molded into any shape, and which could create enormous spaces without the need of supporting pillars, replaced stone and brick as the primary material for modernist architects. The first concrete apartment buildings by Perret and Sauvage were covered with ceramic tiles, but in 1905 Perret built the first concrete parking garage at 51 rue de Ponthieu in Paris; here the concrete was left bare, and the spaces between the concrete were filled with glass windows. Henri Sauvage added another construction innovation in an apartment building on Rue Vavin in Paris (1912–1914); the reinforced concrete building was in steps, with each floor set back from the floor below, creating a series of terraces. Between 1910 and 1913, Auguste Perret built the Théâtre des Champs-Élysées, a masterpiece of reinforced concrete construction, with Art Deco sculptural bas-reliefs on the façade by Antoine Bourdelle. Because of the concrete construction, no columns blocked the spectator's view of the stage.
Otto Wagner, in Vienna, was another pioneer of the new style. In his book Moderne Architektur (1895) he had called for a more rationalist style of architecture, based on "modern life". He designed a stylized ornamental metro station at Karlsplatz in Vienna (1888–89), then an ornamental Art Nouveau residence, Majolika House (1898), before moving to a much more geometric and simplified style, without ornament, in the Austrian Postal Savings Bank (1904–1906). Wagner declared his intention to express the function of the building in its exterior. The reinforced concrete exterior was covered with plaques of marble attached with bolts of polished aluminum. The interior was purely functional and spare, a large open space of steel, glass, and concrete where the only decoration was the structure itself.
The Viennese architect Adolf Loos also began removing any ornament from his buildings. His Steiner House in Vienna (1910) was an example of what he called rationalist architecture; it had a simple stucco rectangular façade with square windows and no ornament. The fame of the new movement, which became known as the Vienna Secession, spread beyond Austria. Josef Hoffmann, a student of Wagner, constructed a landmark of early modernist architecture, the Stoclet Palace, in Brussels, in 1906–1911. This residence, built of brick covered with Norwegian marble, was composed of geometric blocks, wings, and a tower. A large pool in front of the house reflected its cubic forms. The interior was decorated with paintings by Gustav Klimt and other artists, and the architect even designed clothing for the family to match the architecture.
In Germany, a modernist industrial movement, Deutscher Werkbund (German Work Federation), had been created in Munich in 1907 by Hermann Muthesius, a prominent architectural commentator. Its goal was to bring together designers and industrialists, to turn out well-designed, high-quality products, and in the process to invent a new type of architecture. The organization originally included twelve architects and twelve business firms, but quickly expanded. The architects included Peter Behrens, Theodor Fischer (who served as its first president), Josef Hoffmann and Richard Riemerschmid. In 1909 Behrens designed one of the earliest and most influential industrial buildings in the modernist style, the AEG turbine factory, a functional monument of steel and concrete. In 1911–1913, Adolf Meyer and Walter Gropius, who had both worked for Behrens, built another revolutionary industrial plant, the Fagus Factory in Alfeld an der Leine, a building without ornament where every construction element was on display. The Werkbund organized a major exposition of modernist design in Cologne just a few weeks before the outbreak of the First World War in August 1914. For the 1914 Cologne exhibition, Bruno Taut built a revolutionary glass pavilion.
Early American modernism (1890s–1914)
Frank Lloyd Wright was a highly original and independent American architect who refused to be categorized in any one architectural movement. Like Le Corbusier and Ludwig Mies van der Rohe, he had no formal architectural training. From 1887 to 1893 he worked in the Chicago office of Louis Sullivan, who pioneered the first tall steel-frame office buildings in Chicago, and who famously stated "form follows function". Wright set out to break all the traditional rules. He was particularly famous for his Prairie Houses, including the Winslow House in River Forest, Illinois (1893–94), the Arthur Heurtley House (1902) and the Robie House (1909); sprawling, geometric residences without decoration, with strong horizontal lines which seemed to grow out of the earth, and which echoed the wide flat spaces of the American prairie. His Larkin Building (1904–1906) in Buffalo, New York, and Unity Temple (1905) in Oak Park, Illinois, had highly original forms and no connection with historical precedents.
Early skyscrapers
At the end of the 19th century, the first skyscrapers began to appear in the United States. They were a response to the shortage of land and high cost of real estate in the center of the fast-growing American cities, and the availability of new technologies, including fireproof steel frames and improvements in the safety elevator invented by Elisha Otis in 1852. The first steel-framed "skyscraper", The Home Insurance Building in Chicago, was ten stories high. It was designed by William Le Baron Jenney in 1883, and was briefly the tallest building in the world. Louis Sullivan built another monumental new structure, the Carson, Pirie, Scott and Company Building, in the heart of Chicago in 1904–1906. While these buildings were revolutionary in their steel frames and height, their decoration was borrowed from Neo-Renaissance, Neo-Gothic and Beaux-Arts architecture. The Woolworth Building, designed by Cass Gilbert, was completed in 1912, and was the tallest building in the world until the completion of the Chrysler Building in 1929. The structure was purely modern, but its exterior was decorated with Neo-Gothic ornament, complete with decorative buttresses, arches and spires, which caused it to be nicknamed the "Cathedral of Commerce".
Rise of modernism in Europe and Russia (1918–1931)
After the first World War, a prolonged struggle began between architects who favored the more traditional styles of neo-classicism and the Beaux-Arts architecture style, and the modernists, led by Le Corbusier and Robert Mallet-Stevens in France, Walter Gropius and Ludwig Mies van der Rohe in Germany, and Konstantin Melnikov in the new Soviet Union, who wanted only pure forms and the elimination of any decoration. Louis Sullivan popularized the axiom Form follows function to emphasize the importance of utilitarian simplicity in modern architecture. Art Deco architects such as Auguste Perret and Henri Sauvage often made a compromise between the two, combining modernist forms and stylized decoration.
International Style (1920s–1970s)
The dominant figure in the rise of modernism in France was Charles-Édouard Jeanneret, a Swiss-French architect who in 1920 took the name Le Corbusier. In 1920 he co-founded a journal called L'Esprit Nouveau and energetically promoted architecture that was functional, pure, and free of any decoration or historical associations. He was also a passionate advocate of a new urbanism, based on planned cities. In 1922 he presented a design of a city for three million people, whose inhabitants lived in identical sixty-story tall skyscrapers surrounded by open parkland. He designed modular houses, which would be mass-produced on the same plan and assembled into apartment blocks, neighborhoods, and cities. In 1923 he published "Toward an Architecture" (Vers une architecture), with his famous slogan, "a house is a machine for living in." He tirelessly promoted his ideas through slogans, articles, books, conferences, and participation in Expositions.
To illustrate his ideas, in the 1920s he built a series of houses and villas in and around Paris. They were all built according to a common system, based upon the use of reinforced concrete, and of reinforced concrete pylons in the interior which supported the structure, allowing glass curtain walls on the façade and open floor plans, independent of the structure. They were always white, and had no ornament or decoration on the outside or inside. The best-known of these houses was the Villa Savoye, built in 1928–1931 in the Paris suburb of Poissy. An elegant white box wrapped with a ribbon of glass windows around the façade, with living space that opened upon an interior garden and the countryside around, raised up on a row of white pylons in the center of a large lawn, it became an icon of modernist architecture.
Bauhaus and the German Werkbund (1919–1933)
In Germany, two important modernist movements appeared after the First World War. The Bauhaus was a school founded in Weimar in 1919 under the direction of Walter Gropius. Gropius was the son of the official state architect of Berlin, who studied before the war with Peter Behrens, and designed the modernist Fagus Factory. The Bauhaus was a fusion of the prewar Academy of Arts and the school of technology. In 1926 it was transferred from Weimar to Dessau; Gropius designed the new school and student dormitories in the new, purely functional modernist style he was encouraging. The school brought together modernists in all fields; the faculty included the modernist painters Vasily Kandinsky, Joseph Albers and Paul Klee, and the designer Marcel Breuer.
Gropius became an important theorist of modernism, writing The Idea and Construction in 1923. He was an advocate of standardization in architecture, and the mass construction of rationally designed apartment blocks for factory workers. In 1928 he was commissioned by the Siemens company to build apartments for workers in the suburbs of Berlin, and in 1929 he proposed the construction of clusters of slender eight- to ten-story high-rise apartment towers for workers.
While Gropius was active at the Bauhaus, Ludwig Mies van der Rohe led the modernist architectural movement in Berlin. Inspired by the De Stijl movement in the Netherlands, he built clusters of concrete summer houses and proposed a project for a glass office tower. He became the vice president of the German Werkbund, and was the head of the Bauhaus from 1930 to 1933, proposing a wide variety of modernist plans for urban reconstruction. His most famous modernist work was the German pavilion for the 1929 international exposition in Barcelona. It was a work of pure modernism, with glass and concrete walls and clean, horizontal lines. Though it was only a temporary structure, and was torn down in 1930, it became, along with Le Corbusier's Villa Savoye, one of the best-known landmarks of modernist architecture. A reconstructed version now stands on the original site in Barcelona.
When the Nazis came to power in Germany, they viewed the Bauhaus as a training ground for communists, and closed the school in 1933. Gropius left Germany and went to England, then to the United States, where he and Marcel Breuer both joined the faculty of the Harvard Graduate School of Design, and became the teachers of a generation of American postwar architects. In 1937 Mies van der Rohe also moved to the United States; he became one of the most famous designers of postwar American skyscrapers.
Expressionist architecture (1918–1931)
Expressionism, which appeared in Germany between 1910 and 1925, was a counter-movement against the strictly functional architecture of the Bauhaus and Werkbund. Its advocates, including Bruno Taut, Hans Poelzig, Fritz Höger and Erich Mendelsohn, wanted to create architecture that was poetic, expressive, and optimistic. Many expressionist architects had fought in World War I and their experiences, combined with the political turmoil and social upheaval that followed the German Revolution of 1919, resulted in a utopian outlook and a romantic socialist agenda. Economic conditions severely limited the number of built commissions between 1914 and the mid-1920s. As a result, many of the most innovative expressionist projects, including Bruno Taut's Alpine Architecture and Hermann Finsterlin's Formspiels, remained on paper. Scenography for theatre and films provided another outlet for the expressionist imagination, and provided supplemental incomes for designers attempting to challenge conventions in a harsh economic climate. A particular type, using bricks to create its forms (rather than concrete), is known as Brick Expressionism.
Erich Mendelsohn (who disliked the term Expressionism for his work) began his career designing churches, silos, and factories which were highly imaginative but, for lack of resources, were never built. In 1920, he finally was able to construct one of his works in the city of Potsdam: an observatory and research center called the Einsteinturm (Einstein Tower), named in tribute to Albert Einstein. It was supposed to be built of reinforced concrete, but because of technical problems it was finally built of traditional materials covered with plaster. His sculptural form, very different from the austere rectangular forms of the Bauhaus, first won him commissions to build movie theaters and retail stores in Stuttgart, Nuremberg, and Berlin. His Mossehaus in Berlin was an early model for the streamline moderne style. His Columbushaus on Potsdamer Platz in Berlin (1931) was a prototype for the modernist office buildings that followed. (It was torn down in 1957, because it stood in the zone between East and West Berlin, where the Berlin Wall was constructed.) Following the rise of the Nazis to power, he moved to England (1933), then to the United States (1941).
Fritz Höger was another notable Expressionist architect of the period. His Chilehaus was built as the headquarters of a shipping company, and was modeled after a giant steamship, a triangular building with a sharply pointed bow. It was constructed of dark brick, and used external piers to express its vertical structure. Its external decoration borrowed from Gothic cathedrals, as did its internal arcades. Hans Poelzig was another notable expressionist architect. In 1919 he built the Großes Schauspielhaus, an immense theater in Berlin seating five thousand spectators, for theater impresario Max Reinhardt. It featured elongated shapes like stalagmites hanging down from its gigantic dome, and lights on massive columns in its foyer. He also constructed the IG Farben building, a massive corporate headquarters, now the main building of Goethe University in Frankfurt. Bruno Taut specialized in building large-scale apartment complexes for working-class Berliners. He built twelve thousand individual units, sometimes in buildings with unusual shapes, such as a giant horseshoe. Unlike most other modernists, he used bright exterior colors to give his buildings more life. The use of dark brick in the German projects gave that particular style a name, Brick Expressionism.
The Austrian philosopher, architect, and social critic Rudolf Steiner also departed as far as possible from traditional architectural forms. His Second Goetheanum, built from 1926 near Basel, Switzerland, and Mendelsohn's Einsteinturm in Potsdam, Germany, were based on no traditional models and had entirely original shapes.
Constructivist architecture (1919–1931)
After the Russian Revolution of 1917, Russian avant-garde artists and architects began searching for a new Soviet style which could replace traditional neoclassicism. The new architectural movements were closely tied with the literary and artistic movements of the period, the futurism of poet Vladimir Mayakovskiy, the Suprematism of painter Kasimir Malevich, and the colorful Rayonism of painter Mikhail Larionov. The most startling design that emerged was the tower proposed by painter and sculptor Vladimir Tatlin for the Moscow meeting of the Third Communist International in 1920: he proposed two interlaced towers of metal four hundred meters high, with four geometric volumes suspended from cables. The movement of Russian Constructivist architecture was launched in 1921 by a group of artists led by Aleksandr Rodchenko. Their manifesto proclaimed that their goal was to find the "communist expression of material structures". Soviet architects began to construct workers' clubs, communal apartment houses, and communal kitchens for feeding whole neighborhoods.
One of the first prominent constructivist architects to emerge in Moscow was Konstantin Melnikov, who designed a number of workers' clubs – including the Rusakov Workers' Club (1928) – and his own residence, the Melnikov House (1929), near Arbat Street in Moscow. Melnikov traveled to Paris in 1925, where he built the Soviet Pavilion for the International Exhibition of Modern Decorative and Industrial Arts; it was a highly geometric vertical construction of glass and steel crossed by a diagonal stairway, and crowned with a hammer and sickle. The leading group of constructivist architects, led by the Vesnin brothers and Moisei Ginzburg, published the journal Contemporary Architecture. This group created several major constructivist projects in the wake of the First Five-Year Plan – including the colossal Dnieper Hydroelectric Station (1932) – and made an attempt to start the standardization of living blocks with Ginzburg's Narkomfin building. A number of architects from the pre-Soviet period also took up the constructivist style. The most famous example was Lenin's Mausoleum in Moscow (1924), by Alexey Shchusev.
The main centers of constructivist architecture were Moscow and Leningrad; however, during the industrialization many constructivist buildings were erected in provincial cities. The regional industrial centers, including Ekaterinburg, Kharkiv or Ivanovo, were rebuilt in the constructivist manner; some cities, like Magnitogorsk or Zaporizhzhia, were constructed anew (the so-called socgorod, or 'socialist city').
The style fell markedly out of favor in the 1930s, replaced by the more grandiose nationalist styles that Stalin favored. Constructivist architects, and even Le Corbusier, proposed projects for the new Palace of the Soviets from 1931 to 1933, but the winner was an early Stalinist building in the style termed Postconstructivism. The last major Russian constructivist building, by Boris Iofan, was built for the Paris World Exhibition (1937), where it faced the pavilion of Nazi Germany by Hitler's architect Albert Speer.
New Objectivity (1920–1933)
The New Objectivity (in German Neue Sachlichkeit, sometimes also translated as New Sobriety) is a name often given to the Modern architecture that emerged in Europe, primarily German-speaking Europe, in the 1920s and 30s. It is also frequently called Neues Bauen (New Building). The New Objectivity took place in many German cities in that period, for example in Frankfurt with its Neues Frankfurt project.
Modernism becomes a movement: CIAM (1928)
By the late 1920s, modernism had become an important movement in Europe. Architecture, which previously had been predominantly national, began to become international. The architects traveled, met each other, and shared ideas. Several modernists, including Le Corbusier, had participated in the competition for the headquarters of the League of Nations in 1927. In the same year, the German Werkbund organized an architectural exposition at the Weissenhof Estate in Stuttgart. Seventeen leading modernist architects in Europe were invited to design twenty-one houses; Le Corbusier and Ludwig Mies van der Rohe played a major part. In 1927 Le Corbusier, Pierre Chareau, and others proposed the foundation of an international conference to establish the basis for a common style. The first meeting of the Congrès Internationaux d'Architecture Moderne or International Congresses of Modern Architects (CIAM) was held in a château on Lake Leman in Switzerland 26–28 June 1928. Those attending included Le Corbusier, Robert Mallet-Stevens, Auguste Perret, Pierre Chareau and Tony Garnier from France; Victor Bourgeois from Belgium; Walter Gropius, Erich Mendelsohn, Ernst May and Ludwig Mies van der Rohe from Germany; Josef Frank from Austria; Mart Stam and Gerrit Rietveld from the Netherlands, and Adolf Loos from Czechoslovakia. A delegation of Soviet architects was invited to attend, but they were unable to obtain visas. Later members included Josep Lluís Sert of Spain and Alvar Aalto of Finland. No one attended from the United States. A second meeting was organized in 1930 in Brussels by Victor Bourgeois on the topic "Rational methods for groups of habitations". A third meeting, on "The functional city", was scheduled for Moscow in 1932, but was cancelled at the last minute. Instead, the delegates held their meeting on a cruise ship traveling between Marseille and Athens. On board, they together drafted a text on how modern cities should be organized. The text, called The Athens Charter, after considerable editing by Le Corbusier and others, was finally published in 1943 and became an influential text for city planners in the 1950s and 1960s. The group met once more in Paris in 1937 to discuss public housing and was scheduled to meet in the United States in 1939, but the meeting was cancelled because of the war. The legacy of the CIAM was a roughly common style and doctrine which helped define modern architecture in Europe and the United States after World War II.
Art Deco
The Art Deco architectural style (called Style Moderne in France), was modern, but it was not modernist; it had many features of modernism, including the use of reinforced concrete, glass, steel, chrome, and it rejected traditional historical models, such as the Beaux-Arts style and Neo-classicism; but, unlike the modernist styles of Le Corbusier and Mies van der Rohe, it made lavish use of decoration and color. It reveled in the symbols of modernity; lightning flashes, sunrises, and zig-zags. Art Deco had begun in France before World War I and spread through Europe; in the 1920s and 1930s it became a highly popular style in the United States, South America, India, China, Australia, and Japan. In Europe, Art Deco was particularly popular for department stores and movie theaters. The style reached its peak in Europe at the International Exhibition of Modern Decorative and Industrial Arts in 1925, which featured art deco pavilions and decoration from twenty countries. Only two pavilions were purely modernist; the Esprit Nouveau pavilion of Le Corbusier, which represented his idea for a mass-produced housing unit, and the pavilion of the USSR, by Konstantin Melnikov in a flamboyantly futurist style.
Later French landmarks in the Art Deco style included the Grand Rex movie theater in Paris, La Samaritaine department store by Henri Sauvage (1926–28) and the Social and Economic Council building in Paris (1937–38) by Auguste Perret, and the Palais de Tokyo and Palais de Chaillot, both built by collectives of architects for the 1937 Paris International Exposition.
American Art Deco; the skyscraper style (1919–1939)
In the late 1920s and early 1930s, an exuberant American variant of Art Deco appeared in the Chrysler Building, Empire State Building and Rockefeller Center in New York City, and the Guardian Building in Detroit. The first skyscrapers in Chicago and New York had been designed in a neo-gothic or neoclassical style, but these buildings were very different; they combined modern materials and technology (stainless steel, concrete, aluminum, chrome-plated steel) with Art Deco geometry: stylized zig-zags, lightning flashes, fountains, sunrises, and, at the top of the Chrysler building, Art Deco "gargoyles" in the form of stainless steel radiator ornaments. The interiors of these new buildings, sometimes termed "Cathedrals of Commerce", were lavishly decorated in bright contrasting colors, with geometric patterns variously influenced by Egyptian and Mayan pyramids, African textile patterns, and European cathedrals. Frank Lloyd Wright himself experimented with Mayan Revival, in the concrete cube-based Ennis House of 1924 in Los Angeles. The style appeared in the late 1920s and 1930s in all major American cities. The style was used most often in office buildings, but it also appeared in the enormous movie palaces that were built in large cities when sound films were introduced.
Streamline style and Public Works Administration (1933–1939)
The beginning of the Great Depression in 1929 brought an end to lavishly decorated Art Deco architecture and a temporary halt to the construction of new skyscrapers. It also brought in a new style, called "Streamline Moderne" or sometimes just Streamline. This style, sometimes modeled after the form of ocean liners, featured rounded corners, strong horizontal lines, and often nautical features, such as superstructures and steel railings. It was associated with modernity and especially with transportation; the style was often used for new airport terminals, train and bus stations, and for gas stations and diners built along the growing American highway system. In the 1930s the style was used not only in buildings, but in railroad locomotives, and even refrigerators and vacuum cleaners. It both borrowed from industrial design and influenced it.
In the United States, the Great Depression led to a new style for government buildings, sometimes called PWA Moderne, for the Public Works Administration, which launched gigantic construction programs in the U.S. to stimulate employment. It was essentially classical architecture stripped of ornament, and was employed in state and federal buildings, from post offices to the largest office building in the world at that time, the Pentagon (1941–43), begun just before the United States entered the Second World War.
American modernism (1919–1939)
During the 1920s and 1930s, Frank Lloyd Wright resolutely refused to associate himself with any architectural movements. He considered his architecture to be entirely unique and his own. Between 1916 and 1922, he broke away from his earlier prairie house style and worked instead on houses decorated with textured blocks of cement; this became known as his "Mayan style", after the pyramids of the ancient Mayan civilization. He experimented for a time with modular mass-produced housing. He identified his architecture as "Usonian", a combination of USA, "utopian" and "organic social order". His business was severely affected by the Great Depression that began in 1929; he had fewer wealthy clients who wanted to experiment. Between 1928 and 1935, he built only two buildings: a hotel near Chandler, Arizona, and the most famous of all his residences, Fallingwater (1934–37), a vacation house in Pennsylvania for Edgar J. Kaufmann. Fallingwater is a remarkable structure of concrete slabs suspended over a waterfall, perfectly uniting architecture and nature.
The Austrian architect Rudolph Schindler designed what could be called the first house in the modern style in 1922, the Schindler house.
Schindler also contributed to American modernism with his design for the Lovell Beach House in Newport Beach. The Austrian architect Richard Neutra moved to the United States in 1923 and worked for a short time with Frank Lloyd Wright; he quickly became a force in American architecture through his modernist design for the same client, the Lovell Health House in Los Angeles. Neutra's most notable architectural work was the Kaufmann Desert House in 1946, and he designed hundreds of further projects.
Paris International Exposition of 1937 and the architecture of dictators
The 1937 Paris International Exposition effectively marked the end of Art Deco and of pre-war architectural styles. Most of the pavilions were in a neoclassical Deco style, with colonnades and sculptural decoration. The pavilions of Nazi Germany, designed by Albert Speer, in a German neoclassical style topped by an eagle and swastika, faced the pavilion of the Soviet Union, topped by enormous statues of a worker and a peasant carrying a hammer and sickle. As for the modernists, Le Corbusier was practically, though not quite, invisible at the Exposition; he participated in the Pavilion des temps nouveaux, but focused mainly on his painting. The one modernist who did attract attention was a collaborator of Le Corbusier, Josep Lluis Sert, the Spanish architect, whose pavilion of the Second Spanish Republic was a pure modernist glass and steel box. Inside, it displayed the most modernist work of the Exposition, the painting Guernica by Pablo Picasso. The original building was destroyed after the Exposition, but it was recreated in 1992 in Barcelona.
The rise of nationalism in the 1930s was reflected in the Fascist architecture of Italy and the Nazi architecture of Germany, based on classical styles and designed to express power and grandeur. Nazi architecture, much of it designed by Albert Speer, was intended to awe spectators by its huge scale. Adolf Hitler intended to turn Berlin into the capital of Europe, grander than Rome or Paris. The Nazis closed the Bauhaus, and the most prominent modern architects soon departed for Britain or the United States. In Italy, Benito Mussolini wished to present himself as the heir to the glory and empire of ancient Rome. Mussolini's government was not as hostile to modernism as the Nazis; the spirit of Italian Rationalism of the 1920s continued, with the work of the architect Giuseppe Terragni. His Casa del Fascio in Como, headquarters of the local Fascist party, was a perfectly modernist building, with geometric proportions (33.2 meters long by 16.6 meters high), a clean façade of marble, and a Renaissance-inspired interior courtyard. Opposed to Terragni was Marcello Piacentini, a proponent of monumental fascist architecture, who rebuilt the University of Rome, designed the Italian pavilion at the 1937 Paris Exposition, and planned a grand reconstruction of Rome on the fascist model.
New York World's Fair (1939)
The 1939 New York World's Fair marked a turning point in architecture between Art Deco and modern architecture. The theme of the Fair was the World of Tomorrow, and its symbols were the purely geometric Trylon and Perisphere sculptures. It had many monuments to Art Deco, such as the Ford Pavilion in the Streamline Moderne style, but also included the new International Style that would replace Art Deco as the dominant style after the War. The pavilions of Finland, by Alvar Aalto, of Sweden, by Sven Markelius, and of Brazil, by Oscar Niemeyer and Lúcio Costa, looked forward to a new style. Their architects became leaders in the postwar modernist movement.
World War II: wartime innovation and postwar reconstruction (1939–1945)
World War II (1939–1945) and its aftermath were a major factor in driving innovation in building technology and, in turn, architectural possibilities. Wartime industrial demands resulted in shortages of steel and other building materials, leading to the adoption of new materials, such as aluminum. The war and postwar period brought greatly expanded use of prefabricated building, largely for the military and government. The semi-circular metal Nissen hut of World War I was revived as the Quonset hut. The years immediately after the war saw the development of radical experimental houses, including the enameled-steel Lustron house (1947–1950) and Buckminster Fuller's experimental aluminum Dymaxion House.
The unprecedented destruction caused by the war was another factor in the rise of modern architecture. Large parts of major cities, from Berlin, Tokyo, and Dresden to Rotterdam and east London, and all the port cities of France, particularly Le Havre, Brest, Marseille, and Cherbourg, had been destroyed by bombing. In the United States, little civilian construction had been done since the 1920s, and housing was needed for millions of American soldiers returning from the war. The postwar housing shortages in Europe and the United States led to the design and construction of enormous government-financed housing projects, usually in the run-down centers of American cities and in the suburbs of Paris and other European cities, where land was available.
One of the largest reconstruction projects was that of the city center of Le Havre, destroyed by the Germans and by Allied bombing in 1944; 133 hectares of buildings in the center were flattened, destroying 12,500 buildings and leaving 40,000 persons homeless. The architect Auguste Perret, a pioneer in the use of reinforced concrete and prefabricated materials, designed and built an entirely new center to the city, with apartment blocks, cultural, commercial, and government buildings. He restored historic monuments when possible, and built a new church, St. Joseph, with a lighthouse-like tower in the center to inspire hope. His rebuilt city was declared a UNESCO World Heritage site in 2005.
Le Corbusier and the Cité Radieuse (1947–1952)
Shortly after the War, the French architect Le Corbusier, who was nearly sixty years old and had not constructed a building in ten years, was commissioned by the French government to construct a new apartment block in Marseille. He called it the Unité d'Habitation in Marseille, but it more popularly took the name of the Cité Radieuse (and later "Cité du Fada", "city of the crazy one", in Marseille French), after his book about futuristic urban planning. Following his doctrines of design, the building had a concrete frame raised up above the street on pylons. It contained 337 duplex apartment units, fit into the framework like pieces of a puzzle. Each unit had two levels and a small terrace. Interior "streets" had shops, a nursery school, and other services, and the flat terrace roof had a running track, ventilation ducts, and a small theater. Le Corbusier designed furniture, carpets, and lamps to go with the building, all purely functional; the only decoration was a choice of interior colors that Le Corbusier gave to residents. The Unité d'Habitation became a prototype for similar buildings in other cities, both in France and Germany. Combined with his equally radical organic design for the Chapel of Notre-Dame-du-Haut at Ronchamp, this work propelled Le Corbusier into the first rank of postwar modern architects.
Team X and the 1953 International Congress of Modern Architecture
In the early 1950s, Michel Écochard, director of urban planning under the French Protectorate in Morocco, commissioned GAMMA—which initially included the architects Elie Azagury, Georges Candilis, Alexis Josic and Shadrach Woods—to design housing in the Hay Mohammedi neighborhood of Casablanca that provided a "culturally specific living tissue" for laborers and migrants from the countryside. Sémiramis, Nid d'Abeille (Honeycomb), and Carrières Centrales were some of the first examples of this Vernacular Modernism.
At the 1953 Congrès Internationaux d'Architecture Moderne (CIAM), ATBAT-Afrique—the African branch of ATBAT (Atelier des Bâtisseurs), founded in 1947 by figures including Le Corbusier, Vladimir Bodiansky, and André Wogenscky—prepared a study of Casablanca's bidonvilles entitled "Habitat for the Greatest Number". The presenters, Georges Candilis and Michel Écochard, argued—against doctrine—that architects must consider local culture and climate in their designs. This generated great debate among modernist architects around the world and eventually provoked a schism and the creation of Team 10. Écochard's 8x8 meter model at Carrières Centrales earned him recognition as a pioneer in the architecture of collective housing, though his Moroccan colleague Elie Azagury was critical of him for serving as a tool of the French colonial regime and for ignoring the economic and social necessity for Moroccans to live in higher-density vertical housing.
Late modernist architecture
Late modernist architecture is generally understood to include buildings designed between 1968 and 1980, with exceptions. Modernist architecture includes the buildings designed between 1945 and the 1960s. The late modernist style is characterized by bold shapes and sharp corners, slightly more defined than Brutalist architecture.
Postwar modernism in the United States (1945–1985)
The International Style of architecture had appeared in Europe, particularly in the Bauhaus movement, in the late 1920s. In 1932 it was recognized and given a name at an exhibition at the Museum of Modern Art in New York City organized by the architect Philip Johnson and the architectural critic Henry-Russell Hitchcock. Between 1937 and 1941, following the rise of Hitler and the Nazis in Germany, most of the leaders of the German Bauhaus movement found a new home in the United States, and played an important part in the development of American modern architecture.
Frank Lloyd Wright and the Guggenheim Museum
Frank Lloyd Wright was eighty years old in 1947; he had been present at the beginning of American modernism, and though he refused to accept that he belonged to any movement, continued to play a leading role almost to its end. One of his most original late projects was the campus of Florida Southern College in Lakeland, Florida, begun in 1941 and completed in 1943. He designed nine new buildings in a style that he described as "The Child of the Sun". He wrote that he wanted the campus to "grow out of the ground and into the light, a child of the sun".
He completed several notable projects in the 1940s and 1950s, including the Johnson Wax Headquarters and the Price Tower in Bartlesville, Oklahoma (1956). The Price Tower is unusual in that it is supported by its central core of four elevator shafts; the rest of the building is cantilevered from this core, like the branches of a tree. Wright originally planned the structure for an apartment building in New York City. That project was cancelled because of the Great Depression, and he adapted the design for an oil pipeline and equipment company in Oklahoma. He wrote that in New York City his building would have been lost in a forest of tall buildings, but that in Oklahoma it stood alone. The design is asymmetrical; each side is different.
In 1943 he was commissioned by the art collector Solomon R. Guggenheim to design a museum for his collection of modern art. His design was entirely original; a bowl-shaped building with a spiral ramp inside that led museum visitors on an upward tour of the art of the 20th century. Work began in 1946 but it was not completed until 1959, the year that he died.
Walter Gropius and Marcel Breuer
Walter Gropius, the founder of the Bauhaus, moved to England in 1934 and spent three years there before being invited to the United States by Joseph Hudnut of the Harvard Graduate School of Design; Gropius became the head of the architecture faculty. Marcel Breuer, who had worked with him at the Bauhaus, joined him and opened an office with him in Cambridge. The fame of Gropius and Breuer attracted many students, who themselves became famous architects, including Ieoh Ming Pei and Philip Johnson. They did not receive an important commission until 1941, when they designed housing for workers in New Kensington, Pennsylvania, near Pittsburgh. In 1945 Gropius and Breuer associated with a group of younger architects under the name TAC (The Architects Collaborative). Their notable works included the building of the Harvard Graduate School of Design, the U.S. Embassy in Athens (1956–57), and the headquarters of Pan American Airways in New York (1958–63).
Ludwig Mies van der Rohe
Ludwig Mies van der Rohe described his architecture with the famous saying, "Less is more". As the director of the school of architecture of what is now called the Illinois Institute of Technology from 1939 to 1956, Mies (as he was commonly known) made Chicago the leading city for American modernism in the postwar years. He constructed new buildings for the Institute in the modernist style, as well as two high-rise apartment buildings on Lake Shore Drive (1948–51), which became models for high-rises across the country. Other major works included the Farnsworth House in Plano, Illinois (1945–1951), a simple horizontal glass box that had an enormous influence on American residential architecture. The Chicago Convention Center project (1952–54), Crown Hall at the Illinois Institute of Technology (1950–56), and the Seagram Building in New York City (1954–58) also set a new standard for purity and elegance. Raised on granite pillars, the Seagram Building's smooth glass and steel walls were given a touch of color by the use of bronze-toned I-beams in the structure. He returned to Germany in 1962–68 to build the Neue Nationalgalerie in Berlin. His students and followers included Philip Johnson and Eero Saarinen, whose work was substantially influenced by his ideas.
Richard Neutra and Charles and Ray Eames
Influential residential architects in the new style in the United States included Richard Neutra and Charles and Ray Eames. The most celebrated work of the Eameses was the Eames House in Pacific Palisades, California (1949), designed by Charles Eames in collaboration with Eero Saarinen. It is composed of two structures, the architect's residence and his studio, joined in the form of an L. The house, influenced by Japanese architecture, is made of translucent and transparent panels organized in simple volumes, often using natural materials, supported on a steel framework. The frame of the house was assembled in sixteen hours by five workmen. Eames brightened up his buildings with panels of pure colors.
Richard Neutra continued to build influential houses in Los Angeles, using the theme of the simple box. Many of these houses erased the distinction between indoor and outdoor spaces with walls of plate glass. Neutra's Constance Perkins House in Pasadena, California (1962) was a re-examination of the modest single-family dwelling. It was built of inexpensive materials (wood, plaster, and glass) and completed at a cost of just under $18,000. Neutra scaled the house to the physical dimensions of its owner, a small woman. It features a reflecting pool which meanders under the glass walls of the house. One of Neutra's most unusual buildings was Shepherd's Grove in Garden Grove, California, which featured an adjoining parking lot where worshippers could follow the service without leaving their cars.
Skidmore, Owings and Merrill and Wallace K. Harrison
Many of the notable modern buildings in the postwar years were produced by two architectural mega-agencies, which brought together large teams of designers for very complex projects. The firm of Skidmore, Owings & Merrill was founded in Chicago in 1936 by Louis Skidmore and Nathaniel Owings, and joined in 1939 by the engineer John Merrill. It soon went under the name of SOM. Its first big project was Oak Ridge National Laboratory in Oak Ridge, Tennessee, the gigantic government installation that produced plutonium for the first nuclear weapons. In 1964 the firm had eighteen "partner-owners", 54 "associate participants", and 750 architects, technicians, designers, decorators, and landscape architects. Their style was largely inspired by the work of Ludwig Mies van der Rohe, and their buildings soon had a large place in the New York skyline, including the Manhattan House (1950–51), Lever House (1951–52) and the Manufacturers Trust Company Building (1954). Later buildings by the firm include the Beinecke Library at Yale University (1963), the Willis Tower, formerly Sears Tower, in Chicago (1973) and One World Trade Center in New York City (2013), which replaced the building destroyed in the terrorist attack of 11 September 2001.
Wallace Harrison played a major part in the modern architectural history of New York; as the architectural advisor of the Rockefeller family, he helped design Rockefeller Center, the major Art Deco architectural project of the 1930s. He was supervising architect for the 1939 New York World's Fair, and, with his partner Max Abramovitz, was the builder and chief architect of the headquarters of the United Nations; Harrison headed a committee of international architects, which included Oscar Niemeyer (who produced the original plan approved by the committee) and Le Corbusier. Other landmark New York buildings designed by Harrison and his firm included the Metropolitan Opera House, the master plan for Lincoln Center, and John F. Kennedy International Airport.
Philip Johnson
Philip Johnson (1906–2005) was one of the youngest and last major figures in American modern architecture. He trained at Harvard with Walter Gropius, then was director of the department of architecture and design at the Museum of Modern Art from 1946 to 1954. In 1947, he published a book about Ludwig Mies van der Rohe, and in 1953 designed his own residence, the Glass House in New Canaan, Connecticut, in a style modeled after Mies's Farnsworth House. Beginning in 1955 he began to go in his own direction, moving gradually toward expressionism with designs that increasingly departed from the orthodoxies of modern architecture. His final and decisive break with modern architecture was the AT&T Building (later known as the Sony Tower, now 550 Madison Avenue) in New York City (1979), an essentially modernist skyscraper completely altered by the addition of a broken pediment with a circular opening. This building is generally considered to mark the beginning of Postmodern architecture in the United States.
Eero Saarinen
Eero Saarinen (1910–1961) was the son of Eliel Saarinen, the most famous Finnish architect of the Art Nouveau period, who emigrated to the United States in 1923, when Eero was thirteen. He studied art and sculpture at the academy where his father taught, and then at the Académie de la Grande Chaumière in Paris before studying architecture at Yale University. His architectural designs were more like enormous pieces of sculpture than traditional modern buildings; he broke away from the elegant boxes inspired by Mies van der Rohe and used instead sweeping curves and parabolas, like the wings of birds. In 1948 he conceived the idea of a monument in St. Louis, Missouri in the form of a parabolic arch 192 meters high, made of stainless steel. He then designed the General Motors Technical Center in Warren, Michigan (1949–55), a glass modernist box in the style of Mies van der Rohe, followed by the IBM Research Center in Yorktown Heights, New York (1957–61). His next works were a major departure in style; he produced a particularly striking sculptural design for the Ingalls Rink in New Haven, Connecticut (1956–59), an ice hockey rink with a parabolic roof suspended from cables, which served as a preliminary model for his next and most famous work, the TWA Terminal at JFK airport in New York (1956–1962). His declared intention was to design a building that was distinctive and memorable, and also one that would capture the particular excitement of passengers before a journey. The structure is separated into four white concrete parabolic vaults, which together resemble a bird perched on the ground, poised for flight. Each of the four curving roof vaults has two sides attached to columns in a Y form just outside the structure. One of the angles of each shell is slightly raised, and the other is attached to the center of the structure. The roof is connected to the ground by curtain walls of glass. All of the details inside the building, including the benches, counters, escalators, and clocks, were designed in the same style.
Louis Kahn
Louis Kahn (1901–74) was another American architect who moved away from the Mies van der Rohe model of the glass box and other dogmas of the prevailing international style. He borrowed from a wide variety of styles and idioms, including neoclassicism. He was a professor of architecture at Yale University from 1947 to 1957, where his students included Eero Saarinen. From 1957 until his death he was a professor of architecture at the University of Pennsylvania. His work and ideas influenced Philip Johnson, Minoru Yamasaki, and Edward Durell Stone as they moved toward a more neoclassical style. Unlike Mies, he did not try to make his buildings look light; he constructed mainly with concrete and brick, and made his buildings look monumental and solid. He drew from a wide variety of sources; the towers of the Richards Medical Research Laboratories were inspired by the architecture of the Renaissance towns he had seen in Italy as a resident architect at the American Academy in Rome in 1950. Notable buildings by Kahn in the United States include the First Unitarian Church of Rochester, New York (1962), and the Kimbell Art Museum in Fort Worth, Texas (1966–72). Following the example of Le Corbusier and his design of the government buildings in Chandigarh, the shared capital of the Indian states of Punjab and Haryana, Kahn designed the Jatiyo Sangshad Bhaban (National Assembly Building) in Dhaka, Bangladesh (1962–74), when that country won independence from Pakistan. It was Kahn's last work.
I. M. Pei
I. M. Pei (1917–2019) was a major figure in late modernism and the debut of Post-modern architecture. He was born in China and educated in the United States, studying architecture at the Massachusetts Institute of Technology. While the architecture school there still trained in the Beaux-Arts architecture style, Pei discovered the writings of Le Corbusier, and a two-day visit by Le Corbusier to the campus in 1935 had a major impact on Pei's ideas of architecture. In the late 1930s, he moved to the Harvard Graduate School of Design, where he studied with Walter Gropius and Marcel Breuer and became deeply involved in Modernism. After the war he worked on large projects for the New York real estate developer William Zeckendorf, before breaking away and starting his own firm. One of the first buildings his own firm designed was the Green Building at the Massachusetts Institute of Technology. While the clean modernist façade was admired, the building developed an unexpected problem; it created a wind tunnel effect, and in strong winds the doors could not be opened. Pei was forced to construct a tunnel so visitors could enter the building during high winds.
Between 1963 and 1967 Pei designed the Mesa Laboratory for the National Center for Atmospheric Research outside Boulder, Colorado. The project differed from Pei's earlier urban work; it would rest in an open area in the foothills of the Rocky Mountains. His design was a striking departure from traditional modernism; it looked as if it were carved out of the side of the mountain.
In the late modernist era, art museums overtook skyscrapers as the most prestigious architectural projects; they offered greater possibilities for innovation in form and more visibility. Pei established himself with his design for the Herbert F. Johnson Museum of Art at Cornell University in Ithaca, New York (1973), which was praised for its imaginative use of a small space and its respect for the landscape and other buildings around it. This led to the commission for one of the most important museum projects of the period, the new East Wing of the National Gallery of Art in Washington, completed in 1978, and to another of Pei's most famous projects, the pyramid at the entrance of the Louvre Museum in Paris (1983–89). Pei chose the pyramid as the form that best harmonized with the Renaissance and neoclassical forms of the historic Louvre, as well as for its associations with Napoleon and the Battle of the Pyramids. Each face of the pyramid is supported by 128 beams of stainless steel, supporting 675 panels of glass.
Fazlur Rahman Khan
In 1955, employed by the architectural firm Skidmore, Owings & Merrill (SOM), he began working in Chicago. He was made a partner in 1966. He worked the rest of his life side by side with the architect Bruce Graham. Khan introduced design methods and concepts for the efficient use of material in building architecture. His first building to employ the tube structure was the DeWitt-Chestnut apartment building. During the 1960s and 1970s, he became noted for his designs for Chicago's 100-story John Hancock Center, which was the first building to use the trussed-tube design, and the 110-story Sears Tower, since renamed Willis Tower, the tallest building in the world from 1973 until 1998, which was the first building to use the bundled-tube design.
He believed that engineers needed a broader perspective on life, saying, "The technical man must not be lost in his own technology; he must be able to appreciate life, and life is art, drama, music, and most importantly, people." Khan's personal papers, most of which were in his office at the time of his death, are held by the Ryerson & Burnham Libraries at the Art Institute of Chicago. The Fazlur Khan Collection includes manuscripts, sketches, audio cassette tapes, slides and other materials regarding his work.
The tall-building structural systems that Khan developed are still used today as the starting point when considering design options for tall buildings. Tube structures have since been used in many skyscrapers, including the construction of the World Trade Center, Aon Center, Petronas Towers, Jin Mao Building, Bank of China Tower and most other buildings in excess of 40 stories constructed since the 1960s. The strong influence of tube structure design is also evident in the world's current tallest skyscraper, the Burj Khalifa in Dubai.
Minoru Yamasaki
In the United States, Minoru Yamasaki found major independent success in implementing unique engineering solutions to then-complicated problems, including the space that elevator shafts took up on each floor. During this period, he created a number of office buildings which led to his innovative design of the towers of the World Trade Center in 1964; construction began on 21 March 1966, and the first of the towers was finished in 1970. Many of his buildings feature superficial details inspired by the pointed arches of Gothic architecture, and make use of extremely narrow vertical windows; this narrow-windowed style arose from his personal fear of heights. One particular design challenge related to the efficacy of the elevator system, which was unique in the world. Yamasaki integrated the fastest elevators of the time, running at 1,700 feet per minute. Instead of placing a large traditional elevator shaft in the core of each tower, Yamasaki created the Twin Towers' "Skylobby" system. The Skylobby design created three separate, connected elevator systems which would serve different segments of the building, depending on which floor was chosen, saving approximately 70% of the space used for a traditional shaft. The space saved was then used for office space. In addition to these accomplishments, he also designed the Pruitt-Igoe housing project, the largest housing project ever built in the United States, which was fully torn down in 1976 due to bad market conditions and the decrepit state of the buildings themselves. Separately, he also designed the Century Plaza Towers and One Woodward Avenue, among 63 other projects he developed during his career.
Postwar modernism in Europe (1945–1975)
In France, Le Corbusier remained the most prominent architect, though he built few buildings there. His most prominent late work was the convent of Sainte Marie de La Tourette in Éveux-sur-l'Arbresle. The convent, built of raw concrete, was austere and without ornament, inspired by the medieval monasteries he had visited on his first trip to Italy.
In Britain, the major figures in modernism included Wells Coates (1895–1958), FRS Yorke (1906–1962), James Stirling (1926–1992) and Denys Lasdun (1914–2001). Lasdun's best-known work is the Royal National Theatre (1967–1976) on the south bank of the Thames. Its raw concrete and blockish form offended British traditionalists; Charles III, then Prince of Wales, compared it to a nuclear power station.
In Belgium, a major figure was Charles Vandenhove (born 1927) who constructed an important series of buildings for the University Hospital Center in Liège. His later work ventured into colorful rethinking of historical styles, such as Palladian architecture.
In Finland, the most influential architect was Alvar Aalto, who adapted his version of modernism to the Nordic landscape, light, and materials, particularly the use of wood. After World War II, he taught architecture in the United States. In Denmark, Arne Jacobsen was the best-known of the modernists, who designed furniture as well as carefully proportioned buildings.
In Italy, the most prominent modernist was Gio Ponti, who worked often with the structural engineer Pier Luigi Nervi, a specialist in reinforced concrete. Nervi created concrete beams of exceptional length, twenty-five meters, which allowed greater flexibility in forms and greater heights. Their best-known design was the Pirelli Building in Milan (1958–1960), which for decades was the tallest building in Italy.
The most famous Spanish modernist was the Catalan architect Josep Lluis Sert, who worked with great success in Spain, France, and the United States. In his early career, he worked for a time under Le Corbusier, and designed the Spanish pavilion for the 1937 Paris Exposition. His notable later work included the Fondation Maeght in Saint-Paul-de-Vence, France (1964), and the Harvard Science Center in Cambridge, Massachusetts. He served as Dean of Architecture at the Harvard Graduate School of Design.
Notable German modernists included Johannes Krahn, who played an important part in rebuilding German cities after World War II, and built several important museums and churches, notably St. Martin, Idstein, which artfully combined stone masonry, concrete, and glass. Leading Austrian architects of the style included Gustav Peichl, whose later works included the Art and Exhibition Center of the German Federal Republic in Bonn, Germany (1989).
Tropical Modernism
Tropical Modernism, or Tropical Modern, is a style of architecture that merges modernist principles with tropical vernacular traditions; it emerged in the mid-20th century. The term is used to describe modernist architecture in various regions of the world, including Latin America, Asia and Africa, as detailed below. Architects adapted to local conditions by using features which provided protection from harsh sunlight (such as solar shading) and encouraged the flow of cooling breezes through buildings (such as narrow corridors). Some contend that the style originated in the 'hot, humid conditions' of West Africa in the 1940s. Typical features include geometric screens. Maxwell Fry and Jane Drew, of the Architectural Association architecture school in London, UK, made important contributions to research and practice in the Tropical Modernism style after founding the Department of Tropical Architecture at the AA. Speaking about the adoption of modernism in post-independence Ghana, Professor Ola Uduku states that "those involved in developing Tropical Modernism were actually operating as agents of the colonies at the time".
Latin America
Architectural historians sometimes label Latin American modernism as "tropical modernism". This reflects architects who adapted modernism to the tropical climate as well as the sociopolitical contexts of Latin America.
Brazil became a showcase of modern architecture in the late 1930s through the work of Lúcio Costa (1902–1998) and Oscar Niemeyer (1907–2012). Costa had the lead and Niemeyer collaborated on the Ministry of Education and Health in Rio de Janeiro (1936–43) and the Brazilian pavilion at the 1939 World's Fair in New York. Following the war, Niemeyer, along with Le Corbusier, conceived the form of the United Nations Headquarters, constructed by Wallace Harrison.
Lúcio Costa also had overall responsibility for the plan of the most audacious modernist project in Brazil: the creation of a new capital, Brasília, constructed between 1956 and 1961. Costa made the general plan, laid out in the form of a cross, with the major government buildings in the center. Niemeyer was responsible for designing the government buildings, including the palace of the President and the National Assembly, composed of two towers for the two branches of the legislature and two meeting halls, one with a cupola and the other with an inverted cupola. Niemeyer also built the cathedral, eighteen ministries, and giant blocks of housing, each designed for three thousand residents, each with its own school, shops, and chapel. Modernism was employed both as an architectural principle and as a guideline for organizing society, as explored in The Modernist City.
Following a military coup d'état in Brazil in 1964, Niemeyer moved to France, where he designed the modernist headquarters of the French Communist Party in Paris (1965–1980), a miniature of his United Nations plan.
Mexico also had a prominent modernist movement. Important figures included Félix Candela, born in Spain, who emigrated to Mexico in 1939; he specialized in concrete structures in unusual parabolic forms. Another important figure was Mario Pani, who designed the National Conservatory of Music in Mexico City (1949) and the Torre Insignia (1988); Pani was also instrumental in the construction of the new University of Mexico City in the 1950s, alongside Juan O'Gorman, Eugenio Peschard, and Enrique del Moral. The Torre Latinoamericana, designed by Augusto H. Alvarez, was one of the earliest modernist skyscrapers in Mexico City (1956); it successfully withstood the 1985 Mexico City earthquake, which destroyed many other buildings in the city center. Pedro Ramirez Vasquez and Rafael Mijares designed the Olympic Stadium for the 1968 Olympics, and Antoni Peyri and Candela designed the Palace of Sports. Luis Barragan was another influential figure in Mexican modernism; his raw concrete residence and studio in Mexico City looks like a blockhouse on the outside, while inside it features great simplicity in form, pure colors, abundant natural light, and, as one of his signatures, a stairway without a railing. He won the Pritzker Architecture Prize in 1980, and the house was declared a UNESCO World Heritage Site in 2004.
Asia and Australia
Japan, like Europe, had an enormous shortage of housing after the war, due to the bombing of many cities. 4.2 million housing units needed to be replaced. Japanese architects combined both traditional and modern styles and techniques. One of the foremost Japanese modernists was Kunio Maekawa (1905–1986), who had worked for Le Corbusier in Paris until 1930. His own house in Tokyo was an early landmark of Japanese modernism, combining traditional style with ideas he acquired working with Le Corbusier. His notable buildings include concert halls in Tokyo and Kyoto and the International House of Japan in Tokyo, all in the pure modernist style.
Kenzo Tange (1913–2005) worked in the studio of Kunio Maekawa from 1938 until 1945 before opening his own architectural firm. His first major commission was the Hiroshima Peace Memorial Museum. He designed many notable office buildings and cultural centers, as well as the Yoyogi National Gymnasium for the 1964 Summer Olympics in Tokyo. The gymnasium, built of concrete, features a roof suspended over the stadium on steel cables.
The Danish architect Jørn Utzon (1918–2008) worked briefly with Alvar Aalto, studied the work of Le Corbusier, and traveled to the United States to meet Frank Lloyd Wright. In 1957 he designed one of the most recognizable modernist buildings in the world: the Sydney Opera House. He is known for the sculptural qualities of his buildings and their relationship with the landscape. The five concrete shells of the structure resemble seashells by the beach. Begun in 1957, the project encountered considerable technical difficulties in making the shells and getting the acoustics right. Utzon resigned in 1966, and the opera house was not finished until 1973, ten years after its scheduled completion.
In India, modernist architecture was promoted by the postcolonial state under Prime Minister Jawaharlal Nehru, most notably by inviting Le Corbusier to design the city of Chandigarh. Although Nehru advocated for young Indians to be part of Le Corbusier's design team in order to refine their skills whilst building their city, the team included only one female Indian architect, Eulie Chowdhury. Important Indian modernist architects also include BV Doshi, Charles Correa, Raj Rewal, Achyut Kanvinde, and Habib Rahman. Much discussion around modernist architecture took place in the journal MARG. In Sri Lanka, Geoffrey Bawa pioneered Tropical Modernism. Minnette De Silva was an important Sri Lankan modernist architect.
Post-independence architecture in Pakistan is a blend of Islamic and modern styles of architecture with influences from Mughal, Indo-Islamic and international architectural designs. The 1960s and 1970s were a period of architectural significance. Jinnah's Mausoleum, Minar-e-Pakistan, Bab-e-Khyber, the Islamic Summit Minar and the Faisal Mosque date from this time.
Africa
Modernist architecture in Ghana is also considered part of Tropical Modernism.
Some notable modernist architects in Morocco were Elie Azagury and Jean-François Zevaco.
Asmara, the capital of Eritrea, is well known for its modernist architecture dating from the period of Italian colonization.
Preservation
Several works or collections of modern architecture have been designated by UNESCO as World Heritage Sites. In addition to the early experiments associated with Art Nouveau, these include a number of the structures mentioned above in this article: the Rietveld Schröder House in Utrecht, the Bauhaus structures in Weimar, Dessau, and Bernau, the Berlin Modernism Housing Estates, the White City of Tel Aviv, the city of Asmara, the city of Brasília, the Ciudad Universitaria of UNAM in Mexico City and the University City of Caracas in Venezuela, the Sydney Opera House, and the Centennial Hall in Wrocław, along with select works of Le Corbusier and Frank Lloyd Wright.
Private organizations such as Docomomo International, the World Monuments Fund, and the Recent Past Preservation Network are working to safeguard and document imperiled Modern architecture. In 2006, the World Monuments Fund launched Modernism at Risk, an advocacy and conservation program. The organization MAMMA is working to document and preserve modernist architecture in Morocco.
See also
Complementary architecture
Contemporary architecture
Critical regionalism
Ecomodernism
List of post-war Category A listed buildings in Scotland
Modern art
Modern furniture
Modernisme
New Urbanism
Organic architecture
References
Bibliography
Colquhoun, Alan, Modern Architecture, Oxford History of Art, Oxford University Press, 2002.
Morgenthaler, Hans Rudolf, The Meaning of Modern Architecture: Its Inner Necessity and an Empathetic Reading, Ashgate Publishing, Ltd., 2015.
Further reading
USA: Modern Architectures in History – ResearchGate
The article covers the original main contributors to modern architecture in depth.
Pfeiffer, Bruce Brooks. Frank Lloyd Wright, 1867–1959: Building for Democracy. Taschen, 2021.
This book goes into depth about Frank Lloyd Wright and his contributions to modern architecture.
"What Is Modern Architecture?" Hammond Historic District.
The article discusses the origins of modern architecture and what constitutes it.
External links
Six Building Designers Who Are Redefining Modern Architecture, an April 2011 radio and Internet report by the Special English service of the Voice of America.
Architecture and Modernism
"Preservation of Modern Buildings" edition of AIA Architect
Brussels50s60s.be, Overview of the architecture of the 1950s and 1960s in Brussels
A Grand Design: The Toronto City Hall Design Competition Modernist designs from the 1958 international competition
Architectural history
Architectural design
Architectural theory | Modern architecture | Engineering | 14,654 |
42,375,327 | https://en.wikipedia.org/wiki/Born%E2%80%93Mayer%20equation | The Born–Mayer equation is an equation that is used to calculate the lattice energy of a crystalline ionic compound. It is a refinement of the Born–Landé equation by using an improved repulsion term.
$$E = -\frac{N_A M z^+ z^- e^2}{4 \pi \varepsilon_0 r_0}\left(1 - \frac{\rho}{r_0}\right)$$

where:
NA = Avogadro constant;
M = Madelung constant, relating to the geometry of the crystal;
z+ = charge number of cation
z− = charge number of anion
e = elementary charge, 1.6022 × 10−19 C
ε0 = permittivity of free space
4πε0 = 1.112 × 10−10 C2/(J·m)
r0 = distance to closest ion
ρ = a constant dependent on the compressibility of the crystal; 30 pm works well for all alkali metal halides
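For illustration, the equation can be evaluated numerically. The short Python sketch below computes the lattice energy of sodium chloride; the NaCl inputs (Madelung constant M = 1.7476, nearest-neighbor distance r0 = 282 pm) are assumed textbook values rather than figures from this article, and the charge numbers are taken as magnitudes so that the leading minus sign of the equation supplies the sign:

```python
# Born–Mayer lattice energy: a minimal sketch.
# Assumed NaCl inputs: M = 1.7476, r0 = 282 pm (standard textbook values).

N_A = 6.02214076e23          # Avogadro constant, mol^-1
E_CHARGE = 1.602176634e-19   # elementary charge, C
FOUR_PI_EPS0 = 1.11265e-10   # 4*pi*eps0, C^2/(J*m)

def born_mayer_energy(M, z_plus, z_minus, r0, rho=30e-12):
    """Lattice energy in J/mol; z_plus and z_minus are charge magnitudes."""
    coulomb = -N_A * M * z_plus * z_minus * E_CHARGE**2 / (FOUR_PI_EPS0 * r0)
    return coulomb * (1.0 - rho / r0)  # (1 - rho/r0) is the Born–Mayer repulsion correction

energy = born_mayer_energy(M=1.7476, z_plus=1, z_minus=1, r0=282e-12)
print(f"NaCl lattice energy: {energy / 1000:.0f} kJ/mol")  # about -769 kJ/mol
```

The computed value of roughly −770 kJ/mol is close to the commonly cited experimental lattice energy of NaCl, about −787 kJ/mol.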
See also
Born–Landé equation
Kapustinskii equation
References
Eponymous equations of physics
Solid-state chemistry
Ions | Born–Mayer equation | Physics,Chemistry,Materials_science | 173 |
34,020,979 | https://en.wikipedia.org/wiki/Fred%20W.%20Turek | Fred W. Turek is the Director of the Center for Sleep & Circadian Biology and the Charles E. & Emma H. Morrison Professor of Biology in the Department of Neurobiology, both at Northwestern University. Turek received his Ph.D. from Stanford University. He was awarded a Guggenheim Fellowship in 1991.
Background
Turek graduated from Stanford University, in Stanford, California, in 1973, receiving a PhD in Biological Sciences; he then completed a two-year postdoctoral fellowship at the University of Texas, where he studied in the Department of Zoology from 1973 to 1975.
He started working as an assistant professor at Northwestern University in 1975, where he still works to this day, serving as the Director of the Center for Sleep & Circadian Biology and as the Charles & Emma Morrison Professor of Biology in the Department of Neurobiology.
Research
Presently, Turek's research interests revolve around the genetic, molecular, and neural basis for sleep and circadian rhythms. He focuses most of his attention on the role of sleep and circadian clock systems for energy balance, obesity, premature birth, gastrointestinal function, and depression specifically. The Turek laboratory investigates cellular events involved in the entrainment, generation, and expression of circadian rhythms arising from a biological clock located in the suprachiasmatic nucleus of the hypothalamus; the genetics of the circadian clock system; the molecular genetic mechanisms underlying the sleep-wake cycle; the effects of advanced age on the expression of behavioral and endocrine rhythms and on the expression of circadian clock genes; the links between sleep, circadian rhythms, and energy metabolism; the role of melatonin in sleep and circadian rhythms; and other topics regarding sleep and circadian rhythms.
Turek's lab spends much of their time working on rodents, but they have also established working relationships with academic researchers. Studies in humans are aimed at shifting the human clock in an attempt to alleviate mental and physical problems that are associated with disorders in circadian time-keeping. Their sleep, circadian, and metabolic studies are focused on how disruption in these functions can lead to obesity, diabetes, and cardiovascular disease.
Center for Sleep & Circadian Biology
Since 1995, Turek has served as the Director of Northwestern University's Center for Sleep & Circadian Biology (CSCB). The CSCB is a University Research Center, within the Department of Neurobiology, that integrates research on sleep and circadian rhythms into a unified program.
Organizations
Since 1981 Turek has been a member of the American Association for the Advancement of Science. He has also served in roles for the Sleep Research Society: he was chair of their Government Relations Committee from 2009 to 2011, and was a member of the SRS National Institutes of Health Liaison Group and the SRS Congressional Liaison Group.
He also served on the boards of the National Institutes of Health National Center on Sleep Disorders Research and the National Sleep Foundation.
Turek was also the founder of the Society for Research on Biological Rhythms (SRBR), where he served as President from 1987 to 1992. He is still a member of the SRBR.
He is a deputy editor of the journal SLEEP, was editor in chief of the Journal of Biological Rhythms from 1995 to 2000, and has been the Section Editor for "Genetics of Sleep" and "Chronobiology" for the 2010, 2016, and 2017 editions of Principles and Practice of Sleep Medicine.
Publications
Turek has published nearly 400 reviews and peer-reviewed papers in his nearly 40 years of professorship.
Professional Recognition
Directors' Award for Research and Service, Society for Research on Biological Rhythms (2022)
Distinguished Service Award, Sleep Research Society (2011)
Distinguished Scientist Award, Sleep Research Society (2011)
Government Relations Committee Chair, Sleep Research Society (2009)
Pioneer Award, Institute for Women's Health Research, Northwestern University (2008)
Board of Trustees, Universities Space Research Association (2001)
Board of Directors, National Sleep Foundation (NSF) (2000)
Distinguished Senior Investigator Award, National Alliance for Research on Schizophrenia and Depression (NARSAD) (1998)
Endowed Chair Charles E. and Emma H. Morrison Professor of Biology, Northwestern University (1995)
Guggenheim Fellowship, John Simon Guggenheim Memorial Foundation (1991)
Senior Fellowship, Belgian American Educational Foundation (BAEF) (1991)
Senior International Fellowship, NIH Fogarty International Center (1991)
Curt P. Richter Prize, International Society of Psychoneuroendocrinology (1987)
Senior International Fellowship, NIH Fogarty International Center (1986)
Award from the Underwood Fund, Agricultural Research Council (1981)
Elected a Fellow, American Association for the Advancement of Science (AAAS) (1981)
Associated Student Government Faculty Honor Roll, Northwestern University (1980)
Research Career Development Award, National Institutes of Health (NIH) (1978)
References
Sleep researchers
American neuroscientists
Northwestern University faculty
Stanford University alumni
Living people
Chronobiologists
1947 births | Fred W. Turek | Biology | 1,002 |
26,803,784 | https://en.wikipedia.org/wiki/Sarcoscypha%20occidentalis | Sarcoscypha occidentalis, commonly known as the stalked scarlet cup or the western scarlet cup, is a species of fungus in the family Sarcoscyphaceae of the Pezizales order. Phylogenetic analysis has shown that it is most closely related to other Sarcoscypha species that contain large oil droplets in their spores. S. occidentalis has an imperfect form (reproducing asexually), classified as Molliardiomyces occidentalis.
The fruit bodies have small, bright red cups up to wide atop a slender whitish stem up to long. The species is distinguished from the related S. coccinea and S. austriaca by differences in distribution, fruiting season, and structure. The fungus can be found in North America and Asia. A saprobic species, it is found growing on hardwood twigs, particularly those that are partially buried in moist and shaded humus-rich soil.
Taxonomy
The fungus, originally collected from Muskingum County, Ohio, was named Peziza occidentalis by Lewis David de Schweinitz in 1832. It was assigned its current name by Pier Andrea Saccardo in 1888. Andrew Price Morgan renamed the species Geopyxis occidentalis in 1902 because of a perceived similarity with Geopyxis hesperidea, but the name change was not adopted by subsequent authors. In 1928, Fred Jay Seaver overturned Saccardo's naming and applied the name Plectania to Sarcoscypha coccinea and other red cup fungi. In later taxonomic revisions, Richard P. Korf reinstated the genus name Sarcoscypha.
Phylogeny
The phylogenetic relationships in the genus Sarcoscypha were analyzed by Francis Harrington in the late 1990s. The cladistic analysis combined comparison of sequences from the internal transcribed spacer in the non-functional RNA with fifteen traditional morphological characters, such as spore features, fruit body shape, and degree of hair curliness. Based on this analysis, S. occidentalis is part of a clade of evolutionarily related taxa that includes the species S. dudleyi, S. emarginata, S. hosoyae, S. korfiana and S. mesocyatha. All of these species contain large oil droplets in their spores, in contrast to the other major clade of Sarcoscypha (containing the type species S. coccinea), characterized by having smaller, more numerous droplets. The species most closely related to S. occidentalis is S. mesocyatha, known only from Hawaii.
Subdivision
A Jamaican variety has been named (as Plectania occidentalis var. jamaicensis); it has a pinker hymenium.
Anamorph form
Anamorphic or imperfect fungi are those that seem to lack a sexual stage in their life cycle, and typically reproduce by the process of mitosis in structures called conidia. In some cases, the sexual stage—or teleomorph stage—is later identified, and a teleomorph-anamorph relationship is established between the species. The International Code of Botanical Nomenclature permits the recognition of two (or more) names for one and the same organism, one based on the teleomorph, the other(s) restricted to the anamorph.
The anamorphic state of S. occidentalis is Molliardiomyces occidentalis, described by John W. Paden. This form produces smooth, colorless conidiophores (specialized stalks that bear conidia) measuring 20–230 by 2–3.2 μm. The conidia are roughly spherical to ovoid, smooth, translucent (hyaline), and 4.6–7.0 by 3.0–3.8 μm.
Etymology
The specific epithet occidentalis, derived from the Latin word for "western", may refer to the distribution of the species in the Western Hemisphere. It is commonly known as the stalked scarlet cup or the western scarlet cup.
Description
Depending on their age, the fruit bodies of S. occidentalis may range in shape from deep cups to saucers to discs in maturity, and they can reach diameters up to . In young specimens, the edges of the cup are curled inwards, and crenulate (with small rounded scallops); the cup edges in older specimens become laciniate (with jagged edges cut into irregular segments). The cups rest atop a stem that is small to medium-sized, up to long and 1.5–2 mm thick, and attached centrally or to the side to the underside of the cup. The base of the stem may be covered with translucent "hairs". The fertile spore-bearing inner surface of the cups, the hymenium, is bright red but fades to yellow or orange when dry. It is smooth or becomes so with time. The fruit bodies are fleshy to rubbery when fresh, but become leathery when dry. The flesh is thin and has no distinctive odor or taste, nor culinary value.
Excipulum is a term used to refer to the tissue or tissues containing the hymenium of an ascomycete fruit body. The ectal excipulum (outer tissue layer) is thin (20–30 μm thickness), made of a tissue type known as textura porrecta, consisting of more or less parallel hyphae all in one direction, with wide lumina and non-thickened walls. The medullary excipulum (middle tissue layer) is thick (200–600 μm) and made of textura intricata, a tissue layer made of irregularly interwoven hyphae with distinct spaces between the hyphae. The asci (filamentous structures in which the ascospores develop) are cylindrical with gradually tapering bases, eight-spored, and measure 240–280 by 12–15 μm. The ascospores have ellipsoidal to roughly cylindrical shapes, usually with blunt ends, and measure 19–22 by 10–12 μm. They have smooth surfaces and usually contain two large oil drops. The paraphyses (sterile, filamentous hyphae present in the hymenium) are cylindrical, 2–3 μm thick, barely enlarged at their apices, straight, and mostly unbranched above. They may sometimes anastomose, but do not form a conspicuous network. The paraphyses contain numerous red granules.
Similar species
S. occidentalis is frequently confused with S. coccinea, but is distinguished macroscopically from this species by its smaller fruit bodies, smaller spores, and less hairy exterior. The two also differ in seasonal and geographic distribution: S. occidentalis fruits from late spring to early autumn in the United States, while S. coccinea fruits earlier in spring, and is distributed in eastern North America, in the midwest, in the valleys between the Pacific coast and the Sierras and Cascades, as well as Europe, Africa, Australia, and India. Another eastern North American species, S. austriaca, has scarlet fruit bodies up to wide, and fruits in early spring.
S. occidentalis may also be mistaken for Microstoma floccosum, which occurs in the same habitat. M. floccosum, however, has taller cups and is covered with stiff white hairs. Another cup-fungus, Scutellinia scutellata, is disc-shaped without a stem, and is fringed with black hairs around its rim. Melastiza species usually lack stems and Phillipsia domingensis produces purplish or dark red cups with white undersides.
Distribution and habitat
The fungus is found in North America east of the Rocky Mountains, and at higher elevations in Central America and the Caribbean. It has also been collected in Japan and Taiwan.
As a saprobic fungus, S. occidentalis is part of a community of fungi that play an important role in the forest ecosystem by breaking down the complex insoluble molecules cellulose and lignin of wood and leaf litter into smaller oligosaccharides that may be used by a variety of microbes. Fruit bodies of S. occidentalis may grow solitarily, scattered, or grouped together on sticks, twigs, and fragments of dead wood, usually somewhat decomposed and partially buried in the top of the soil and forest litter. It prefers soil that is moist and shaded and has a high content of humus. Like all Sarcoscypha species, it prefers the wood of angiosperms, such as oak, maple, and basswood; one field guide notes a preference for shagbark hickory.
References
Sarcoscyphaceae
Fungi of Asia
Fungi of North America
Fungi described in 1832
Fungi of Central America
Taxa named by Lewis David de Schweinitz
Fungus species | Sarcoscypha occidentalis | Biology | 1,852 |
73,444,592 | https://en.wikipedia.org/wiki/Eutrema%20salsugineum | Eutrema salsugineum (syn. Thellungiella salsuginea), the saltwater cress or salt-lick mustard, is a species of flowering plant in the family Brassicaceae. A petite annual or biennial, it is native to Central Asia, Siberia, Mongolia, northern and eastern China, northwestern and western Canada, Montana and Colorado in the United States, and Nuevo León in Mexico. An extremophile halophyte, it is a close relative of the model organism Arabidopsis thaliana and has been adopted to study salt, drought, and cold stress resistance in plants, including having its genome sequenced.
References
salsugineum
Halophytes
Flora of East European Russia
Flora of Siberia
Flora of Central Asia
Flora of Mongolia
Flora of Xinjiang
Flora of Inner Mongolia
Flora of Manchuria
Flora of North-Central China
Flora of Southeast China
Flora of the Northwest Territories
Flora of Yukon
Flora of Western Canada
Flora of Montana
Flora of Colorado
Flora of Nuevo León
Plants described in 2005 | Eutrema salsugineum | Chemistry | 206 |
10,647,616 | https://en.wikipedia.org/wiki/Future%20Internet%20Research%20and%20Experimentation | Future Internet Research and Experimentation (FIRE) is a program funded by the European Union to do research on the Internet, its prospects, and its future, a field known as "future Internet".
History
Some researchers met with government officials in Zurich in March 2007.
The first FIRE projects started in 2008, with a budget of 40 million Euro from the seventh of the Framework Programmes for Research and Technological Development (FP7).
This was known as "call 2".
In 2010, a second set of projects with a budget of 50 million Euro included technologies such as sensor networks, cloud computing and service-oriented architectures.
A third wave of projects was funded in 2011.
It included a web site and conferences organized as a "Network of Excellence in InterNet Science".
A joint project with Brazil called Future Internet testbeds experimentation between BRazil and Europe (FIBRE) had an organizational meeting in October 2011 in Poznań, Poland.
In 2012, Call 8, with a budget of 25 million Euro, led to a fourth wave of projects which were expected to start in the Fall. The focus was on federation of FIRE facilities and on experimentation on existing facilities, with innovative applications.
Call 10 of WP2013 was published 10 July 2012 (OJ C202) with a deadline of 15 January 2013.
The FIRE project funded a workshop on 21 September 2012 in Brussels called "FIRE in Horizon 2020".
Federation for Future Internet Research and Experimentation
A follow-on project was called the Federation for Future Internet Research and Experimentation (Fed4FIRE). An Integrated Project in the 7th EU Framework Programme funded under grant agreement No 318389, it started in October 2012 and ran until September 2016.
Fed4FIRE+ started in January 2017 and will run for 60 months, until the end of December 2021. The Fed4FIRE+ project is the successor of the Fed4FIRE project.
See also
Future Internet
Named data networking (U.S.)
Universal Identifier Network (China)
References
External links
European Union and science and technology
Information technology organizations based in Europe
Internet architecture | Future Internet Research and Experimentation | Technology | 414 |
40,159,918 | https://en.wikipedia.org/wiki/Ecosystem%20health | Ecosystem health is a metaphor used to describe the condition of an ecosystem. Ecosystem condition can vary as a result of fire, flooding, drought, extinctions, invasive species, climate change, mining, fishing, farming or logging, chemical spills, and a host of other reasons. There is no universally accepted benchmark for a healthy ecosystem, rather the apparent health status of an ecosystem can vary depending upon which health metrics are employed in judging it and which societal aspirations are driving the assessment. Advocates of the health metaphor argue for its simplicity as a communication tool. "Policy-makers and the public need simple, understandable concepts like health." Some critics worry that ecosystem health, a "value-laden construct", can be "passed off as science to unsuspecting policy makers and the public." However, this term is often used in portraying the state of ecosystems worldwide and in conservation and management. For example, scientific journals and the UN often use the terms planetary and ecosystem health, such as the recent journal The Lancet Planetary Health.
History of the concept
The health metaphor applied to the environment has been in use at least since the early 1800s and the great American conservationist Aldo Leopold (1887–1948) spoke metaphorically of land health, land sickness, mutilation, and violence when describing land use practices. The term "ecosystem management" has been in use at least since the 1950s. The term "ecosystem health" has become widespread in the ecological literature, as a general metaphor meaning something good, and as an environmental quality goal in field assessments of rivers, lakes, seas, and forests.
Meaning
The term ecosystem health has been employed to embrace some suite of environmental goals deemed desirable. Edward Grumbine's highly cited paper "What is ecosystem management?" surveyed ecosystem management and ecosystem health literature and summarized frequently encountered goal statements:
Conserving viable populations of native species
Conserving ecosystem diversity
Maintaining evolutionary and ecological processes
Managing over long time frames to maintain evolutionary potential
Accommodating human use and occupancy within these constraints
Grumbine describes each of these goals as a "value statement" and stresses the role of human values in setting ecosystem management goals.
It is the last goal mentioned in the survey, accommodating humans, that is most contentious. "We have observed that when groups of stakeholders work to define ... visions, this leads to debate over whether to emphasize ecosystem health or human well-being ... Whether the priority is ecosystems or people greatly influences stakeholders' assessment of desirable ecological and social states." and, for example, "For some, wolves are critical to ecosystem health and an essential part of nature, for others they are a symbol of government overreach threatening their livelihoods and cultural values."
Measuring ecosystem health requires extensive goal-driven environmental sampling. For example, a vision for ecosystem health of Lake Superior was developed by a public forum and a series of objectives were prepared for protection of habitat and maintenance of populations of some 70 indigenous fish species. A suite of 80 lake health indicators was developed for the Great Lakes Basin including monitoring native fish species, exotic species, water levels, phosphorus levels, toxic chemicals, phytoplankton, zooplankton, fish tissue contaminants, etc.
Some authors have attempted broad definitions of ecosystem health, such as benchmarking as healthy the historical ecosystem state "prior to the onset of anthropogenic stress." A difficulty is that the historical composition of many human-altered ecosystems is unknown or unknowable. Also, fossil and pollen records indicate that the species that occupy an ecosystem reshuffle through time, so it is difficult to identify one snapshot in time as optimum or "healthy."
A commonly cited broad definition states that a healthy ecosystem has three attributes:
productivity,
resilience, and
"organization" (including biodiversity).
While this captures significant ecosystem properties, a generalization is elusive as those properties do not necessarily co-vary in nature. For example, there is not necessarily a clear or consistent relationship between productivity and species richness. Similarly, the relationship between resilience and diversity is complex, and ecosystem stability may depend upon one or a few species rather than overall diversity. And some undesirable ecosystems are highly productive. “If species richness is our major normative target, then we should convert the Amazon rainforest even faster into pasture.”
"Resilience is not desirable per se. There can be highly resilient states of ecosystems which are very undesirable from some human perspectives, such as algal-dominated coral reefs." Ecological resilience is a "capacity" that varies depending upon which properties of the ecosystem are to be studied and depending upon what kinds of disturbances are considered and how they are to be quantified. Approaches to assessing it "face high uncertainties and still require a considerable amount of empirical and theoretical research."
Other authors have sought a numerical index of ecosystem health that would permit quantitative comparisons among ecosystems and within ecosystems over time. One such system employs ratings of the three properties mentioned above: Health = system vigor × system organization × system resilience. Ecologist Glenn Suter argues that such indices employ "nonsense units," the indices have "no meaning; they cannot be predicted, so they are not applicable to most regulatory problems; they have no diagnostic power; effects of one component are eclipsed by responses of other components, and the reason for a high or low index value is unknown."
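Suter's objections are easy to see in a toy computation. The following is a minimal sketch, not any published index implementation: the 0–1 scores and the scoring method are hypothetical placeholders, and the example shows how two very different ecosystems can receive identical index values, illustrating the loss of diagnostic power.

```python
# Minimal sketch of a multiplicative "health = vigor x organization x
# resilience" index. Scores and scoring method are hypothetical; a real
# assessment would derive them from field measurements.

def health_index(vigor: float, organization: float, resilience: float) -> float:
    """Multiplicative index; each input is assumed pre-scaled to [0, 1]."""
    for name, score in [("vigor", vigor),
                        ("organization", organization),
                        ("resilience", resilience)]:
        if not 0.0 <= score <= 1.0:
            raise ValueError(f"{name} score {score} is outside [0, 1]")
    return vigor * organization * resilience

# Two very different (hypothetical) ecosystems receive the same value, so
# a low index cannot be traced back to its cause -- the "no diagnostic
# power" objection.
print(f"{health_index(0.9, 0.5, 0.4):.2f}")  # 0.18
print(f"{health_index(0.6, 0.6, 0.5):.2f}")  # 0.18
```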
Another way to measure ecosystem health is to use complex systems concepts such as criticality, meaning that a healthy ecosystem is in some sort of balance between adaptability (randomness) and robustness (order). Nevertheless, the universality of criticality is still under examination and is known as the Criticality Hypothesis, which states that systems in a dynamic regime shifting between order and disorder attain the highest level of computational capabilities and achieve an optimal trade-off between robustness and flexibility. Recent results in cell and evolutionary biology, neuroscience and computer science have generated great interest in the criticality hypothesis, emphasizing its role as a viable candidate general law in the realm of adaptive complex systems.
Health indicators
Health metrics are determined by stakeholder goals, which drive ecosystem definition. An ecosystem is an abstraction. "Ecosystems cannot be identified or found in nature. Instead, they must be delimited by an observer. This can be done in many different ways for the same chunk of nature, depending on the specific perspectives of interest."
Ecosystem definition determines the acceptable range of variability (reference conditions) and determines measurement variables. The latter are used as indicators of ecosystem structure and function, and can be used as indicators of "health".
An indicator is a variable, such as a chemical or biological property, that when measured, is used to infer trends in another (unmeasured) environmental variable or cluster of unmeasured variables (the indicandum). For example, rising mortality rate of canaries in a coal mine is an indicator of rising carbon monoxide levels. Rising chlorophyll-a levels in a lake may signal eutrophication.
Ecosystem assessments employ two kinds of indicators, descriptive indicators and normative indicators. "Indicators can be used descriptively for a scientific purpose or normatively for a political purpose."
Used descriptively, high chlorophyll-a is an indicator of eutrophication, but it may also be used as an ecosystem health indicator. When used as a normative (health) indicator, it indicates a rank on a health scale, a rank that can vary widely depending on societal preferences as to what is desirable. A high chlorophyll-a level in a natural successional wetland might be viewed as healthy whereas a human-impacted wetland with the same indicator value may be judged unhealthy.
Estimation of ecosystem health has been criticized for intermingling the two types of environmental indicators. A health indicator is a normative indicator, and if conflated with descriptive indicators "implies that normative values can be measured objectively, which is certainly not true. Thus, implicit values are insinuated to the reader, a situation which has to be avoided."
The very act of selecting indicators of any kind is biased by the observer's perspective and separation of goals from descriptions has been advocated as a step toward transparency: "A separation of descriptive and normative indicators is essential from the perspective of the philosophy of science ... Goals and values cannot be deduced directly from descriptions ... a fact that is emphasized repeatedly in the literature of environmental ethics ... Hence, we advise always specifying the definition of indicators and propose clearly distinguishing ecological indicators in science from policy indicators used for decision-making processes."
And integration of multiple, possibly conflicting, normative indicators into a single measure of "ecosystem health" is problematic. Using 56 indicators, "determining environmental status and assessing marine ecosystems health in an integrative way is still one of the grand challenges in marine ecosystems ecology, research and management".
Another issue with indicators is validity. Good indicators must have an independently validated high predictive value, that is, high sensitivity (a high probability of indicating a significant change in the indicandum) and high specificity (a low probability of wrongly indicating a change). The reliability of various health metrics has been questioned and "what combination of measurements should be used to evaluate ecosystems is a matter of current scientific debate." Most attempts to identify ecological indicators have been correlative rather than derived from prospective testing of their predictive value, and the selection process for many indicators has been based upon weak evidence or has lacked evidence altogether.
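In confusion-matrix terms, the two quantities are straightforward to compute once an indicator's signals have been checked against independent observations of the indicandum. A minimal sketch follows; the counts are invented for illustration and do not come from any real monitoring program.

```python
# Sensitivity and specificity of a hypothetical ecological indicator,
# validated against independent observations of the indicandum.
# All counts below are illustrative, not from any real program.

true_positive = 42   # indicator signaled change, change really occurred
false_negative = 8   # indicator silent, but change occurred
true_negative = 90   # indicator silent, no change occurred
false_positive = 10  # indicator signaled change, but none occurred

sensitivity = true_positive / (true_positive + false_negative)  # P(signal | change)
specificity = true_negative / (true_negative + false_positive)  # P(silent | no change)

print(f"sensitivity = {sensitivity:.2f}")  # 0.84
print(f"specificity = {specificity:.2f}")  # 0.90
```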
In some cases no reliable indicators are known: "We found no examples of invertebrates successfully used in [forest] monitoring programs. Their richness and abundance ensure that they play significant roles in ecosystem function but thwart focus on a few key species." And, "Reviews of species-based monitoring approaches reveal that no single species, nor even a group of species, accurately reflects entire communities. Understanding the response of a single species may not provide reliable predictions about a group of species even when the group is a few very similar species."
Relationship to human health: the health paradox
A trade-off between human health and the "health" of nature has been termed the "health paradox" and it illuminates how human values drive perceptions of ecosystem health.
Human health has benefited by sacrificing the "health" of wild ecosystems, such as dismantling and damming of wild valleys, destruction of mosquito-bearing wetlands, diversion of water for irrigation, conversion of wilderness to farmland, timber removal, and extirpation of tigers, whales, ferrets, and wolves.
There has been an acrimonious schism among conservationists and resource managers over the question of whether to "ratchet back human domination of the biosphere" or whether to embrace it. These two perspectives have been characterized as utilitarian vs protectionist.
The utilitarian view treats human health and well-being as criteria of ecosystem health. For example, destruction of wetlands to control malaria mosquitoes "resulted in an improvement in ecosystem health."
The protectionist view treats humans as an invasive species: "If there was ever a species that qualified as an invasive pest, it is Homo sapiens."
Proponents of the utilitarian view argue that "healthy ecosystems are characterized by their capability to sustain healthy human populations," and "healthy ecosystems must be economically viable," as it is "unhealthy" ecosystems that are likely to result in increases in contamination, infectious diseases, fires, floods, crop failures and fishery collapse.
Protectionists argue that privileging of human health is a conflict of interest as humans have demolished massive numbers of ecosystems to maintain their welfare, also disease and parasitism are historically normal in pre-industrial nature. Diseases and parasites promote ecosystem functioning, driving biodiversity and productivity, and parasites may constitute a significant fraction of ecosystem biomass.
The very choice of the word "health" applied to ecology has been questioned as lacking in neutrality in a BioScience article on responsible use of scientific language: "Some conservationists fear that these terms could endorse human domination of the planet ... and could exacerbate the shifting cognitive baseline whereby humans tend to become accustomed to new and often degraded ecosystems and thus forget the nature of the past."
Criticism of the concept and proposed alternatives
Criticism of ecosystem health largely targets the failure of proponents to explicitly distinguish the normative (policy preference) dimension from the descriptive (scientific information) dimension, and has included the following:
Ecosystem health is in the eye of the beholder. It is an economic, political or ethical judgement rather than a scientific measure of environmental quality. Health ratings are shaped by the goals and preferences of environmental stakeholders. "There is no scientific basis for demarcating ecosystem health." "At the core of debates over the utility of ecosystem health is a struggle over which societal preferences will take precedence."
Ecosystem health is an example of normative science, and "using normative science in policy deliberations is stealth advocacy." "Normative science is a corruption of science and should not be tolerated in the scientific community — without exception."
Health is a metaphor, not a property of an ecosystem. Health is an abstraction. It implies "good", an optimum condition, but in nature ecosystems are ever-changing transitory assemblages with no identifiable optimum.
Use of human health and well-being as a criterion of ecosystem health introduces an arrogance and a conflict of interest into environmental assessment, as human population growth has caused much environmental damage.
Ecosystem health masquerades as an operational goal because environmental managers "may be reluctant to define their goals clearly."
It is a vague concept. It is "undefinable in a rigorous sense and is, therefore, acceptable only as conveying a vague sense of well-being." "Currently there are many, often contradictory, definitions of ecosystem health," that "are open to so much abuse and misuse that they represent a threat to the environment."
"There are in general no clear definitions of what proponents of the concept mean by 'ecosystem'."
There is conflicting usage with various government forestry agencies having long had programs or departments of “forest health” meaning absence of tree disease and fire damage, whereas “ecosystem health” may embrace the roles of disease and fire. “Fire is a vital and natural part of the functioning of numerous forest ecosystems.”
The public can be deceived by the term ecosystem health which may camouflage the ramifications of a policy goal and be employed to pejoratively rank policy choices. "The most pervasive misuse of ecosystem health and similar normative notions is insertion of personal values under the guise of 'scientific' impartiality."
Alternatives have been proposed for the term ecosystem health, including more neutral language such as ecosystem status, ecosystem prognosis, and ecosystem sustainability. Another alternative to the use of a health metaphor is to "express exactly and clearly the public policy and the management objective", to employ habitat descriptors and real properties of ecosystems. An example of a policy statement is "The maintenance of viable natural populations of wildlife and ecological functions always takes precedence over any human use of wildlife." An example of a goal is "Maintain viable populations of all native species in situ." An example of a management objective is "Maintain self-sustaining populations of lake whitefish within the range of abundance observed during 1990-99."
Kurt Jax presented an ecosystem assessment format that avoids imposing a preconceived notion of normality, that avoids the muddling of normative and descriptive, and that gives serious attention to ecosystem definition. (1) Societal purposes for the ecosystem are negotiated by stakeholders, (2) a functioning ecosystem is defined with emphasis on phenomena relevant to stakeholder goals, (3) benchmark reference conditions and permissible variation of the system are established, (4) measurement variables are chosen for use as indicators, and (5) the time scale and spatial scale of assessment are decided.
Related terms
Ecological health has been used as a medical term in reference to human allergy and multiple chemical sensitivity and as a public health term for programs to modify health risks (diabetes, obesity, smoking, etc.). Human health itself, when viewed in its broadest sense, is viewed as having ecological foundations. It is also an urban planning term in reference to "green" cities (composting, recycling), and has been used loosely with regard to various environmental issues, and as the condition of human-disturbed environmental sites. Ecosystem integrity implies a condition of an ecosystem exposed to a minimum of human influence. Ecohealth is the relationship of human health to the environment, including the effect of climate change, wars, food production, urbanization, and ecosystem structure and function. Ecosystem management and ecosystem-based management refer to the sustainable management of ecosystems and in some cases may employ the terms ecosystem health or ecosystem integrity as a goal. The practice of natural resource management has evolved as societal priorities have changed and, as a consequence, the working definition of ecosystem health, along with the overall management goals, have evolved as well.
References
Conservation biology
Ecology
Environmental health
Natural resources
Nature
Public health | Ecosystem health | Biology | 3,537 |
47,168,664 | https://en.wikipedia.org/wiki/Penicillium%20primulinum | Penicillium primulinum is an anamorph species of fungus in the genus Penicillium.
References
Further reading
primulinum
Fungi described in 1927
Fungus species | Penicillium primulinum | Biology | 40 |
6,303,113 | https://en.wikipedia.org/wiki/Artifact%20%28software%20development%29 | An artifact is one of many kinds of tangible by-products produced during the development of software. Some artifacts (e.g., use cases, class diagrams, requirements and design documents) help describe the function, architecture, and design of software. Other artifacts are concerned with the process of development itself—such as project plans, business cases, and risk assessments.
The term artifact in connection with software development is largely associated with specific development methods or processes e.g., Unified Process. This usage of the term may have originated with those methods.
Build tools often refer to source code compiled for testing as an artifact, because the executable is necessary for carrying out the testing plan. Without the executable to test, the testing plan artifact is limited to non-execution-based testing. In non-execution-based testing, the artifacts are the walkthroughs, inspections and correctness proofs. On the other hand, execution-based testing requires at minimum two artifacts: a test suite and the executable. Artifact may occasionally refer to the released code (in the case of a code library) or released executable (in the case of a program) produced, but more commonly an artifact is a byproduct of software development rather than the product itself. Open source code libraries often contain a testing harness to allow contributors to ensure their changes do not cause regression bugs in the code library.
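As a concrete illustration of a test-suite artifact, here is a minimal sketch in Python's standard unittest style. In a real project the code under test would live in the separately built executable or library; a stand-in function is defined inline here only to keep the example self-contained and runnable.

```python
# Minimal sketch: a test-suite artifact for execution-based testing.
# In practice the code under test is the separately built artifact;
# the inline stand-in keeps this example runnable on its own.
import unittest

def parse_version(text: str) -> tuple:
    """Stand-in for functionality shipped in the built artifact."""
    return tuple(int(part) for part in text.strip().split("."))

class ParseVersionTests(unittest.TestCase):
    def test_simple_version(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_whitespace_is_ignored(self):
        self.assertEqual(parse_version(" 10.0 \n"), (10, 0))

if __name__ == "__main__":
    unittest.main()
```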
Much of what is considered an artifact is software documentation.
In end-user development an artifact is either an application or a complex data object that is created by an end-user without the need to know a general programming language. Artifacts describe automated behavior or control sequences, such as database requests or grammar rules, or user-generated content.
Artifacts vary in their maintainability. Maintainability is primarily affected by the role the artifact fulfills. The role can be either practical or symbolic. In the earliest stages of software development, artifacts may be created by the design team to serve a symbolic role to show the project sponsor how serious the contractor is about meeting the project's needs. Symbolic artifacts often convey information poorly, but are impressive-looking; they rarely enhance understanding. Generally speaking, Illuminated Scrolls are also considered unmaintainable due to the diligence it requires to preserve the symbolic quality. For this reason, once Illuminated Scrolls are shown to the project sponsor and approved, they are replaced by artifacts which serve a practical role. Practical artifacts usually need to be maintained throughout the project lifecycle, and, as such, are generally highly maintainable.
Artifacts are significant from a project management perspective as deliverables. The deliverables of a software project are likely to be the same as its artifacts with the addition of the software itself.
The sense of artifacts as byproducts is similar to the use of the term artifact in science to refer to something that arises from the process in hand rather than the issue itself, i.e., a result of interest that stems from the means rather than the end.
To collect, organize and manage artifacts, a software development folder may be utilized.
See also
Artifact (UML)
References
Further reading
Software development | Artifact (software development) | Technology,Engineering | 636 |
23,971 | https://en.wikipedia.org/wiki/Pilus | A pilus (Latin for 'hair'; plural: pili) is a hair-like cell-surface appendage found on many bacteria and archaea. The terms pilus and fimbria (Latin for 'fringe'; plural: fimbriae) can be used interchangeably, although some researchers reserve the term pilus for the appendage required for bacterial conjugation. All conjugative pili are primarily composed of pilin – fibrous proteins, which are oligomeric.
Dozens of these structures can exist on the bacterial and archaeal surface. Some bacteria, viruses or bacteriophages attach to receptors on pili at the start of their reproductive cycle.
Pili are antigenic. They are also fragile and constantly replaced, sometimes with pili of different composition, resulting in altered antigenicity. Specific host responses to old pili structures are not effective on the new structure. Recombination between the genes of some (but not all) pili codes for variable (V) and constant (C) regions of the pili (similar to immunoglobulin diversity). As the primary antigenic determinants, virulence factors and impunity factors on the cell surface of a number of species of gram-negative and some gram-positive bacteria, including Enterobacteriaceae, Pseudomonadaceae, and Neisseriaceae, there has been much interest in the study of pili as an organelle of adhesion and as a vaccine component. The first detailed study of pili was done by Brinton and co-workers, who demonstrated the existence of two distinct phases within one bacterial strain: piliated (p+) and non-piliated (p−).
Types by function
A few names are given to different types of pili by their function. The classification does not always overlap with the structural or evolutionary-based types, as convergent evolution occurs.
Conjugative pili
Conjugative pili allow for the transfer of DNA between bacteria, in the process of bacterial conjugation. They are sometimes called "sex pili", in analogy to sexual reproduction, because they allow for the exchange of genes via the formation of "mating pairs". Perhaps the most well-studied is the F-pilus of Escherichia coli, encoded by the F sex factor.
A sex pilus is typically 6 to 7 nm in diameter. During conjugation, a pilus emerging from the donor bacterium ensnares the recipient bacterium, draws it in close, and eventually triggers the formation of a mating bridge, which establishes direct contact and the formation of a controlled pore that allows transfer of DNA from the donor to the recipient. Typically, the DNA transferred consists of the genes required to make and transfer pili (often encoded on a plasmid), and so is a kind of selfish DNA; however, other pieces of DNA are often co-transferred and this can result in dissemination of genetic traits throughout a bacterial population, such as antibiotic resistance. The connection established by the F-pilus is extremely mechanically and thermochemically resistant thanks to the robust properties of the F-pilus, which ensures successful gene transfer in a variety of environments. Not all bacteria can make conjugative pili, but conjugation can occur between bacteria of different species.
Hyperthermophilic archaea encode pili structurally similar to the bacterial conjugative pili. However, unlike in bacteria, where conjugation apparatus typically mediates the transfer of mobile genetic elements, such as plasmids or transposons, the conjugative machinery of hyperthermophilic archaea, called Ced (Crenarchaeal system for exchange of DNA) and Ted (Thermoproteales system for exchange of DNA), appears to be responsible for the transfer of cellular DNA between members of the same species. It has been suggested that in these archaea the conjugation machinery has been fully domesticated for promoting DNA repair through homologous recombination rather than spread of mobile genetic elements.
Fimbriae
Fimbria (Latin for 'fringe'; plural: fimbriae) is a term used for a short pilus, an appendage that is used to attach the bacterium to a surface, sometimes also called an "attachment pilus" or adhesive pilus. The term "fimbria" can refer to many different (structural) types of pilus. Indeed, many different types of pili have been used for adhesion, a case of convergent evolution. The Gene Ontology system does not treat fimbriae as a distinct type of appendage, using the generic pilus (GO:0009289) type instead.
This appendage ranges from 3–10 nanometers in diameter and can be as much as several micrometers long. Fimbriae are used by bacteria to adhere to one another and to adhere to animal cells and some inanimate objects. A bacterium can have as many as 1,000 fimbriae. Fimbriae are only visible with the use of an electron microscope. They may be straight or flexible.
Fimbriae possess adhesins which attach them to some sort of substratum so that the bacteria can withstand shear forces and obtain nutrients. For example, E. coli uses them to attach to mannose receptors.
Some aerobic bacteria form a very thin layer at the surface of a broth culture. This layer, called a pellicle, consists of many aerobic bacteria that adhere to the surface by their fimbriae. Thus, fimbriae allow the aerobic bacteria to remain both on the broth, from which they take nutrients, and near the air.
Fimbriae are required for the formation of biofilm, as they attach bacteria to host surfaces for colonization during infection. Fimbriae are either located at the poles of a cell or are evenly spread over its entire surface.
This term was also used in a lax sense to refer to all pili, by those who use "pilus" to specifically refer to sex pili.
Types by assembling system or structure
Transfer
The Tra (transfer) family includes all known sex pili (as of 2010). They are related to the type IV secretion system (T4SS). They can be classified into the F-like type (after the F-pilus) and the P-like type. Like their secretion counterparts, the pilus injects material, DNA in this case, into another cell.
Type IV pili
Some pili, called type IV pili (T4P), generate motile forces. The external ends of the pili adhere to a solid substrate, either the surface to which the bacterium is attached or to other bacteria. Then, when the pili contract, they pull the bacterium forward like a grappling hook. Movement produced by type IV pili is typically jerky, so it is called twitching motility, as opposed to other forms of bacterial motility such as that produced by flagella. However, some bacteria, for example Myxococcus xanthus, exhibit gliding motility. Bacterial type IV pili are similar in structure to the component proteins of archaella (archaeal flagella), and both are related to the Type II secretion system (T2SS); they are unified by the group of Type IV filament systems. Besides archaella, many archaea produce adhesive type 4 pili, which enable archaeal cells to adhere to different substrates. The N-terminal alpha-helical portions of the archaeal type 4 pilins and archaellins are homologous to the corresponding regions of bacterial T4P; however, the C-terminal beta-strand-rich domains appear to be unrelated in bacterial and archaeal pilins.
Genetic transformation is the process by which a recipient bacterial cell takes up DNA from a neighboring cell and integrates this DNA into its genome by homologous recombination. In Neisseria meningitidis (also called meningococcus), DNA transformation requires the presence of short DNA uptake sequences (DUSs), 9–10-nucleotide sequences residing in coding regions of the donor DNA. Specific recognition of DUSs is mediated by a type IV pilin. Meningococcal type IV pili bind DNA through the minor pilin ComP via an electropositive stripe that is predicted to be exposed on the filament's surface. ComP displays an exquisite binding preference for selective DUSs. The distribution of DUSs within the N. meningitidis genome favors certain genes, suggesting that there is a bias for genes involved in genomic maintenance and repair.
This family was originally identified as "type IV fimbriae" by their appearance under the microscope. This classification survived as it happens to correspond to a clade. It has been shown that some archaeal type IV pilins can exist in 4 different conformations, yielding two pili with dramatically different structures. Remarkably, the two pili were produced by the same secretion machinery. However, which of the two pili is formed appears to depend on the growth conditions, suggesting that the two pili are functionally distinct.
Type 1 fimbriae
Another type are called type 1 fimbriae. They contain FimH adhesins at the "tips". The chaperone-usher pathway is responsible for moving many types of fimbriae out of the cell, including type 1 fimbriae and the P fimbriae.
Curli
"Gram-negative bacteria assemble functional amyloid surface fibers called curli." Curli are a type of fimbriae. Curli are composed of proteins called curlins. Some of the genes involved are CsgA, CsgB, CsgC, CsgD, CsgE, CsgF, and CsgG.
Virulence
Pili are responsible for virulence in the pathogenic strains of many bacteria, including E. coli, Vibrio cholerae, and many strains of Streptococcus. This is because the presence of pili greatly enhances bacteria's ability to bind to body tissues, which then increases replication rates and ability to interact with the host organism. If a species of bacteria has multiple strains but only some are pathogenic, it is likely that the pathogenic strains will have pili while the nonpathogenic strains do not.
The development of attachment pili may then result in the development of further virulence traits. Fimbriae are one of the primary mechanisms of virulence for E. coli, Bordetella pertussis, Staphylococcus and Streptococcus bacteria. Their presence greatly enhances the bacteria's ability to attach to the host and cause disease. Nonpathogenic strains of V. cholerae first evolved pili, allowing them to bind to human tissues and form microcolonies. These pili then served as binding sites for the lysogenic bacteriophage that carries the disease-causing toxin. The gene for this toxin, once incorporated into the bacterium's genome, is expressed when the gene coding for the pilus is expressed (hence the name "toxin-coregulated pilus").
See also
Bacterial nanowires
Flagellum
Sortase
P fimbriae
PilZ domain
References
External links
Organelles
Bacteria
Prokaryotic cell anatomy | Pilus | Biology | 2,375 |
6,836,056 | https://en.wikipedia.org/wiki/HBsAg | HBsAg (also known as the Australia antigen) is the surface antigen of the hepatitis B virus (HBV). Its presence in blood indicates existing hepatitis B infection.
Structure and function
The viral envelope of an enveloped virus has different surface proteins from the rest of the virus which act as antigens. These antigens are recognized by antibody proteins that bind specifically to one of these surface proteins.
The full-length HBsAg is called the L (for "large") form. It consists of a preS loop, a first transmembrane helix (TM1), a cytosolic loop (CYL), another TM helix (TM2), an antigenic loop (AGL), followed by two TM helices (TM3 and TM4). The preS loop can either be on the outside (lumen), or be located in the cytosol with the TM1 helix not actually penetrating the membrane. The M ("medium") form has a truncated preS; the part of preS unique to L is called preS1, while the part shared by L and M is called preS2. preS2 is always located in the lumen. The S ("small") form has no preS at all.
HBsAg forms the shell of the virus. Furthermore, preS1 contains parts that are recognized by NTCP, the cellular receptor of the virus, which causes the virus to bind tightly to the cell. How the virus induces the cell to take it up by endocytosis after binding is unknown. HBsAg also serves to release the contents of the virion into the cell through membrane fusion; the part responsible for fusion is likewise located in preS1.
HBsAg self-assembles into viral shells even when no contents are present. Such an empty shell is called a virus-like particle or a small spherical subviral particle.
Immunoassay
Today, these antigen proteins can be genetically manufactured (e.g., in transgenic E. coli) to produce material for a simple antigen test, which detects the presence of HBV.
It is present in the sera of patients with viral hepatitis B (with or without clinical symptoms). Patients who developed antibodies against HBsAg (anti-HBsAg seroconversion) are usually considered non-infectious. HBsAg detection by immunoassay is used in blood screening, to establish a diagnosis of hepatitis B infection in the clinical setting (in combination with other disease markers) and to monitor antiviral treatment.
In histopathology, the presence of HBsAg is more commonly demonstrated by the use of the Shikata orcein technique, which uses a natural dye to bind to the antigen in infected liver cells.
Positive HBsAg tests can be due to recent vaccination against Hepatitis B virus but this positivity is unlikely to persist beyond 14 days post-vaccination.
Applications
HBsAg made through recombinant DNA is used to make the hepatitis B vaccine. It has a very good efficacy of about 95%, with protection lasting for more than 30 years, even after anti-HBsAg antibody titers have fallen.
The RTS,S also makes use of HBsAg. It is a mixture of a version of malaria surface antigen grafted to HBsAg (RTS) and ordinary HBsAg (S), both made through recombinant DNA. Much like ordinary HBsAg, these two are able to assemble into virus-like particles that are soluble in water.
History
It is commonly referred to as the Australia Antigen. This is because it was first isolated by the American research physician and Nobel Prize winner Baruch S. Blumberg in the serum of an Australian Aboriginal person. It was discovered to be part of the virus that caused serum hepatitis by virologist Alfred Prince in 1968.
Heptavax, a "first-generation" hepatitis B vaccine in the 1980s, was made from HBsAg extracted from the blood plasma of hepatitis patients. More modern vaccines are made from recombinant HBsAg grown in yeast.
See also
HBcAg
HBeAg
References
Viral structural proteins
Hepatitis B virus
Antigens | HBsAg | Chemistry | 880 |
1,569,192 | https://en.wikipedia.org/wiki/HD%2027894 | HD 27894 is a single star with a system of orbiting exoplanets, located in the southern constellation of Reticulum. It is too faint to be seen with the naked eye at an apparent visual magnitude of 9.36. This system lies at a distance of 142.5 light years from the Sun, as determined via parallax measurements, and is drifting further away with a radial velocity of 83 km/s.
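The quoted distance follows from the standard trigonometric-parallax relation: the distance in parsecs is the reciprocal of the parallax in arcseconds. In the sketch below, the parallax value (~22.9 mas) is back-derived from the article's 142.5 light-year figure rather than taken from a catalogue, so treat it as an assumption for illustration.

```python
# Distance from trigonometric parallax: d [parsec] = 1 / p [arcsec].
# The parallax here is back-derived from the article's 142.5 ly figure,
# not a catalogued measurement.

LY_PER_PARSEC = 3.2616

parallax_mas = 22.9                            # milliarcseconds (assumed)
distance_pc = 1.0 / (parallax_mas / 1000.0)    # ~43.7 pc
distance_ly = distance_pc * LY_PER_PARSEC      # ~142.4 ly

print(f"{distance_pc:.1f} pc = {distance_ly:.1f} ly")
```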
The spectrum of HD 27894 presents as a K-type main-sequence star, an orange dwarf, with a stellar classification of K2 V. This is a quiescent solar-type star that displays no significant magnetic activity in its chromosphere and is spinning slowly with a rotation period of roughly 44 days. The abundance of iron in the star is much higher than in the Sun, an indicator that it is metal-rich. It has 83% of the mass of the Sun and 79% of the Sun's radius. The star is radiating 33% of the luminosity of the Sun from its photosphere at an effective temperature of 4,923 K.
Planetary system
In 2005, the Geneva Extrasolar Planet Search Team announced the discovery of an extrasolar planet orbiting the star. In 2017, the discovery of two additional exoplanets was announced. One is very close to the star like the one discovered earlier, while the other one orbits the star at a much larger distance. It is the first system where such a large gap between orbital distances has been found. In 2022, the inclination and true mass of HD 27894 d were measured via astrometry. The study only found strong evidence for planets b and d.
See also
List of extrasolar planets
References
K-type main-sequence stars
Planetary systems with three confirmed planets
Reticulum
Durchmusterung objects
027894
020277 | HD 27894 | Astronomy | 383 |
3,656,208 | https://en.wikipedia.org/wiki/Space%20Shuttle%20design%20process | Before the Apollo 11 Moon landing in 1969, NASA began studies of Space Shuttle designs as early as October 1968. The early studies were denoted "Phase A", and in June 1970, "Phase B", which were more detailed and specific. The primary intended use of the Phase A Space Shuttle was supporting the future space station, ferrying a minimum crew of four together with cargo, and being able to be rapidly turned around for future flights, with larger payloads like space station modules being lifted by the Saturn V.
Two designs emerged as front-runners. One was designed by engineers at the Manned Spaceflight Center, and championed especially by George Mueller. This was a two-stage system with delta-winged spacecraft, and generally complex. An attempt to re-simplify was made in the form of the DC-3, designed by Maxime Faget, who had designed the Mercury capsule among other vehicles. Numerous offerings from a variety of commercial companies were also offered but generally fell by the wayside as each NASA lab pushed for its own version.
All of this was taking place in the midst of other NASA teams proposing a wide variety of post-Apollo missions, a number of which would cost as much as Apollo or more. As each of these projects fought for funding, the NASA budget was at the same time being severely constrained. Three were eventually presented to United States Vice President Spiro Agnew in 1969. The shuttle project rose to the top, largely due to tireless campaigning by its supporters. By 1970 the shuttle had been selected as the one major project for the short-term post-Apollo time frame.
When funding for the program came into question, there were concerns that the project might be canceled. This became especially pressing as it became clear that the Saturn V would no longer be produced, which meant that the payload to orbit needed to be increased in both mass and size to supplement the Saturn V's heavy-lift capabilities for planned interplanetary probes and space station modules, so a bigger and costlier vehicle was needed during Phase B. Therefore, NASA tried to interest the US Air Force and a variety of other customers in using the shuttle for their missions as well. To lower the development costs of the proposed designs, boosters were added, a throw-away fuel tank was adopted, and many other changes were made that greatly lowered the reusability and greatly added to the vehicle and operational costs.
Decision-making process
In 1969, United States Vice President Spiro Agnew chaired the National Aeronautics and Space Council, which discussed post-Apollo options for human space activities. The recommendations of the Council would heavily influence the decisions of the administration. The Council considered four major options:
A human mission to Mars
Follow-on lunar program
A low Earth orbital infrastructure program
Discontinuing human space activities
Based on the advice of the Space Council, President Nixon made the decision to pursue the low Earth orbital infrastructure option. This program mainly consisted of the construction of a space station, along with the development of a Space Shuttle. Funding restrictions precluded pursuing the development of both programs simultaneously, however. NASA chose to develop the Space Shuttle program first, and then planned to use the shuttle in order to construct and service a space station.
Shuttle design debate
During the early shuttle studies, there was a debate over the optimal shuttle design that best-balanced capability, development cost, and operational cost. Initially, a fully reusable design was preferred. This involved a very large winged crewed booster which would carry a smaller winged crewed orbiter. The booster vehicle would lift the orbiter to a certain altitude and speed, then separate. The booster would return and land horizontally, while the orbiter continued into low Earth orbit. After completing its mission, the winged orbiter would re-enter and land horizontally on a runway. The idea was that full reusability would promote lower operating costs.
However, further studies showed a huge booster was needed to lift an orbiter with the desired payload capability. In space and aviation systems, the cost is closely related to mass, so this meant the overall vehicle cost would be very high. Both booster and orbiter would have rocket engines plus jet engines for use within the atmosphere, plus separate fuel and control systems for each propulsion mode. In addition, there were concurrent discussions about how much funding would be available to develop the program.
Another competing approach was maintaining the Saturn V production line and using its large payload capacity to launch a space station in a few payloads rather than many smaller shuttle payloads. A related concept was servicing the space station using the Air Force Titan III-M to launch a larger Gemini capsule, called "Big Gemini", or a smaller "glider" version of the shuttle with no main engines and a payload bay.
The shuttle supporters answered that given enough launches, a reusable system would have lower overall costs than disposable rockets. Dividing total program costs over a given number of launches means that a high shuttle launch rate would result in lower per-launch costs. This in turn would make the shuttle cost-competitive with or superior to expendable launchers. Some theoretical studies mentioned 55 shuttle launches per year; however, the final design chosen did not support that launch rate. In particular, the maximum external tank production rate was limited to 24 tanks per year at NASA's Michoud Assembly Facility.
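The supporters' argument is simple amortization arithmetic: spreading a fixed program cost over more flights drives the average cost per launch down toward the marginal cost of one flight. The sketch below uses invented round numbers, not historical figures, purely to show the shape of the argument.

```python
# Amortization of fixed program costs over the launch count.
# Dollar figures are invented for illustration, not historical data.

def cost_per_launch(fixed_cost: float, marginal_cost: float, launches: int) -> float:
    """Average cost per launch once fixed costs are spread over all launches."""
    return fixed_cost / launches + marginal_cost

FIXED = 10_000_000_000   # development plus standing infrastructure (assumed)
MARGINAL = 50_000_000    # incremental cost of flying one mission (assumed)

for n in (5, 24, 55):    # 24/yr: external-tank limit; 55/yr: study figure
    print(f"{n:>2} launches/yr -> ${cost_per_launch(FIXED, MARGINAL, n):,.0f} per launch")
```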
The combined space station and Air Force payload requirements were not sufficient to reach desired shuttle launch rates. Therefore, the plan was for all future U.S. space launches—space stations, Air Force, commercial satellites, and scientific research—to use only the Space Shuttle. Most other expendable boosters would be phased out.
The reusable booster was eventually abandoned due to several factors: high price (combined with limited funding), technical complexity, and development risk. Instead, a partially (not fully) reusable design was selected, where an external propellant tank was discarded for each launch, and the booster rockets and shuttle orbiter were refurbished for reuse.
Initially, the orbiter was to carry its own liquid propellant. However, studies showed carrying the propellant in an external tank allowed a larger payload bay in an otherwise much smaller craft. It also meant throwing away the tank after each launch, but this was a relatively small portion of operating costs.
Earlier designs assumed the winged orbiter would also have jet engines to assist maneuvering in the atmosphere after re-entering. However NASA ultimately chose a gliding orbiter, based partially on experience from previous rocket-then-glide vehicles such as the X-15 and lifting bodies. Omitting the jet engines and their fuel would reduce complexity and increase payload.
Another decision was the size of the crew. Some said that the shuttle should not carry more than four, the most that could use ejection seats. A commander, pilot, mission specialist, and payload specialist were sufficient for any mission. NASA expected to carry more space flight participants as payload specialists, so designed the vehicle to carry more.
The last remaining debate was over the nature of the boosters. NASA examined four solutions to this problem: development of the existing Saturn lower stage, simple pressure-fed liquid-fuel engines of a new design, a large single solid rocket, or two (or more) smaller ones. Engineers at NASA's Marshall Space Flight Center (where the Saturn V development was managed) were particularly concerned about solid rocket reliability for crewed missions.
Air Force involvement
During the mid-1960s the United States Air Force had both of its major piloted space projects, X-20 Dyna-Soar and Manned Orbiting Laboratory, canceled. This demonstrated its need to cooperate with NASA to place military astronauts and payloads in orbit. The Air Force launched more than 200 satellite reconnaissance missions between 1959 and 1970, and the military's large volume of payloads would be valuable in making the shuttle more economical. In turn, by serving Air Force needs, the Shuttle became a truly national system, carrying all military as well as civilian payloads.
NASA sought Air Force support for the shuttle. After the Six-Day War and the Soviet invasion of Czechoslovakia exposed limitations in the United States satellite reconnaissance network, Air Force involvement emphasized the ability to launch spy satellites southward into polar orbit from Vandenberg AFB. This required higher energies than for lower inclination orbits. However, to be able to return to Earth after one orbit, despite the Earth rotating 1,000 miles beneath the orbital track, required a larger delta wing than the earlier simple "DC-3" shuttle carried. In addition, the straight-wing configuration favored by Max Faget would have required the vehicle to fly in a stall for most of the reentry and had issues during launch aborts, a situation NASA disliked. The notion that the delta wing was adopted solely at the USAF's demand is therefore a common misconception.
Despite the potential benefits for the Air Force, the military was satisfied with its expendable boosters, and had less need for the shuttle than NASA. Because the space agency needed outside support, the Defense Department (DoD) and the National Reconnaissance Office (NRO) gained primary control over the design process. For example, NASA planned a smaller cargo bay, but NRO specified a longer and wider bay because it expected future intelligence satellites to become larger. When Faget again proposed a narrower payload bay, the military almost immediately insisted on retaining the full width. The Air Force also gained the equivalent of the use of one of the shuttles for free despite not paying for the shuttle's development or construction. In exchange for the NASA concessions, the Air Force testified to the Senate Space Committee on the shuttle's behalf in March 1971.
As another incentive for the military to use the shuttle, Congress reportedly told DoD that it would not pay for any satellites not designed to fit into the shuttle cargo bay. Although NRO did not redesign existing satellites for the shuttle, the vehicle retained the ability to retrieve large cargos such as the KH-9 HEXAGON from orbit for refurbishment, and the agency studied resupplying the satellite in space.
Potential military use of the shuttle—including the possibility of using it to verify Soviet compliance with the SALT II treaty—probably caused President Jimmy Carter to not cancel the shuttle in 1979 and 1980, when the program was years behind schedule and hundreds of millions of dollars over budget. The Air Force planned on having its own fleet of shuttles and re-built a separate launch facility at Vandenberg, Space Launch Complex Six (SLC-6), originally derived from the canceled Manned Orbiting Laboratory program. However, for various reasons, due in large part to the loss of Space Shuttle Challenger on January 28, 1986, work on SLC-6 was eventually discontinued and no shuttle launches from that location ever took place. SLC-6 was eventually used for launching the Lockheed Martin-built Athena expendable launch vehicles, which included the successful IKONOS commercial Earth observation satellite in September 1999, before being reconfigured once again to handle the new generation of Boeing Delta IVs. The first launch of the Delta IV Heavy from SLC-6 occurred in June 2006, launching NROL-22, a classified satellite for the U.S. National Reconnaissance Office (NRO).
Final design
While NASA would likely have chosen liquid boosters had it had complete control over the design, the Office of Management and Budget insisted on less expensive solid boosters due to their lower projected development costs. While a liquid-fueled booster design provided better performance, lower per-flight costs, less environmental impact and less developmental risk, solid boosters were seen as requiring less funding to develop at a time when the Shuttle program had many different elements competing for limited development funds. The final design which was selected was a winged orbiter with three liquid-fueled engines, a large expendable external tank which held liquid propellant for these engines, and two reusable solid rocket boosters.
In the spring of 1972 Lockheed Aircraft, McDonnell Douglas, Grumman, and North American Rockwell submitted proposals to build the shuttle. The NASA selection group thought that Lockheed's shuttle was too complex and too expensive, and the company had no experience with building crewed spacecraft. McDonnell Douglas's was too expensive and had technical issues. Grumman had an excellent design which also seemed too expensive. North American's shuttle had the lowest cost and most realistic cost projections, its design was the easiest for ongoing maintenance, and the Apollo 13 accident involving North American's command and service module demonstrated its experience with electrical system failures. NASA announced its choice of North American on July 26, 1972.
The Space Shuttle program used the HAL/S programming language. The first microprocessor used was the 8088 and later the 80386. The Space Shuttle orbiter avionics computer was the IBM AP-101.
Retrospection
Opinions differ on the lessons of the Shuttle. Development ran close to the original cost and time estimates given to President Richard M. Nixon in 1971, though the final cost in 1971 dollars exceeded the original $5.15 billion estimate. The operational costs, flight rate, payload capacity, and reliability were different than anticipated, however.
See also
Buran (spacecraft)
Single-stage-to-orbit
Space Shuttle abort modes
Space Shuttle program
SpaceX Starship design process
Studied Space Shuttle designs
References
Further reading
Dr. Wernher Von Braun – "The Spaceplane that can put YOU in orbit" (Popular Science, July 1970)
External links
Astronautix Space Shuttle article
NASA: The Space Shuttle Decision
INTRODUCTION TO FUTURE LAUNCH VEHICLE PLANS [1963–2001], M. Lindroos
10 Space Shuttles which never flew (Lockheed Starclipper, Chrysler SERV, Phase B Shuttles, Rockwell C-1057, Shuttle C, Air Launched Sortie Vehicle (ALSV), Hermes, Buran, Shuttle II, Lockheed Martin VentureStar)
Spacecraft design
design | Space Shuttle design process | Engineering | 2,843 |
52,071,403 | https://en.wikipedia.org/wiki/NGC%20314 | NGC 314 is a lenticular galaxy in the constellation Sculptor. It was discovered on September 27, 1834 by John Herschel.
References
0314
18340927
Sculptor (constellation)
Lenticular galaxies
003395 | NGC 314 | Astronomy | 45 |
2,959,989 | https://en.wikipedia.org/wiki/Phosgenite | Phosgenite is a rare mineral consisting of lead carbonate chloride, (PbCl)2CO3. The tetragonal crystals are prismatic or tabular in habit: they are usually colorless and transparent, and have a brilliant adamantine lustre. Sometimes the crystals have a curious helical twist about the tetrad or principal axis. The hardness is 3 and the specific gravity 6.3. The mineral is rather sectile, and consequently was earlier known as corneous lead (German: Hornblei).
Name and occurrence
The name phosgenite was given by August Breithaupt in 1820, after phosgene, carbon oxychloride, because the mineral contains the elements carbon, oxygen, and chlorine.
It was found associated with anglesite and matlockite in cavities within altered galena in a lead mine at Cromford, near Matlock: hence its common name cromfordite. Crystals are also found in galena at Monteponi near Iglesias in Sardinia, and near Dundas in Tasmania. It has also been reported from Laurium, Greece; Tarnowitz, Poland; the Altai district, Siberia; the Touissit mine, near Oujda, Morocco; Sidi Amor ben Salem, Tunisia; Tsumeb, Namibia; Broken Hill, New South Wales; and Boleo, near Santa Rosalía, Baja California Sur. In the US it has been reported from the Terrible mine, Custer County, Colorado; the Stevenson-Bennett mine, Organ Mountains, Doña Ana County, New Mexico; and the Mammoth mine, Tiger, Pinal County, Arizona.
Crystals of phosgenite, and also of the corresponding bromine compound PbBr2CO3, have been prepared artificially.
See also
Barstowite, another lead chloride carbonate
References
Carbonate minerals
Halide minerals
Lead minerals
Luminescent minerals
Minerals in space group 127
Tetragonal minerals | Phosgenite | Chemistry | 401 |
23,774,097 | https://en.wikipedia.org/wiki/Placebo%20button | A placebo button is a push-button or other control that appears to have functionality but has no physical effect when pressed. Such buttons can appear to work, by lighting up or otherwise reacting, which rewards the user by giving them an illusion of control. They are commonly placed in situations where it would have once been useful to have such a button but the system now operates automatically, such as a manual thermostat in a temperature-regulated office. Were the control removed entirely, some users would feel frustrated at the awareness they were not in control.
Office thermostats
It has been reported that the temperature set point adjustment on thermostats in many office buildings in the United States is non-functional, installed to give tenants' employees a similar illusion of control. In some cases, they act as input devices to a central control computer; in others, they serve no purpose other than to keep employees contented.
A common implementation in buildings with an HVAC central control computer is to allow the thermostats to provide a graded level of control. Temperatures in such a system are governed by the central controller's settings, which are typically set by the building maintenance staff or HVAC engineers. The individual thermostats in various offices provide the controller with a temperature reading of the zone (provided the thermocouples are not installed as inline duct sensors), but also serve as modifiers for the central controller's set point. While the thermostat dial may display a wide range of settings, its actual effect is to apply "pressure" to the central controller's set point. Thus, setting the thermostat to its maximum warm or cool setting will deflect the output temperature away from the controller's setting, generally by only a few degrees Fahrenheit (about two degrees Celsius) at most. So, although the thermostat can be turned to its lowest marking, in reality it may shift the HVAC system's output temperature only slightly below the central set point. In this case, the thermostat has a "swing" of 2 °C (4 °F): it can alter the produced temperature from the main controller's set point by a maximum of 1 °C (2 °F) in either direction. Consequently, while not purely a placebo, the thermostat in this setup does not provide the level of control that is expected, but the combination of the lower setting number and the feeling of a slight change in temperature can induce the office occupants to believe that the temperature was significantly decreased.
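As a rough illustration of the graded-control scheme just described, the following sketch (hypothetical function name; the 1 °C swing is taken from the example in the preceding paragraph) clamps the occupant's requested temperature to the narrow band the central controller actually permits:
def effective_setpoint(central_c, requested_c, swing_c=1.0):
    # Clamp the occupant's request to within +/- swing_c of the central set point.
    return max(central_c - swing_c, min(central_c + swing_c, requested_c))
# Dialing the thermostat far below a 21 C central set point moves the output
# by only one degree:
print(effective_setpoint(21.0, 16.0))  # -> 20.0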
Placebo thermostats work on two psychological principles, which are classical conditioning and the placebo effect. First, placebo thermostats work in accordance with classical conditioning. Classical conditioning was first discovered by Ivan Pavlov and is a type of learning which pairs a stimulus with a physiological response. Applied to placebo thermostats, this is when the employee adjusts the thermostat and hears the noise of hissing or a fan running and consequently physically feels more content. This is due to the countless trials involving the thermostat in their own home, which actually works. The employee has paired the sound of hissing or a fan running to being more physically content due to the actual temperature change and therefore when they experience the noise at work they feel the same way even though there is no change in temperature. As long as individuals get the result they are looking for (noise associated with temperature change) they will continue with the practice (changing the placebo thermostat). Additionally, placebo thermostats work due to the placebo effect. The placebo effect works on the basis that individuals will experience what they believe they will experience. This is attributed to Expectancy theory, which states that the placebo effect is mediated by overt expectancies. The most common example is in medical testing: inactive sugar pills are given to patients who are told they are actually medicine. Some patients will experience relief from symptoms regardless. According to expectancy theory, if people believe they are going to experience a temperature change after changing a placebo thermostat they may psychologically experience one without an actual change happening. Both psychological concepts of classical conditioning and the placebo effect may play a role in the effectiveness of placebo thermostats.
Walk buttons
Many walk buttons at pedestrian crossings were once functional in New York City, but now serve as placebo buttons.
In the United Kingdom and Hong Kong, pedestrian push-buttons on crossings using the Split Cycle Offset Optimisation Technique may or may not have any real effect on crossing timings, depending on their location and the time of day, and some junctions may be completely automated, with push-buttons that have no effect at all. In other areas the buttons have an effect only during the night. Some do not affect the actual light timing but require the button to have been pressed before the pedestrian green light is shown.
London Underground train door buttons
London Underground 1992 stock, 1995 stock and 1996 stock include door control buttons. The doors are normally driver operated, but a switch in the driving cab can hand control to passengers once the driver activates the buttons, much like mainline railway stock. In addition, London Underground D stock used on the District line were built with door open buttons which worked much like those of the 1992, 1995 and 1996 stock. These buttons were subsequently removed when the stock was refurbished.
See also
Illusion of control
Affordance
References
Deception
Magical thinking
Switches
Pedestrian crossings
User interfaces
User interface techniques | Placebo button | Technology | 1,129 |
42,568,964 | https://en.wikipedia.org/wiki/Korean%20Standards%20Association | The KSA, formerly known as Korean Standards Association (,) is a public organization that is under the Ministry of Trade, Industry and Energy (MOTIE).of the Republic of Korea.
The KSA was established in 1962 pursuant to Article 32 of the Industrial Standardization Act.
The Chairman and CEO is Lee Sang-jin. At the end of fiscal year 2017, the organization reported a sales profit.
See also
Government
Economy of South Korea
Ministry of Trade, Industry and Energy
KATS and KASTO, other Korean standards associations
References
External links
Korea Accreditation Board
Standards organizations in South Korea
Certification marks
Product certification
ISO member bodies
Government agencies of South Korea
Government agencies established in 1962
1962 establishments in South Korea | Korean Standards Association | Mathematics | 141 |
6,533,332 | https://en.wikipedia.org/wiki/Watercut%20meter | A water cut meter measures the water content (cut) of crude oil and hydrocarbons as they flow through a pipeline. While the title "Water cut" has been traditionally used, the current API naming is Water Cut Analyser or WCA as OWD or On-Line Water Determination is trademarked. The API and ISO committees have not yet come out with a standard for these devices. Though the API is currently in late stages of balloting a draft. There are however standards in place for fiscal automatic sampling of crude oil namely API 8.2 and ISO 3171.
Water cut meters are typically used in the petroleum industry to measure the water cut of oil flowing from a well, produced oil from a separator, crude oil transfer in pipelines and in loading tankers.
Several technologies are used. The main ones are dielectric measurements at radio or microwave frequencies and near-infrared (NIR) measurements; less common are gamma-ray-based instruments.
The water cut is the ratio of water produced compared to the volume of total liquids produced from an oil well. The water cut in waterdrive reservoirs can reach very high values.
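As a simple numeric illustration of this definition, the sketch below (hypothetical function name) computes the water cut from produced volumes:
def water_cut(water_volume, oil_volume):
    # Water cut: produced water as a fraction of total produced liquids.
    return water_volume / (water_volume + oil_volume)
# A well producing 80 units of water and 20 units of oil has an 80% water cut:
print(water_cut(80.0, 20.0))  # -> 0.8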
References
Measuring instruments | Watercut meter | Technology,Engineering | 231 |
51,319,038 | https://en.wikipedia.org/wiki/Bigyromonada | Bigyromonada is a recently described non-photosynthetic lineage of stramenopiles that at present contains two classes.
Description
Bigyromonads are characterized by biciliate cells that feed on bacteria through phagotrophy. They are marine organisms.
Taxonomy
Subphylum Bigyromonada Cavalier-Smith 1998
Class Developea Karpov & Aleoshin 2016 ex Cavalier-Smith 2017
Order Developayellales Doweld 2001 [Developayellida Cavalier-Smith 1987]
Family Developayellaceae Cavalier-Smith 1997 [Developayellidae]
Developayella Tong 1995
Developayella elegans Tong 1995
Develorapax Karpov & Aleoshin 2016
Develorapax marinus Karpov & Aleoshin 2016
Class Pirsonea
Order Pirsoniales Cavalier-Smith 1998 [Pirsoniida Cavalier-Smith & Chao 2006]
Family Pirsoniaceae Cavalier-Smith 1998
Pirsonia Schnepf, Debres & Elbrachter 1990
P. diadema Kühn 1996
P. eucampiae Kühn 1996
P. formosa Kühn 1996
P. guinardie Schnepf, Debres & Elbrachter 1990
P. mucosa Kühn 1996
P. punctigerae
P. verrucosa Kühn 1996
References
External links
Heterokont classes
Heterokonts | Bigyromonada | Biology | 290 |
52,772,188 | https://en.wikipedia.org/wiki/Drosomycin | Drosomycin is an antifungal peptide from Drosophila melanogaster and was the first antifungal peptide isolated from insects. Drosomycin is induced by infection by the Toll signalling pathway, while expression in surface epithelia like the respiratory tract is instead controlled by the immune deficiency pathway (Imd). This means that drosomycin, alongside other antimicrobial peptides (AMPs) such as cecropins, diptericin, drosocin, metchnikowin and attacin, serves as a first line defence upon septic injury. However drosomycin is also expressed constitutively to a lesser extent in different tissues and throughout development.
Structure
Drosomycin is a 44-residue defensin-like peptide containing four disulfide bridges. These bridges stabilize a structure involving one α-helix and three β-sheets. Owing to these four disulfide bridges, drosomycin is resistant to degradation and the action of proteases. The cysteine-stabilized αβ motif of drosomycin is also found in Drosophila defensin and some plant defensins. Drosomycin has greater sequence similarity with these plant defensins (up to 40%) than with other insect defensins. The structure was determined in 1997 by Landon and colleagues. The αβ motif of drosomycin is also found in a scorpion neurotoxin, and drosomycin potentiates the action of this neurotoxin on nerve excitation.
Drosomycin multigene family
At the nucleotide level, drosomycin is a 387 bp-long gene (Drs) which lies on Muller element 3L, very near six other drosomycin-like (Drsl) genes. These various drosomycins are referred to as the drosomycin multigene family. However, only drosomycin itself is part of the systemic immune response, while the other genes are regulated in different fashions. The antimicrobial activity of these various drosomycin-like peptides also differs. In 2015, Gao and Zhu found that some of these genes have been duplicated in certain Drosophila species: D. takahashii, for instance, has 11 genes in the drosomycin multigene family in total.
Function
Drosomycin appears to have three major effects on fungi: partial lysis of hyphae; inhibition of spore germination (at higher concentrations of drosomycin); and delayed hyphal growth leading to hyphal branching (at lower concentrations of drosomycin). The exact mechanism of action against fungi remains to be clarified. In 2019, Hanson and colleagues generated the first drosomycin mutant, finding that flies lacking drosomycin were indeed more susceptible to fungal infection.
References
Peptides
Antifungals
Defensins | Drosomycin | Chemistry | 631 |
52,048,474 | https://en.wikipedia.org/wiki/NGC%20306 | NGC 306 is an open cluster in the Small Magellanic Cloud. It is located in the constellation Tucana. It was discovered on October 4, 1836, by John Herschel.
References
0306
18361004
Tucana
Small Magellanic Cloud
Open clusters | NGC 306 | Astronomy | 55 |
18,630,254 | https://en.wikipedia.org/wiki/Hyacinthe%20de%20Valroger | Hyacinthe de Valroger, CO (6 January 1814, at Caen – 10 October 1876), was a French Catholic priest and Oratorian.
Career
As a young man, Valroger first studied medicine, but later entered the seminary and was ordained a priest in 1837, after which he was made Director of the minor seminary of Bayeux. In 1847 he became a titular canon of Bayeux Cathedral. In 1852 he joined Joseph Gratry in the work of restoring the French Oratory, where he became professor of theology, Master of novices and assistant Superior General.
De Valroger believed that the theory of evolution could be reconciled with the Book of Genesis. He was critical of Darwinism but did not entirely reject evolution. He has been described as a "theistic vitalist".
He criticized natural theories of the origin of life. He embraced a spiritual theory of spontaneous generation. He argued against the idea of abiogenesis, claiming that there was an intervention of "intelligence" (which he equated with God) acting upon the organization of living matter.
Selected publications
Besides many articles in Catholic reviews he published:
"Etudes critiques sur le rationalisme contemporain" (Paris, 1846);
"Essai sur la crédibilité de l'histoire évangélique en réponse au Dr. Strauss' (Paris, 1847);
"Du christianisme et du paganisme dans l'enseignement" (Paris, 1852);
"Introduction historique et critique aux livres du Nouveau Testament" (Paris, 1861);
"L'âge du monde et de l'homme d'après la Bible et l'église" (Paris, 1869);
"La genèse des espèces, études philosophiques et religieuses" (Genesis of Species, Philosophical and Religious Studies on Natural History and Contemporary Naturalists, Paris, 1873);
"Pensées philosophiques et religieuses du Comte de Maistre" (Paris, 1879).
References
Attribution
1814 births
1876 deaths
Clergy from Caen
19th-century French Roman Catholic priests
French Oratory
Theistic evolutionists
Vitalists | Hyacinthe de Valroger | Biology | 454 |
78,153,179 | https://en.wikipedia.org/wiki/Examples%20of%20anonymous%20functions |
Examples of anonymous functions
Numerous languages support anonymous functions, or something similar.
APL
Only some dialects support anonymous functions, either as dfns, in the tacit style or a combination of both.
f←{⍵×⍵} ⍝ As a dfn
f 1 2 3
1 4 9
g←⊢×⊢ ⍝ As a tacit 3-train (fork)
g 1 2 3
1 4 9
h←×⍨ ⍝ As a derived tacit function
h 1 2 3
1 4 9
C (non-standard extension)
Anonymous functions are not supported by the standard C programming language, but are supported by some C dialects, such as GCC and Clang.
GCC
The GNU Compiler Collection (GCC) supports a form of anonymous functions, achieved by combining nested functions with statement expressions. It has the form:
( { return_type anonymous_functions_name (parameters) { function_body } anonymous_functions_name; } )
The following example works only with GCC. Because of how macros are expanded, the l_body cannot contain any commas outside of parentheses; GCC treats the comma as a delimiter between macro arguments.
The argument l_ret_type can be removed if __typeof__ is available; in the example below using __typeof__ on array would return testtype *, which can be dereferenced for the actual value if needed.
#include <stdio.h>
/* this is the definition of the anonymous function */
#define lambda(l_ret_type, l_arguments, l_body) \
({ \
l_ret_type l_anonymous_functions_name l_arguments \
l_body \
&l_anonymous_functions_name; \
})
#define forEachInArray(fe_arrType, fe_arr, fe_fn_body) \
{ \
int i=0; \
for(;i<sizeof(fe_arr)/sizeof(fe_arrType);i++) { fe_arr[i] = fe_fn_body(&fe_arr[i]); } \
}
typedef struct
{
int a;
int b;
} testtype;
void printout(const testtype * array)
{
int i;
for ( i = 0; i < 3; ++ i )
printf("%d %d\n", array[i].a, array[i].b);
printf("\n");
}
int main(void)
{
testtype array[] = { {0,1}, {2,3}, {4,5} };
printout(array);
/* the anonymous function is given as function for the foreach */
forEachInArray(testtype, array,
lambda (testtype, (void *item),
{
int temp = (*( testtype *) item).a;
(*( testtype *) item).a = (*( testtype *) item).b;
(*( testtype *) item).b = temp;
return (*( testtype *) item);
}));
printout(array);
return 0;
}
Clang (C, C++, Objective-C, Objective-C++)
Clang supports anonymous functions, called blocks, which have the form:
^return_type ( parameters ) { function_body }
The type of the blocks above is return_type (^)(parameters).
Using the aforementioned blocks extension and Grand Central Dispatch (libdispatch), the code could look simpler:
#include <stdio.h>
#include <dispatch/dispatch.h>
int main(void) {
void (^count_loop)() = ^{
for (int i = 0; i < 100; i++)
printf("%d\n", i);
printf("ah ah ah\n");
};
/* Pass as a parameter to another function */
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), count_loop);
/* Invoke directly */
count_loop();
return 0;
}
The code with blocks should be compiled with -fblocks and linked with -lBlocksRuntime
C++ (since C++11)
C++11 supports anonymous functions (technically function objects), called lambda expressions, which have the form:
[ captures ] ( params ) specs requires (optional) { body }
where "specs" is of the form "specifiers exception attr trailing-return-type in that order; each of these components is optional". If it is absent, the return type is deduced from return statements as if for a function with declared return type auto.
This is an example lambda expression:
[](int x, int y) { return x + y; }
C++11 also supports closures, here called captures. Captures are defined between square brackets [ and ] in the declaration of the lambda expression. The mechanism allows these variables to be captured by value or by reference. The following examples demonstrate this:
[] // No captures, the lambda is implicitly convertible to a function pointer.
[x, &y] // x is captured by value and y is captured by reference.
[&] // Any external variable is implicitly captured by reference if used
[=] // Any external variable is implicitly captured by value if used.
[&, x] // x is captured by value. Other variables will be captured by reference.
[=, &z] // z is captured by reference. Other variables will be captured by value.
Variables captured by value are constant by default. Adding mutable after the parameter list makes them non-constant.
C++14 and newer versions support init-capture, for example:
std::unique_ptr<int> ptr = std::make_unique<int>(42);
[ptr]{ /* ... */ }; // copy assignment is deleted for a unique pointer
[ptr = std::move(ptr)]{ /* ... */ }; // ok
auto counter = [i = 0]() mutable { return i++; }; // mutable is required to modify 'i'
counter(); // 0
counter(); // 1
counter(); // 2
The following two examples demonstrate use of a lambda expression:
std::vector<int> some_list{ 1, 2, 3, 4, 5 };
int total = 0;
std::for_each(begin(some_list), end(some_list),
[&total](int x) { total += x; });
// Note that std::accumulate would be a way better alternative here...
This computes the total of all elements in the list. The variable total is stored as a part of the lambda function's closure. Since it is a reference to the stack variable total, it can change its value.
std::vector<int> some_list{ 1, 2, 3, 4, 5 };
int total = 0;
int value = 5;
std::for_each(begin(some_list), end(some_list),
[&total, value, this](int x) { total += x * value * this->some_func(); });
This will cause total to be stored as a reference, but value will be stored as a copy.
The capture of this is special. It can only be captured by value, not by reference. However in C++17, the current object can be captured by value (denoted by *this), or can be captured by reference (denoted by this). this can only be captured if the closest enclosing function is a non-static member function. The lambda will have the same access as the member that created it, in terms of protected/private members.
If this is captured, either explicitly or implicitly, then the scope of the enclosed class members is also tested. Accessing members of this does not need explicit use of this-> syntax.
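A minimal sketch of the two capture modes (hypothetical Widget type; requires a C++17 compiler):
#include <iostream>

struct Widget {                // hypothetical type for illustration
    int n = 42;
    auto by_ptr()  { return [this]  { return n; }; }   // captures the this pointer
    auto by_copy() { return [*this] { return n; }; }   // C++17: copies the whole object
};

int main() {
    Widget w;
    auto f = w.by_copy();
    w.n = 0;                            // does not affect the copy held inside f
    std::cout << f() << '\n';           // prints 42
    std::cout << w.by_ptr()() << '\n';  // prints 0: reads the live object
}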
The specific internal implementation can vary, but the expectation is that a lambda function that captures everything by reference will store the actual stack pointer of the function it is created in, rather than individual references to stack variables. However, because most lambda functions are small and local in scope, they are likely candidates for inlining, and thus need no added storage for references.
If a closure object containing references to local variables is invoked after the innermost block scope of its creation, the behaviour is undefined.
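A minimal sketch of that undefined behaviour (hypothetical make_counter function): the closure outlives the stack frame whose variable it references.
#include <functional>

std::function<int()> make_counter() {
    int local = 0;
    return [&local] { return ++local; };  // captures a stack variable by reference
}

int main() {
    auto f = make_counter();
    // 'local' no longer exists here, so calling f() would be undefined behaviour.
    // Capturing by value ([local] together with mutable) would be safe instead.
}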
Lambda functions are function objects of an implementation-dependent type; this type's name is only available to the compiler. If the user wishes to take a lambda function as a parameter, the parameter type must be a template type, or they must create a std::function or a similar object to capture the lambda value. The use of the auto keyword can help store the lambda function,
auto my_lambda_func = [&](int x) { /*...*/ };
auto my_onheap_lambda_func = new auto([=](int x) { /*...*/ });
Here is an example of storing anonymous functions in variables, vectors, and arrays; and passing them as named parameters:
#include <functional>
#include <iostream>
#include <vector>
double eval(std::function<double(double)> f, double x = 2.0) {
return f(x);
}
int main() {
std::function<double(double)> f0 = [](double x) { return 1; };
auto f1 = [](double x) { return x; };
decltype(f0) fa[3] = {f0, f1, [](double x) { return x * x; }};
std::vector<decltype(f0)> fv = {f0, f1};
fv.push_back([](double x) { return x * x; });
for (size_t i = 0; i < fv.size(); i++) {
std::cout << fv[i](2.0) << std::endl;
}
for (size_t i = 0; i < 3; i++) {
std::cout << fa[i](2.0) << std::endl;
}
for (auto& f : fv) {
std::cout << f(2.0) << std::endl;
}
for (auto& f : fa) {
std::cout << f(2.0) << std::endl;
}
std::cout << eval(f0) << std::endl;
std::cout << eval(f1) << std::endl;
std::cout << eval([](double x) { return x * x; }) << std::endl;
}
A lambda expression with an empty capture specification ([]) can be implicitly converted into a function pointer with the same type as the lambda was declared with. So this is legal:
auto a_lambda_func = [](int x) -> void { /*...*/ };
void (* func_ptr)(int) = a_lambda_func;
func_ptr(4); //calls the lambda.
Since C++17, a lambda can be declared constexpr, and since C++20, consteval with the usual semantics. These specifiers go after the parameter list, like mutable. Starting from C++23, the lambda can also be static if it has no captures. The static and mutable specifiers are not allowed to be combined.
Also since C++23 a lambda expression can be recursive through explicit this as first parameter:
auto fibonacci = [](this auto self, int n) { return n <= 1 ? n : self(n - 1) + self(n - 2); };
fibonacci(7); // 13
In addition to that, C++23 modified the syntax so that the parentheses can be omitted in the case of a lambda that takes no arguments, even if the lambda has a specifier. It also made it so that an attribute specifier sequence that appears before the parameter list, lambda specifiers, or noexcept specifier (there must be one of them) applies to the function call operator or operator template of the closure type; otherwise, it applies to the type of the function call operator or operator template. Previously, such a sequence always applied to the type of the function call operator or operator template of the closure type, making, e.g., the [[noreturn]] attribute impossible to use with lambdas.
The Boost library also provides its own syntax for lambda functions:
for_each(a.begin(), a.end(), std::cout << _1 << ' ');
Since C++14, the function parameters of a lambda can be declared with auto. The resulting lambda is called a generic lambda and is essentially an anonymous function template since the rules for type deduction of the auto parameters are the rules of template argument deduction. As of C++20, template parameters can also be declared explicitly with the following syntax:
[ captures ] < tparams > requires (optional) ( params ) specs requires (optional) { body }
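A brief sketch of both forms (hypothetical names; the first requires C++14, the second C++20):
auto add = [](auto a, auto b) { return a + b; };  // generic lambda: the call operator is a template
// add(1, 2) yields 3 (ints); add(1.5, 2.0) yields 3.5 (doubles)

auto twice = []<typename T>(T x) { return x + x; };  // explicitly named template parameter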
C#
In C#, support for anonymous functions has deepened through the various versions of the language compiler. The language v3.0, released in November 2007 with .NET Framework v3.5, has full support of anonymous functions. C# names them lambda expressions, following the original version of anonymous functions, the lambda calculus.
// the first int is the type of x
// the second int is the return type
//
Func<int,int> foo = x => x * x;
Console.WriteLine(foo(7));
While the function is anonymous, it cannot be assigned to an implicitly typed variable, because the lambda syntax may be used for denoting an anonymous function or an expression tree, and the choice cannot automatically be decided by the compiler. E.g., this does not work:
// will NOT compile!
var foo = (int x) => x * x;
However, a lambda expression can take part in type inference and can be used as a method argument, e.g. to use anonymous functions with the Map capability available with System.Collections.Generic.List (in the ConvertAll() method):
// Initialize the list:
var values = new List<int>() { 7, 13, 4, 9, 3 };
// Map the anonymous function over all elements in the list, return the new list
var foo = values.ConvertAll(d => d * d);
// the result of the foo variable is of type System.Collections.Generic.List<Int32>
Prior versions of C# had more limited support for anonymous functions. C# v1.0, introduced in February 2002 with the .NET Framework v1.0, provided partial anonymous function support through the use of delegates. This construct is somewhat similar to PHP delegates. In C# 1.0, delegates are like function pointers that refer to an explicitly named method within a class. (But unlike PHP, the name is unneeded at the time the delegate is used.) C# v2.0, released in November 2005 with the .NET Framework v2.0, introduced the concept of anonymous methods as a way to write unnamed inline statement blocks that can be executed in a delegate invocation. C# 3.0 continues to support these constructs, but also supports the lambda expression construct.
This example will compile in C# 3.0, and exhibits the three forms:
public class TestDriver
{
delegate int SquareDelegate(int d);
static int Square(int d)
{
return d * d;
}
static void Main(string[] args)
{
// C# 1.0: Original delegate syntax needed
// initializing with a named method.
SquareDelegate A = new SquareDelegate(Square);
System.Console.WriteLine(A(3));
// C# 2.0: A delegate can be initialized with
// inline code, called an "anonymous method". This
// method takes an int as an input parameter.
SquareDelegate B = delegate(int d) { return d * d; };
System.Console.WriteLine(B(5));
// C# 3.0. A delegate can be initialized with
// a lambda expression. The lambda takes an int, and returns an int.
// The type of x is inferred by the compiler.
SquareDelegate C = x => x * x;
System.Console.WriteLine(C(7));
// C# 3.0. A delegate that accepts one input and
// returns one output can also be implicitly declared with the Func<> type.
System.Func<int,int> D = x => x * x;
System.Console.WriteLine(D(9));
}
}
In the case of the C# 2.0 version, the C# compiler takes the code block of the anonymous function and creates a static private function. Internally, the function gets a generated name, of course; this generated name is based on the name of the method in which the Delegate is declared. But the name is not exposed to application code except by using reflection.
In the case of the C# 3.0 version, the same mechanism applies.
ColdFusion Markup Language (CFML)
Using the keyword:
fn = function(){
// statements
};
Or using an arrow function:
fn = () => {
// statements
};
fn = () => singleExpression // singleExpression is implicitly returned. There is no need for the braces or the return keyword
fn = singleParam => { // if the arrow function has only one parameter, there's no need for parentheses
// statements
}
fn = (x, y) => { // if the arrow function has zero or multiple parameters, one needs to use parentheses
// statements
}
CFML supports any statements within the function's definition, not simply expressions.
CFML supports recursive anonymous functions:
factorial = function(n){
return n > 1 ? n * factorial(n-1) : 1;
};
CFML anonymous functions implement closures.
D
D uses inline delegates to implement anonymous functions. The full syntax for an inline delegate is
return_type delegate(arguments){/*body*/}
If unambiguous, the return type and the keyword delegate can be omitted.
(x){return x*x;}
delegate (x){return x*x;} // if more verbosity is needed
(int x){return x*x;} // if parameter type cannot be inferred
delegate (int x){return x*x;} // ditto
delegate double(int x){return x*x;} // if return type must be forced manually
Since version 2.0, D allocates closures on the heap unless the compiler can prove it is unnecessary; the scope keyword can be used for forcing stack allocation.
Since version 2.058, it is possible to use shorthand notation:
x => x*x;
(int x) => x*x;
(x,y) => x*y;
(int x, int y) => x*y;
An anonymous function can be assigned to a variable and used like this:
auto sqr = (double x){return x*x;};
double y = sqr(4);
Dart
Dart supports anonymous functions.
var sqr = (x) => x * x;
print(sqr(5));
or
print(((x) => x * x)(5));
Delphi
Delphi introduced anonymous functions in version 2009.
program demo;
type
TSimpleProcedure = reference to procedure;
TSimpleFunction = reference to function(const x: string): Integer;
var
x1: TSimpleProcedure;
y1: TSimpleFunction;
begin
x1 := procedure
begin
Writeln('Hello World');
end;
x1; //invoke anonymous method just defined
y1 := function(const x: string): Integer
begin
Result := Length(x);
end;
Writeln(y1('bar'));
end.
PascalABC.NET
PascalABC.NET supports anonymous functions using lambda syntax
begin
var n := 10000000;
var pp := (1..n)
.Select(x -> (Random, Random))
.Where(p -> Sqr(p[0]) + Sqr(p[1]) < 1)
.Count / n * 4;
Print(pp);
end.
Elixir
Elixir uses the closure fn for anonymous functions.
sum = fn(a, b) -> a + b end
sum.(4, 3)
#=> 7
square = fn(x) -> x * x end
Enum.map [1, 2, 3, 4], square
#=> [1, 4, 9, 16]
Erlang
Erlang uses a syntax for anonymous functions similar to that of named functions.
% Anonymous function bound to the Square variable
Square = fun(X) -> X * X end.
% Named function with the same functionality
square(X) -> X * X.
Go
Go supports anonymous functions.
foo := func(x int) int {
return x * x
}
fmt.Println(foo(10))
Haskell
Haskell uses a concise syntax for anonymous functions (lambda expressions). The backslash is supposed to resemble λ.
\x -> x * x
Lambda expressions are fully integrated with the type inference engine, and support all the syntax and features of "ordinary" functions (except for the use of multiple definitions for pattern-matching, since the argument list is only specified once).
map (\x -> x * x) [1..5] -- returns [1, 4, 9, 16, 25]
The following are all equivalent:
f x y = x + y
f x = \y -> x + y
f = \x y -> x + y
Haxe
In Haxe, anonymous functions are called lambdas and use the syntax function(argument-list) expression.
var f = function(x) return x*x;
f(8); // 64
(function(x,y) return x+y)(5,6); // 11
Java
Java supports anonymous functions, named Lambda Expressions, starting with JDK 8.
A lambda expression consists of a comma separated list of the formal parameters enclosed in parentheses, an arrow token (->), and a body. Data types of the parameters can always be omitted, as can the parentheses if there is only one parameter. The body can consist of one statement or a statement block.
// with no parameter
() -> System.out.println("Hello, world.")
// with one parameter (this example is an identity function).
a -> a
// with one expression
(a, b) -> a + b
// with explicit type information
(long id, String name) -> "id: " + id + ", name:" + name
// with a code block
(a, b) -> { return a + b; }
// with multiple statements in the lambda body. It needs a code block.
// This example also includes two nested lambda expressions (the first one is also a closure).
(id, defaultPrice) -> {
Optional<Product> product = productList.stream().filter(p -> p.getId() == id).findFirst();
return product.map(p -> p.getPrice()).orElse(defaultPrice);
}
Lambda expressions are converted to "functional interfaces" (defined as interfaces that contain only one abstract method in addition to one or more default or static methods), as in the following example:
public class Calculator {
interface IntegerMath {
int operation(int a, int b);
default IntegerMath swap() {
return (a, b) -> operation(b, a);
}
}
private static int apply(int a, int b, IntegerMath op) {
return op.operation(a, b);
}
public static void main(String... args) {
IntegerMath addition = (a, b) -> a + b;
IntegerMath subtraction = (a, b) -> a - b;
System.out.println("40 + 2 = " + apply(40, 2, addition));
System.out.println("20 - 10 = " + apply(20, 10, subtraction));
System.out.println("10 - 20 = " + apply(20, 10, subtraction.swap()));
}
}
In this example, a functional interface called IntegerMath is declared. Lambda expressions that implement IntegerMath are passed to the apply() method to be executed. Default methods like swap define methods on functions.
Java 8 introduced another mechanism named method reference (the :: operator) to create a lambda on an existing method. A method reference does not indicate the number or types of arguments because those are extracted from the abstract method of the functional interface.
IntBinaryOperator sum = Integer::sum;
In the example above, the functional interface IntBinaryOperator declares an abstract method int applyAsInt(int, int), so the compiler looks for a method int sum(int, int) in the class java.lang.Integer.
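For illustration, such a reference is invoked through the functional interface's single abstract method (a small assumed usage snippet):
int result = sum.applyAsInt(3, 4);  // 7, via java.util.function.IntBinaryOperator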
Differences compared to Anonymous Classes
Anonymous classes of lambda-compatible interfaces are similar, but not exactly equivalent, to lambda expressions.
To illustrate, in the following example, anonymousClass and lambdaExpression are both instances of IntegerMath that add their two parameters:
IntegerMath anonymousClass = new IntegerMath() {
@Override
public int operation(int a, int b) {
return a + b;
}
};
IntegerMath lambdaExpression = (a, b) -> a + b;
The main difference here is that the lambda expression does not necessarily need to allocate a new instance, and can return the same instance every time this code is run.
Additionally, in the OpenJDK implementation at least, lambdas are compiled to invokedynamic instructions, with the lambda body inserted as a static method into the surrounding class, rather than generating a new class file entirely.
Java limitations
Java 8 lambdas have the following limitations:
Lambdas can throw checked exceptions, but such lambdas will not work with the interfaces used by the Collection API.
Variables that are in-scope where the lambda is declared may only be accessed inside the lambda if they are effectively final, i.e. if the variable is not mutated inside or outside of the lambda scope (see the sketch after this list).
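A minimal sketch of the effectively-final rule (hypothetical variable names):
int base = 10;  // never reassigned, so effectively final
Runnable r = () -> System.out.println(base + 1);
r.run();        // prints 11
// base = 20;   // uncommenting this reassignment would make 'base' no longer
//              // effectively final, and the lambda above would fail to compile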
JavaScript
JavaScript/ECMAScript supports anonymous functions.
alert((function(x){
return x * x;
})(10));
ES6 supports "arrow function" syntax, where a => symbol separates the anonymous function's parameter list from the body:
alert((x => x * x)(10));
This construct is often used in Bookmarklets. For example, to change the title of the current document (visible in its window's title bar) to its URL, the following bookmarklet may seem to work.
document.title=location.href;
However, as the assignment statement returns a value (the URL itself), many browsers actually create a new page to display this value.
Instead, an anonymous function, that does not return a value, can be used:
(function(){document.title=location.href;})();
The function statement in the first (outer) pair of parentheses declares an anonymous function, which is then executed when used with the last pair of parentheses. This is almost equivalent to the following, which, unlike an anonymous function, populates the environment with the name f.
var f = function(){document.title=location.href;}; f();
Use void() to avoid new pages for arbitrary anonymous functions:
void(function(){return document.title=location.href;}());
or just:
void(document.title=location.href);
JavaScript has syntactic subtleties for the semantics of defining, invoking and evaluating anonymous functions. These nuances are a direct consequence of the evaluation of parenthetical expressions. The following constructs, called immediately-invoked function expressions, illustrate this:
(function(){ ... }()) and
(function(){ ... })()
Representing "function(){ ... }" by f, the form of the constructs are
a parenthetical within a parenthetical (f()) and a parenthetical applied to a parenthetical (f)().
Note the general syntactic ambiguity of a parenthetical expression, parenthesized arguments to a function, and the parentheses around the formal parameters in a function definition. In particular, JavaScript defines a comma operator (,) in the context of a parenthetical expression. It is no mere coincidence that the syntactic forms coincide for an expression and a function's arguments (ignoring the function formal parameter syntax)! If f is not identified in the constructs above, they become (()) and ()(). The first provides no syntactic hint of any resident function, but the second MUST evaluate the first parenthetical as a function to be legal JavaScript. (Aside: for instance, the ()'s could be ([],{},42,"abc",function(){}) as long as the expression evaluates to a function.)
Also, a function is an Object instance (likewise objects are Function instances) and the object literal notation brackets, {} for braced code, are used when defining a function this way (as opposed to using new Function(...)). In a very broad non-rigorous sense (especially since global bindings are compromised), an arbitrary sequence of braced JavaScript statements, {stuff}, can be considered to be a fixed point of
(function(){( function(){( ... {( function(){stuff}() )} ... )}() )}() )
More correctly but with caveats,
( function(){stuff}() ) ~=
A_Fixed_Point_of(
function(){ return function(){ return ... { return function(){stuff}() } ... }() }()
)
Note the implications of the anonymous function in the JavaScript fragments that follow:
function(){ ... }() without surrounding ()'s is generally not legal
(f=function(){ ... }) does not "forget" f globally unlike (function f(){ ... })
Performance metrics for analyzing the space and time complexity of function calls, the call stack, etc. in a JavaScript interpreter engine are easy to implement with these last anonymous function constructs. From the implications of the results, it is possible to deduce some of an engine's recursive-versus-iterative implementation details, especially tail recursion.
Julia
In Julia anonymous functions are defined using the syntax (arguments)->(expression),
julia> f = x -> x*x; f(8)
64
julia> ((x,y)->x+y)(5,6)
11
Kotlin
Kotlin supports anonymous functions with the syntax {arguments -> expression},
val sum = { x: Int, y: Int -> x + y }
sum(5,6) // returns 11
val even = { x: Int -> x%2==0}
even(4) // returns true
Lisp
Lisp and Scheme support anonymous functions using the "lambda" construct, which is a reference to lambda calculus. Clojure supports anonymous functions with the "fn" special form and #() reader syntax.
(lambda (arg) (* arg arg))
Common Lisp
Common Lisp has the concept of lambda expressions. A lambda expression is written as a list with the symbol "lambda" as its first element. The list then contains the argument list, documentation or declarations and a function body. Lambda expressions can be used inside lambda forms and with the special operator "function".
(function (lambda (arg) (do-something arg)))
"function" can be abbreviated as #'. Also, macro lambda exists, which expands into a function form:
; using sharp quote
#'(lambda (arg) (do-something arg))
; using the lambda macro:
(lambda (arg) (do-something arg))
One typical use of anonymous functions in Common Lisp is to pass them to higher-order functions like mapcar, which applies a function to each element of a list and returns a list of the results.
(mapcar #'(lambda (x) (* x x))
'(1 2 3 4))
; -> (1 4 9 16)
The lambda form in Common Lisp allows a lambda expression to be written in a function call:
((lambda (x y)
(+ (sqrt x) (sqrt y)))
10.0
12.0)
Anonymous functions in Common Lisp can also later be given global names:
(setf (symbol-function 'sqr)
(lambda (x) (* x x)))
; which allows us to call it using the name SQR:
(sqr 10.0)
Scheme
Scheme's named functions are simply syntactic sugar for anonymous functions bound to names:
(define (somename arg)
(do-something arg))
expands (and is equivalent) to
(define somename
(lambda (arg)
(do-something arg)))
Clojure
Clojure supports anonymous functions through the "fn" special form:
(fn [x] (+ x 3))
There is also a reader syntax to define a lambda:
#(+ % %2 %3) ; Defines an anonymous function that takes three arguments and sums them.
Like Scheme, Clojure's "named functions" are simply syntactic sugar for lambdas bound to names:
(defn func [arg] (+ 3 arg))
expands to:
(def func (fn [arg] (+ 3 arg)))
Lua
In Lua (much as in Scheme) all functions are anonymous. A named function in Lua is simply a variable holding a reference to a function object.
Thus, in Lua
function foo(x) return 2*x end
is just syntactical sugar for
foo = function(x) return 2*x end
An example of using anonymous functions for reverse-order sorting:
table.sort(network, function(a,b)
return a.name > b.name
end)
Wolfram Language, Mathematica
The Wolfram Language is the programming language of Mathematica. Anonymous functions are important in programming the latter. There are several ways to create them. Below are a few anonymous functions that increment a number. The first is the most common. #1 refers to the first argument and & marks the end of the anonymous function.
#1+1&
Function[x,x+1]
x \[Function] x+1
So, for instance:
f:= #1^2&;f[8]
64
#1+#2&[5,6]
11
Also, Mathematica has an added construct to make recursive anonymous functions. The symbol '#0' refers to the entire function. The following function calculates the factorial of its input:
If[#1 == 1, 1, #1 * #0[#1-1]]&
For example, 6 factorial would be:
If[#1 == 1, 1, #1 * #0[#1-1]]&[6]
720
MATLAB, Octave
Anonymous functions in MATLAB or Octave are defined using the syntax @(argument-list)expression. Any variables that are not found in the argument list are inherited from the enclosing scope and are captured by value.
>> f = @(x)x*x; f(8)
ans = 64
>> (@(x,y)x+y)(5,6) % Only works in Octave
ans = 11
Maxima
In Maxima anonymous functions are defined using the syntax lambda(argument-list,expression),
f: lambda([x],x*x); f(8);
64
lambda([x,y],x+y)(5,6);
11
ML
The various dialects of ML support anonymous functions.
OCaml
Anonymous functions in OCaml are functions without a declared name. Here is an example of an anonymous function that multiplies its input by two:
fun x -> x*2
In the example, fun is a keyword indicating that the function is an anonymous function. We are passing in an argument x and -> to separate the argument from the body.
F#
F# supports anonymous functions, as follows:
(fun x -> x * x) 20 // 400
Standard ML
Standard ML supports anonymous functions, as follows:
fn arg => arg * arg
Nim
Nim supports multi-line multi-expression anonymous functions.
var anon = proc (var1, var2: int): int = var1 + var2
assert anon(1, 2) == 3
Multi-line example:
var anon = func (x: int): bool =
if x > 0:
result = true
else:
result = false
assert anon(9)
Anonymous functions may be passed as input parameters of other functions:
var cities = @["Frankfurt", "Tokyo", "New York"]
cities.sort(
proc (x, y: string): int = cmp(x.len, y.len)
)
An anonymous function is basically a function without a name.
Perl
Perl 5
Perl 5 supports anonymous functions, as follows:
(sub { print "I got called\n" })->(); # 1. fully anonymous, called as created
my $squarer = sub { my $x = shift; $x * $x }; # 2. assigned to a variable
sub curry {
my ($sub, @args) = @_;
return sub { $sub->(@args, @_) }; # 3. as a return value of another function
}
# example of currying in Perl programming
sub sum { my $tot = 0; $tot += $_ for @_; $tot } # returns the sum of its arguments
my $curried = curry \&sum, 5, 7, 9;
print $curried->(1,2,3), "\n"; # prints 27 ( = 5 + 7 + 9 + 1 + 2 + 3 )
Other constructs take bare blocks as arguments, which serve a function similar to lambda functions of one parameter, but do not have the same parameter-passing convention as functions -- @_ is not set.
my @squares = map { $_ * $_ } 1..10; # map and grep don't use the 'sub' keyword
my @square2 = map $_ * $_, 1..10; # braces unneeded for one expression
my @bad_example = map { print for @_ } 1..10; # values not passed like normal Perl function
PHP
Before 4.0.1, PHP had no anonymous function support.
PHP 4.0.1 to 5.3
PHP 4.0.1 introduced create_function, which provided the initial anonymous function support. This function call makes a new, randomly named function and returns its name (as a string):
$foo = create_function('$x', 'return $x*$x;');
$bar = create_function("\$x", "return \$x*\$x;");
echo $foo(10);
The argument list and function body must be in single quotes, or the dollar signs must be escaped.
Otherwise, PHP assumes "$x" means the variable $x and will substitute it into the string (despite possibly not existing) instead of leaving "$x" in the string.
For functions with quotes or functions with many variables, it can get quite tedious to ensure the intended function body is what PHP interprets.
Each invocation of create_function makes a new function, which exists for the rest of the program, and cannot be garbage collected, using memory in the program irreversibly. If this is used to create anonymous functions many times, e.g., in a loop, it can cause problems such as memory bloat.
PHP 5.3
PHP 5.3 added a new class called Closure and magic method __invoke() that makes a class instance invocable.
$x = 3;
$func = function($z) { return $z * 2; };
echo $func($x); // prints 6
In this example, $func is an instance of Closure and echo $func($x) is equivalent to echo $func->__invoke($x).
PHP 5.3 mimics anonymous functions but it does not support true anonymous functions because PHP functions are still not first-class objects.
PHP 5.3 does support closures but the variables must be explicitly indicated as such:
$x = 3;
$func = function() use(&$x) { $x *= 2; };
$func();
echo $x; // prints 6
The variable $x is bound by reference so the invocation of $func modifies it and the changes are visible outside of the function.
PHP 7.4
Arrow functions were introduced in PHP 7.4
$x = 3;
$func = fn($z) => $z * 2;
echo $func($x); // prints 6
Prolog's dialects
Logtalk
Logtalk uses the following syntax for anonymous predicates (lambda expressions):
{FreeVar1, FreeVar2, ...}/[LambdaParameter1, LambdaParameter2, ...]>>Goal
A simple example with no free variables and using a list mapping predicate is:
| ?- meta::map([X,Y]>>(Y is 2*X), [1,2,3], Ys).
Ys = [2,4,6]
yes
Currying is also supported. The above example can be written as:
| ?- meta::map([X]>>([Y]>>(Y is 2*X)), [1,2,3], Ys).
Ys = [2,4,6]
yes
Visual Prolog
Anonymous functions (in general anonymous predicates) were introduced in Visual Prolog in version 7.2. Anonymous predicates can capture values from the context. If created in an object member, it can also access the object state (by capturing This).
mkAdder returns an anonymous function, which has captured the argument X in the closure. The returned function is a function that adds X to its argument:
clauses
mkAdder(X) = { (Y) = X+Y }.
Python
Python supports simple anonymous functions through the lambda form. The executable body of the lambda must be an expression and can't be a statement, which is a restriction that limits its utility. The value returned by the lambda is the value of the contained expression. Lambda forms can be used anywhere ordinary functions can. However these restrictions make it a very limited version of a normal function. Here is an example:
>>> foo = lambda x: x * x
>>> foo(10)
100
In general, the Python convention encourages the use of named functions defined in the same scope as one might typically use an anonymous function in other languages. This is acceptable as locally defined functions implement the full power of closures and are almost as efficient as the use of a lambda in Python. In this example, the built-in power function can be said to have been curried:
>>> def make_pow(n):
... def fixed_exponent_pow(x):
... return pow(x, n)
... return fixed_exponent_pow
...
>>> sqr = make_pow(2)
>>> sqr(10)
100
>>> cub = make_pow(3)
>>> cub(10)
1000
R
In R, anonymous functions are defined using the syntax function(argument-list) expression; since version 4.1.0 there is also the shorthand \(argument-list) expression, akin to Haskell's backslash notation.
> f <- function(x)x*x; f(8)
[1] 64
> (function(x,y)x+y)(5,6)
[1] 11
> # Since R 4.1.0
> (\(x,y) x+y)(5, 6)
[1] 11
Raku
In Raku, all blocks (even those associated with if, while, etc.) are anonymous functions. A block that is not used as an rvalue is executed immediately.
1. Fully anonymous, called as created:
{ say "I got called" };
2. Assigned to a variable:
my $squarer1 = -> $x { $x * $x }; # 2a. pointy block
my $squarer2 = { $^x * $^x }; # 2b. twigil
my $squarer3 = { my $x = shift @_; $x * $x }; # 2c. Perl 5 style
3. Currying:
sub add ($m, $n) { $m + $n }
my $seven = add(3, 4);
my $add_one = &add.assuming(m => 1);
my $eight = $add_one($seven);
4. A WhateverCode object:
my $w = * - 1; # WhateverCode object
my $b = { $_ - 1 }; # same functionality, but as Callable block
Ruby
Ruby supports anonymous functions by using a syntactical structure called block. There are two data types for blocks in Ruby. Procs behave similarly to closures, whereas lambdas behave more analogous to an anonymous function. When passed to a method, a block is converted into a Proc in some circumstances.
# Example 1:
# Purely anonymous functions using blocks.
ex = [16.2, 24.1, 48.3, 32.4, 8.5]
=> [16.2, 24.1, 48.3, 32.4, 8.5]
ex.sort_by { |x| x - x.to_i } # Sort by fractional part, ignoring integer part.
=> [24.1, 16.2, 48.3, 32.4, 8.5]
# Example 2:
# First-class functions as an explicit object of Proc -
ex = Proc.new { puts "Hello, world!" }
=> #<Proc:0x007ff4598705a0@(irb):7>
ex.call
Hello, world!
=> nil
# Example 3:
# Function that returns lambda function object with parameters
def multiple_of?(n)
lambda{|x| x % n == 0}
end
=> nil
multiple_four = multiple_of?(4)
=> #<Proc:0x007ff458b45f88@(irb):12 (lambda)>
multiple_four.call(16)
=> true
multiple_four[15]
=> false
Rust
In Rust, anonymous functions are called closures. They are defined using the following syntax:
|<parameter-name>: <type>| -> <return-type> { <body> };
For example:
let f = |x: i32| -> i32 { x * 2 };
With type inference, however, the compiler is able to infer the type of each parameter and the return type, so the above form can be written as:
let f = |x| { x * 2 };
With closures with a single expression (i.e. a body with one line) and implicit return type, the curly braces may be omitted:
let f = |x| x * 2;
Closures with no input parameter are written like so:
let f = || println!("Hello, world!");
Closures may be passed as input parameters of functions that expect a function pointer:
// A function which takes a function pointer as an argument and calls it with
// the value '5'.
fn apply(f: fn(i32) -> i32) -> i32 {
// No semicolon, to indicate an implicit return
f(5)
}
fn main() {
// Defining the closure
let f = |x| x * 2;
println!("{}", apply(f)); // 10
println!("{}", f(5)); // 10
}
However, one may need complex rules to describe how values in the body of the closure are captured. They are implemented using the Fn, FnMut, and FnOnce traits:
Fn: the closure captures by reference (&T). They are used for functions that can still be called if they only have reference access (with &) to their environment.
FnMut: the closure captures by mutable reference (&mut T). They are used for functions that can be called if they have mutable reference access (with &mut) to their environment.
FnOnce: the closure captures by value (T). They are used for functions that are only called once.
With these traits, the compiler will capture variables in the least restrictive manner possible. They help govern how values are moved around between scopes, which is largely important since Rust follows a lifetime construct to ensure values are "borrowed" and moved in a predictable and explicit manner.
The following demonstrates how one may pass a closure as an input parameter using the Fn trait:
// A function that takes a value of type F (which is defined as
// a generic type that implements the 'Fn' trait, e.g. a closure)
// and calls it with the value '5'.
fn apply_by_ref<F>(f: F) -> i32
where F: Fn(i32) -> i32
{
f(5)
}
fn main() {
let f = |x| {
println!("I got the value: {}", x);
x * 2
};
// Applies the function before printing its return value
println!("5 * 2 = {}", apply_by_ref(f));
}
// ~~ Program output ~~
// I got the value: 5
// 5 * 2 = 10
The previous function definition can also be shortened for convenience as follows:
fn apply_by_ref(f: impl Fn(i32) -> i32) -> i32 {
f(5)
}
Scala
In Scala, anonymous functions use the following syntax:
(x: Int, y: Int) => x + y
In certain contexts, like when an anonymous function is a parameter being passed to another function, the compiler can infer the types of the parameters of the anonymous function and they can be omitted in the syntax. In such contexts, it is also possible to use a shorthand for anonymous functions using the underscore character to introduce unnamed parameters.
val list = List(1, 2, 3, 4)
list.reduceLeft( (x, y) => x + y )
// Here, the compiler can infer that the types of x and y are both Int.
// Thus, it needs no type annotations on the parameters of the anonymous function.
list.reduceLeft( _ + _ )
// Each underscore stands for a new unnamed parameter in the anonymous function.
// This results in an even shorter equivalent to the anonymous function above.
Smalltalk
In Smalltalk anonymous functions are called blocks and they are invoked (called) by sending them a "value" message. If several arguments are to be passed, a "value:...value:" message with a corresponding number of value arguments must be used.
For example, in GNU Smalltalk,
st> f:=[:x|x*x]. f value: 8 .
64
st> [:x :y|x+y] value: 5 value: 6 .
11
Smalltalk blocks are technically closures, allowing them to outlive their defining scope and still refer to the variables declared therein.
st> f := [:a|[:n|a+n]] value: 100 .
a BlockClosure
"returns the inner block, which adds 100 (captured in "a" variable) to its argument."
st> f value: 1 .
101
st> f value: 2 .
102
Swift
In Swift, anonymous functions are called closures. The syntax has following form:
{ (parameters) -> returnType in
statement
}
For example:
{ (s1: String, s2: String) -> Bool in
return s1 > s2
}
For sake of brevity and expressiveness, the parameter types and return type can be omitted if these can be inferred:
{ s1, s2 in return s1 > s2 }
Similarly, Swift also supports implicit return statements for one-statement closures:
{ s1, s2 in s1 > s2 }
Finally, the parameter names can be omitted as well; when omitted, the parameters are referenced using shorthand argument names, consisting of the $ symbol followed by their position (e.g. $0, $1, $2, etc.):
{ $0 > $1 }
Tcl
In Tcl, applying the anonymous squaring function to 2 looks as follows:
apply {x {expr {$x*$x}}} 2
# returns 4
This example involves two candidates for what it means to be a function in Tcl. The most generic is usually called a command prefix, and if the variable f holds such a function, then the way to perform the function application f(x) would be
{*}$f $x
where {*} is the expansion prefix (new in Tcl 8.5). The command prefix in the above example is
apply {x {expr {$x*$x}}}
Command names can be bound to command prefixes by means of the interp alias command. Command prefixes support currying. Command prefixes are very common in Tcl APIs.
The other candidate for "function" in Tcl is usually called a lambda, and appears as the {x {expr {$x*$x}}} part of the above example. This is the part which caches the compiled form of the anonymous function, but it can only be invoked by being passed to the apply command. Lambdas do not support currying, unless paired with an apply to form a command prefix. Lambdas are rare in Tcl APIs.
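As a small illustration of the interp alias binding mentioned above (hypothetical command name square; Tcl 8.5+ assumed):
# Bind a command name to the squaring command prefix:
interp alias {} square {} apply {x {expr {$x*$x}}}
square 7
# returns 49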
Vala
In Vala, anonymous functions are supported as lambda expressions.
delegate int IntOp (int x, int y);
void main () {
IntOp foo = (x, y) => x * y;
stdout.printf("%d\n", foo(10,5));
}
Visual Basic .NET
Visual Basic .NET 2008 introduced anonymous functions through the lambda form. Combined with implicit typing, VB provides an economical syntax for anonymous functions. As with Python, in VB.NET, anonymous functions must be defined on one line; they cannot be compound statements. Further, an anonymous function in VB.NET must truly be a VB.NET Function - it must return a value.
Dim foo = Function(x) x * x
Console.WriteLine(foo(10))
Visual Basic.NET 2010 added support for multiline lambda expressions and anonymous functions without a return value. For example, a function for use in a Thread.
Dim t As New System.Threading.Thread(Sub ()
For n As Integer = 0 To 10 'Count to 10
Console.WriteLine(n) 'Print each number
Next
End Sub
)
t.Start()
References
Functions and mappings | Examples of anonymous functions | Mathematics | 12,349 |
73,955,301 | https://en.wikipedia.org/wiki/Paquier%20Event | The Paquier Event (OAE1b) was an oceanic anoxic event (OAE) that occurred around 111 million years ago (Ma), in the Albian geologic stage, during a climatic interval of Earth's history known as the Middle Cretaceous Hothouse (MKH).
Timeline
OAE1b had three main subevents: the Kilian, Paquier, and Leenhardt. The Kilian subevent was defined by a negative δ13C excursion from about 2–2.5‰ to 0.5–1.5‰ followed by a gradual δ13C rise in the Atlantic Ocean, though the magnitude of these carbon isotope fluctuations was higher in areas like the Basque-Cantabrian Basin. The Paquier subevent was the most extreme subevent of OAE1b, exhibiting a δ13C drop of ~3‰ in marine organic matter and of 1.5–2‰ in marine carbonate, which was succeeded by a gradual positive δ13C excursion. The Leenhardt subevent was the last OAE1b subevent and is associated in the eastern Tethys Ocean with a negative δ13C excursion from 0.09‰ to −0.48‰ followed by a positive δ13C excursion to 0.58‰, although the magnitude of the carbon isotope shifts varies considerably in other marine regions, the negative δ13C excursion being around 1‰ in the Atlantic and western Tethys but ~4‰ in the Basque-Cantabrian Basin and ~3‰ in the Andean Basin.
Causes
Pulsed volcanic activity of the Kerguelen Plateau is suggested to be the cause of OAE1b based on mercury anomalies recorded from this interval. Five different mercury anomalies relative to total organic carbon are known from strata from the Jiuquan Basin spanning the OAE1b interval, strongly supporting a causal relationship with massive volcanism. Prominent negative osmium isotope excursions coeval with biotic changes among planktonic foraminifera further confirm the occurrence of multiple episodes of submarine volcanic activity over the course of OAE1b. Nonetheless, volcanism is not unequivocally supported as OAE1b's mainspring. Mercury anomalies associated with OAE1b have been interpreted by some to reflect mineralisation associated with salt diapirism instead of volcanism. Another line of evidence contradicting the volcanism hypothesis involves the massive diachrony between thallium isotope records and intervals of deoxygenation.
Global warming intensified chemical weathering, leading to increased terrestrial inputs of organic matter into oceans and lakes. This promoted eutrophication that rapidly depleted bodies of water of dissolved oxygen. A contemporaneous increase in 187Os/188Os reflects an increase in continentally derived, radiogenic osmium sources in the ocean, confirming an increase in terrestrial runoff.
Alternatively, rather than volcanism, some research points to orbital cycles as the governing cause of OAE1b. It has been hypothesised that enhanced monsoonal activity modulated by Earth's axial precession drove the development of OAE1b. Evidence supporting this explanation includes regular variations in detrital and weathering indices between humid intervals of high weathering and anoxia and drier intervals of decreased weathering and better oxygenated waters; these variations are suggested to correspond to precession cycles. A different analysis of orbital forcing purports the long eccentricity cycle as the most significant orbital driver of monsoonal modulation. δ18O records in planktic foraminifera from the Boreal Ocean show a 100 kyr periodicity, indicating that the short eccentricity cycle governed the ingression of hot Tethyan waters into the Boreal Ocean and consequent Boreal warming. The 405 kyr eccentricity cycle appears to have dominated the advance and retreat of anoxia in the Vocontian Basin.
The tectonic isolation of the Atlantic and Tethys Oceans restricted their ventilation, enabling their stagnation and facilitating ideal conditions for thermohaline stratification, which would in turn promote the widespread development of anoxia during a speedily warming climate.
OAE1b's coincidence with a peak in a 5-6 Myr oscillation in marine phosphorus accumulation suggests that enhanced phosphorus regeneration may have been one of the causal factors behind the development of widespread anoxia. As more phosphorus built up in marine environments and caused spikes in biological productivity and decreases in dissolved oxygen, it caused a strong positive feedback loop in which phosphorus deposited on the seafloor was recycled back into the water column at faster rates, facilitating further increase in productivity and decrease in seawater oxygen content. Eventually, a negative feedback loop of increased atmospheric oxygen terminated this phosphorus spike and the OAE itself by causing increased wildfire activity and a consequent decline in vegetation and chemical weathering.
Effects
Unlike other OAEs during the MKH, such as OAE1a and OAE2, OAE1b was not associated with an extinction event of benthic foraminifera. Identical benthic foraminiferal assemblages occur both below and above the black shales deposited in association with OAE1b, indicating that this OAE was limited in its geographic and bathymetric extent. Although some parts of the ocean floor became devoid of life, benthic foraminifera survived in refugia and recolonised previously abandoned areas after the OAE with no faunal turnover. Planktonic foraminifera, however, significantly declined during OAE1b. In the eastern Pacific, the Paquier Level of OAE1b is associated with the demise of heterozoan-dominated carbonate production.
As with other OAEs, OAE1b left its mark on the geologic record in the form of widespread and abundant deposition of black shales.
See also
Jenkyns Event
Selli Event
Breistroffer Event
Bonarelli Event
References
Albian Stage
Anoxic events | Paquier Event | Chemistry | 1,248 |
217,741 | https://en.wikipedia.org/wiki/Write%E2%80%93read%20conflict | In computer science, in the field of databases, write–read conflict (also known as reading uncommitted data or dirty read) is a computational anomaly associated with interleaved execution of transactions. Specifically, a write–read conflict occurs when a transaction requests to read an entity for which an unclosed transaction has already made a write request.
Given a schedule S
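One possible such schedule, reconstructed for illustration (R = read, W = write, Com. = commit; time runs downward):
T1        T2
R(A)
W(A)
          R(A)
          W(A)
Com.      Com.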
T2 could read a database object A, modified by T1 which hasn't committed. This is a dirty or inconsistent read.
T1 may write some value into A which makes the database inconsistent. It is possible that interleaved execution can expose this inconsistency and lead to an inconsistent final database state, violating ACID rules.
Strict 2PL overcomes this inconsistency by locking T2 out from performing a Read/Write on A. Note however that Strict 2PL can have a number of drawbacks, such as the possibility of deadlocks.
See also
Concurrency control
Read–write conflict
Write–write conflict
References
Data management
Transaction processing | Write–read conflict | Technology | 214 |
14,838,867 | https://en.wikipedia.org/wiki/Journal%20of%20Algebra | Journal of Algebra (ISSN 0021-8693) is an international mathematical research journal in algebra. An imprint of Academic Press, it is published by Elsevier. Journal of Algebra was founded by Graham Higman, who was its editor from 1964 to 1984. From 1985 until 2000, Walter Feit served as its editor-in-chief.
In 2004, Journal of Algebra announced (vol. 276, no. 1 and 2) the creation of a new section on computational algebra, with a separate editorial board. The first issue completely devoted to computational algebra was vol. 292, no. 1 (October 2005).
The Editor-in-Chief of the Journal of Algebra is Michel Broué, Université Paris Diderot, and Gerhard Hiß, Rheinisch-Westfälische Technische Hochschule Aachen (RWTH) is Editor of the computational algebra section.
See also
Susan Montgomery, an editor of the journal
External links
Journal of Algebra at ScienceDirect
Algebra journals
Academic journals established in 1964 | Journal of Algebra | Mathematics | 209 |
351,088 | https://en.wikipedia.org/wiki/Transparency%20%28telecommunication%29 | In telecommunications, transparency can refer to:
The property of an entity that allows another entity to pass through it without altering either of the entities.
The property that allows a transmission system or channel to accept, at its input, unmodified user information, and deliver corresponding user information at its output, unchanged in form or information content. The user information may be changed internally within the transmission system, but it is restored to its original form prior to the output without the involvement of the user.
The quality of a data communications system or device that uses a bit-oriented link protocol that does not depend on the bit sequence structure used by the data source.
Some communication systems are not transparent.
Non-transparent communication systems have one or both of the following problems:
user data may be incorrectly interpreted as internal commands. For example, modems with a Time Independent Escape Sequence or 20th century Signaling System No. 5 and R2 signalling telephone systems, which occasionally incorrectly interpreted user data (from a "blue box") as commands.
output "user data" may not always be the same as input user data. For example, many early email systems were not 8-bit clean; they seemed to transfer typical short text messages properly, but converted "unusual" characters (the control characters, the "high ASCII" characters) in an irreversible way into some other "usual" character. Many of these systems also changed user data in other irreversible ways – such as inserting linefeeds to make sure each line is less than some maximum length, and inserting a ">" at the beginning of every line that begins with "From ". Until 8BITMIME, a variety of binary-to-text encoding techniques have been overlaid on top of such systems to restore transparency – to make sure that any possible file can be transferred so that the final output "user data" is actually identical to the original user data.
References
See also
In-band signaling
Out-of-band communication
Telecommunications engineering | Transparency (telecommunication) | Engineering | 409 |
30,939,251 | https://en.wikipedia.org/wiki/Coccomyces%20clavatus | Coccomyces clavatus is a species of foliicolous fungus found on fallen phylloclades of Phyllocladus alpinus in New Zealand.
The ascocarps are angular, up to 0.8 mm in diameter, forming within pale yellow lesions. The asci have a broad apex and the paraphyses are unbranched. This species is very similar to Coccomyces phyllocladi, found on the same host, and can only be distinguished by the smaller, clavate ascospores.
References
Leotiomycetes
Fungi described in 1986
Fungus species | Coccomyces clavatus | Biology | 131 |
3,632,584 | https://en.wikipedia.org/wiki/Digi-Comp%20II | The Digi-Comp II was a toy computer invented by John "Jack" Thomas Godfrey (1924–2009) in 1965 and manufactured by E.S.R., Inc. in the late 1960s, that used marbles rolling down a ramp to perform basic calculations.
Description
A two-level masonite platform with blue plastic guides served as the medium for a supply of marbles that rolled down an inclined plane, moving plastic cams as they went. The red plastic cams played the part of flip-flops in an electronic computer - as a marble passed one of the cams, it would flip the cam around - in one position, the cam would allow the marble to pass in one direction, in the other position, it would cause the marble to drop through a hole and roll to the collection of marbles at the bottom of the machine. The original Digi-Comp II platform measured .
The Digi-Comp II was not programmable, unlike the Digi-Comp I, an earlier offering in the E.S.R. product line that used an assortment of plastic slides, tubes, and bent metal wires to solve simple logic problems. However, the Digi-Comp II is more suitable for public display, since the only removable elements are the moving balls.
Computational power
Computer scientist Scott Aaronson analyzed the computational power of the Digi-Comp II. There are several ways to mathematically model the device's computational capabilities. A natural abstraction is a directed acyclic graph (DAG) in which each internal vertex has an out-degree of 2, representing a toggle cam that routes balls to one of two other vertices. A fixed number of balls are placed at a designated source vertex, and the decision problem is to determine whether any balls ever reach a designated sink vertex.
Aaronson showed that this decision problem, given as inputs a description of the DAG and the number of balls to run (encoded in unary), is complete under log-space reduction for CC, the class of problems log-space reducible to the stable marriage problem. He also showed that the variant of the problem in which the number of balls is encoded in binary, allowing the machine to run for an exponentially longer time, is still in the P class of complexity.
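A minimal Python sketch of this abstraction (the graph, vertex names and ball count are invented for illustration):
# Each internal vertex is a toggle cam: it routes an incoming ball along one
# of its two out-edges and then flips, so successive balls alternate edges.
def any_ball_reaches_sink(edges, source, sink, balls):
    state = {v: 0 for v in edges}      # 0 = route along first edge, 1 = second
    for _ in range(balls):
        v = source
        while v in edges:              # follow the ball until it exits the DAG
            nxt = edges[v][state[v]]
            state[v] ^= 1              # the cam flips as the ball passes
            v = nxt
        if v == sink:
            return True
    return False

edges = {"a": ("b", "c"), "b": ("out", "sink"), "c": ("out", "out")}
print(any_ball_reaches_sink(edges, "a", "sink", 3))   # True: the third ball arrives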
Reproductions
A slightly downscaled reproduction of the Digi-Comp II, made from plywood, is available from Evil Mad Scientist since 2011. This reproduction uses steel pachinko balls, and measures .
In 2011, Evil Mad Scientist also created a giant variant measuring around in size that uses billiard balls. The Stata Center at the Massachusetts Institute of Technology displays one copy of the giant version for hands-on operation by visitors.
See also
Geniac
Dr. Nim - a Nim-playing game, based on the Digi-Comp II mechanism
Turing Tumble
WDR paper computer
CARDboard Illustrative Aid to Computation
References
External links
MIT CSAIL VIDEO: How the Digi-Comp II works – Brief hands-on demonstration of operation
The Old Computer Museum - Collection of old analog, digital and mechanical computers.
web simulator, from System Source Computer Museum
Extra-large recreation, video showing the multiplication of 13 × 3 on a scaled-up re-creation.
Original Instruction Manual
Digi-Comp II Replica - Instructions and files for creating your own Digi-Comp II
Mechanical computers
Educational toys | Digi-Comp II | Physics,Technology | 699 |
54,330,259 | https://en.wikipedia.org/wiki/Dichomitus%20eucalypti | Dichomitus eucalypti is a crust fungus that was described as a new species in 1985 by Norwegian mycologist Leif Ryvarden. The fruit body of the fungus measures 1–2 cm in diameter, and has a white to pale cream pore surface with small round pores numbering 2–3 per millimetre. D. eucalypti has a dimitic hyphal structure, containing both generative and binding hyphae. Generative hyphae are thin walled with clamps, and measure 2.5–4 μm in diameter. Found in the context, the binding hyphae are solid, hyaline, and measure 2–5 μm. Spores are more or less cylindrical, thin-walled and hyaline, and have dimensions of 7–8.5 by 3–4 μm.
The type was collected in George Gill Range (Northern Territory, Australia), where it was growing on river red gum (Eucalyptus camaldulensis). At the time of its description, D. eucalypti was, in addition to D. epitephrus and D. leucoplacus, the third species of Dichomitus found in Australia.
References
Fungi described in 1985
Fungi of Australia
Polyporaceae
Taxa named by Leif Ryvarden
Fungus species | Dichomitus eucalypti | Biology | 274 |
78,370,025 | https://en.wikipedia.org/wiki/Norma%20Nilotica%20%28constellation%29 | Norma Nilotica (Genitive Normae Nilotica, Abbreviation NoN) is an obsolete constellation, or asterism, no longer in use by astronomers. Its name means "The Nile's Ruler" in Latin. It was created by Alexander Jamieson and first appeared in his book A Celestial Atlas, published in 1822. It subsequently appeared in Urania's Mirror (1824) and Elijah Hinsdale Burritt's 1835 book Atlas Designed to Illustrate the Geography of the Heavens. The constellation is depicted as a measuring rod (or nilometer) held in the left hand of the water carrier Aquarius. Depicting Aquarius with a nilometer references the ancient Egyptian association of Aquarius with the flooding of the Nile river.
Up until 1928, when the IAU set boundaries for the constellations which covered the entire celestial sphere, stars which were not included within a constellation listed by Ptolemy were sometimes used for creating new constellations. Norma Nilotica was essentially a line extending from 9 Aquarii (just north of Capricornus) northwest to 3 Aquarii. Today, all of its stars fall within the modern boundaries of Aquarius.
Norma Nilotica is mentioned in Henry Melville's 1874 book Veritas. Revelation of mysteries, biblical, historical and social, by means of the Median and Persian laws which contains a multipage prose description of the constellations including:
Then comes the left hand of Aquarius, or the Greek Neptune or Hebrew Moses. In his hand is the celebrated rod: it is the 24-inch gauge of the masons, and on it are marked or notched the twenty-four hours. The present name is Norma Nilotica.
Charles Augustus Young mentioned the constellation very briefly in his 1903 book Lessons in Astronomy, Including Uranography, wherein he wrote:
Norma Nilotica, the rule with which the height of the Nile was measured, lies west of Scorpio, while Ara lies due south of Eta and Theta. Both are old Ptolemaic constellations, but are small and of little importance, at least to observers in our latitudes.
Note that this passage contains two errors: Norma Nilotica is not west of Scorpio and is not a Ptolemaic constellation.
Gallery
Aquarius (constellation)
Former constellations | Norma Nilotica (constellation) | Astronomy | 470 |
1,473,483 | https://en.wikipedia.org/wiki/Public%20data%20network | A public data network (PDN) is a network established and operated by a telecommunications administration, or a recognized private operating agency, for the specific purpose of providing data transmission services for the public.
The first public packet switching networks were RETD in Spain (1972), the experimental RCP network in France (1972) and Telenet in the United States (1975). "Public data network" was the common name given to the collection of X.25 providers, the first of which were Telenet in the U.S. and DATAPAC in Canada (both in 1976), and Transpac in France (in 1978). The International Packet Switched Service (IPSS) was the first commercial and international packet-switched network (1978). The networks were interconnected with gateways using X.75. These combined networks had large global coverage during the 1980s and into the 1990s. The networks later provided the infrastructure for the early Internet.
Description
In communications, a PDN is a circuit- or packet-switched network that is available to the public and that can transmit data in digital form. A PDN provider is a company that provides access to a PDN and that provides any of X.25, Frame Relay, or cell relay (ATM) services. Access to a PDN generally includes a guaranteed bandwidth, known as the committed information rate (CIR). Costs for the access depend on the guaranteed rate. PDN providers differ in how they charge for temporary increases in required bandwidth (known as surges). Some use the amount of overrun; others use the surge duration.
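As a rough Python sketch of the two billing styles just described (the CIR, rates and traffic figures are invented for illustration, not drawn from any provider):
CIR_KBPS = 64                          # hypothetical committed information rate

def bill_by_overrun(samples, rate_per_kb=0.01):
    # samples: list of (throughput_kbps, duration_s); charge on volume above CIR
    overrun_kb = sum(max(0, kbps - CIR_KBPS) * secs / 8 for kbps, secs in samples)
    return overrun_kb * rate_per_kb

def bill_by_duration(samples, rate_per_minute=0.50):
    # charge on how long the traffic stayed above the CIR
    surge_secs = sum(secs for kbps, secs in samples if kbps > CIR_KBPS)
    return surge_secs / 60 * rate_per_minute

traffic = [(64, 600), (128, 120), (96, 60)]
print(bill_by_overrun(traffic), bill_by_duration(traffic))   # 12.0 1.5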
Public switched data network
A public switched data network (PSDN) is a network for providing data services via a system of multiple wide area networks, similar in concept to the public switched telephone network (PSTN). A PSDN may use a variety of switching technologies, including packet switching, circuit switching, and message switching. A packet-switched PSDN may also be called a packet-switched data network.
Originally the term PSDN referred only to Packet Switch Stream (PSS), an X.25-based packet-switched network in the United Kingdom, mostly used to provide leased-line connections between local area networks and the Internet using permanent virtual circuits (PVCs). Today, the term may refer not only to Frame Relay and Asynchronous Transfer Mode (ATM), both providing PVCs, but also to Internet Protocol (IP), GPRS, and other packet-switching techniques.
Whilst there are several technologies that are superficially similar to the PSDN, such as Integrated Services Digital Network (ISDN) and the digital subscriber line (DSL) technologies, they are not examples of it. ISDN utilizes the PSTN circuit-switched network, and DSL uses point-to-point circuit switching communications overlaid on the PSTN local loop (copper wires), usually utilized for access to a packet-switched broadband IP network.
Public data transmission service
A public data transmission service is a data transmission service that is established and operated by a telecommunication administration, or a recognized private operating agency, and uses a public data network. A public data transmission service may include circuit-switched, packet-switched, and leased-line data transmission.
History
Public packet switching networks came into operation in the 1970s. The first were RETD in Spain, in 1972; the experimental RCP in France, also in 1972; Telenet in the United States, which began operation with proprietary protocols in 1975; EIN in the EEC in 1976; and EPSS in the United Kingdom in 1976 (in development since 1969).
Telenet adopted X.25 protocols shortly after they were published in 1976 while DATAPAC in Canada was the first public data network specifically designed for X.25, also in 1976. Many other PDNs adopted X.25 when they came into operation, including Transpac in France in 1978, Euronet in the EEC in 1979, Packet Switch Stream in the United Kingdom in 1980, and AUSTPAC in Australia in 1982. Iberpac in Spain adopted X.25 in the 1980s. Tymnet and CompuServe in the United States also adopted X.25.
The International Packet Switched Service (IPSS) was the first commercial and international packet-switched network. It was a collaboration between British and American telecom companies that became operational in 1978.
The SITA Data Transport Network for airlines adopted X.25 in 1981, becoming the world's most extensive packet-switching network.
The networks were interconnected with gateways using X.75. These combined networks had large global coverage during the 1980s and into the 1990s.
Over time, other packet-switching technologies, including Frame Relay (FR) and Asynchronous Transfer Mode (ATM) gradually replaced X.25.
Many of these networks later adopted TCP/IP and provided the infrastructure for the early Internet.
See also
History of the Internet
International Network Working Group
National research and education network
Protocol Wars
OSI model
X.25 § History
References
Sources
Telecommunications
Data network
X.25 | Public data network | Technology | 1,042 |
62,094,228 | https://en.wikipedia.org/wiki/Tank%20leak%20detection | Tank leak detection is implemented to alert the operator to a suspected release from any part of a storage tank system, helping to prevent soil contamination and loss of product.
In many countries, regulated underground storage tanks (USTs) are required to have an approved leak detection method so that leaks are discovered quickly and the release is stopped in time.
Leak detection standards in Europe
The European Committee for Standardization standard EN 13160 defines five classes (technical methods) of leak detection systems to be used on tanks and pipes.
The number of the class indicates the effectiveness of the installed leak detection system. Class 1 being the highest and class 5 being the lowest level.
Class 1
The system is inherently safe: a leak is detected before any liquid enters the environment. These systems detect a leak above or below the liquid level of a double wall system. Once a leak is detected, fuel can be removed from the tank before any product enters the environment.
Class 2
System that monitors pressure of a liquid filling the interstitial space of a double wall system. The system alarms on any leak. However, once the tank is breached, the liquid contaminates the product or flows into the ground - in both situations contamination cannot be prevented.
Class 3
Liquid/vapour sensors are placed at the lowest point in a system and detect the presence of liquid or hydrocarbon vapour within the interstitial space. Once a leak is detected an alarm will sound. The sensors cannot detect the failure of outer wall. The product may enter the environment.
Class 4
The system analyses rates of change in tank contents (i.e. leakage into or out of the tank). If a leak is found when operating on a single wall system, the product will always be released to the environment before the leak is detected.
For tanks there are 2 subclasses of the system.
4a
System based on fuel reconciliation (measurement of the amount sold through the dispenser against the amount that leaves the tank according to the tank gauge). Any discrepancy triggers an alarm.
4b
Detection of tank leaks during quiet periods (the liquid level changes while the tank is not dispensing fuel).
Class 5
In this system monitoring wells with installed sensors are located around the tank site. The sensors detect a leak from the installation.
As in case of class 4, the product will always be released to the environment before the leak is detected.
Leak detection standards in the USA
In the USA, the Environmental Protection Agency (EPA) requires owners and operators detect releases from their UST systems. EPA allows three categories of release detection: interstitial, internal, and external. These three categories include seven release detection methods.
Interstitial method – secondary containment with interstitial monitoring; secondary containment and under-dispenser containment
Internal methods – automatic tank gauging (ATG) systems; statistical inventory reconciliation (SIR); continuous in-tank leak detection
External method – monitoring for vapors in the soil; monitoring for liquids on the groundwater
Leak detection methods
Automatic Tank Gauging (ATG) – the basic function of the system is to monitor the fuel level in the tanks continuously to see if the tank is leaking. A probe installed in the tank is linked electronically to a nearby control device, where the received data (product level and temperature) are recorded and automatically analyzed. These systems automatically calculate the changes in product volume that can indicate a leaking tank.
The ATG must be operated in one of the following modes:
Inventory mode – activities of an in-service tank together with deliveries are recorded.
Test mode – the test is performed when the tank is shut down and there is no dispensing or delivery. The product level and temperature are measured for at least one hour. However, some systems, known as continuous ATGS, do not require the tank to be taken out of service to perform a test.
There are methods combining automatic tank gauges with statistical inventory reconciliation where gauge provides liquid level and temperature data to a computer running SIR software, which performs the analysis to detect leaks.
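A hedged Python sketch of the test-mode calculation (the expansion coefficient, threshold and readings are assumed typical values for illustration, not requirements from any standard):
BETA = 0.00069        # approx. volumetric thermal expansion of gasoline, per °F
THRESHOLD_GPH = 0.2   # leak rate treated as significant (gal/h), assumed

def compensated(volume_gal, temp_f, ref_temp_f=60.0):
    # correct an observed volume back to the reference temperature
    return volume_gal * (1.0 - BETA * (temp_f - ref_temp_f))

def static_test(v_start, t_start, v_end, t_end, hours=1.0):
    rate = (compensated(v_end, t_end) - compensated(v_start, t_start)) / hours
    return rate < -THRESHOLD_GPH   # True means a suspected leak

print(static_test(8000.0, 70.0, 7999.0, 70.0))   # True: about 1 gal/h loss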
Statistical Inventory Reconciliation (SIR)
SIR was born in the early 1980s. In SIR methods statistical techniques are applied to inventory, delivery and dispensed data collected over time and are used to determine whether or not a tank system is leaking.
On a regular basis, information about the current tank level and complete records of withdrawals from and deliveries to the UST are processed by a computer program that performs a statistical analysis of the received data.
Replacing simple arithmetic with appropriate statistical procedures allows the leak detection capability of inventory reconciliation to be considerably improved. SIR vendors must demonstrate that they can detect leaks of 0.2 gallons per hour in order to be acceptable as a monthly leak detection method.
Such a solution detects not only tank leakage but also possible theft, over-dispensing or short deliveries.
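A minimal Python sketch of the reconciliation arithmetic (real SIR products apply far more elaborate statistics; all figures here are invented):
def daily_variances(opening, days):
    # days: list of (delivery_gal, sales_gal, measured_stock_gal)
    book = opening
    variances = []
    for delivery, sales, measured in days:
        book = book + delivery - sales       # expected (book) inventory
        variances.append(measured - book)    # negative values mean missing product
        book = measured                      # restart from the gauged figure
    return variances

days = [(0, 210, 4786), (0, 195, 4589), (5000, 240, 9345)]
vs = daily_variances(5000.0, days)
print(vs, sum(vs))   # a persistently negative sum hints at a leak or theft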
Vapour Monitoring
Vapour Monitoring detects fumes from leaked product in the soil around the leaked tank. It can be categorised into 2 types.
Active Monitoring, in which special tracer chemicals added to the UST are detected.
Passive Monitoring measures product vapours in the soil around the UST.
Special monitoring wells or sampling points must be placed in the tank backfill. A minimum of two wells is recommended for a single tank excavation. Three or more wells are recommended for an excavation with two or more tanks.
The equipment used can either analyse gathered vapour immediately or only gather a sample, which is then analysed in the laboratory.
The system is not inherently safe - by the time the vapor sensors go to alarm, the contamination has likely already occurred.
Interstitial Monitoring
The method requires secondary containment; this can be a double wall of the UST, where the outer tank wall provides a barrier between the inner tank and the environment.
Interstitial methods include the use of hydrocarbon-sensitive sensor cables or probes connected to a monitoring console. Once hydrocarbons are detected, an alarm goes off.
The other method is vacuum monitoring, in which a sensor monitors the interstitial space of the tank; in case of a leak, the vacuum of the space begins to change.
It is also possible to partially fill the interstitial space of the tank with a monitoring fluid (brine or glycol solutions). If the level of the fluid changes, a leak may be present.
Monitoring for Contamination in Groundwater
Monitoring wells are placed close to the UST and allow continuous measurements for leaked product. This method detects the presence of liquid product floating on the groundwater.
The wells can be monitored periodically (at least once every 30 days) with hand-held equipment or with permanently installed monitoring devices.
This method cannot be used at sites where groundwater is more than 20 feet below the surface and the subsurface soil or backfill material (or both) consists of gravels, coarse to medium sands, or other similarly permeable materials.
A minimum of two wells is recommended for a single tank excavation. Three or more wells are recommended for an excavation with two or more tanks.
Product is released to the environment before a leak is detected.
Manual Tank Gauging
The method requires keeping the tank undisturbed (no liquid is added or removed) for a designated period (e.g. 36 hours). The length of the testing period depends on the size of the tank and whether the method is used alone or in combination with tank tightness testing. During this period the contents of the tank are measured manually twice, at the beginning and at the end of the period.
Significant changes in the volume of the tank’s contents over the test period can indicate a possible leak.
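A minimal Python sketch of the pass/fail arithmetic (the 10-gallon test standard is an assumed value; actual standards depend on tank size and test duration):
def manual_gauge_test(start_gal, end_gal, standard_gal=10.0):
    # compare the change over the undisturbed period with the test standard
    change = abs(end_gal - start_gal)
    return "fail (possible leak)" if change > standard_gal else "pass"

print(manual_gauge_test(1001.0, 988.0))   # fail (possible leak)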
References
Fuels | Tank leak detection | Chemistry | 1,544 |
40,799,331 | https://en.wikipedia.org/wiki/Gyratory%20equipment | Gyratory equipment, used in mechanical screening and sieving, is based on a circular motion of the machine. Unlike other methods, the gyratory screen operates in a gentler manner and is better suited to handling fragile materials, enabling it to produce finer products. This method is applicable to both wet and dry screening.
A distinct difference from other techniques is that the gyratory motion applied here depends on eccentric weights instead of vibrations; these weights can be varied based on individual process requirements.
History
In the early 1930s, most vibratory separators had a rectangular or square design employing simple reciprocating movement. After the introduction of machines utilizing gyratory motion with orbital movements, there was a huge change in the machinery industry due to the much greater screen area usage and capacity per unit mesh area.
Design
The gyratory equipment contains screen decks stacked on top of each other, with the coarsest screen on top and the finest below. The feed is inserted from the top, and the gyratory motion drives the penetration of particles into the next deck through the screen openings.
Casings are inclined at relatively low angles (< 15°) to the horizontal plane, with gyrations occurring in the vertical plane. The eccentric masses can be varied such that an increase in the top eccentric mass leads to an increase in horizontal throw, promoting the discharge of oversize materials. An increase in the bottom eccentric mass boosts the material turnover on the screen surface, maximizing the quantity of undersize-material penetration. Oversize materials are discharged via a tangential outlet.
The option to select the number of decks enables gyratory equipment to accurately separate materials containing particles that are very close in size. This advantage is unrivalled and proves to be significant in the powder processing industry where fine materials are involved. High separating efficiency and ease of maintenance put gyratory screening ahead of other processes in terms of product quality.
Existing gyratory equipment designs are already on the market, with more to come as development continues. Recent studies have shown that potential improvements are available for a cheaper and more effective separation process.
Applications
Common applications include separation used in the process industry, food industry, chemical industry and pharmaceuticals. This includes screening, classification, sifting, fiber recovery, filtration, and scalping. Gyratory screening is capable of separating finer materials as compared to other methods, and is therefore more suitable to treat fragile materials. Several applications in respective industries are shown in the table below.
General and industrial heavy duty models are available for gyratory equipment, with wooden frames for general models aiming to save cost. Industrial heavy duty models are constructed with carbon steel or stainless steel. Screen capacities vary with model size over a huge range to satisfy individual application requirements such as material size, bulk density, moisture contamination, etc. Models have up to seven decks with screens up to 325 mesh, allowing accurate separations of the finest materials. This feature comes in handy in the powder processing industry where fine powders with relatively close sizes are involved. Screen openings for the different decks must be calculated accurately to ensure accurate separation.
General models, built with less heavily reinforced wooden frames, are used for applications involving materials with distinct differences in size. An example is the removal of impurities from wood chips for biomass fuel production. In this case, the desired product is discharged at the coarsest screen, leaving smaller impurities to sink to the bottom frames. These models are selected for more economical purposes and are less common.
Comparison to other methods
Advantages
Low running cost
The low amount of power required to run a gyratory screener enables an overall low cost of operation for this machine. This is due to the relatively lower energy required for gyratory motion compared to vibrating a massive frame. The low running cost as well as the low purchasing cost of gyratory equipment make it one of the more commonly used machines for solid-solid mechanical separation.
Ideal for multi-fraction separation
As a gyratory screening machine employs the use of smaller stacked screen frames, the screens can be accurately placed to the precise requirements of each separation. This puts a gyratory screener at an advantage over a number of other mechanical screening devices, as many other devices would require the use of additional equipment to cope with a different type of feed.
Flexible range of applications
A gyratory screener can be used in many situations, regardless of whether the solid-solid mixture to be separated consists of a binary mixture, or a multi-fraction mixture. This is because the flexibility of usage of the gyratory sifter screens eliminates the need for excess screen materials, cleaners or other forms of additional apparatus.
Good efficiency and quality of separation
The lack of vertical motion in the mechanism of a gyratory sifter, coupled with its relatively gentle motion enables a higher accuracy in the separation of materials in the solid-solid mixture. The longer stroke involved in gyratory machines allows the finer particles to settle down and spread out. This, coupled with the horizontal motion used maximises the opportunity for the finer particles to pass, thus enhancing the quality and efficiency of separation.
Easily maintained
Most modern day gyratory screening machines employ the use of screen cleaners, which act to prevent any clogging of the gyratory sifters. The motion and mechanism of a gyratory screener enables more energy to be imparted onto the cleaners, thus actively preventing the occurrence of build-up on the gyratory sifters. In the long run, the prevention of build-up in the sifters would enable the gyratory screener to have a longer lifespan.
Low screen blinding
Vertical vibration imparted by the bottom eccentric weight significantly reduces screen blinding. Additional ball trays and Kleen rings can reduce screen blinding further.
Limitations
Large amount of floor space
The large area of the gyratory screen requires a large floor space to be reserved. This may cause logistical problems in cases where space needs to be optimised and efficiently used.
Relatively difficult to operate
The gyratory sifter has a complex flow pattern and a drive mechanism more complex than that of most other sifters. This could pose problems, as the complexity of the operating mechanism makes the unit harder to operate.
Susceptible to lumps and agglomerates in the feed
The gyratory sifter operates at a gentle pace, and has a non-robust motion during operation. The gentle motion involved will not break up any lumps or agglomerates found in the feed. Thus, the lumps in the feed would be discarded in the top frame discharge, along with other large particles.
Operating characteristics
Gyratory equipment is divided into a top and a bottom unit. The unit on top consists of screening frames supported by rugged springs attached to the circular base, which allows free vibration of the top unit. Secondary support springs are added for heavy duty operation, preventing the vibration of the top unit from reaching the floor. The base of the machine (bottom unit) consists of top and bottom eccentric weights attached to a heavy duty motor. Minimum energy is consumed with the installation of double extended shafts on the motors, which are attached to both the top and bottom eccentric weights. Screen decks can be mounted one on top of another within the machine assembly, with spacing frames connected together via stainless steel quick release clamps.
There is a large range of gyratory equipment designs available, with possible design characteristics including:
Feed rates of 1–50,000 kg/h
Screen layers up to 7
Operating frequency of rotation at 700–1450 rpm
Screening area of 1800–24,800 cm2
Screen diameter of 600–1500 mm
Power consumption of 5.5–7.5 kW
Mesh openings of 20 μm – 20 mm
Construction material
Gyratory equipment is capable of handling feeds of 500 tons/(h·m2) with separation efficiency up to 98% for dry processes, with feed materials to be separated not below a diameter of 4 μm.
Wet processes, on the other hand, can only manage a relatively high efficiency (85%) if the moisture content is above 70%.
Eccentric weights can be varied accordingly to obtain desired ratio of coarse vs fine products.
Assessment of characteristics
The separation efficiency factor is calculated from the fraction of undersize material in the oversize stream and the mass of oversize material in the feed.
However, a correction coefficient must be included when multiple decks are involved, owing to the error carried forward at every deck. The efficiency factor is multiplied by the correction factor to obtain a more accurate estimate.
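A minimal Python sketch of this bookkeeping (it uses a simple efficiency definition, undersize recovered as a fraction of undersize fed, and an invented per-deck correction factor; the actual coefficients depend on the machine):
def overall_efficiency(undersize_in_feed, undersize_recovered, decks,
                       correction_per_deck=0.98):
    # apply the per-deck correction for the error carried forward at each deck
    single_deck = undersize_recovered / undersize_in_feed
    return single_deck * correction_per_deck ** (decks - 1)

print(overall_efficiency(400.0, 380.0, decks=3))   # about 0.91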
The degree of removal of wet processes is lower than that of their dry counterparts, which is explained by the change in the physico-mechanical properties of the body.
Observed trends show that feed materials with a moisture content above 70% are better suited to gyratory screening.
Both the top and bottom eccentric weights play a big role in setting the ratio of coarse to fine products. The kinetic moment produced by the additional eccentric weights changes the oscillation swing, hence producing outputs of different rates and compositions. Increasing the upper eccentric weight promotes discharge of the coarse material; increasing the lower eccentric weight maximizes the quantity discharged below.
The kinetic moment is linked to the eccentric weights through relations involving the lower or upper wheel position (rad), the phase angle (rad), the mass of the wheel, the motor shaft input speed (rpm) and the force transfer coefficient.
Gyratory equipment is unsuitable only if two or more of the materials to be separated are finer than 4 μm, a limit which varies with machine dimensions; the proposed value of 4 μm was calculated using the dimensions of the largest available model with the largest possible gyration radius. The critical velocity, which the materials cannot exceed without the operation failing, depends on the length of the side of the aperture and the particle diameter.
Gyration inertia formulae allow the calculations for different models with different dimensions.
Design heuristics
Typical gyratory equipment operation revolves around the eccentric weights and screen frames. Materials are distributed along the screen surface and undersize materials are allowed to penetrate the screen. The following rules of thumb should be observed for high separation efficiency and smooth operation:
Slope angle of screen: 10–12° is chosen for the highest separation efficiency at the desired screening capacity.
Area of screen: A larger screen area indicates a higher screening capacity. The largest area available is approximately 25,000 cm2; a larger area would cause severely uneven distribution of material along the screen surface.
Number of screen layers: More layers of screening are required for mixtures of materials with close sizes. Screens come in various types, dimensions and materials. Fewer decks (< 4) are preferred to maintain quality output.
Material of frames: Wooden frames are chosen for economical operations with simple handling. Carbon steel or stainless steel is selected for heavy duty operations producing finer products.
Operating frequency: Higher revolutions per minute give a more rapid rate of separation at the expense of the greater energy required, which is provided by a motor.
Feed points: A single feed port is preferred over multiple feed ports: despite the higher separation coefficient, multiple ports involve far more complicated control.
Pre-treatments: It is recommended to prescreen oversize materials larger than a 5 mm aperture size to prevent damage to the equipment. Simple sieving equipment is sufficient to screen materials of such large sizes.
Post-treatment and waste production
Post-treatment
Screening can be carried out on a dry or wet basis. Wet screening often requires post-treatment, namely drying, as preparation for the downstream process. In most cases drying is used in the final stage of the process; however, this can vary with the needs of the process. Drying involves the removal of water or other solvents, mostly by vaporization with the aid of a heat supply. Thus, the efficiency of the heat supply equipment plays an important role in optimizing the drying process.
Furthermore, this treatment can be applied to the waste stream prior to disposal. Drying greatly reduces the total mass and volume of the solid waste, which simplifies handling and reduces transportation cost.
The list below states the examples of dryers available for industrial process:
Rotary dryers
Tunnel dryers
Tray or shelf dryers
Drum dryers
Spray dryers
Waste production
A gyratory screener separates solids from liquids or other dry solids according to particle size. Screening is a crucial pre-treatment step in several industries, such as the chemical, food, mining, pharmaceutical, and waste industries.
Waste streams arise in several processes commonly used in different industries. In the chemical industry, for example, in powdered detergent production a gyratory screener filters out the oversized granules found in the end product to improve its appearance and dissolution rate. Citrus juice production is an example from the food industry: multi-layered gyratory screeners eliminate the wastes in several stages, leaving the juice sacs that are the desirable element for producing citrus juice; screening here significantly increases product quality. In ore processing, a gyratory screener is used after crushing to filter out oversized ore particles, which can be regarded as waste or recycled back into the process. Similarly, in the pharmaceutical industry, gyratory screeners remove undissolved particles from liquid pharmaceuticals, or fine powder that sticks to the capsule surface, to ease capsule stamping. As for wastewater treatment, removal of coarse solid wastes from the wastewater stream serves chiefly to protect downstream equipment from damage, while fine solid waste removal acts as pre-treatment for the process, more specifically primary clarification. The overall screening process enhances system performance, minimizes cost and reduces the need to clean the filters in other equipment.
The waste materials usually travel through a discharge chute for disposal, depending on the design of the gyratory screener. There is at least one outlet for every deck of the gyratory screener.
References
Industrial processes
Mining equipment | Gyratory equipment | Engineering | 2,883 |
12,017,661 | https://en.wikipedia.org/wiki/Antibiosis | Antibiosis, also referred to as antagonism, is a process of biological interaction between two or more organisms that is detrimental to at least one of them; it can also be an antagonistic association between an organism and the metabolic substances produced by another. Antibiosis can occur through a variety of mechanisms, with "injury, death, reduced longevity, or reduced reproduction of the pest" being common. The process of antibiosis is either reversible or irreversible, and can be caused by the production of volatile organic compounds by plant-growth-promoting rhizobacteria (PGPR). Antibiosis is one of two forms of amensalism, the other form being competition. Primary examples of antibiosis include "antibacterial activity against bacteria, fungus, nematodes, insects, and occasionally against plants and algae".
Examples of Antibiosis
Antibiosis in biotech and medical treatment
The study of antibiosis and its role in antibiotics has led to the expansion of knowledge in the field of microbiology. Molecular processes such cell wall synthesis and recycling, for example, have become better understood through the study of how antibiotics affect beta-lactam development through the antibiosis relationship and interaction of the particular drugs with the bacteria subjected to the compound. For example, the Penicillium fungi responds to bacterial infections by producing penicillin, which is toxic to bacteria and is commonly used in medical settings as an effective treatment for bacterial infections. Penicillin belongs to the beta-lactam antibiotic class.
Host plant resistance through antibiosis
Antibiosis is typically studied in host plant populations and extends to the insects which feed upon them. Antibiosis can be seen in certain vegetables, as antibiosis mechanisms have been found in Brassica species to protect against cabbage whitefly.
"Antibiosis resistance affects the biology of the insect so pest abundance and subsequent damage is reduced compared to that which would have occurred if the insect was on a susceptible crop variety. Antibiosis resistance often results in increased mortality or reduced longevity and reproduction of the insect."
During a study of antibiosis, it was determined that the key to achieving effective antibiosis relies on the organism being sessile. "When you give antibiotic-producing bacteria a structured medium, they affix to substrate, grow clonally, and produce a 'no man's land,' absent competitors, where the antibiotics diffuse outward." Antibiosis is most effective when resources are neither plentiful nor sparse, performing best at intermediate points on the scale of resource availability.
Other examples
The black walnut, Juglans nigra, produces a secretion called juglone, which is toxic to a variety of flowers, herbaceous plants, and field crops. This toxic secretion creates an area surrounding the black walnut tree that is uninhabitable to most species.
In many environments, antibiosis can promote mutualisms and/or competition between species in an ecosystem. Attine ants provide an example of a more complex antibiosis mechanism. Attine ants maintain cultivations of Leucocoprinus fungi as their primary source of consumption, however, a parasitic fungal genus, Escovopsis, feeds on Leucocoprinus and disrupts the food system of the ants. In response to this, attine ants encourage growth of the Pseudonocardia actinomycete, as it produces an antimicrobial compound that suppresses the parasitic Escovopsis. The attine ants, Leucocoprinus fungi and Pseudonocardia actinomycete all benefit in this interaction, however it is detrimental to the Escovopsis fungi.
See also
Antibiotic
Biological pest control
Biotechnology
Symbiosis
Plant defense against herbivory
Secondary metabolite
Competition (biology)
References
Further reading
External links
Biological interactions
Antibiotics
522,176 | https://en.wikipedia.org/wiki/Imine | In organic chemistry, an imine ( or ) is a functional group or organic compound containing a carbon–nitrogen double bond (). The nitrogen atom can be attached to a hydrogen or an organic group (R). The carbon atom has two additional single bonds. Imines are common in synthetic and naturally occurring compounds and they participate in many reactions.
Distinction is sometimes made between aldimines and ketimines, derived from aldehydes and ketones, respectively.
Structure
In imines the five core atoms (C2C=NX, ketimine; and C(H)C=NX, aldimine; X = H or C) are coplanar. Planarity results from the sp2-hybridization of the mutually double-bonded carbon and the nitrogen atoms. The C=N distance is 1.29–1.31 Å for nonconjugated imines and 1.35 Å for conjugated imines. By contrast, C−N distances in amines and nitriles are 1.47 and 1.16 Å respectively. Rotation about the C=N bond is slow. Using NMR spectroscopy, both E and Z isomers of aldimines have been detected. Owing to steric effects, the E isomer is favored.
Nomenclature and classification
The term "imine" was coined in 1883 by the German chemist Albert Ladenburg.
Usually imines refer to compounds with the general formula R2C=NR, as discussed below. In the older literature, imine refers to the aza-analogue of an epoxide. Thus, ethylenimine is the three-membered ring species aziridine C2H4NH. The relationship of imines to amines having double and single bonds can be correlated with imides and amides, as in succinimide vs acetamide.
Imines are related to ketones and aldehydes by replacement of the oxygen with an NR group. When R = H, the compound is a primary imine, when R is hydrocarbyl, the compound is a secondary imine. If this group is not a hydrogen atom, then the compound can sometimes be referred to as a Schiff base. When R3 is OH, the imine is called an oxime, and when R3 is NH2 the imine is called a hydrazone.
A primary imine in which C is attached to both a hydrocarbyl and a H (derived from an aldehyde) is called a primary aldimine; a secondary imine with such groups is called a secondary aldimine. A primary imine in which C is attached to two hydrocarbyls (derived from a ketone) is called a primary ketimine; a secondary imine with such groups is called a secondary ketimine.
N-Sulfinyl imines are a special class of imines having a sulfinyl group attached to the nitrogen atom.
Synthesis of imines
Carbonyl-amine condensation
Imines are typically prepared by the condensation of primary amines and aldehydes. Ketones undergo similar reactions, but less commonly than aldehydes. In terms of mechanism, such reactions proceed via nucleophilic addition giving a hemiaminal -C(OH)(NHR)- intermediate, followed by elimination of water to yield the imine (see alkylimino-de-oxo-bisubstitution for a detailed mechanism). The equilibrium in this reaction usually favors the carbonyl compound and amine, so that azeotropic distillation or use of a dehydrating agent, such as molecular sieves or magnesium sulfate, is required to favor imine formation. In recent years, several reagents such as tris(2,2,2-trifluoroethyl)borate [B(OCH2CF3)3], pyrrolidine or titanium ethoxide [Ti(OEt)4] have been shown to catalyse imine formation.
Rarer than primary amines is the use of ammonia to give a primary imine. In the case of hexafluoroacetone, the hemiaminal intermediate can be isolated.
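As a hedged Python sketch of the aldehyde-amine condensation above, using the open-source RDKit toolkit (the SMARTS pattern is a simplified model that skips the hemiaminal intermediate and the dehydration step):
# Sketch of aldehyde + primary amine -> imine (simplified reaction template).
from rdkit import Chem
from rdkit.Chem import AllChem

# The carbonyl carbon loses its oxygen and gains a double bond to the nitrogen.
rxn = AllChem.ReactionFromSmarts('[CX3:1]=[OX1].[NX3;H2:2]>>[C:1]=[N:2]')

benzaldehyde = Chem.MolFromSmiles('O=Cc1ccccc1')
methylamine = Chem.MolFromSmiles('NC')

products = rxn.RunReactants((benzaldehyde, methylamine))
imine = products[0][0]
Chem.SanitizeMol(imine)          # tidy up valences on the new molecule
print(Chem.MolToSmiles(imine))   # the N-methyl aldimine, PhCH=NCH3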
From nitriles
Primary ketimines can be synthesized via a Grignard reaction with a nitrile. This method is known as Moureu-Mignonac ketimine synthesis. For example, benzophenone imine can also be synthesized by addition of phenylmagnesium bromide to benzonitrile followed by careful hydrolysis (lest the imine be hydrolyzed):
C6H5CN + C6H5MgBr → (C6H5)2C=NMgBr
(C6H5)2C=NMgBr + H2O → (C6H5)2C=NH + MgBr(OH)
Specialized methods
Several other methods exist for the synthesis of imines.
Reaction of organic azides with metal carbenoids (produced from diazocarbonyl compounds).
The reaction of iminophosphoranes and organic azides in an Aza-Wittig-reaction.
Condensation of carbon acids with nitroso compounds.
The rearrangement of trityl N-haloamines in the Stieglitz rearrangement.
By reaction of alkenes with hydrazoic acid in the Schmidt reaction.
By reaction of a nitrile, hydrochloric acid, and an arene in the Hoesch reaction.
Multicomponent synthesis of 3-thiazolines in the Asinger reaction.
Thermal decomposition of oximes.
Reactions
Hydrolysis
The chief reaction of imines, often undesirable, is their hydrolysis back to the amine and the carbonyl precursor.
R2C=NR' + H2O ⇌ R2C=O + R'NH2
Precursors to heterocycles
Imines are widely used as intermediates in the synthesis of heterocycles.
Aromatic imines react with an enol ether to a quinoline in the Povarov reaction.
Imines react, thermally, with ketenes in [2+2] cycloadditions to form β-lactams in the Staudinger synthesis. Several variants have been described.
Imines react with dienes in the imine Diels–Alder reaction to give a tetrahydropyridine.
Tosylimines react with α,β-unsaturated carbonyl compounds to give allylic amines in the aza-Baylis–Hillman reaction.
Acid-base reactions
Somewhat like the parent amines, imines are mildly basic and reversibly protonate to give iminium salts:
R2C=NR' + H+ ⇌ [R2C=NHR']+
Alternatively, primary imines are sufficiently acidic to allow N-alkylation, as illustrated with benzophenone imine:
(C6H5)2C=NH + CH3Li → (C6H5)2C=NLi + CH4
(C6H5)2C=NLi + CH3I → (C6H5)2C=NCH3 + LiI
Lewis acid-base reactions
Imines are common ligands in coordination chemistry. Particularly popular examples are found with Schiff base ligands derived from salicylaldehyde, the salen ligands. Metal-catalyzed reactions of imines proceed through such complexes. In classical coordination complexes, imines bind metals through nitrogen. For low-valent metals, η2-imine ligands are observed.
Nucleophilic additions
Very much as with ketones and aldehydes, primary imines are susceptible to attack by carbanion equivalents. The method allows for the synthesis of secondary amines:
R2C=NR' + R"Li → R2R"CN(Li)R'
R2R"CN(Li)R' + H2O → R2R"CNHR' + LiOH
This can be expanded to include enolisable carbons in the Mannich reaction, which is a straightforward and commonly used approach for producing β-amino-carbonyl compounds.
Imine reductions
Imines are reduced via reductive amination. An imine can be reduced to an amine via hydrogenation for example in a synthesis of m-tolylbenzylamine:
Other reducing agents are lithium aluminium hydride and sodium borohydride.
The asymmetric reduction of imines has been achieved by hydrosilylation using a rhodium-DIOP catalyst. Many systems have since been investigated.
Owing to their enhanced electrophilicity, iminium derivatives are particularly susceptible to reduction to the amines. Such reductions can be achieved by transfer hydrogenation or by the stoichiometric action of sodium cyanoborohydride. Since imines derived from unsymmetrical ketones are prochiral, their reduction defines a route to chiral amines.
Polymerisation
Unhindered aldimines tend to cyclize, as illustrated by the condensation of methylamine and formaldehyde, which gives the hexahydro-1,3,5-triazine.
Imine polymers (polyimines) can be synthesised from multivalent aldehydes and amines. The polymerisation reaction proceeds directly when the aldehyde and amine monomers are mixed together at room temperature. In most cases, small amounts of solvent may still be required. Polyimines are particularly interesting materials because of their application as vitrimers. Owing to the dynamic covalent nature of the imine bonds, polyimines can be recycled relatively easily. Furthermore, polyimines are known for their self-healing behaviour.
Miscellaneous reactions
Akin to pinacol couplings, imines are susceptible to reductive coupling leading to 1,2-diamines.
Imines are oxidized with meta-chloroperoxybenzoic acid (mCPBA) to give oxaziridines.
Imines are intermediates in the alkylation of amines with formic acid in the Eschweiler-Clarke reaction.
A rearrangement in carbohydrate chemistry involving an imine is the Amadori rearrangement.
A methylene transfer reaction of an imine by an unstabilised sulphonium ylide can give an aziridine system.
Imines react with dialkyl phosphites in the Pudovik reaction and the Kabachnik–Fields reaction.
Biological role
Imines are common in nature. The pyridoxal phosphate-dependent enzymes (PLP enzymes) catalyze myriad reactions involving aldimines (or Schiff bases). Cyclic imines are also substrates for many imine reductase enzymes.
See also
Enamine
Schiff base
Carboximidate
Oxazolidine
Other functional groups with a C=N double bond: oximes, hydrazones
Other functional groups with a C≡N triple bond: nitriles, isonitriles
References
Functional groups | Imine | Chemistry | 2,325 |
1,025,345 | https://en.wikipedia.org/wiki/Defense%20Message%20System | The Defense Message System or Defense Messaging System (DMS) is a deployment of secure electronic mail and directory services in the United States Department of Defense. DMS was intended to replace the AUTODIN network, and is based on implementations of the OSI X.400 mail, X.500 directory and X.509 public key certificates, with several extensions to meet the specific needs of military messaging.
DMS is sometimes operated in conjunction with third-party products, such as the Navy's DMDS (Defense Message Dissemination System), a profiling system that takes a message and forwards it, based on message criteria, to parties that are required to take action on a message. This combination has met with success with the upper echelons of command, since parties do not have to wait for messaging center operators to route the messages to the proper channels for action. The Navy also uses Navy Regional Enterprise Messaging System (NREMS). NREMS uses an AMHS backend to send secure Organizational Messages via a web interface to Naval commands.
The US Army's version of DMS is run solely on an AMHS platform both for CONUS and OCONUS operations. The Pentagon Telecommunications Center (PTC) is the hub for CONUS operations and there are several AMHS sites OCONUS for strategic messaging. In the tactical environment the Army deploys an independent Tactical Message Systems (TMS) that is also built on an AMHS platform for secure messaging capability in austere environments when communications with OCONUS AMHS sites are unavailable.
DMS has been coordinated by the Defense Information Systems Agency (DISA), and testing began in 1995. DMS has many third-party vendor products, such as DMDS, DMDS Proxy MR, CP-XP (the CommPower XML Portal), AMHS (Automated Message Handling System), MMHS, and CMS 1.0.
See also
Defense Switched Network
GOSIP
External links
Defense Message System at Defense Information Systems Agency
Defense Messaging System at Joint Interoperability Test Command
Defense Message System at GlobalSecurity.org
Navy Moving Towards Web-based Naval Messaging
PM DMS-Army streamlines tactical message system, receives defense acquisition executive recognition
Military communications
Telecommunications equipment of the Cold War
Email | Defense Message System | Engineering | 463 |
12,155,770 | https://en.wikipedia.org/wiki/Green%20measure | In mathematics — specifically, in stochastic analysis — the Green measure is a measure associated to an Itō diffusion. There is an associated Green formula representing suitably smooth functions in terms of the Green measure and first exit times of the diffusion. The concepts are named after the British mathematician George Green and are generalizations of the classical Green's function and Green formula to the stochastic case using Dynkin's formula.
Notation
Let X be an Rn-valued Itō diffusion satisfying an Itō stochastic differential equation of the form
Let Px denote the law of X given the initial condition X0 = x, and let Ex denote expectation with respect to Px. Let LX be the infinitesimal generator of X, i.e.
Let D ⊆ Rn be an open, bounded domain; let τD be the first exit time of X from D:
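The displayed equations were lost in extraction; a standard reconstruction, following the usual Itō-diffusion conventions (drift b, diffusion matrix σ, Brownian motion B), is:

$$\mathrm{d}X_{t} = b(X_{t})\,\mathrm{d}t + \sigma(X_{t})\,\mathrm{d}B_{t},$$
$$L_{X}f(x) = \sum_{i} b_{i}(x)\,\frac{\partial f}{\partial x_{i}}(x) + \frac{1}{2}\sum_{i,j}\big(\sigma(x)\sigma(x)^{\top}\big)_{ij}\,\frac{\partial^{2} f}{\partial x_{i}\,\partial x_{j}}(x),$$
$$\tau_{D} = \inf\{\,t \geq 0 \mid X_{t} \notin D\,\}.$$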
The Green measure
Intuitively, the Green measure of a Borel set H (with respect to a point x and domain D) is the expected length of time that X, having started at x, stays in H before it leaves the domain D. That is, the Green measure of X with respect to D at x, denoted G(x, ⋅), is defined for Borel sets H ⊆ Rn by
or for bounded, continuous functions f : D → R by
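A reconstruction of the lost displays, matching the verbal definition above (χH denotes the indicator function of H):

$$G(x, H) = \mathbb{E}^{x}\!\left[\int_{0}^{\tau_{D}} \chi_{H}(X_{t})\,\mathrm{d}t\right], \qquad \int_{D} f(y)\,G(x, \mathrm{d}y) = \mathbb{E}^{x}\!\left[\int_{0}^{\tau_{D}} f(X_{t})\,\mathrm{d}t\right],$$

and, in the Brownian case discussed next, G(x, H) = ∫H G(x, y) dy.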
The name "Green measure" comes from the fact that if X is Brownian motion, then
where G(x, y) is Green's function for the operator LX (which, in the case of Brownian motion, is Δ, where Δ is the Laplace operator) on the domain D.
The Green formula
Suppose that Ex[τD] < +∞ for all x ∈ D, and let f : Rn → R be of smoothness class C2 with compact support. Then
In particular, for C2 functions f with support compactly embedded in D,
The proof of Green's formula is an easy application of Dynkin's formula and the definition of the Green measure:
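A reconstruction of the lost displays; this is the standard statement obtained from Dynkin's formula (see Øksendal, Section 9, cited below):

$$f(x) = \mathbb{E}^{x}\big[f(X_{\tau_{D}})\big] - \int_{D} L_{X}f(y)\,G(x, \mathrm{d}y),$$

and, for C2 functions f with support compactly embedded in D,

$$f(x) = -\int_{D} L_{X}f(y)\,G(x, \mathrm{d}y).$$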
References
(See Section 9)
Measures (measure theory)
Stochastic differential equations | Green measure | Physics,Mathematics | 429 |
40,159,314 | https://en.wikipedia.org/wiki/Relativistic%20chaos | In physics, relativistic chaos is the application of chaos theory to dynamical systems described primarily by general relativity, and also special relativity.
Barrow (1982) showed that the Einstein equations exhibit chaotic behaviour and modelled the Mixmaster universe as a dynamical system. Later work showed that relativistic chaos is coordinate invariant (Motter 2003).
See also
Quantum chaos
References
Chaos theory
General relativity
Mathematical physics | Relativistic chaos | Physics,Mathematics | 84 |
514,534 | https://en.wikipedia.org/wiki/Canonical%20transformation | In Hamiltonian mechanics, a canonical transformation is a change of canonical coordinates that preserves the form of Hamilton's equations. This is sometimes known as form invariance. Although Hamilton's equations are preserved, it need not preserve the explicit form of the Hamiltonian itself. Canonical transformations are useful in their own right, and also form the basis for the Hamilton–Jacobi equations (a useful method for calculating conserved quantities) and Liouville's theorem (itself the basis for classical statistical mechanics).
Since Lagrangian mechanics is based on generalized coordinates, transformations of the coordinates q → Q do not affect the form of Lagrange's equations and, hence, do not affect the form of Hamilton's equations if the momentum is simultaneously changed by a Legendre transformation into Pi = ∂L/∂Q̇i,
where (Q, P) are the new co‑ordinates, grouped in canonical conjugate pairs of momenta Pi and corresponding positions Qi, for i = 1, …, N, with N being the number of degrees of freedom in both co‑ordinate systems.
Therefore, coordinate transformations (also called point transformations) are a type of canonical transformation. However, the class of canonical transformations is much broader, since the old generalized coordinates, momenta and even time may be combined to form the new generalized coordinates and momenta. Canonical transformations that do not include the time explicitly are called restricted canonical transformations (many textbooks consider only this type).
Modern mathematical descriptions of canonical transformations are considered under the broader topic of symplectomorphism which covers the subject with advanced mathematical prerequisites such as cotangent bundles, exterior derivatives and symplectic manifolds.
Notation
Boldface variables such as q represent a list of N generalized coordinates that need not transform like a vector under rotation, and similarly p represents the corresponding generalized momentum, e.g.,
q ≡ (q1, q2, …, qN−1, qN), p ≡ (p1, p2, …, pN−1, pN).
A dot over a variable or list signifies the time derivative, e.g., q̇ ≡ dq/dt,
and the equalities are read to be satisfied for all coordinates, for example:
ṗ = −∂H/∂q ⇔ ṗi = −∂H/∂qi (i = 1, …, N).
The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of the products of corresponding components, e.g., p · q ≡ Σk pk qk.
The dot product (also known as an "inner product") maps the two coordinate lists into one variable representing a single numerical value. The coordinates after transformation are similarly labelled with Q for transformed generalized coordinates and P for transformed generalized momentum.
Conditions for restricted canonical transformation
Restricted canonical transformations are coordinate transformations where the transformed coordinates Q and P do not have explicit time dependence, i.e., Q = Q(q, p) and P = P(q, p). The functional form of Hamilton's equations is
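The displayed equations did not survive extraction; the functional form being referred to is the standard one:

$$\dot{q}_{i} = \frac{\partial H}{\partial p_{i}}, \qquad \dot{p}_{i} = -\frac{\partial H}{\partial q_{i}},$$

and a transformation (q, p) → (Q, P) is canonical when the new variables obey the same form with respect to a new Hamiltonian K:

$$\dot{Q}_{i} = \frac{\partial K}{\partial P_{i}}, \qquad \dot{P}_{i} = -\frac{\partial K}{\partial Q_{i}}.$$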
In general, a transformation does not preserve the form of Hamilton's equations, but in the absence of time dependence in the transformation, some simplifications are possible. Following the formal definition for a canonical transformation, it can be shown that for this type of transformation the new Hamiltonian (sometimes called the Kamiltonian) can be expressed as K = H + ∂G/∂t, where it differs by the partial time derivative of a function G known as a generator, which reduces to being only a function of time for restricted canonical transformations.
In addition to leaving the form of the Hamiltonian unchanged, it also permits the use of the unchanged Hamiltonian in Hamilton's equations of motion, due to the above form, as:
Although canonical transformation refers to a more general set of transformations of phase space, corresponding to less permissive transformations of the Hamiltonian, the restricted case provides simpler conditions from which results can be further generalized. All of the following conditions, with the exception of the bilinear invariance condition, can be generalized for canonical transformations, including time dependence.
Indirect conditions
Since restricted transformations have no explicit time dependence (by definition), the time derivative of a new generalized coordinate is
where is the Poisson bracket.
Similarly for the identity for the conjugate momentum, Pm using the form of the "Kamiltonian" it follows that:
Due to the form of the Hamiltonian equations of motion,
if the transformation is canonical, the two derived results must be equal, resulting in the equations:
The analogous argument for the generalized momenta Pm leads to two other sets of equations:
These are the indirect conditions to check whether a given transformation is canonical.
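The equations themselves were lost in extraction; the standard set they describe (the so-called direct conditions of Hamiltonian mechanics texts) reads:

$$\left(\frac{\partial Q_{m}}{\partial q_{n}}\right)_{q,p} = \left(\frac{\partial p_{n}}{\partial P_{m}}\right)_{Q,P}, \qquad \left(\frac{\partial Q_{m}}{\partial p_{n}}\right)_{q,p} = -\left(\frac{\partial q_{n}}{\partial P_{m}}\right)_{Q,P},$$
$$\left(\frac{\partial P_{m}}{\partial q_{n}}\right)_{q,p} = -\left(\frac{\partial p_{n}}{\partial Q_{m}}\right)_{Q,P}, \qquad \left(\frac{\partial P_{m}}{\partial p_{n}}\right)_{q,p} = \left(\frac{\partial q_{n}}{\partial Q_{m}}\right)_{Q,P}.$$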
Symplectic condition
Sometimes the Hamiltonian relations are represented as:
Where
and . Similarly, let .
From the relation of partial derivatives, converting the relation in terms of partial derivatives with new variables gives where . Similarly for ,
Due to form of the Hamiltonian equations for ,
where can be used due to the form of Kamiltonian. Equating the two equations gives the symplectic condition as:
The left hand side of the above is called the Poisson matrix of , denoted as . Similarly, a Lagrange matrix of can be constructed as . It can be shown that the symplectic condition is also equivalent to by using the property. The set of all matrices which satisfy the symplectic condition forms the symplectic group. The symplectic conditions are equivalent with the indirect conditions, as they both lead to the equation , which is used in both of the derivations.
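For reference, since the displayed matrices were lost: writing η = (q, p), ε = (Q, P) and M = ∂ε/∂η for the Jacobian of the transformation, the compact form of Hamilton's equations and the symplectic condition are, in the standard convention:

$$\dot{\eta} = J\,\frac{\partial H}{\partial \eta}, \qquad J = \begin{pmatrix} 0 & I_{n} \\ -I_{n} & 0 \end{pmatrix}, \qquad M J M^{\top} = J.$$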
Invariance of the Poisson bracket
The Poisson bracket, which is defined as:
can be represented in matrix form as:
Hence, using the partial derivative relations and the symplectic condition gives:
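A reconstruction of the lost displays (these are the standard definitions, with η and J as above):

$$\{u, v\}_{q,p} = \sum_{i}\left(\frac{\partial u}{\partial q_{i}}\frac{\partial v}{\partial p_{i}} - \frac{\partial u}{\partial p_{i}}\frac{\partial v}{\partial q_{i}}\right) = \left(\frac{\partial u}{\partial \eta}\right)^{\top} J\,\frac{\partial v}{\partial \eta},$$

so that, when MJM⊤ = J, the bracket takes the same value in either set of variables: {u, v}η = {u, v}ε.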
The symplectic condition can also be recovered by taking and which shows that . Thus these conditions are equivalent to symplectic conditions. Furthermore, it can be seen that , which is also the result of explicitly calculating the matrix element by expanding it.
Invariance of the Lagrange bracket
The Lagrange bracket which is defined as:
can be represented in matrix form as:
Using similar derivation, gives:
The symplectic condition can also be recovered by taking and which shows that . Thus these conditions are equivalent to symplectic conditions. Furthermore, it can be seen that , which is also the result of explicitly calculating the matrix element by expanding it.
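A reconstruction of the lost displays (standard definitions, with η and J as above):

$$[u, v]_{q,p} = \sum_{i}\left(\frac{\partial q_{i}}{\partial u}\frac{\partial p_{i}}{\partial v} - \frac{\partial p_{i}}{\partial u}\frac{\partial q_{i}}{\partial v}\right) = \left(\frac{\partial \eta}{\partial u}\right)^{\top} J\,\frac{\partial \eta}{\partial v}.$$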
Bilinear invariance conditions
These set of conditions only apply to restricted canonical transformations or canonical transformations that are independent of time variable.
Consider arbitrary variations of two kinds, in a single pair of generalized coordinate and the corresponding momentum:
The area of the infinitesimal parallelogram is given by:
It follows from the symplectic condition that the infinitesimal area is conserved under canonical transformation:
Note that the new coordinates need not be completely oriented in one coordinate momentum plane.
Hence, the condition is more generally stated as an invariance of the form under canonical transformation, expanded as:
If the above is obeyed for any arbitrary variations, it would be only possible if the indirect conditions are met.
The form of the equation, is also known as a symplectic product of the vectors and and the bilinear invariance condition can be stated as a local conservation of the symplectic product.
Liouville's theorem
The indirect conditions allow us to prove Liouville's theorem, which states that the volume in phase space is conserved under canonical transformations, i.e.,
By calculus, the latter integral must equal the former times the determinant of the Jacobian M, where
Exploiting the "division" property of Jacobians yields
Eliminating the repeated variables gives
Application of the indirect conditions above yields det M = 1.
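A sketch of the lost chain of displays (the standard Liouville argument):

$$\int \mathrm{d}^{n}Q\,\mathrm{d}^{n}P = \int \left|\det M\right|\, \mathrm{d}^{n}q\,\mathrm{d}^{n}p, \qquad M = \frac{\partial(Q, P)}{\partial(q, p)},$$

and taking determinants of the symplectic condition MJM⊤ = J gives (det M)² = 1; in fact det M = 1 for symplectic matrices, so phase-space volume is preserved.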
Generating function approach
To guarantee a valid transformation between (q, p, H) and (Q, P, K), we may resort to a direct generating function approach. Both sets of variables must obey Hamilton's principle. That is, the action integral over the Lagrangians L and L′, obtained from the respective Hamiltonians via an "inverse" Legendre transformation, must be stationary in both cases (so that one can use the Euler–Lagrange equations to arrive at Hamiltonian equations of motion of the designated form):
One way for both variational integral equalities to be satisfied is to have
λ [p · q̇ − H(q, p, t)] = P · Q̇ − K(Q, P, t) + dG/dt.
Lagrangians are not unique: one can always multiply by a constant λ and add a total time derivative dG/dt and yield the same equations of motion (as discussed on Wikibooks). In general, the scaling factor λ is set equal to one; canonical transformations for which λ ≠ 1 are called extended canonical transformations. The total time derivative dG/dt is kept, as otherwise the problem would be rendered trivial and there would be not much freedom for the new canonical variables to differ from the old ones.
Here is a generating function of one old canonical coordinate ( or ), one new canonical coordinate ( or ) and (possibly) the time . Thus, there are four basic types of generating functions (although mixtures of these four types can exist), depending on the choice of variables. As will be shown below, the generating function will define a transformation from old to new canonical coordinates, and any such transformation is guaranteed to be canonical.
The various generating functions and its properties tabulated below is discussed in detail:
Type 1 generating function
The type 1 generating function depends only on the old and new generalized coordinates
To derive the implicit transformation, we expand the defining equation above
Since the new and old coordinates are each independent, the following equations must hold
These equations define the transformation as follows: The first set of equations
define relations between the new generalized coordinates and the old canonical coordinates . Ideally, one can invert these relations to obtain formulae for each as a function of the old canonical coordinates. Substitution of these formulae for the coordinates into the second set of equations
yields analogous formulae for the new generalized momenta in terms of the old canonical coordinates . We then invert both sets of formulae to obtain the old canonical coordinates as functions of the new canonical coordinates . Substitution of the inverted formulae into the final equation
yields a formula for as a function of the new canonical coordinates .
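Since the displayed equations were stripped, the standard type 1 relations being derived here are, for reference:

$$p_{i} = \frac{\partial F_{1}}{\partial q_{i}}(q, Q, t), \qquad P_{i} = -\frac{\partial F_{1}}{\partial Q_{i}}(q, Q, t), \qquad K = H + \frac{\partial F_{1}}{\partial t}.$$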
In practice, this procedure is easier than it sounds, because the generating function is usually simple. For example, let F1 = q · Q.
This results in swapping the generalized coordinates for the momenta and vice versa:
Q = p and P = −q. This example illustrates how independent the coordinates and momenta are in the Hamiltonian formulation; they are equivalent variables.
Type 2 generating function
The type 2 generating function depends only on the old generalized coordinates and the new generalized momenta
where the terms represent a Legendre transformation to change the right-hand side of the equation below. To derive the implicit transformation, we expand the defining equation above
Since the old coordinates and new momenta are each independent, the following equations must hold
These equations define the transformation as follows: The first set of equations
define relations between the new generalized momenta and the old canonical coordinates . Ideally, one can invert these relations to obtain formulae for each as a function of the old canonical coordinates. Substitution of these formulae for the coordinates into the second set of equations
yields analogous formulae for the new generalized coordinates in terms of the old canonical coordinates . We then invert both sets of formulae to obtain the old canonical coordinates as functions of the new canonical coordinates . Substitution of the inverted formulae into the final equation
yields a formula for as a function of the new canonical coordinates .
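For reference, the standard type 2 relations that the stripped displays contained are:

$$F = F_{2}(q, P, t) - Q \cdot P, \qquad p_{i} = \frac{\partial F_{2}}{\partial q_{i}}, \qquad Q_{i} = \frac{\partial F_{2}}{\partial P_{i}}, \qquad K = H + \frac{\partial F_{2}}{\partial t}.$$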
In practice, this procedure is easier than it sounds, because the generating function is usually simple. For example, let F2 = g(q; t) · P,
where g denotes a set of functions. This results in a point transformation of the generalized coordinates: Q = g(q; t).
Type 3 generating function
The type 3 generating function depends only on the old generalized momenta and the new generalized coordinates
where the terms represent a Legendre transformation to change the left-hand side of the equation below. To derive the implicit transformation, we expand the defining equation above
Since the new and old coordinates are each independent, the following equations must hold
These equations define the transformation as follows: The first set of equations
define relations between the new generalized coordinates and the old canonical coordinates . Ideally, one can invert these relations to obtain formulae for each as a function of the old canonical coordinates. Substitution of these formulae for the coordinates into the second set of equations
yields analogous formulae for the new generalized momenta in terms of the old canonical coordinates . We then invert both sets of formulae to obtain the old canonical coordinates as functions of the new canonical coordinates . Substitution of the inverted formulae into the final equation yields a formula for as a function of the new canonical coordinates .
In practice, this procedure is easier than it sounds, because the generating function is usually simple.
Type 4 generating function
The type 4 generating function depends only on the old and new generalized momenta
where the terms represent a Legendre transformation to change both sides of the equation below. To derive the implicit transformation, we expand the defining equation above
Since the new and old coordinates are each independent, the following equations must hold
These equations define the transformation as follows: The first set of equations
define relations between the new generalized momenta and the old canonical coordinates . Ideally, one can invert these relations to obtain formulae for each as a function of the old canonical coordinates. Substitution of these formulae for the coordinates into the second set of equations
yields analogous formulae for the new generalized coordinates in terms of the old canonical coordinates . We then invert both sets of formulae to obtain the old canonical coordinates as functions of the new canonical coordinates . Substitution of the inverted formulae into the final equation
yields a formula for as a function of the new canonical coordinates .
Restrictions on generating functions
For example, using a generating function of the second kind, with p = ∂F2/∂q and Q = ∂F2/∂P, the first set of equations, consisting of the variables p, q, and P, has to be inverted to get P(q, p, t). This process is possible when the matrix defined by ∂²F2/∂qi∂Pj is non-singular.
Hence, restrictions are placed on generating functions to have the matrices ∂²F1/∂q∂Q, ∂²F2/∂q∂P, ∂²F3/∂p∂Q, and ∂²F4/∂p∂P being non-singular.
Limitations of generating functions
Since is non-singular, it implies that is also non-singular. Since the matrix is inverse of , the transformations of type 2 generating functions always have a non-singular matrix. Similarly, it can be stated that type 1 and type 4 generating functions always have a non-singular matrix whereas type 2 and type 3 generating functions always have a non-singular matrix. Hence, the canonical transformations resulting from these generating functions are not completely general.
In other words, since and are each independent functions, it follows that to have generating function of the form and or and , the corresponding Jacobian matrices and are restricted to be non singular, ensuring that the generating function is a function of independent variables. However, as a feature of canonical transformations, it is always possible to choose such independent functions from sets or , to form a generating function representation of canonical transformations, including the time variable. Hence, it can be proved that every finite canonical transformation can be given as a closed but implicit form that is a variant of the given four simple forms.
Canonical transformation conditions
Canonical transformation relations
From: , calculate :
Since the left hand side is which is independent of dynamics of the particles, equating coefficients of and to zero, canonical transformation rules are obtained. This step is equivalent to equating the left hand side as .
Similarly:
Similarly the canonical transformation rules are obtained by equating the left hand side as .
The above two relations can be combined in matrix form as: (which will also retain the same form for extended canonical transformation) where the result has been used. The canonical transformation relations are hence said to be equivalent to in this context.
The canonical transformation relations can now be restated to include time dependence:
Since and , if and do not explicitly depend on time, can be taken. The analysis of restricted canonical transformations is hence consistent with this generalization.
Symplectic condition
Applying transformation of co-ordinates formula for , in Hamiltonian's equations gives:
Similarly for :
or:
where the last terms of each equation cancel due to the condition from canonical transformations, leaving the symplectic relation: , which is also equivalent to the condition . It follows from the above two equations that the symplectic condition implies the equation , from which the indirect conditions can be recovered. Thus, symplectic conditions and indirect conditions can be said to be equivalent in the context of using generating functions.
Invariance of the Poisson and Lagrange brackets
Since and where the symplectic condition is used in the last equalities. Using , the equalities and are obtained which imply the invariance of Poisson and Lagrange brackets.
Extended canonical transformation
Canonical transformation relations
By solving for:
with various forms of the generating function, the relation between K and H goes as K = λH + ∂G/∂t instead, which also applies for the λ = 1 case.
All results presented below can also be obtained by replacing , and from known solutions, since it retains the form of Hamilton's equations. The extended canonical transformations are hence said to be a result of a canonical transformation () and a trivial canonical transformation () which has (for the given example, which satisfies the condition).
Using the same steps as in the previous generalization, with in the general case, and retaining the equation , the extended canonical transformation partial differential relations are obtained as:
Symplectic condition
Following the same steps to derive the symplectic conditions, as: and
where using instead gives:
The second part of each equation cancels. Hence, the condition for extended canonical transformation instead becomes: MJM⊤ = λJ.
Poisson and Lagrange brackets
The Poisson brackets are changed as follows:
whereas the Lagrange brackets are changed as:
Hence, the Poisson bracket scales by the inverse of λ, whereas the Lagrange bracket scales by a factor of λ.
Infinitesimal canonical transformation
Consider the canonical transformation that depends on a continuous parameter , as follows:
For infinitesimal values of , the corresponding transformations are called infinitesimal canonical transformations, also known as differential canonical transformations.
Consider the following generating function:
Since for ε = 0 the resulting canonical transformation is the identity, Q = q and P = p, this type of generating function can be used for infinitesimal canonical transformation by restricting ε to an infinitesimal value. From the conditions of generators of second type:
Since , changing the variables of the function to and neglecting terms of higher order of ε gives:
Infinitesimal canonical transformations can also be derived using the matrix form of the symplectic condition.
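A reconstruction of the stripped displays, assuming the conventional choice of generator F2(q, P) = q · P + εG(q, P):

$$p = P + \varepsilon\,\frac{\partial G}{\partial q}, \qquad Q = q + \varepsilon\,\frac{\partial G}{\partial P},$$

so that, to first order in ε,

$$\delta q = \varepsilon\,\frac{\partial G}{\partial p}, \qquad \delta p = -\varepsilon\,\frac{\partial G}{\partial q}.$$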
Active canonical transformations
In the passive view of transformations, the coordinate system is changed without the physical system changing, whereas in the active view of transformation, the coordinate system is retained and the physical system is said to undergo transformations. Thus, using the relations from infinitesimal canonical transformations, the change in the system states under active view of the canonical transformation is said to be:
or as in matrix form.
For any function , it changes under active view of the transformation according to:
Considering the change of Hamiltonians in the active view, i.e., for a fixed point,
where are mapped to the point by the infinitesimal canonical transformation, and a similar change of variables for to is considered up to first order of ε. Hence, if the Hamiltonian is invariant under infinitesimal canonical transformations, its generator is a constant of motion.
Examples of ICT
Time evolution
Taking G = H and ε = dt, then δq = q̇ dt and δp = ṗ dt. Thus the continuous application of such a transformation maps the coordinates q(τ), p(τ) to q(τ + t), p(τ + t). Hence if the Hamiltonian is time translation invariant, i.e. does not have explicit time dependence, its value is conserved for the motion.
Translation
Taking G = p gives δq = ε and δp = 0. Hence, the canonical momentum generates a shift in the corresponding generalized coordinate and, if the Hamiltonian is invariant under translation, the momentum is a constant of motion.
Rotation
Consider an orthogonal system for an N-particle system:
Choosing the generator to be G = Lz = Σi (xi pyi − yi pxi) and an infinitesimal value ε = δθ, then the change in the coordinates is given for x by:
δxi = −yi δθ,
and similarly for y:
δyi = xi δθ,
whereas the z component of all particles is unchanged: δzi = 0.
These transformations correspond to rotation about the z axis by angle in its first order approximation. Hence, repeated application of the infinitesimal canonical transformation generates a rotation of system of particles about the z axis. If the Hamiltonian is invariant under rotation about the z axis, the generator, the component of angular momentum along the axis of rotation, is an invariant of motion.
Motion as canonical transformation
Motion itself (or, equivalently, a shift in the time origin) is a canonical transformation. If Q(t) = q(t + τ) and P(t) = p(t + τ), then Hamilton's principle is automatically satisfied, since a valid trajectory should always satisfy Hamilton's principle, regardless of the endpoints.
Examples
The translation (q, p) → (q + a, p + b), where a and b are two constant vectors, is a canonical transformation. Indeed, the Jacobian matrix is the identity, which is symplectic: I J I⊤ = J.
Setting x = (q, p), the transformation x → Rx, where R is a rotation matrix of order 2, is canonical. Keeping in mind that special orthogonal matrices obey R⊤R = I, it is easy to see that the Jacobian is symplectic. However, this example only works in dimension 2: SO(2) is the only special orthogonal group in which every matrix is symplectic. Note that the rotation here acts on (q, p) and not on q and p independently, so these are not the same as a physical rotation of an orthogonal spatial coordinate system.
The transformation , where is an arbitrary function of , is canonical. The Jacobian matrix is indeed given by , which is symplectic.
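As a quick numerical illustration of the symplectic condition used in these examples (a minimal sketch, not from the source: the shear map (q, p) → (q, p + q²) and all names below are illustrative choices), one can verify M J M⊤ = J for a finite-difference Jacobian in Python:

```python
import numpy as np

# One degree of freedom: eta = (q, p); J is the standard antisymmetric matrix.
n = 1
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

def transform(eta):
    """Shear map (q, p) -> (q, p + q**2), an illustrative canonical map."""
    q, p = eta
    return np.array([q, p + q**2])

def jacobian(func, eta, h=1e-6):
    """Central finite-difference Jacobian of func at the point eta."""
    m = len(eta)
    M = np.zeros((m, m))
    for k in range(m):
        step = np.zeros(m)
        step[k] = h
        M[:, k] = (func(eta + step) - func(eta - step)) / (2 * h)
    return M

eta0 = np.array([0.7, -1.3])           # arbitrary phase-space point
M = jacobian(transform, eta0)
print(np.allclose(M @ J @ M.T, J))     # True: the map satisfies M J M^T = J
```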
Modern mathematical description
In mathematical terms, canonical coordinates are any coordinates on the phase space (cotangent bundle) of the system that allow the canonical one-form to be written as
Σi pi dq^i
up to a total differential (exact form). The change of variable between one set of canonical coordinates and another is a canonical transformation. The index of the generalized coordinates is written here as a superscript (q^i), not as a subscript as done above (q_i). The superscript conveys the contravariant transformation properties of the generalized coordinates, and does not mean that the coordinate is being raised to a power. Further details may be found at the symplectomorphism article.
History
The first major application of the canonical transformation was in 1846, by Charles Delaunay, in the study of the Earth-Moon-Sun system. This work resulted in the publication of a pair of large volumes as Mémoires by the French Academy of Sciences, in 1860 and 1867.
See also
Symplectomorphism
Hamilton–Jacobi equation
Liouville's theorem (Hamiltonian)
Mathieu transformation
Linear canonical transformation
Notes
References
Hamiltonian mechanics
Transforms | Canonical transformation | Physics,Mathematics | 4,408 |
46,680,206 | https://en.wikipedia.org/wiki/List%20of%20countries%20by%20engine%20exports | The following is a list of countries by combustion engine exports. Data are from 2022, in billions of United States dollars, as reported by The Observatory of Economic Complexity. Currently, the top ten countries are listed:
References
- The Observatory of Economic Complexity - Combustion Engines (2022)
Engine
engine exports
Engines | List of countries by engine exports | Physics,Technology | 63 |
57,017,151 | https://en.wikipedia.org/wiki/List%20of%20gases | This is a list of gases at standard conditions, which means substances that boil or sublime at or below and 1 atm pressure and are reasonably stable.
List
This list is sorted by the boiling point of gases in ascending order. "sub" and "triple" refer to the sublimation point and the triple point, which are given in the case of a substance that sublimes at 1 atm; "dec" refers to decomposition. "~" means approximately.
Known as gas
The following list has substances known to be gases, but with an unknown boiling point.
Fluoroamine
Trifluoromethyl trifluoroethyl trioxide CF3OOOCF2CF3 boils between 10 and 20°
Bis-trifluoromethyl carbonate boils between −10 and +10° possibly +12, freezing −60°
Difluoroaminosulfinyl fluoride F2NS(O)F is a gas but decomposes over several hours
Trifluoromethylsulfinyl chloride CF3S(O)Cl
Nitrosyl cyanide ?−20° blue-green gas 4343-68-4
Thiazyl chloride NSCl greenish yellow gas; trimerises.
Possible
This list includes substances that may be gases. However reliable references are not available.
cis-1-Fluoro-1-propene
trans-1-Chloropropene ?
cis-1-Chloropropene ?
Perfluoro-1,2-butadiene
Perfluoro-1,2,3-butatriene −5 polymerizes
Perfluoropent-2-ene
Perfluoropent-1-ene 29-30°
Trifluoromethanesulfenylfluoride CF3SF
Difluorocarbamyl fluoride F2NCOF −52°
N-Sulfinyltrifluoromethaneamine CF3NSO 18°
(Chlorofluoromethyl)silane 373-67-1
Difluoromethylsilane 420-34-8
Trifluoromethyl sulfenic trifluoromethyl ester
Pentafluoro(penta-fluorethoxy)sulfur 900001-56-6 15°
Ethenol 557-75-5 10.5° = vinyl alcohol (tautomerizes)
1,1,1,2,2,3,4,4,4-nonafluorobutane 2-10° melt −129°
trans-2H-Heptafluoro-2-butene
Pentafluoroethylhypochlorite around −10°
Trifluoromethyl pentafluoroethyl sulfide 6° 33547-10-3
1,1,1-Trifluoro-N-(trifluoromethoxy)methanamine 671-63-6 0.6°
1-Chloro-1,1,2,2,3,3-hexafluoropropane 422-55-9 16.7
1-Chloro-1,1,2,3,3,3-hexafluoropropane 359-58-0 17.15
2-Chloro-1,1,1,2,3,3-hexafluoropropane 51346-64-6 16.7°
3-Chloro-1,1,1,2,2,3-hexafluoropropane 422-57-1 16.7°
Trifluormethyl 1,2,2,2-tetrafluoroethyl ether 2356-62-9 11°
2-Chloro-1,1,1,3,3-pentafluoropropane HFC-235da 134251-06-2 8°
1,1,2,3,3-Pentafluoropropane 24270-66-4 −3.77
2,2,3,3,4,5,5-Heptafluoro oxolane
(Heptafluoropropyl)carbonimidic difluoride 378-00-7
Pentafluoroethyl carbonimidic difluoride 428-71-7
(Trifluoromethyl)carbonimidic difluoride 371-71-1 CF3N=CF2
Perfluoro[N-methyl-(propylenamine)] 680-23-9
Perfluoro-N,N-dimethylvinylamine 13821-49-3
3,3,4-Trifluoro-2,4-bis-trifluoromethyl-[1,2]oxazetidine 714-52-3
Bis(trifluoromethyl) 2,2-difluoro-vinylamine 13747-23-4
Bis(trifluoromethyl) 1,2-difluoro-vinylamine 13747-24-5
1,1,2-Trifluoro-3-(trifluoromethyl)cyclopropane 2967-53-5
Bis(trifluoromethyl) 2-fluoro-vinylamine 25211-47-6
2-Fluoro-1,3-butadiene 381-61-3
Trifluormethylcyclopropane 381-74-8
cis-1-Fluoro-1-butene 66675-34-1
trans-1-Fluoro-1-butene 66675-35-2
2-Fluoro-1-butene
3-Fluoro-1-butene
trans-1-Fluoro-2-butene
cis-2-fluoro-2-butene
trans-2-fluoro-2-butene
1-Fluoro-2-methyl-1-propene
3-Fluoro-2-methyl-1-propene
Perfluoro-2-methyl-1,3-butadiene 384-04-3
1,1,3,4,4,5,5,5-Octafluoro-1,2-pentadiene 21972-01-0
Near misses
This list includes substances that boil just above standard condition temperatures. Numbers are boiling temperatures in °C.
1,1,2,2,3-Pentafluoropropane 25–26 °C
Dimethoxyborane 25.9 °C
1,4-Pentadiene 25.9 °C
2-Bromo-1,1,1-trifluoroethane 26 °C
1,2-Difluoroethane 26 °C
Hydrogen cyanide 26 °C
Trimethylgermane 26.2 °C
1,H-Pentafluorocyclobut-1-ene
1,H:2,H-hexafluorocyclobutane
Tetramethylsilane 26.7 °C
Chlorosyl trifluoride 27 °C
2,2-Dichloro-1,1,1-trifluoroethane 27.8 °C
Perfluoroethyl 2,2,2-trifluoroethyl ether 27.89 °C
Perfluoroethyl ethyl ether 28 °C
Perfluorocyclopentadiene C5F6 28 °C
2-Butyne 29 °C
Digermane 29 °C
Perfluoroisopropyl methyl ether 29 °C
Trifluoromethanesulfonyl chloride 29–32 °C
Perfluoropentane 29.2 °C
Rhenium(VI) fluoride 33.8 °C
Chlorodimethylsilane 34.7 °C
1,2-Difluoropropane 43 °C
1,3-Difluoropropane 40-42 °C
Dimethylarsine 36 °C
Spiro[2.2]pentane 39 °C
Ruthenium(VIII) oxide 40 °C
Nickel carbonyl 42.1 °C
Trimethylphosphine 43 °C
Unstable substances
Gallane liquid decomposes at 0 °C.
Nitroxyl and diazene are simple nitrogen compounds known to be gases but they are too unstable and short lived to be condensed.
Methanetellurol CH3TeH 25284-83-7 unstable at room temperature.
Sulfur pentafluoride isocyanide isomerises to sulfur pentafluoride cyanide.
References
Gases
Gases | List of gases | Physics,Chemistry | 1,827 |
2,467,132 | https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Chemical%20Energy%20Conversion | The Max Planck Institute for Chemical Energy Conversion (MPI CEC) is a research institute of the Max Planck Society. It is located in the German town of Mülheim.
Research
The MPI CEC investigates fundamental chemical processes in energy transformation and contributes to the development of new and efficient catalysts. Its approach to this problem is multidisciplinary and based on a profound understanding of the underlying chemical reactions.
Departments
Inorganic Spectroscopy
Director: Serena DeBeer
The department of Inorganic Spectroscopy focuses on the development and application of advanced X-ray spectroscopic tools for understanding processes in biological and chemical catalysis.
Research groups
Energy Converting Enzymes (James Birrell)
Computational Chemistry (Ragnar Björnsson)
Biochemistry of Metalloproteins (Laure Decamps)
X-ray Spectroscopy Instrumentation (Sergey Peredkov)
Chemical Synthesis (Christina Römelt)
Proteins on electrodes (Olaf Rüdiger)
Chemical Synthesis, X-ray structure analysis (Thomas Weyhermüller)
Molecular Catalysis
Director: Walter Leitner
The research of the Molecular Catalysis department focuses on the development of technologies for the conversion of renewable energy and feedstocks to sustainable fuels and chemical products.
Research groups
Multifunctional Catalytic Systems (Alexis Bordet)
Organometallic Electrocatalysis (Nicolas Kaeffer)
Multiphase Catalysis (Andreas Vorholt)
Heterogeneous Reactions
Director: Robert Schlögl
The department of Heterogeneous Reactions is researching, among other things, on a better understanding of the processes of electrocatalytic water splitting. The aim is to generate generic insight and solutions for synthesis and analysis of chemical energy conversion systems.
Research groups
Surface Structure Analysis (Mark Greiner)
Carbon Synthesis and Applications (Saskia Heumann)
Electrocatalysis (Anna Mechler)
Catalytic Technology (Holger Ruland)
Independent Research Groups
EPR Research Group (Alexander Schnegg)
Catalyst Controlled Selective Transformations and Ligand Design (Manuel van Gemmeren)
Synergistic Organometallic Catalysis (Christophe Werlé)
Wolfgang Lubitz from the Department of Biophysical Chemistry is an Emeritus Director at the Institute.
History
As one of 84 institutes in the Max Planck Society, it was first part of the neighboring Max Planck Institute for Coal Research and became independent in 1981 under the name of Max Planck Institute for Radiation Chemistry. It was renamed to Max Planck Institute for Bioinorganic Chemistry in 2003, to reflect its changing research focus. Following a significant restructuring and expansion of its departments in 2011, it was re-established in 2012 as the Max Planck Institute for Chemical Energy Conversion.
References
External links
Official site (English version)
Chemical research institutes
Mülheim
Chemical Energy Conversion
Research institutes established in 1981
Research institutes in North Rhine-Westphalia
1981 establishments in West Germany | Max Planck Institute for Chemical Energy Conversion | Chemistry | 575 |
40,328,133 | https://en.wikipedia.org/wiki/Detyrosination | Detyrosination is a form of posttranslational modification that occurs on alpha-tubulin. It consists of the removal of the C-terminal tyrosine to expose a glutamate at the newly formed C-terminus. Tubulin polymers, called microtubules, that contain detyrosinated alpha-tubulin are usually referred to as Glu-microtubules while unmodified polymers are called Tyr-microtubules.
The detyrosinating activity was first identified in the late 1970s. It is a slow-acting enzyme that uses polymeric tubulin as a substrate. As a result, only stabilized microtubules accumulate this particular modification. Tubulin detyrosination is reversed by the tubulin-tyrosine ligase, which acts only on alpha-tubulin monomer. Since the majority of microtubules are very dynamic, they do not contain much detyrosinated tubulin.
See also
Polyglutamylation
Polyglycylation
Acetylation
References
Aillaud C, Bosc C, Peris L, Bosson A, Heemeryck P, Van Dijk J, Le Friec J, Boulan B, Vossier F, Sanman LE, Syed S, Amara N, Couté Y, Lafanechère L, Denarier E, Delphin C, Pelletier L, Humbert S, Bogyo M, Andrieux A, Rogowski K, Moutin MJ (2017). "Vasohibins/SVBP are tubulin carboxypeptidases (TCPs) that regulate neuron differentiation." Science 358(6369):1448–1453. doi:10.1126/science.aao4165. Epub 2017 Nov 16. PMID 29146868.
Post-translational modification
Protein structure | Detyrosination | Chemistry | 391 |
29,630,590 | https://en.wikipedia.org/wiki/Kimball%20tag | A Kimball tag was a cardboard tag that included both human- and machine-readable data to support punched card processing. A Kimball tag was an early form of stock control label that, like its later successor the barcode, supported back office data processing functions. They were predominantly used by the retail clothing ("fashion") industry.
Tagging guns which use plastic toggles to attach price tags to clothing are still known as "Kimball guns" (or the corruption, "kimble guns"), although the tags now use bar codes.
History
Sears, Roebuck & Company sponsored the development of a specialized punched card system to track garment inventory, produce timely management reports, and reduce clerical errors. A pilot system was operational in 1952.
The A. Kimball Company, an established price tag manufacturer in New York City, and the Karl J. Braun Engineering Company of Stamford, Connecticut developed the garment tags and the machine that marked and punched them.
The Potter Instrument Company of Great Neck, New York developed a photoelectric tag reader for the 1952 pilot system. The reader scanned 100 tags per minute. A lens system enlarged the image of a tag's holes projected by a gas-type photoflash tube onto an array of phototubes. The phototubes fired thyratrons that activated relay logic to translate the tag's coded digits into Hollerith code and punch a standard sized punched card.
References
Supply chain management
Automatic identification and data capture | Kimball tag | Technology | 294 |
24,073,428 | https://en.wikipedia.org/wiki/Conference%20on%20Computer%20Vision%20and%20Pattern%20Recognition | The Conference on Computer Vision and Pattern Recognition is an annual conference on computer vision and pattern recognition.
Affiliations
The conference was first held in 1983 in Washington, DC, organized by Takeo Kanade and Dana H. Ballard. From 1985 to 2010 it was sponsored by the IEEE Computer Society. In 2011 it was also co-sponsored by University of Colorado Colorado Springs. Since 2012 it has been co-sponsored by the IEEE Computer Society and the Computer Vision Foundation, which provides open access to the conference papers.
Scope
The conference considers a wide range of topics related to computer vision and pattern recognition—basically any topic that is extracting structures or answers from images or video or applying mathematical methods to data to extract or recognize patterns. Common topics include object recognition, image segmentation, motion estimation, 3D reconstruction, and deep learning.
The conference generally has acceptance rates below 30% for all papers and below 5% for oral presentations. It is managed by a rotating group of volunteers who are chosen in a public election at the Pattern Analysis and Machine Intelligence Technical Committee (PAMI-TC) meeting four years before the meeting. The conference uses a multi-tier double-blind peer review process. The program chairs, who cannot submit papers, select area chairs who manage the reviewers for their subset of submissions.
Location
The conference is usually held in June in North America.
Awards
Best Paper Award
These awards are picked by committees delegated by the program chairs of the conference.
Longuet-Higgins Prize
The Longuet-Higgins Prize recognizes papers from ten years ago that have made a significant impact on computer vision research.
PAMI Young Researcher Award
The Pattern Analysis and Machine Intelligence Young Researcher Award is an award given by the Technical Committee on Pattern Analysis and Machine Intelligence of the IEEE Computer Society to a researcher within 7 years of completing their Ph.D. for outstanding early career research contributions. Candidates are nominated by the computer vision community, with winners selected by a committee of senior researchers in the field. This award was originally instituted in 2012 by the journal Image and Vision Computing, also presented at the conference, and the journal continues to sponsor the award.
PAMI Thomas S. Huang Memorial Prize
The Thomas Huang Memorial Prize was established at the 2020 conference and is awarded annually starting from 2021 to honor researchers who are recognized as examples in research, teaching/mentoring, and service to the computer vision community.
See also
International Conference on Computer Vision
European Conference on Computer Vision
References
External links
2020 conference website
2019 conference website
Computer vision research infrastructure
IEEE conferences
Signal processing conferences
Computer science conferences
Annual events | Conference on Computer Vision and Pattern Recognition | Technology | 512 |