Dataset columns: id (int64, 39 to 79M); url (string, lengths 31 to 227); text (string, lengths 6 to 334k); source (string, lengths 1 to 150); categories (list, lengths 1 to 6); token_count (int64, 3 to 71.8k); subcategories (list, lengths 0 to 30).
1,495,744
https://en.wikipedia.org/wiki/Intrinsic%20equation
In geometry, an intrinsic equation of a curve is an equation that defines the curve using a relation between the curve's intrinsic properties, that is, properties that do not depend on the location and possibly the orientation of the curve. Therefore an intrinsic equation defines the shape of the curve without specifying its position relative to an arbitrarily defined coordinate system. The intrinsic quantities used most often are arc length, tangential angle, curvature or radius of curvature, and, for 3-dimensional curves, torsion. Specifically: The natural equation is the curve given by its curvature and torsion. The Whewell equation is obtained as a relation between arc length and tangential angle. The Cesàro equation is obtained as a relation between arc length and curvature. The equation of a circle (including a line) for example is given by the equation $\kappa(s) = \tfrac{1}{r}$, where $s$ is the arc length, $\kappa$ the curvature and $r$ the radius of the circle. These coordinates greatly simplify some physical problems. For elastic rods for example, the potential energy is given by $E = \tfrac{1}{2}\int_0^L B\,\kappa^2(s)\,\mathrm{d}s$, where $B$ is the bending modulus. Moreover, as $\kappa(s) = \tfrac{\mathrm{d}\varphi}{\mathrm{d}s}$ with $\varphi$ the tangential angle, the elasticity of rods can be given a simple variational form. References External links Curves Equations
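A Whewell equation determines the curve up to position and orientation, since the Cartesian coordinates follow from $x(s) = \int \cos\varphi\,\mathrm{d}s$ and $y(s) = \int \sin\varphi\,\mathrm{d}s$. The following minimal sketch, not part of the article (the function name is invented here, and the integration scheme is a simple trapezoidal rule), recovers a circle of radius r from its Whewell equation $\varphi = s/r$:

```python
import numpy as np

def curve_from_whewell(phi_of_s, length, n=1000):
    """Recover Cartesian coordinates from a Whewell equation phi(s).

    Integrates x(s) = integral of cos(phi) ds and y(s) = integral of sin(phi) ds
    with the trapezoidal rule; the curve starts at the origin heading along +x.
    """
    s = np.linspace(0.0, length, n)
    phi = phi_of_s(s)
    ds = np.diff(s)
    # Cumulative trapezoidal integration of the unit tangent vector.
    x = np.concatenate(([0.0], np.cumsum(ds * 0.5 * (np.cos(phi[:-1]) + np.cos(phi[1:])))))
    y = np.concatenate(([0.0], np.cumsum(ds * 0.5 * (np.sin(phi[:-1]) + np.sin(phi[1:])))))
    return x, y

# A circle of radius r has Whewell equation phi = s / r; after arc length
# 2*pi*r the recovered curve should close up on its starting point.
r = 2.0
x, y = curve_from_whewell(lambda s: s / r, length=2 * np.pi * r)
print(abs(x[-1] - x[0]) < 1e-3, abs(y[-1] - y[0]) < 1e-3)  # True True
```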
Intrinsic equation
[ "Mathematics" ]
236
[ "Mathematical objects", "Equations" ]
1,496,061
https://en.wikipedia.org/wiki/List%20of%20Unified%20Modeling%20Language%20tools
This article compares UML tools. UML tools are software applications which support some functions of the Unified Modeling Language. General Features See also List of requirements engineering tools References External links Technical communication Software comparisons Diagramming software Computing-related lists
List of Unified Modeling Language tools
[ "Technology" ]
49
[ "Computing-related lists", "Software comparisons", "Computing comparisons" ]
1,496,209
https://en.wikipedia.org/wiki/Multistorey%20car%20park
A multistorey car park (Commonwealth English) or parking garage (American English), also called a multistorey, parking building, parking structure, parkade (Canadian), parking ramp, parking deck, or indoor parking, is a building designed for car, motorcycle, and bicycle parking in which parking takes place on more than one floor or level. The first known multistorey facility was built in London in 1901 and the first underground parking was built in Barcelona in 1904 (see history). The term multistorey (or multistory) is almost never used in the United States, because almost all parking structures have multiple parking levels. Parking structures may be heated if they are enclosed. Design of parking structures can add considerable cost for planning new developments, with costs in the United States around $28,000 per space above ground and $56,000 per space underground (excluding the cost of land), and can be required by cities in parking mandates for new buildings. Some cities such as London have abolished previously enacted minimum parking requirements. Minimum parking requirements are a hallmark of zoning and planning codes for municipalities in the US. (States do not prescribe parking requirements, while counties and cities can.) History The earliest known multi-storey car park was opened in May 1901 by the City & Suburban Electric Carriage Company at 6 Denman Street, central London. The location had space for 100 vehicles over seven floors, totaling 19,000 square feet. The same company opened a second location in 1902 for 230 vehicles. The company specialized in the sale, storage, valeting, and on-demand delivery of electric vehicles that could travel about 40 miles and had a top speed of 20 miles per hour. The earliest known parking garage in the United States was built in 1918 for the Hotel La Salle at 215 West Washington Street in the West Loop area of downtown Chicago, Illinois. It was designed by Holabird and Roche. The Hotel La Salle was demolished in 1976, but the parking structure remained, as it had been given preliminary landmark status and stood several blocks from the hotel. It was demolished in 2005 after failing to receive landmark status from the city of Chicago. A 49-storey apartment tower, 215 West, has taken its place, also featuring a parking garage. When the Capital Garage in Washington, D.C. was built in 1927, it was reportedly the largest parking structure of its kind in the country. It was imploded in 1974. Design The movement of vehicles between floors can take place by means of: interior inclined parking ramps and express ramps without parking – common interior circular or helical express ramps exterior ramps – which may take the form of a circular or helical ramp vehicle lifts (or elevators) – the least common automated robot systems – a combination of ramp and elevator Where the car park is built on sloping land, it may be split-level or have sloped parking. Many parking structures are independent buildings dedicated exclusively to that use. The design loads for car parks are often less than those of the office building they serve (50 psf versus 80 [100] psf), leading to long floor spans of 55–65 feet that permit cars to park in rows without supporting columns in between (called long span). Podium parking below high-rise and mid-rise buildings is often short-span, 25–30 feet clear between columns, since office/residential/retail floors above require more support (100 psf per the International Building Code).
Columns in short-span structures obstruct row-based parking spaces, making them less efficient than long-span designs; parking efficiency is measured in cars per unit of level area (car count/level area). Common structural systems in the United States for long-span structures are prestressed concrete double-tee floor systems, post-tensioned cast-in-place concrete floor systems, or short-span podium parking with post-tensioned slabs and drop panels (drop heads); steel embeds or thicker slabs can eliminate the need for drop panels, providing higher clearances for higher-profile vehicles. In recent times, parking structures built to serve residential and some business properties have been built as part of a larger building, often underground as part of the basement, such as the parking lot at the Atlantic Station redevelopment in Atlanta. This saves land for other uses (as opposed to surface parking), is cheaper and more practical in most cases than a separate structure, and is hidden from view. It protects customers and their cars from weather such as rain, snow, or hot summer sunshine that raises a vehicle's interior temperature to extremely high levels. Underground parking of only two levels was considered an innovative concept in 1964, when developer Louis Lesser developed a two-level underground parking structure under six 10-storey high-rise residential halls at California State University, Los Angeles, where the university lacked space for horizontal expansion. The simple two-level parking structure was considered unusual enough in 1964 that a separate newspaper section entitled "Parking Underground" described the parking lot as an innovative "concept" and as "subterranean spaces". In Toronto, a 2,400-space underground parking structure below Nathan Phillips Square is one of the world's largest. Parking structures which serve shopping centers can be built adjacent to the center for easier access at each floor between shops and parking. One example is the Mall of America in Bloomington, Minnesota, USA, which has two large parking lots attached to the building, at the eastern and western ends. A common position for parking within shopping centers in the UK is on the roof, around the various utility systems, enabling customers to take lifts straight down into the center. Examples include The Oracle in Reading and Festival Place in Basingstoke. Parking garages without mixed use can put the roof area to excellent use: The Grove parking garage is the site for movies on its 8th-level roof, the Grand Prix of Long Beach, CA can be viewed from the roof level of the Aquarium of the Pacific parking garage, and The Pike parking garage (opposite the Queensway structure) was built with a thickened post-tensioned roof slab to accommodate crowds of people. These parking structures often have low ceiling clearances (7'-2", and 8'-4" for accessible parking), which restrict access by full-size vans and other large vehicles. On 15 December 2013, a man was killed during a robbery in the parking garage at The Mall at Short Hills in Millburn, New Jersey. The paramedics responding to the shooting were delayed because their ambulance was too large to enter the structure. In the United States, parking garages are estimated to cost around $25,000 per space, with underground parking costing around $35,000 per space. Structural integrity Parking structures are subjected to the heavy and shifting loads of moving vehicles, and must bear the associated physical stresses.
Expansion joints are used between sections not only for thermal expansion but to accommodate the flexing of the structure's sections due to vehicle traffic. Parking structures are generally not subject to building inspections after being checked for their initial occupancy permit. Seismic retrofits can be applied where earthquakes are an issue. Some parking structures have partly collapsed, either during construction or years later. In July 2009 a fourth-floor section failed at the Centergy building in midtown Atlanta, pancaking down and destroying more than 30 vehicles but injuring no one. In December 2007, a car crashed into the wall of the deck at the SouthPark Mall in Charlotte, North Carolina, weakening it and causing a small collapse which destroyed two cars below. On the same day, one under construction in Jacksonville, Florida collapsed as concrete was being poured on the sixth floor. In November 2008, the sudden collapse of the middle level of a deck in Montreal was preceded by warning signs some weeks before, including cracks and water leaks. In June 2012, the Algo Centre Mall's rooftop parking deck collapsed into the building, crashing through the upper-level lottery kiosk adjacent to the food court and escalators, down to the ground floor below, killing two people. In October 2012 four people were killed and nine more injured when a parking structure under construction at a campus of Miami-Dade College in Florida collapsed, purportedly due to an unfinished column. The collapse of the Surfside condominium's main building, which killed ninety-eight people, was likely caused by long-term degradation of the reinforced concrete structural support in the basement-level parking garage. Precast parking structures As multi-storey car parks have become more common since the middle of the twentieth century, many such structures have been built using precast concrete to reduce construction time. The design involves assembling prefabricated parking structure parts on site. The precast concrete parts include multi-storey structural wall panels, interior and exterior columns, structural floors, girders, wall panels, stairs, and slabs. The precast concrete parts are transported to the site on flatbed semi-trailers. The structural floor modules may need to be laid tilted during transportation so that modules covering as large a floor area as possible can still be transported easily on the roadways. The modules are lifted at the site using precast concrete lifting anchor systems for assembly. Finishing may include covers to close the holes in the precast concrete that contain the lifting anchors, and facades installed on the exterior of the structure. In modern construction of the precast modules, there are other features to improve the strength of the structure. An example is the use of prestressed strands in post-tensioned concrete for the construction of the shear walls. Another example is the use of carbon-fiber-reinforced polymer to replace steel wire mesh, which lightens the load and yields more corrosion resistance, especially in cold-climate areas which use salt for melting snow. Architectural value These structures are not usually known for their architectural value. As Architectural Record has noted, "In the Pantheon of Building Types, the parking garage lurks somewhere in the vicinity of prisons and toll plazas." The New York Times has labeled parking structures as "the grim afterthought of American design".
A handful of structures have received considerable praise for their design, including 1111 Lincoln Road in the South Beach section of Miami Beach, Florida, designed by the internationally known Swiss architectural firm Herzog & de Meuron; the Brutalist Preston bus station in the United Kingdom, which incorporates a multistorey car park; and Castle Terrace Car Park in Edinburgh, United Kingdom. In the United States, several have been listed on the National Register of Historic Places, including Boston's North Terminal Garage. In more recent developments, the Queensway Bay parking garage in Long Beach, CA received awards for its unique facade in 1992; it was designed by International Parking Design and built by Bomel Construction Company Inc. Nomenclature The term multistorey car park (often abbreviated to multistorey or multistory) is used in the United Kingdom, Hong Kong, and many Commonwealth of Nations countries, and it is nowadays most commonly spelled without a hyphen. In the United States, the term parking structure is used, especially when it is necessary to distinguish such a structure from the "garage" connected with a house. In some places in North America, "parking garage" refers only to an indoor, often underground, structure. Outdoor, multi-level parking facilities are referred to by a number of regional terms: Parking garage is used, to varying degrees, throughout the U.S. and Canada, often referring to underground parking designed professionally by structural engineers and architects; Parking structure is used worldwide, synonymously with "parking garage". Parking deck is used mostly in the Southern United States. Parking ramp is used in the upper Midwest, especially Minnesota and Wisconsin, and has been observed as far east as Buffalo, New York. Parkade is widely used in Western Canada and South Africa. Parking building is used in New Zealand. Architects and structural engineers in the USA are likely to call it a parking structure, since their work is all about structures and since that term is the vernacular in the United States. When constructed as the base of a high-rise, it is sometimes called a parking podium. United States building codes use the term open parking garage to refer to a structure designed for car storage that has openings along at least 40% of the perimeter, as opposed to an enclosed parking garage that requires mechanical ventilation. Natural or mechanical ventilation provides fresh air flow to disperse car exhaust in normal conditions, or hot gas and smoke in case of fire. Typically parking consultants in the UK describe the number of car park floors in terms of "G+x", where G stands for ground and x for the number of floors above ground. For example, G+5 is a multi-storey car park structure with a ground floor and 5 floors above that, i.e. a total of 6 floors. The preceding does not apply in the United States, where B+x refers to basement levels (ascending in number x while descending in elevation) and the ground level is L1 (unlike European conventions, where the ground floor sits below Level 1), with levels above numbered L2, etc. Construction types Concrete Steel structure Automated (mechanical) Steel structure Steel structure car parks are made of structural steel components connected to each other to carry the loads and provide full structural rigidity. Steel is a high-strength material, requiring less material than other types of structures such as concrete and timber.
Steel construction features: Cost savings: inexpensive to manufacture and erect, and requires less maintenance than traditional building methods. Speed: allows construction or prefabrication off-site with rapid installation on-site; some suppliers claim construction in days. Durability: suppliers claim a lifespan of 50-plus years. Removability: a steel car park structure can be designed to be removed at a later date. Expandability: steel car park structures can be expanded easily at a later date. Creativity: steel allows for long column-free spans. The ceiling slab of a steel structure car park is typically made of composite material such as corrugated steel sheets and concrete. The surface of the first-floor parking can be left bare or covered with epoxy or tarmac. Foundationless and modular Demand, the features of steel, and innovation have led to the development of a foundationless, modular, removable steel car park structure. Parking demand often grows quickly, significantly and sometimes unexpectedly. Modular steel car parks can be the proper solution when the available surface area is insufficient and must be expanded upward, or whenever it is not feasible to build a multi-storey car park. The building concept of modular car parks came about by using the modular assembly of vertical and horizontal elements (such as columns and beams). Modular car park structures are versatile and can be built in phases or in different sizes and shapes. The solution makes it possible to develop a parking structure even under particular conditions or constraints, such as archaeological sites or city centres, because it allows: The parking surface to be virtually doubled without leaving any footprint on the ground, as no settlement for excavations or traditional foundations is needed; The parking surface to be doubled by means of a light steel single-deck car park system. Prefab modular components of the system make each project versatile and suitable for both large and small sized areas. These parking structures are generally demountable and can be relocated, so that the choice of converting a surface to parking area is not irrevocable. They can be used as permanent structures or conceived as temporary facilities for temporary parking demand. A number of parking decks have been demounted after a few years – to make room for the development of a permanent structure – and relocated to respond to local parking demand. Automated parking The earliest use of an automated parking system (APS) was in Paris in 1905 at the Garage Rue de Ponthieu. The APS consisted of a groundbreaking multi-storey concrete structure with an internal elevator to transport cars to upper levels, where attendants parked the cars. A 1931 Popular Mechanics article speculated about a design for an underground garage where the car is taken to a parking area by a conveyor and then by an elevator to shuttles mounted on rails. The total cost of ownership of automated parking needs to be carefully considered. The actual cost of construction of automated car parks is typically higher than that of conventional car park structures; however, this can be offset by the higher space efficiency, including reduced excavation waste from minimized footprints. The cost of the mechanical equipment needed to transport the cars needs to be added to the building cost. In addition, operation and maintenance costs of the mechanical equipment need to be added in order to determine the total cost of ownership.
Other costs can be saved: for example, there is no need for an energy-intensive ventilating system, since cars are not driven inside, and human cashiers or security personnel may not be needed. For naturally ventilated car park structures, the ventilation equipment is not needed. Automated car parks rely on similar technology to that used for mechanical handling and document retrieval. The driver leaves the car in an entrance module, and it is then transported to a parking slot by a robotic trolley. For the driver, the process of parking is reduced to leaving the car inside an entrance module. At peak periods a wait may occur before entering or leaving, because loading passengers and luggage occurs at the entrance and exit rather than at the parking stall. This loading blocks the entrance or exit from being available to others. It is generally not recommended to use automated car parks for facilities with high peak-hour volumes. Additional factors that need to be taken into consideration are: Fear of breakdowns (how does the user get the car back?) Maintenance contracts needed with suppliers Automotive factories and car dealerships often use automated car parks to store inventory, which makes the best use of space if they operate in urban areas, and the car park may be decorated to promote the brand. For instance, at the Autostadt there are two 60 meter/200 ft tall glass silos (AutoTürme) used as storage for new Volkswagens. The two towers are connected to the Volkswagen factory by a 700-metre tunnel. When cars arrive at the towers they are carried up at a speed of 1.5 metres per second. The render for the Autostadt shows 6 towers. When purchasing a car from Volkswagen (the main brand only, not the sub-brands) in select European countries, the customer can choose to have it delivered to the dealership where it was bought or to travel to the Autostadt to pick it up. If the latter is chosen, the Autostadt supplies the customer with free entrance, meal tickets and a variety of events building up to the point where the customer can follow on screen as the automatic elevator picks up the selected car in one of the silos. The car is then transported out to the customer without having been driven a single meter, and the odometer is thus on "0". Automated car parks have been popular for multistorey residential buildings in New York City and Paris. In Toronto, automated car parks have gradually been catching on in downtown core condominium developments since the 2010s, due to developers having to meet city-mandated minimum parking space requirements while building on increasingly smaller lots. Other technologies Modern car parks utilize a variety of technologies to help motorists find unoccupied parking spaces, locate their car when returning to the vehicle, and improve their experience. These include adaptive lighting, sensors, parking space LED indicators mounted above every parking space (red for occupied, green for available, and blue reserved for the disabled), indoor positioning systems (IPS), including QR codes, and mobile payment options. The Santa Monica Place shopping mall in California has cameras on each stall that can help count the lot occupancy and find lost cars. Online booking technology service providers have been created to help drivers find long-term parking in an automated manner, while also providing significant savings for those who book parking spaces ahead of time.
They use real-time inventory management checking technology to display car parks with availability, sorted by price and distance from the airport. Other recent developments in technology include vehicle detection and count systems, point of sale and revenue control systems, traffic and capacity monitoring systems, and valet parking point of sale, management and revenue control systems. These systems help with wayfinding for parking clients, with space availability shown at every turn, and with space monitoring, using retrofit wifi transmitters in each space to update the space availability signs and to alert parking management to bottlenecks and intervention measures. Revenue control, capacity management, and valet point of sale are major issues for office and retail parking management and are also a means of parking management intervention, where websites update the status of all of these issues for exclusive use by management. Irvine Spectrum Center in Irvine, CA, with 3 parking structures, uses all of these systems. The City of Santa Monica uses traffic and capacity monitoring with its 30 parking structures. Disneyland, in Anaheim, CA, uses most of these hi-tech solutions in its 8 garages. Multistorey parking ship In 1991, a 1975 marine vessel was transformed into a floating pontoon multi-storey car parking facility. The ship was given the new name P-Arken (a pun on the words park and ark) and it is permanently moored in Gothenburg's harbour at Lilla Bommen near Skeppsbron. In November 2019, a fully clad parking barge for automobiles was patented in the United States. Its angular sides are designed to protect against driving wind, rain, and debris. Education and research In October 2009, the National Building Museum opened an exhibition solely devoted to the study of parking garages and their impact on the built environment. This exhibition, titled House of Cars: Innovation and the Parking Garage, was on view until 11 July 2010. Additional information on the design and building of parking structures can be found in "Parking Structures: planning, design, construction, maintenance, and repair". This resource is in its third edition, written by prominent staff of Walker Parking Consultants, a preeminent parking structure designer in the US. See also Arcade ("parkade" is a portmanteau of parking and arcade, due to the architectural similarity) Auto Stacker Autostadt Automatic parking Automated parking system Automatic vehicle location Car condo Car parking system Parking guidance and information Trinity Square, Gateshead References External links "Robotic Parking Garage: No Tip Necessary" Garages (parking) Indoor positioning system Parking Structural system
Multistorey car park
[ "Technology", "Engineering" ]
4,522
[ "Structural engineering", "Wireless locating", "Building engineering", "Wireless networking", "Indoor positioning system", "Structural system" ]
1,496,229
https://en.wikipedia.org/wiki/Pentadecagon
In geometry, a pentadecagon or pentakaidecagon or 15-gon is a fifteen-sided polygon. Regular pentadecagon A regular pentadecagon is represented by Schläfli symbol {15}. A regular pentadecagon has interior angles of 156°, and with a side length a, has an area given by $A = \frac{15}{4}a^2 \cot\frac{\pi}{15} \approx 17.6424\,a^2$. Construction As 15 = 3 × 5, a product of distinct Fermat primes, a regular pentadecagon is constructible using compass and straightedge. The following constructions of regular pentadecagons with given circumcircle are similar to the illustration of proposition XVI in Book IV of Euclid's Elements. In the construction for a given circumcircle, one marked segment is a side of an equilateral triangle and another is a side of a regular pentagon; a marked point divides the radius in the golden ratio. Compared with the first animation (with green lines), the following two images show the two circular arcs (for angles 36° and 24°) rotated 90° counterclockwise. They do not use the first segment, but rather use another segment as the radius for the second circular arc (angle 36°). There is also a compass and straightedge construction for a given side length. It is nearly the same as the construction of the pentagon on a given side; here too, the presentation proceeds by extending one side, generating a segment that is divided according to the golden ratio. The circumradius is $R = \frac{a}{2\sin(\pi/15)}$, equivalently the side length is $a = 2R\sin(\pi/15)$, and the central angle is 24°. Symmetry The regular pentadecagon has Dih15 dihedral symmetry, order 30, represented by 15 lines of reflection. Dih15 has 3 dihedral subgroups: Dih5, Dih3, and Dih1. And four more cyclic symmetries: Z15, Z5, Z3, and Z1, with Zn representing 2π/n radian rotational symmetry. On the pentadecagon, there are 8 distinct symmetries. John Conway labels these symmetries with a letter, and the order of the symmetry follows the letter. He gives r30 for the full reflective symmetry, Dih15. He gives d (diagonal) with reflection lines through vertices, p with reflection lines through edges (perpendicular), and for the odd-sided pentadecagon i with mirror lines through both vertices and edges, and g for cyclic symmetry. a1 labels no symmetry. These lower symmetries allow degrees of freedom in defining irregular pentadecagons. Only the g15 subgroup has no degrees of freedom, but it can be seen as directed edges. Pentadecagrams There are three regular star polygons: {15/2}, {15/4}, {15/7}, constructed from the same 15 vertices of a regular pentadecagon, but connected by skipping every second, fourth, or seventh vertex respectively. There are also three regular star figures: {15/3}, {15/5}, {15/6}, the first being a compound of three pentagons, the second a compound of five equilateral triangles, and the third a compound of three pentagrams. The compound figure {15/3} can be loosely seen as the two-dimensional equivalent of the 3D compound of five tetrahedra. Isogonal pentadecagons Deeper truncations of the regular pentadecagon and pentadecagrams can produce isogonal (vertex-transitive) intermediate star polygon forms with equally spaced vertices and two edge lengths. Petrie polygons The regular pentadecagon is the Petrie polygon for some higher-dimensional polytopes, projected in a skew orthogonal projection. Uses A regular triangle, decagon, and pentadecagon can completely fill a plane vertex. However, due to the triangle's odd number of sides, the figures cannot alternate around the triangle, so the vertex cannot produce a semiregular tiling.
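These quantities follow from the general regular n-gon formulas: interior angle (n − 2)·180°/n, area (n/4)a² cot(π/n), and circumradius a/(2 sin(π/n)). A minimal sketch (illustrative only; the function name is invented here) evaluates them for n = 15:

```python
import math

def regular_polygon_metrics(n: int, a: float = 1.0):
    """Interior angle (degrees), area, and circumradius of a regular n-gon with side a."""
    interior = (n - 2) * 180.0 / n
    area = n / 4.0 * a * a / math.tan(math.pi / n)   # (n/4) a^2 cot(pi/n)
    circumradius = a / (2.0 * math.sin(math.pi / n))
    return interior, area, circumradius

interior, area, R = regular_polygon_metrics(15)
print(f"interior angle = {interior}°")   # 156.0°
print(f"area (a = 1)   = {area:.4f}")    # ≈ 17.6424
print(f"circumradius   = {R:.4f}")       # ≈ 2.4049
```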
See also Construction of the pentadecagon at given side length, calculation of the circumradius (German) Construction of the pentadecagon at given side length, exemplification: circumradius References External links Constructible polygons Polygons by the number of sides
Pentadecagon
[ "Mathematics" ]
892
[ "Constructible polygons", "Planes (geometry)", "Euclidean plane geometry" ]
1,496,348
https://en.wikipedia.org/wiki/Samphire
Samphire is a name given to a number of succulent salt-tolerant plants (halophytes) that tend to be associated with water bodies. Rock samphire (Crithmum maritimum) is a coastal species with white flowers that grows in Ireland, the United Kingdom and the Isle of Man. This is probably the species mentioned by Shakespeare in King Lear. Golden samphire (Limbarda crithmoides) is a coastal species with yellow flowers that grows across Eurasia. Several species in the genus Salicornia are known as "marsh samphire" in Britain. Blutaparon vermiculare, Central America, southeastern North America Tecticornia, Australia Sarcocornia, cosmopolitan Following the construction of the Channel Tunnel, the nature reserve created on new land near Folkestone made from excavated rock was named "Samphire Hoe", a name coined by Mrs Gillian Janaway. Etymology Originally "sampiere", a corruption of the French "Saint Pierre" (Saint Peter), samphire was named after the patron saint of fishermen because all of the original plants with this name grow in rocky salt-sprayed regions along the sea coast of northern Europe or in its coastal marsh areas. It is sometimes called rock samphire or seafennel. In North Wales, especially along the River Dee's marshes, it has long been known as sampkin. Uses Marsh samphire ashes were used to make soap and glass (hence its other old name in English, "glasswort"), as it is a source of sodium carbonate, also known as soda ash. In the 14th century glassmakers located their workshops near regions where this plant grew, since it was so closely linked to their trade. Many samphires are edible. In England the leaves were gathered early in the year and pickled or eaten in salads with oil and vinegar. Marsh samphire (Salicornia bigelovii) was investigated as a potential biodiesel source that can be grown in coastal areas where conventional crops cannot be grown. Rock samphire is another kind of samphire, also called sea fennel. It is mentioned by Shakespeare in King Lear, in a passage referring to the dangers involved in collecting rock samphire on sea cliffs. Aboriginal Australians have long used samphire as bush tucker, due to its abundance, flavour, and nutritional value. It is high in Vitamin A and a good source of calcium and iron. Other Australians have recently discovered the potential of the species as a food plant and it has begun to appear on restaurant menus across the country. A variety of rock samphire known as Paccasasso del Conero, or sea fennel, is well known in Italy along the Adriatic coast. This variety is typically used in local recipes such as a mortadella and paccasasso sandwich, pasta with mussels and paccasassi, or in fresh salad. References External links How to cook samphire Halophytes Vegetables Plant common names Rock Samphire in Italy: history and recipes
Samphire
[ "Chemistry", "Biology" ]
634
[ "Common names of organisms", "Plants", "Plant common names", "Halophytes", "Salts" ]
1,496,408
https://en.wikipedia.org/wiki/Golden%20samphire
The golden samphire (Limbarda crithmoides) is a perennial coastal species, which may be found growing on salt marsh or sea cliffs across western and southern Europe and the Mediterranean. Golden samphire has a tufted habit, and the plant may grow up to 1 m tall. It has narrow fleshy green to yellow-green leaves and large flower heads, with six yellow ray florets which may be up to across. The flowers are self-fertile (able to pollinate themselves) and may also be pollinated by bees, flies and beetles. They bloom between June and October and can smell like shoe polish. Taxonomy It was first described by Carl Linnaeus as Inula crithmoides on page 883 of his 1753 book Species Plantarum 2. Later, when the genus was renamed, it was published as Limbarda crithmoides by Barthélemy Charles Joseph Dumortier on page 68 of Fl. Belg. in 1827. It was verified by the United States Department of Agriculture and the Agricultural Research Service on 11 June 2015. Known subspecies: Limbarda crithmoides subsp. crithmoides Limbarda crithmoides subsp. longifolia (Arcang.) Greuter Distribution and habitat It is native to temperate parts of Africa, Asia and Europe. Range In Africa, it is found within Algeria, Egypt (incl. Sinai), Libya, Morocco and Tunisia. In Asia, it is found in Israel, Lebanon, Syria and Turkey. In Europe, it is found within Ireland, the United Kingdom (where it is mostly found on the Isle of Sheppey), Albania, Croatia, Greece (incl. Crete), Italy (incl. Sardinia and Sicily), Malta, Montenegro and Slovenia, as well as the south-western European countries of France (incl. Corsica), Portugal and Spain (incl. Baleares). Uses Young leaves may be eaten raw or cooked as a leaf vegetable. It was formerly sold in markets in London for use in pickles. In Lebanon, it was evaluated for use in saline agriculture. References External links Wildscreen Arkive, Limbarda (Limbarda crithmoides) photo Inuleae Leaf vegetables Halophytes Flora of Asia Flora of North Africa Flora of Europe Plants described in 1753 Taxa named by Carl Linnaeus
Golden samphire
[ "Chemistry" ]
477
[ "Halophytes", "Salts" ]
1,496,522
https://en.wikipedia.org/wiki/Sarcocornia
Sarcocornia is a formerly recognized genus of flowering plants in the amaranth family, Amaranthaceae. Species are known commonly as samphires, glassworts, or saltworts. Molecular phylogenetic studies have shown that when separated from Salicornia, the genus is paraphyletic, since Salicornia is embedded within it, and Sarcocornia has now been merged into a more broadly circumscribed Salicornia. When separated from Salicornia, the genus has a cosmopolitan distribution, and is most diverse in the Cape Floristic Region of South Africa. Description Species formerly placed in Sarcocornia are perennial herbs, subshrubs or shrubs. They take an erect or prostrate, creeping form. The new stems are fleshy and divided into joint-like segments. Older stems are woody and not segmented. The oppositely arranged leaves are borne on fleshy, knobby petioles, their base decurrent and connate (thus forming the segments), the blades forming small, triangular tips with a narrow scarious margin. The terminal or lateral inflorescences are spike-like, made up of joint-like segments with tiny paired cymes emerging from the joints. Each cyme consists of three (rarely five) flowers completely embedded between the bract and immersed in the fleshy tissue of the axis. The flowers of a cyme are arranged in a transverse row, the central flower separating the lateral flowers, with tissue of the axis between them. The hermaphrodite or unisexual flowers are more or less radially symmetric, with a perianth of three or four fleshy tepals connate nearly to the apex, one or two stamens, and an ovary with two or three stigmas. The perianth is persistent in fruit. The fruit wall (pericarp) is membranous. The vertical seed is ellipsoid, with a light brown, membranous, hairy seed coat; the hairs can be strongly curved, hooked, or conic, straight or slightly curved. The seed contains no perisperm (feeding tissue). The basic chromosome number is x=9. The species are diploid (18 chromosomes), tetraploid (36), hexaploid (54), or octoploid (72). Taxonomy The genus Sarcocornia was first described in 1978 by A. J. Scott. It separated the perennial species from the closely related annual Salicornia sensu stricto, additionally containing some species formerly placed in the genus Arthrocnemum. The type species is Sarcocornia perennis. Sarcocornia/Salicornia began to evolve during the middle Miocene from ancestors in Eurasia, developing four phylogenetic lineages: first the Eurasian Sarcocornia clade, which further diversified into the American Sarcocornia clade, then the Salicornia clade, and the South African/Australian Sarcocornia clade. When Salicornia is separated from Sarcocornia to comprise all the annual, more frost-tolerant species, the genus Sarcocornia is paraphyletic, since Salicornia evolved within Sarcocornia. The prostrate, mat-forming growth seems to have evolved several times independently. It is probably advantageous in habitats with prolonged flooding, high tidal movement and frost. A molecular phylogenetic study in 2017 confirmed the paraphyly of Sarcocornia, and merged the genus into Salicornia. Selected former species Accepted names in Salicornia are taken from Plants of the World Online. Sarcocornia alpini (Lag.) Rivas-Martínez = Salicornia alpini Sarcocornia ambigua (Michx.) M.A.Alonso & M.B.Crespo = Salicornia ambigua Sarcocornia andina (Phil.) Freitag, M.A.Alonso & M.B.Crespo = Salicornia andina Sarcocornia blackiana (Ulbr.)
A.J.Scott = Salicornia blackiana Sarcocornia capensis (Moss) A.J.Scott = Salicornia capensis Sarcocornia carinata (Fuente, Rufo & Sánchez Mata) Fuente, Rufo & Sánchez Mata = Salicornia alpini subsp. carinata Sarcocornia decumbens (Tölken) A.J.Scott = Salicornia decumbens Sarcocornia decussata S.Steffen, Mucina & G.Kadereit = Salicornia decussata Sarcocornia dunensis (Moss) S.Steffen, Mucina & G.Kadereit = Salicornia dunensis Sarcocornia freitagii S. Steffen, Mucina & G.Kadereit = Salicornia helmutii Sarcocornia fruticosa (L.) A.J.Scott = Salicornia fruticosa Sarcocornia globosa P.G. Wilson = Salicornia globosa Sarcocornia hispanica Fuente, Rufo & Sánchez-Mata = Salicornia hispanica Sarcocornia lagascae Fuente, Rufo & Sánchez Mata = Salicornia lagascae Sarcocornia littorea (Moss) A.J.Scott = Salicornia littorea Sarcocornia magellanica (Phil.) M.A.Alonso & M.B.Crespo = Salicornia magellanica Sarcocornia mossambicensis Brenan = Salicornia mossambicensis Sarcocornia mossiana (Tölken) A.J.Scott = Salicornia mossiana Sarcocornia natalensis (Bunge ex Ung.-Sternb.) A.J.Scott = Salicornia natalensis Sarcocornia neei (Lag.) M.A.Alonso & M.B.Crespo = Salicornia neei Sarcocornia obclavata Yaprak = Salicornia obclavata Sarcocornia pacifica (Standl.) A.J.Scott = Salicornia pacifica Sarcocornia perennis (Miller) A.J.Scott = Salicornia perennis Sarcocornia pillansii (Moss) A.J.Scott = Salicornia pillansii Sarcocornia pruinosa Fuente, Rufo & Sánchez-Mata = Salicornia pruinosa Sarcocornia pulvinata (R.E.Fr.) A.J.Scott = Salicornia pulvinata Sarcocornia quinqueflora (Bunge ex Ung.-Sternb.) A.J. Scott = Salicornia quinqueflora Sarcocornia tegetaria S.Steffen, Mucina & G.Kadereit = Salicornia tegetaria Sarcocornia terminalis (Tölken) A.J.Scott = Salicornia terminalis Sarcocornia utahensis (Tidestr.) A.J.Scott = Salicornia utahensis Sarcocornia xerophila (Tölken) A.J.Scott = Salicornia xerophila References External links Sarcocornia. Red List of South African Plants. South African National Biodiversity Institute (SANBI). Historically recognized angiosperm genera Salicornia Halophytes
Sarcocornia
[ "Chemistry" ]
1,612
[ "Halophytes", "Salts" ]
1,496,720
https://en.wikipedia.org/wiki/Altacast
Altacast (formerly known as Edcast and Oddcast) is a free and open-source audio encoder that can be used to create Internet streams of varying types. Many independent and commercial broadcasters use Altacast to create Internet radio stations, such as those listed on the Icecast, Loudcaster and Shoutcast station directories. Development The original streaming software, Oddcast, was developed from 2000 to 2010. The official site at Oddsock.org hosted streaming media tools, which included Oddcast, Stream Transcoder, the Icecast Station Browser plugin, the Song Requester plugin and the Do Something plugin. In late November 2010, Oddsock.org was shut down. Edcast, a fork of Oddcast, was then updated and hosted at Club RIO. In early 2012, development of Edcast was moved to Google Code and SourceForge. As of October 30, 2011, the latest stable version was 3.33.2011.1026 and the latest beta version was 3.37.2011.1214. In September 2012, a second fork, Altacast, was released. The Standalone & DSP editions are derived from GPL software and are available on GitHub, while the RadioDJ edition is written in .NET Framework and developed separately. A version 2.0 of the Standalone & DSP edition that will be SHOUTcast v2 compatible is planned for the future. The Altacast plugin for RadioDJ no longer functions with new versions of RadioDJ v2; Altacast is not supported by the developer of RadioDJ due to legal issues. Features Altacast is supported on Windows. It runs in conjunction with various media players compatible with Winamp plugins, such as AIMP, JetAudio, KMPlayer, MediaMonkey, MusicBee and foobar2000, and also as a standalone encoder. Altacast Standalone & DSP can stream to Icecast and SHOUTcast servers in Ogg Vorbis and Ogg FLAC out of the box. MP3, AAC and AAC+ support can be added via the LAME encoder (lame_enc.dll), the FAAC encoder (libFAAC.dll), and the CT-aacPlus encoder (enc_aacplus.dll, obtainable from Winamp 5.61) respectively. Adjustable settings for each encoder include bitrate (for MP3, AAC+, Ogg Vorbis), quality (for AAC, Ogg Vorbis), sample rate (22050 Hz or 44100 Hz) and channels (Parametric Stereo is available for AAC+ up to 56 kbit/s). SHOUTcast v2 is currently not officially supported in Altacast Standalone & DSP. However, it is possible to connect to stream ID no. 1 of a SHOUTcast v2 server in legacy (v1) mode. As a temporary workaround, the SHOUTcast DSP 1.9.2 plugin for Winamp-compatible media players may be used to broadcast to alternate mount points (e.g. stream ID no. 2). SHOUTcast v2 and Opus support is available from v1.4 onwards in the plugin. Reception In 2007 Oddcast was used in a document from the Department of Audio Communication of Technische Universität Berlin as part of a description of how to set up an internet radio broadcast system. Use of Edcast for a similar purpose was described in a 2010 article in PCWorld magazine. A 2016 thesis for Oulu University of Applied Sciences described the use of Altacast in the implementation of an internet radio station, while a 2018 article in Linux Journal recommended it as a compatible source client for Microsoft Windows when setting up a freeware internet radio station using Liquidsoap, Icecast and open standards. Académie d'Orléans-Tours has used web radio (internet radio) for broadcasts for students, and the use of Edcast, subsequently Altacast, in that system has been described.
See also List of Internet radio stations List of streaming media systems References External links Audio software Internet radio in the United States Streaming software Internet radio software 2001 software Companies based in Chicago
Altacast
[ "Engineering" ]
861
[ "Audio engineering", "Audio software" ]
1,496,726
https://en.wikipedia.org/wiki/Maximal%20compact%20subgroup
In mathematics, a maximal compact subgroup K of a topological group G is a subgroup K that is a compact space, in the subspace topology, and maximal amongst such subgroups. Maximal compact subgroups play an important role in the classification of Lie groups and especially semi-simple Lie groups. Maximal compact subgroups of Lie groups are not in general unique, but are unique up to conjugation – they are essentially unique. Example An example would be the subgroup O(2), the orthogonal group, inside the general linear group GL(2, R). A related example is the circle group SO(2) inside SL(2, R). Evidently SO(2) inside GL(2, R) is compact and not maximal. The non-uniqueness of these examples can be seen from the fact that any inner product has an associated orthogonal group, and the essential uniqueness corresponds to the essential uniqueness of the inner product. Definition A maximal compact subgroup is a maximal subgroup amongst compact subgroups – a maximal (compact subgroup) – rather than being (on the alternate possible reading) a maximal subgroup that happens to be compact, which would probably be called a compact (maximal subgroup); in any case that is not the intended meaning (and in fact maximal proper subgroups are not in general compact). Existence and uniqueness The Cartan-Iwasawa-Malcev theorem asserts that every connected Lie group (and indeed every connected locally compact group) admits maximal compact subgroups and that they are all conjugate to one another. For a semisimple Lie group uniqueness is a consequence of the Cartan fixed point theorem, which asserts that if a compact group acts by isometries on a complete simply connected negatively curved Riemannian manifold then it has a fixed point. Maximal compact subgroups of connected Lie groups are usually not unique, but they are unique up to conjugation, meaning that given two maximal compact subgroups K and L, there is an element g ∈ G such that gKg−1 = L. Hence a maximal compact subgroup is essentially unique, and people often speak of "the" maximal compact subgroup. For the example of the general linear group GL(n, R), this corresponds to the fact that any inner product on Rn defines a (compact) orthogonal group (its isometry group) and that it admits an orthonormal basis: the change of basis defines the conjugating element conjugating the isometry group to the classical orthogonal group O(n, R). Proofs For a real semisimple Lie group, Cartan's proof of the existence and uniqueness of a maximal compact subgroup can be found in and . and discuss the extension to connected Lie groups and connected locally compact groups. For semisimple groups, existence is a consequence of the existence of a compact real form of the noncompact semisimple Lie group and the corresponding Cartan decomposition. The proof of uniqueness relies on the fact that the corresponding Riemannian symmetric space G/K has negative curvature and on Cartan's fixed point theorem. showed that the derivative of the exponential map at any point of G/K satisfies |d exp X| ≥ |X|. This implies that G/K is a Hadamard space, i.e. a complete metric space satisfying a weakened form of the parallelogram law in a Euclidean space. Uniqueness can then be deduced from the Bruhat-Tits fixed point theorem. Indeed, any bounded closed set in a Hadamard space is contained in a unique smallest closed ball, the center of which is called its circumcenter. In particular a compact group acting by isometries must fix the circumcenter of each of its orbits.
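The conjugacy statement can be checked numerically for GL(n, R): if A is invertible, then A O(n) A⁻¹ is the isometry group of the inner product ⟨x, y⟩ = xᵀ(AAᵀ)⁻¹y, so every conjugate of O(n) arises from some inner product. A small sketch (illustrative only, using NumPy; not part of the article):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

a = rng.normal(size=(n, n))                    # generic invertible matrix A in GL(n, R)
k, _ = np.linalg.qr(rng.normal(size=(n, n)))   # a random element k of O(n) via QR

g = a @ k @ np.linalg.inv(a)    # conjugate of k: an element of A O(n) A^{-1}
m = np.linalg.inv(a @ a.T)      # Gram matrix of the inner product <x, y> = x^T M y

# g preserves the inner product defined by M, i.e. g^T M g = M:
print(np.allclose(g.T @ m @ g, m))   # True (up to floating-point error)
```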
The proof of uniqueness for semisimple groups also relates the general problem to the case of GL(n, R); the corresponding symmetric space is the space of positive symmetric matrices. A direct proof of uniqueness relying on elementary properties of this space is given in . Let $\mathfrak{g}$ be a real semisimple Lie algebra with Cartan involution σ. Thus the fixed point subgroup of σ is the maximal compact subgroup K and there is an eigenspace decomposition $\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{p}$, where $\mathfrak{k}$, the Lie algebra of K, is the +1 eigenspace. The Cartan decomposition gives $G = K \cdot \exp \mathfrak{p} = K \cdot P = P \cdot K$. If B is the Killing form on $\mathfrak{g}$ given by B(X,Y) = Tr (ad X)(ad Y), then $\langle X, Y \rangle_\sigma = -B(X, \sigma(Y))$ is a real inner product on $\mathfrak{g}$. Under the adjoint representation, K is the subgroup of G that preserves this inner product. If H is another compact subgroup of G, then averaging the inner product over H with respect to the Haar measure gives an inner product invariant under H. The operators Ad p with p in P are positive symmetric operators. This new inner product can be written as $\langle S \cdot X, Y \rangle_\sigma$, where S is a positive symmetric operator on $\mathfrak{g}$ such that $\mathrm{Ad}(h)^T S \,\mathrm{Ad}(h) = S$ for h in H (with the transposes computed with respect to the inner product). Moreover, for x in G, $\mathrm{Ad}\,\sigma(x) = (\mathrm{Ad}\,x^{-1})^T$. So for h in H, $S \circ \mathrm{Ad}(\sigma(h)) = \mathrm{Ad}(h) \circ S$. For X in $\mathfrak{p}$ define $f(e^X) = \mathrm{Tr}\,(\mathrm{Ad}(e^X)S)$. If $e_i$ is an orthonormal basis of eigenvectors for S with $Se_i = \lambda_i e_i$, then $f(e^X) = \sum \lambda_i \langle \mathrm{Ad}(e^X)e_i, e_i \rangle$, so that f is strictly positive and tends to ∞ as |X| tends to ∞. In fact this norm is equivalent to the operator norm on the symmetric operators ad X, and each non-zero eigenvalue occurs with its negative, since i ad X is a skew-adjoint operator on the compact real form $\mathfrak{k} \oplus i\mathfrak{p}$. So f has a global minimum at Y, say. This minimum is unique, because if Z were another, one may compare the values of f along the path from exp Y to exp Z, with X in $\mathfrak{p}$ defined by the Cartan decomposition. If $f_i$ is an orthonormal basis of eigenvectors of ad X with corresponding real eigenvalues $\mu_i$, the restriction g(t) of f along this path is a positive combination of exponentials $e^{\mu_i t}$. Since the right hand side is a positive combination of exponentials, the real-valued function g is strictly convex if X ≠ 0, so it has a unique minimum. On the other hand, it has local minima at t = 0 and t = 1; hence X = 0 and p = exp Y is the unique global minimum. By construction f(x) = f(σ(h)xh−1) for h in H, so that p = σ(h)ph−1 for h in H. Hence σ(h) = php−1. Consequently, if g = exp Y/2, then gHg−1 is fixed by σ and therefore lies in K. Applications Representation theory Maximal compact subgroups play a basic role in representation theory when G is not compact. In that case a maximal compact subgroup K is a compact Lie group (since a closed subgroup of a Lie group is a Lie group), for which the theory is easier. The operations relating the representation theories of G and K are restricting representations from G to K, and inducing representations from K to G; these are quite well understood, and their theory includes that of spherical functions. Topology The algebraic topology of a Lie group is also largely carried by a maximal compact subgroup K. To be precise, a connected Lie group is a topological product (though not a group-theoretic product) of a maximal compact K and a Euclidean space, G = K × Rd; thus in particular K is a deformation retract of G, and the two are homotopy equivalent and have the same homotopy groups. Indeed, the inclusion and the deformation retraction are homotopy equivalences. For the general linear group, this decomposition is the QR decomposition, and the deformation retraction is the Gram-Schmidt process. For a general semisimple Lie group, the decomposition is the Iwasawa decomposition of G as G = KAN, in which K occurs in a product with a contractible subgroup AN.
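For GL(n, R) the decomposition G = K × Rd is concrete: QR decomposition splits an invertible matrix into an element of the maximal compact O(n) times an upper-triangular matrix with positive diagonal (the contractible factor). A numerical sketch (illustrative, using NumPy; not part of the article):

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=(3, 3))      # generic element of GL(3, R) (invertible almost surely)

# QR decomposition: g = k @ r with k orthogonal and r upper triangular.
k, r = np.linalg.qr(g)
# Normalize so r has a positive diagonal, which makes the decomposition unique.
d = np.sign(np.diag(r))
k, r = k * d, d[:, None] * r

print(np.allclose(k @ k.T, np.eye(3)))   # k lies in the maximal compact O(3)
print(np.allclose(g, k @ r))             # g = k r recovers the original matrix
print(np.all(np.diag(r) > 0))            # r lies in the contractible factor
```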
See also Hyperspecial subgroup Complex Lie group Notes References Topological groups Lie groups
Maximal compact subgroup
[ "Mathematics" ]
1,629
[ "Lie groups", "Mathematical structures", "Space (mathematics)", "Topological spaces", "Algebraic structures", "Topological groups" ]
1,496,737
https://en.wikipedia.org/wiki/Hexatonic%20scale
In music and music theory, a hexatonic scale is a scale with six pitches or notes per octave. Famous examples include the whole-tone scale, C D E F♯ G♯ A♯ C; the augmented scale, C D♯ E G A♭ B C; the Prometheus scale, C D E F♯ A B♭ C; and the blues scale, C E♭ F G♭ G B♭ C. A hexatonic scale can also be formed by stacking perfect fifths. This results in a diatonic scale with one note removed (for example, A C D E F G). Whole-tone scale The whole-tone scale is a series of whole tones. It has two non-enharmonically equivalent positions: C D E F♯ G♯ A♯ C and D♭ E♭ F G A B D♭. It is primarily associated with the French impressionist composer Claude Debussy, who used it in such pieces of his as Voiles and Le vent dans la plaine, both from his first book of piano Préludes. The whole-tone scale has appeared occasionally and sporadically in jazz at least since Bix Beiderbecke's impressionistic piano piece In a Mist. Bop pianist Thelonious Monk often interpolated whole-tone scale flourishes into his improvisations and compositions. Mode-based hexatonic scale The major hexatonic scale is made by taking a major scale and removing the seventh note, e.g., C D E F G A C. It can also be made by superimposing two mutually exclusive triads, e.g., C E G and D F A. Similarly, the minor hexatonic scale is made from a minor scale by removing the sixth note, e.g., C D E♭ F G B♭ C. Irish, Scottish and many other folk traditions use six-note scales. They can be easily described as the addition of two triads a tone apart, e.g., Am and G in "Shady Grove", or as omitting the fourth or sixth from the seven-note diatonic scale. Augmented scale The augmented scale, also known in jazz theory as the symmetrical augmented scale, is so called because it can be thought of as an interlocking combination of two augmented triads an augmented second or minor third apart: C E G♯ and E♭ G B. It may also be called the "minor-third half-step scale", owing to the series of intervals produced. It made one of its most celebrated early appearances in Franz Liszt's Faust Symphony (Eine Faust Symphonie). Another famous use of the augmented scale (in jazz) is in Oliver Nelson's solo on "Stolen Moments". It is also prevalent in 20th-century compositions by Alberto Ginastera, Almeida Prado, Béla Bartók, Milton Babbitt, and Arnold Schoenberg, in the playing of saxophonists John Coltrane and Oliver Nelson in the late 1950s and early 1960s, and in that of bandleader Michael Brecker. Alternating E major and C minor triads form the augmented scale in the opening bars of the Finale of Shostakovich's Second Piano Trio. Prometheus scale The Prometheus scale is so called because of its prominent use in Alexander Scriabin's symphonic poem Prometheus: The Poem of Fire. Scriabin himself called this set of pitches, voiced as the simultaneity (in ascending order) C F♯ B♭ E A D, the "mystic chord". Others have referred to it as the "Promethean chord". It may be thought of as C Lydian dominant without the 5th degree. It can also be thought of as a triad pair: a minor triad and an augmented triad a half step up, for example an A minor triad and a B-flat augmented triad. Blues scale The blues scale is so named for its use of blue notes. Since blue notes are alternate inflections, strictly speaking there can be no one blues scale, but the scale most commonly called "the blues scale" comprises the minor pentatonic scale and an additional flat 5th scale degree: C E♭ F G♭ G B♭ C.
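Each of the scales above is determined by its interval pattern in semitones (whole tone 2-2-2-2-2-2, augmented 3-1-3-1-3-1, major hexatonic 2-2-1-2-2-3, blues 3-2-1-1-3-2), so the examples can be generated programmatically. A brief sketch, not part of the article and spelling all notes with sharps for simplicity:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def scale(root: str, steps: list[int]) -> list[str]:
    """Build a scale from a root note and a list of semitone steps."""
    i = NOTES.index(root)
    out = [root]
    for step in steps:
        i = (i + step) % 12
        out.append(NOTES[i])
    return out

print(scale("C", [2, 2, 2, 2, 2, 2]))  # whole tone:      C D E F# G# A# C
print(scale("C", [3, 1, 3, 1, 3, 1]))  # augmented:       C D# E G G# B C
print(scale("C", [2, 2, 1, 2, 2, 3]))  # major hexatonic: C D E F G A C
print(scale("C", [3, 2, 1, 1, 3, 2]))  # blues:           C D# F F# G A# C
```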
Tritone scale The tritone scale, C D♭ E G♭ G B♭, is enharmonically equivalent to the Petrushka chord; it comprises a C major chord (C E G) plus a G♭ major chord in second inversion (D♭ G♭ B♭). The two-semitone tritone scale, C D♭ D F♯ G A♭, is a symmetric scale consisting of a repeated pattern of two semitones followed by a major third, now used for improvisation; it may substitute for any mode of the jazz minor scale. The scale originated in Nicolas Slonimsky's book Thesaurus of Scales and Melodic Patterns through the "equal division of one octave into two parts", creating a tritone, and the "interpolation of two notes", adding two consecutive semitones after each of the two resulting notes. The scale is the fifth mode of Messiaen's list. See also Hexachord Istrian scale References External links A model for hexatonic scales in 96-EDO, 96edo.com. Detailed Examination of Hexatonic Scales Originating in the Natural Scale/Harmonic Series The origin of triads and musica ficta filling in hexatonic gaps in the diatonic scale Tritones Musical scales
Hexatonic scale
[ "Physics" ]
1,096
[ "Tritones", "Symmetry", "Musical symmetry" ]
1,496,757
https://en.wikipedia.org/wiki/Quest%20Diagnostics
Quest Diagnostics Incorporated is an American clinical laboratory. A Fortune 500 company, Quest operates in the United States, Puerto Rico, Mexico, and Brazil. Quest also maintains collaborative agreements with various hospitals and clinics across the globe. As of 2020, the company had approximately 48,000 employees, and it generated more than $7.7 billion in revenue in 2019. The company offers access to diagnostic testing services for cancer, cardiovascular disease, infectious disease, neurological disorders, COVID-19, and employment and court-ordered drug testing. History 1960–1995 Originally founded as Metropolitan Pathology Laboratory, Inc. in 1967 by Paul A. Brown, MD, the clinical laboratory underwent a variety of name changes. In 1969, the company's name changed to MetPath, Inc., with headquarters in Teaneck, New Jersey. By 1982, MetPath had been acquired by what was then known as Corning Glass Works and was subsequently renamed Corning Clinical Laboratories. 1996–2000 On December 31, 1996, Quest Diagnostics became an independent company as a spin-off from Corning. Kenneth W. Freeman was appointed as CEO during this transition. Over the next year, Quest acquired a clinical laboratory division of Branford, Connecticut–based Diagnostic Medical Laboratory, Inc. (DML). Two years later, in 1999, Quest added SmithKline Beecham Clinical Laboratories to its subsidiaries, which included a joint venture ownership with CompuNet Clinical Laboratory. The purchase of SmithKline Beecham also included the lab's medical sample transport airline, originally founded in 1988. In 1997, Quest and Banner Health formed a joint venture creating the Arizona-based Sonora Quest laboratory, a business unit of Laboratory Sciences of Arizona. This entity represents the operations of Quest Diagnostics in the Arizona regional market. 2001–2015 From May 2004 to April 2012, Surya Mohapatra served as the company's President and CEO. In 2007 Quest acquired diagnostic testing equipment company AmeriPath. Following Mohapatra's resignation after eight years with Quest, former Philips Healthcare CEO Stephen Rusckowski was appointed. Under Rusckowski, Quest Diagnostics teamed up with central New England's largest health care system, UMass Memorial Health Care, to purchase its clinical outreach laboratory. 2016–present In 2016, Quest collaborated with Safeway to bring testing services to twelve of its stores in California, Maryland, Virginia, Texas and Colorado. By the end of 2017, Quest, in partnership with Walmart, incorporated laboratory testing in about 15 of their locations in Texas and Florida. In May 2018, the company announced it would become an in-network laboratory provider to UnitedHealthcare starting in 2019, providing access to 48 million plan members. In September 2018, Quest moved its headquarters from Madison, where it had been located since 2007, to Secaucus, New Jersey. In November 2018, Quest launched QuestDirect, a consumer-initiated testing service that allows patients to order health and wellness lab testing from home. In March 2020, the company launched a COVID-19 testing service. As of July 2020, Quest had performed more than 9.2 million COVID-19 molecular tests and 2.8 million serology tests. In April 2024, Quest added a new blood screening to its AD-Detect product line; the test analyzes blood for a specific Alzheimer's protein, pTau-217. Acquisitions Partnerships 2005: Forms a strategic alliance with Ciphergen Biosystems to commercialize novel proteomic tests.
Controversies Quest Diagnostics set a record in April 2009 when it paid $302 million to the government to settle a Medicare fraud case alleging the company sold faulty medical testing kits. It was the largest qui tam (whistleblower) settlement paid by a medical lab for manufacturing and distributing a faulty product. In May 2011, Quest paid $241 million to the state of California to settle a False Claims Act case that alleged the company had overcharged Medi-Cal, the state's Medicaid program, and provided illegal kickbacks as incentives for healthcare providers to use Quest labs. In 2018, Quest Diagnostics was among a number of US based labs linked to inaccuracies of over 200 women's cervical smear tests for CervicalCheck, Ireland's national screening program. Audits of the testing performed by Quest (and another subcontractor Clinical Pathology Laboratories, Inc. of Austin Texas) showed a high rate of errors in analysis of samples which led to lawsuits and a government inquiry. Quest and the Irish government continue to settle the resulting lawsuits. On June 3, 2019, Quest announced that American Medical Collection Agency (AMCA), a billing collections service provider, had informed Quest Diagnostics that an unauthorized user had access to AMCA’s system containing personal information AMCA received from various entities, including from Quest. AMCA provides billing collections services to Optum360, which in turn is a Quest contractor. AMCA later went bankrupt after the breach. References External links Medical technology companies of the United States Companies listed on the New York Stock Exchange American companies established in 1967 Companies based in Hudson County, New Jersey Secaucus, New Jersey 1967 establishments in New York City Corning Inc. Life sciences industry Health care companies based in New Jersey 1982 mergers and acquisitions Corporate spin-offs Alzheimer's disease research
Quest Diagnostics
[ "Biology" ]
1,075
[ "Life sciences industry" ]
1,496,984
https://en.wikipedia.org/wiki/Flue-gas%20desulfurization
Flue-gas desulfurization (FGD) is a set of technologies used to remove sulfur dioxide (SO2) from exhaust flue gases of fossil-fuel power plants, and from the emissions of other sulfur oxide emitting processes such as waste incineration, petroleum refineries, cement and lime kilns. Methods Since stringent environmental regulations limiting SO2 emissions have been enacted in many countries, SO2 is being removed from flue gases by a variety of methods. Common methods used: Wet scrubbing using a slurry of alkaline sorbent, usually limestone or lime, or seawater to scrub the gases; Spray-dry scrubbing using similar sorbent slurries; Wet sulfuric acid process recovering sulfur in the form of commercial quality sulfuric acid; SNOX flue gas desulfurization, which removes sulfur dioxide, nitrogen oxides and particulates from flue gases; Dry sorbent injection systems that introduce powdered hydrated lime (or other sorbent material) into exhaust ducts to eliminate SO2 and SO3 from process emissions. For a typical coal-fired power station, flue-gas desulfurization (FGD) may remove 90 per cent or more of the SO2 in the flue gases. History Methods of removing sulfur dioxide from boiler and furnace exhaust gases have been studied for over 150 years. Early ideas for flue gas desulfurization were established in England around 1850. With the construction of large-scale power plants in England in the 1920s, the problems associated with large volumes of SO2 from a single site began to concern the public. The SO2 emissions problem did not receive much attention until 1929, when the House of Lords upheld the claim of a landowner against the Barton Electricity Works of the Manchester Corporation for damages to his land resulting from SO2 emissions. Shortly thereafter, a press campaign was launched against the erection of power plants within the confines of London. This outcry led to the imposition of controls on all such power plants. The first major FGD unit at a utility was installed in 1931 at Battersea Power Station, owned by London Power Company. In 1935, an FGD system similar to that installed at Battersea went into service at Swansea Power Station. The third major FGD system was installed in 1938 at Fulham Power Station. These three early large-scale FGD installations were suspended during World War II, because the characteristic white vapour plumes would have aided location finding by enemy aircraft. The FGD plant at Battersea was recommissioned after the war and, together with the FGD plant at the new Bankside B power station opposite the City of London, operated until the stations closed in 1983 and 1981 respectively. Large-scale FGD units did not reappear at utilities until the 1970s, when most of the installations occurred in the United States and Japan. The Clean Air Act of 1970 (CAA) and its amendments have influenced implementation of FGD. In 2017, the revised PTC 40 Standard was published. This revised standard (PTC 40-2017) covers Dry and Regenerable FGD systems and provides a more detailed Uncertainty Analysis section. This standard is currently in use by companies around the world. As of June 1973, there were 42 FGD units in operation, 36 in Japan and 6 in the United States, ranging in capacity from 5 MW to 250 MW. As of around 1999 and 2000, FGD units were being used in 27 countries, and there were 678 FGD units operating at a total power plant capacity of about 229 gigawatts. About 45% of the FGD capacity was in the U.S., 24% in Germany, 11% in Japan, and 20% in various other countries. 
Approximately 79% of the units, representing about 199 gigawatts of capacity, were using lime or limestone wet scrubbing. About 18% (or 25 gigawatts) utilized spray-dry scrubbers or sorbent injection systems. FGD on ships The International Maritime Organization (IMO) has adopted guidelines on the approval, installation and use of exhaust gas scrubbers (exhaust gas cleaning systems) on board ships to ensure compliance with the sulphur regulation of MARPOL Annex VI. Flag States must approve such systems and port States can (as part of their port state control) ensure that such systems are functioning correctly. If a scrubber system is not functioning properly (and the IMO procedures for such malfunctions are not adhered to), port States can sanction the ship. The United Nations Convention on the Law of the Sea also gives port States a right to regulate (and even ban) the use of open loop scrubber systems within ports and internal waters. Sulfuric acid mist formation Fossil fuels such as coal and oil can contain a significant amount of sulfur. When fossil fuels are burned, about 95 percent or more of the sulfur is generally converted to sulfur dioxide (SO2). Such conversion happens under normal conditions of temperature and of oxygen present in the flue gas. However, there are circumstances under which such reaction may not occur. SO2 can further oxidize into sulfur trioxide (SO3) when excess oxygen is present and gas temperatures are sufficiently high. At about 800 °C, formation of SO3 is favored. Another way that SO3 can be formed is through catalysis by metals in the fuel. Such reaction is particularly true for heavy fuel oil, where a significant amount of vanadium is present. In whatever way SO3 is formed, it does not behave like SO2 in that it forms a liquid aerosol known as sulfuric acid (H2SO4) mist that is very difficult to remove. Generally, about 1% of the sulfur dioxide will be converted to SO3. Sulfuric acid mist is often the cause of the blue haze that often appears as the flue gas plume dissipates. Increasingly, this problem is being addressed by the use of wet electrostatic precipitators. FGD chemistry Principles Most FGD systems employ two stages: one for fly ash removal and the other for SO2 removal. Attempts have been made to remove both the fly ash and SO2 in one scrubbing vessel. However, these systems experienced severe maintenance problems and low removal efficiency. In wet scrubbing systems, the flue gas normally passes first through a fly ash removal device, either an electrostatic precipitator or a baghouse, and then into the SO2 absorber. However, in dry injection or spray drying operations, the SO2 is first reacted with the lime, and then the flue gas passes through a particulate control device. Another important design consideration associated with wet FGD systems is that the flue gas exiting the absorber is saturated with water and still contains some SO2. These gases are highly corrosive to any downstream equipment such as fans, ducts, and stacks. Two methods that may minimize corrosion are: (1) reheating the gases to above their dew point, or (2) using materials of construction and designs that allow equipment to withstand the corrosive conditions. Both alternatives are expensive. Engineers determine which method to use on a site-by-site basis. Scrubbing with an alkali solid or solution SO2 is an acid gas, and, therefore, the typical sorbent slurries or other materials used to remove the SO2 from the flue gases are alkaline. 
The reaction taking place in wet scrubbing using a CaCO3 (limestone) slurry produces calcium sulfite (CaSO3) and may be expressed in the simplified dry form as: CaCO3 (s) + SO2 (g) → CaSO3 (s) + CO2 (g). Wet scrubbing can be conducted with an M(OH)2 (hydrated lime) slurry instead, producing the sulfite and water (M = Ca, Mg): M(OH)2 (s) + SO2 (g) → MSO3 (s) + H2O (l). To partially offset the cost of the FGD installation, some designs, particularly dry sorbent injection systems, further oxidize the CaSO3 (calcium sulfite) to produce marketable CaSO4·2H2O (gypsum) that can be of high enough quality to use in wallboard and other products. The process by which this synthetic gypsum is created is also known as forced oxidation: CaSO3 (aq) + 2H2O (l) + ½O2 (g) → CaSO4·2H2O (s). A natural alkaline usable to absorb SO2 is seawater. The SO2 is absorbed in the water, and when oxygen is added it reacts to form sulfate ions (SO4^2−) and free H+: SO2 + H2O + ½O2 → SO4^2− + 2H+. The surplus of H+ is offset by the carbonates in seawater pushing the carbonate equilibrium to release CO2 gas: HCO3^− + H+ → H2O + CO2. In industry caustic soda (NaOH) is often used to scrub SO2, producing sodium sulfite: 2NaOH (aq) + SO2 (g) → Na2SO3 (aq) + H2O (l). Types of wet scrubbers used in FGD To promote maximum gas–liquid surface area and residence time, a number of wet scrubber designs have been used, including spray towers, venturis, plate towers, and mobile packed beds. Because of scale buildup, plugging, or erosion, which affect FGD dependability and absorber efficiency, the trend is to use simple scrubbers such as spray towers instead of more complicated ones. The configuration of the tower may be vertical or horizontal, and flue gas can flow concurrently, countercurrently, or crosscurrently with respect to the liquid. The chief drawback of spray towers is that they require a higher liquid-to-gas ratio for equivalent SO2 removal than other absorber designs. FGD scrubbers produce a scaling wastewater that requires treatment to meet U.S. federal discharge regulations. However, technological advancements in ion-exchange membranes and electrodialysis systems have enabled high-efficiency treatment of FGD wastewater to meet EPA discharge limits. The treatment approach is similar for other highly scaling industrial wastewaters. Venturi-rod scrubbers A venturi scrubber is a converging/diverging section of duct. The converging section accelerates the gas stream to high velocity. When the liquid stream is injected at the throat, which is the point of maximum velocity, the turbulence caused by the high gas velocity atomizes the liquid into small droplets, which creates the surface area necessary for mass transfer to take place. The higher the pressure drop in the venturi, the smaller the droplets and the higher the surface area. The penalty is in power consumption. For simultaneous removal of SO2 and fly ash, venturi scrubbers can be used. In fact, many of the industrial sodium-based throwaway systems are venturi scrubbers originally designed to remove particulate matter. These units were slightly modified to inject a sodium-based scrubbing liquor. Although removal of both particles and SO2 in one vessel can be economic, the problems of high pressure drops and finding a scrubbing medium to remove heavy loadings of fly ash must be considered. However, in cases where the particle concentration is low, such as from oil-fired units, it can be more effective to remove particulate matter and SO2 simultaneously. Packed bed scrubbers A packed scrubber consists of a tower with packing material inside. This packing material can be in the shape of saddles, rings, or some highly specialized shapes designed to maximize the contact area between the dirty gas and liquid. Packed towers typically operate at much lower pressure drops than venturi scrubbers and are therefore cheaper to operate. 
They also typically offer higher SO2 removal efficiency. The drawback is that they have a greater tendency to plug up if particles are present in excess in the exhaust air stream. Spray towers A spray tower is the simplest type of scrubber. It consists of a tower with spray nozzles, which generate the droplets for surface contact. Spray towers are typically used when circulating a slurry (see below). The high speed of a venturi would cause erosion problems, while a packed tower would plug up if it tried to circulate a slurry. Counter-current packed towers are infrequently used because they have a tendency to become plugged by collected particles or to scale when lime or limestone scrubbing slurries are used. Scrubbing reagent As explained above, alkaline sorbents are used for scrubbing flue gases to remove SO2. Depending on the application, the two most important are lime and sodium hydroxide (also known as caustic soda). Lime is typically used on large coal- or oil-fired boilers as found in power plants, as it is very much less expensive than caustic soda. The problem is that it results in a slurry being circulated through the scrubber instead of a solution. This makes it harder on the equipment. A spray tower is typically used for this application. The use of lime results in a slurry of calcium sulfite (CaSO3) that must be disposed of. Fortunately, calcium sulfite can be oxidized to produce by-product gypsum (CaSO4·2H2O) which is marketable for use in the building products industry. Caustic soda is limited to smaller combustion units because it is more expensive than lime, but it has the advantage that it forms a solution rather than a slurry. This makes it easier to operate. It produces a "spent caustic" solution of sodium sulfite/bisulfite (depending on the pH), or sodium sulfate, that must be disposed of. This is not a problem in a kraft pulp mill, for example, where this can be a source of makeup chemicals to the recovery cycle. Scrubbing with sodium sulfite solution It is possible to scrub sulfur dioxide by using a cold solution of sodium sulfite; this forms a sodium hydrogen sulfite solution. By heating this solution it is possible to reverse the reaction to form sulfur dioxide and the sodium sulfite solution. Since the sodium sulfite solution is not consumed, it is called a regenerative treatment. The application of this reaction is also known as the Wellman–Lord process. In some ways this can be thought of as being similar to the reversible liquid–liquid extraction of an inert gas such as xenon or radon (or some other solute which does not undergo a chemical change during the extraction) from water to another phase. While a chemical change does occur during the extraction of the sulfur dioxide from the gas mixture, the extraction equilibrium is shifted by changing the temperature rather than by the use of a chemical reagent. Gas-phase oxidation followed by reaction with ammonia A new, emerging flue gas desulfurization technology has been described by the IAEA. It is a radiation technology where an intense beam of electrons is fired into the flue gas at the same time as ammonia is added to the gas. The Chengdu power plant in China started up such a flue gas desulfurization unit on a 100 MW scale in 1998. The Pomorzany power plant in Poland also started up a similar sized unit in 2003, and that plant removes both sulfur and nitrogen oxides. Both plants are reported to be operating successfully. 
However, the accelerator design principles and manufacturing quality need further improvement for continuous operation in industrial conditions. No radioactivity is required or created in the process. The electron beam is generated by a device similar to the electron gun in a TV set. This device is called an accelerator. This is an example of a radiation chemistry process where the physical effects of radiation are used to process a substance. The action of the electron beam is to promote the oxidation of sulfur dioxide to sulfur(VI) compounds. The ammonia reacts with the sulfur compounds thus formed to produce ammonium sulfate, which can be used as a nitrogenous fertilizer. In addition, the process can be used to lower the nitrogen oxide content of the flue gas. This method has attained industrial plant scale. Facts and statistics The information in this section was obtained from a US EPA published fact sheet. Flue gas desulfurization scrubbers have been applied to combustion units firing coal and oil that range in size from 5 MW to 1,500 MW. Scottish Power are spending £400 million installing FGD at Longannet power station, which has a capacity of over 2,000 MW. Dry scrubbers and spray scrubbers have generally been applied to units smaller than 300 MW. FGD has been fitted by RWE npower at Aberthaw Power Station in south Wales using the seawater process, and works successfully on the 1,580 MW plant. Approximately 85% of the flue gas desulfurization units installed in the US are wet scrubbers, 12% are spray dry systems, and 3% are dry injection systems. The highest SO2 removal efficiencies (greater than 90%) are achieved by wet scrubbers and the lowest (less than 80%) by dry scrubbers; however, the newer designs for dry scrubbers are capable of achieving efficiencies in the order of 90%. In spray drying and dry injection systems, the flue gas must first be cooled to about 10–20 °C above adiabatic saturation to avoid wet solids deposition on downstream equipment and plugging of baghouses. The capital, operating and maintenance costs per short ton of SO2 removed (in 2001 US dollars) are: for wet scrubbers larger than 400 MW, $200 to $500 per ton; for wet scrubbers smaller than 400 MW, $500 to $5,000 per ton; for spray dry scrubbers larger than 200 MW, $150 to $300 per ton; for spray dry scrubbers smaller than 200 MW, $500 to $4,000 per ton. Alternative methods of reducing sulfur dioxide emissions An alternative to removing sulfur from the flue gases after burning is to remove the sulfur from the fuel before or during combustion. Hydrodesulfurization of fuel has been used for treating fuel oils before use. Fluidized bed combustion adds lime to the fuel during combustion. The lime reacts with the SO2 to form sulfates which become part of the ash. This elemental sulfur is then separated and finally recovered at the end of the process for further usage in, for example, agricultural products. Safety is one of the greatest benefits of this method, as the whole process takes place at atmospheric pressure and ambient temperature. This method has been developed by Paqell, a joint venture between Shell Global Solutions and Paques. 
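To make the limestone scrubbing chemistry above concrete, the short sketch below estimates the sorbent demand of a wet scrubber from the overall reaction CaCO3 + SO2 → CaSO3 + CO2. It is an illustration written for this article, not a published design formula; the capture fraction and sorbent utilization figures are assumptions chosen only for the example.

# Rough limestone (CaCO3) demand for wet scrubbing of SO2.
M_CACO3 = 100.09  # molar mass of CaCO3, g/mol
M_SO2 = 64.07     # molar mass of SO2, g/mol

def limestone_demand(so2_tonnes, capture_fraction=0.90, utilization=0.95):
    # One mole of CaCO3 neutralizes one mole of SO2, so the mass ratio is
    # M_CACO3 / M_SO2; `utilization` crudely allows for unreacted sorbent.
    captured = so2_tonnes * capture_fraction
    return captured * (M_CACO3 / M_SO2) / utilization

# A unit producing 1,000 tonnes of SO2 at 90% capture needs roughly:
print(round(limestone_demand(1000.0), 1), "tonnes of CaCO3")  # about 1480.0

The same molar-ratio reasoning extends to the hydrated lime and caustic soda reactions above, with their respective stoichiometric coefficients.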
See also Incineration Scrubber Flue-gas emissions from fossil-fuel combustion Flue-gas stacks Wellman–Lord process References External links Schematic process flow of FGD plant 5000 MW FGD Plant (includes a detailed process flow diagram) Alstom presentation to UN-ECE on air pollution control (includes process flow diagram for dry, wet and seawater FGD) Flue Gas Treatment article including the removal of hydrogen chloride, sulfur trioxide, and other heavy metal particles such as mercury. Institute of Clean Air Companies – national trade association representing emissions control manufacturers Pollution control technologies Air pollution control systems Acid gas control Incineration Environmental engineering Environmental impact of the energy industry Gas technologies Desulfurization
Flue-gas desulfurization
[ "Chemistry", "Engineering" ]
3,804
[ "Desulfurization", "Separation processes", "Chemical engineering", "Combustion engineering", "Pollution control technologies", "Incineration", "Civil engineering", "Environmental engineering" ]
1,497,098
https://en.wikipedia.org/wiki/Kaprekar%27s%20routine
In number theory, Kaprekar's routine is an iterative algorithm named after its inventor, Indian mathematician D. R. Kaprekar. Each iteration starts with a number, sorts the digits into descending and ascending order, and calculates the difference between the two new numbers. As an example, starting with the number 8991 in base 10: 9981 − 1899 = 8082, 8820 − 0288 = 8532, 8532 − 2358 = 6174, 7641 − 1467 = 6174. 6174, known as Kaprekar's constant, is a fixed point of this algorithm. Any four-digit number (in base 10) with at least two distinct digits will reach 6174 within seven iterations. The algorithm runs on any natural number in any given number base. Definition and properties The algorithm is as follows: Choose any natural number n in a given number base b. This is the first number of the sequence. Create a new number α by sorting the digits of n in descending order, and another number β by sorting the digits of n in ascending order. These numbers may have leading zeros, which can be ignored. Subtract β from α to produce the next number of the sequence. Repeat step 2. The sequence is called a Kaprekar sequence and the function K(n) = α − β is the Kaprekar mapping. Some numbers map to themselves; these are the fixed points of the Kaprekar mapping, and are called Kaprekar's constants. Zero is a Kaprekar's constant for all bases b, and so is called a trivial Kaprekar's constant. All other Kaprekar's constants are nontrivial Kaprekar's constants. For example, in base 10, starting with 3524: 5432 − 2345 = 3087, 8730 − 0378 = 8352, 8532 − 2358 = 6174, with 6174 as a Kaprekar's constant. All Kaprekar sequences will either reach one of these fixed points or will result in a repeating cycle. Either way, the end result is reached in a fairly small number of steps (within seven iterations or steps). Note that the numbers α and β have the same digit sum and hence the same remainder modulo b − 1. Therefore, each number in a Kaprekar sequence of base-b numbers (other than possibly the first) is a multiple of b − 1. When leading zeroes are retained, only repdigits lead to the trivial Kaprekar's constant. Families of Kaprekar's constants In base 4, it can easily be shown that all numbers of the form 3021, 310221, 31102221, 3...111...02...222...1 (where the length of the "1" sequence and the length of the "2" sequence are the same) are fixed points of the Kaprekar mapping. In base 10, it can easily be shown that all numbers of the form 6174, 631764, 63317664, 6...333...17...666...4 (where the length of the "3" sequence and the length of the "6" sequence are the same) are fixed points of the Kaprekar mapping. In the following, we will refer to the fixed points of the Kaprekar routine not as "Kaprekar constants" but as "Kaprekar numbers defined by Definition 2". In addition, "Kaprekar constant α" refers to the case where all the numbers of that digit length become 0 or α under the Kaprekar routine. In 2005, Y. Hirata calculated all fixed points up to 31 digits and examined their distribution. In 1981, Prichett, et al. showed that the Kaprekar constant is limited to two numbers, 495 (3 digits) and 6174 (4 digits). They also classified the Kaprekar numbers into four types, but there was some overlap in this classification. In 2024, Haruo Iwasaki of the Ranzan Mathematics Study Group (headed by Kenichi Iyanaga) showed that in order for a natural number to be a Kaprekar number, it must belong to one of five sets composed of combinations of the seven numbers 495, 6174, 36, 123456789, 27, 124578, and 09, and that this new classification using the five sets includes a correction to the classification by Prichett, et al. 
As a result, the number of n-digit Kaprekar numbers is determined by two types of equations, or by three types of Diophantine equations. It was found that the number of integer solutions of the equations that can be established is the same as the number of solutions that express all of the n-digit Kaprekar numbers. In addition, there are no Kaprekar numbers for 5-digit and 7-digit numbers because they do not satisfy the above equations. Furthermore, it is clear that some of these equations have more than one solution. Although 11-digit and 13-digit numbers have only one solution each, these numbers have loop-5 and loop-2 numbers respectively, so Prichett's result that the "Kaprekar constant" is limited to 495 (3 digits) and 6174 (4 digits) is again verified. In this way, the problem of determining all of the Kaprekar numbers defined in Definition 2, and the number of these, was solved. Following this paper, we will give one example. Example: In the case where the number of digits is odd and not a multiple of 3, the number of equations that can be solved is limited to the following three, and if the operation defined above is applied once to the numbers corresponding to the solutions of these equations, seven Kaprekar numbers can be obtained. The solution to is The solution to is The solutions to are b = 2k It can be shown that natural numbers of a particular form are fixed points of the Kaprekar mapping in every even base b = 2k, for all natural numbers n. See also Arithmetic dynamics Collatz conjecture Dudeney number Factorion Happy number Kaprekar number Meertens number Narcissistic number Perfect digit-to-digit invariant Perfect digital invariant Sum-product number Sorting algorithm Citations References G. D. Prichett, A. L. Ludington, and J. F. Lapenta, The determination of all decadic Kaprekar constants, The Fibonacci Quarterly, 19.1 (1981), 45–52. Y. Hirata, The Kaprekar transformation for higher-digit numbers, Maebashi Kyoai Gakuen Ronshu, 5 (2005), 21–50. H. Iwasaki, A new classification of the Kaprekar numbers, The Fibonacci Quarterly, 62.4 (2024), 275–281. External links Sample (Perl) code to walk any four-digit number to Kaprekar's Constant Sample (Python) code to walk any four-digit number to Kaprekar's Constant Arithmetic dynamics Base-dependent integer sequences Sorting algorithms
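The routine itself is only a few lines of code. The following is a minimal sketch in Python written for this article (it is not taken from the cited sources); the fixed digit width, which keeps leading zeros between iterations, and the function names are choices made for illustration.

def kaprekar_step(n, width=4, base=10):
    # Extract `width` digits of n (keeping leading zeros), least significant first.
    digits = []
    for _ in range(width):
        digits.append(n % base)
        n //= base
    digits.sort()
    asc = desc = 0
    for d in digits:            # ascending order
        asc = asc * base + d
    for d in reversed(digits):  # descending order
        desc = desc * base + d
    return desc - asc

def kaprekar_sequence(n, width=4, base=10):
    # Iterate until a value repeats: either a fixed point or the start of a cycle.
    seen = []
    while n not in seen:
        seen.append(n)
        n = kaprekar_step(n, width, base)
    return seen + [n]

print(kaprekar_sequence(8991))  # [8991, 8082, 8532, 6174, 6174]

Running the second function over every four-digit input with at least two distinct digits reproduces the statement above that all such numbers reach 6174 within seven iterations.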
Kaprekar's routine
[ "Mathematics" ]
1,408
[ "Sorting algorithms", "Recreational mathematics", "Arithmetic dynamics", "Order theory", "Number theory", "Dynamical systems" ]
1,497,274
https://en.wikipedia.org/wiki/Soil%20survey
Soil survey, or soil mapping, is the process of classifying soil types and other soil properties in a given area and geo-encoding such information. Background Soil surveys apply the principles of soil science and draw heavily from geomorphology, theories of soil formation, physical geography, and analysis of vegetation and land use patterns. Primary data for the soil survey are acquired by field sampling and by remote sensing. Remote sensing principally uses aerial photography, but LiDAR and other digital techniques are steadily gaining in popularity. In the past, a soil scientist would take hard copies of aerial photography, topographic maps, and mapping keys into the field. Today, a growing number of soil scientists bring a ruggedized tablet computer and GPS into the field. The tablet may be loaded with digital aerial photos, LiDAR, topography, soil geodatabases, mapping keys, and more. Publication The term soil survey may also be used as a noun to describe the published results. In the United States, these surveys were once published in book form for individual counties by the National Cooperative Soil Survey. Today, soil surveys are no longer published in book form; they are published to the web and accessed on NRCS Web Soil Survey, where a person can create a custom soil survey. This allows for rapid flow of the latest soil information to the user. In the past it could take years to publish a paper soil survey; today changes go live to the public in moments. The most current soil survey data is made available to high-end GIS users such as professional consulting companies and universities. Typical information in a published county soil survey includes the following: a brief overview on how to use the survey; a general soil map for comparing the suitability of large sections of the county; a detailed map with specific soil series outlined and indexed; a section on the use and management of soils; tables describing the physical features and environment of the county; tables containing land use suitability based on standards set by the Natural Resources Conservation Service. Uses The information in a soil survey can be used by farmers and ranchers to help determine whether a particular soil type is suited for crops or livestock and what type of soil management might be required. An architect or engineer might use the engineering properties of a soil to determine whether it is suitable for a certain type of construction. A homeowner may even use the information for maintaining or constructing their garden, yard, or home. Soil survey information can be used to predict or estimate the potentials and limitations of soils for many specific uses. A soil survey includes an important part of the information that is used to make workable plans for land management. The information must be interpreted to be usable by professional planners and others. Predictions based on soil surveys serve as a basis for judgment about land use and management for areas ranging from small tracts to regions of several million acres. These predictions, however, must be evaluated along with economic, social, and environmental considerations before they can be used to make valid recommendations for land use and management. See also FAO soil classification USDA soil taxonomy Pedometrics Earth sciences survey References External links A Compendium of On-Line Soil Survey Information NRCS Web Soil Survey Inventory of the soil resource across the U.S. 
NRCS Helping People Understand Soils California Online Soil Survey Soil Data Access Texas Soil Surveys, hosted by the Portal to Texas History Soil Maps of the world European Digital Archive on the Soil Maps of the world Historical Soil Surveys of South Carolina at the University of South Carolina Library's Digital Collections Page Land management Measurement Pedology Field surveys
Soil survey
[ "Physics", "Mathematics" ]
720
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
1,497,313
https://en.wikipedia.org/wiki/Prolotherapy
Prolotherapy, also called proliferation therapy, is an injection-based treatment used in chronic musculoskeletal conditions. It has been characterised as an alternative medicine practice. Medical uses A 2015 review found no evidence that prolotherapy is safe or effective for Achilles tendinopathy, plantar fasciosis, and Osgood–Schlatter disease; the quality of the studies was also poor. Another 2015 review assigned a strength of recommendation of level A for Achilles tendinopathy and knee osteoarthritis, and level B for lateral epicondylosis, Osgood–Schlatter disease, and plantar fasciosis. Level A recommendations are based on consistent and good-quality patient-oriented evidence, while level B recommendations are based on inconsistent or limited-quality patient-oriented evidence. Low back pain A 2007 Cochrane review of prolotherapy in adults with chronic low-back pain found unclear evidence of effect. A 2009 review concluded the same for subacute low back pain. A 2015 review found consistent evidence that it does not help in low back pain. There was tentative evidence of benefit when used with other low back pain treatments. Evidence of benefit remains tentative (level B) for dextrose prolotherapy in low back or sacroiliac pain. Tendinitis A 2009 systematic review of its efficacy in the treatment of lateral epicondylitis concluded that these therapies may benefit people with lateral epicondylitis, but the evidence was limited. A 2010 review concluded that moderate evidence exists to support the use of prolotherapy injections in the management of pain in lateral epicondylitis, and that prolotherapy was no more effective than eccentric exercise in the treatment of Achilles tendinopathy. A 2016 review found a trend towards benefit for lateral epicondylitis. A 2017 review found tentative evidence in Achilles tendinopathy. In 2012, a systematic review studying various injection therapies found that prolotherapy and hyaluronic acid injection therapies were more effective than placebo when treating lateral epicondylitis. Of the studies evaluated, one of ten glucocorticoid trials, one of five trials for autologous blood injection or platelet-rich plasma, one trial of polidocanol, and one trial of prolotherapy met the criteria for low risk of bias. The authors noted that few of the reviewed trials met the criteria for low risk of bias. Knee osteoarthritis Tentative evidence of prolotherapy benefit was reported in a 2011 review. One 2017 review found evidence of benefit from low-quality studies. A 2017 review described the evidence as moderate for knee osteoarthritis. A 2016 review found benefit, but there was a moderate degree of variability between trials and risk of bias. In 2019, the American College of Rheumatology recommended against prolotherapy for knee osteoarthritis. Contraindications Contraindications for patients to receive prolotherapy injections may include: local abscess; bleeding disorders; patient on anticoagulant medication; known allergy to the prolotherapy agent; acute infections such as cellulitis; septic arthritis. Relative contraindications include: acute gouty arthritis; acute fracture. Side effects Patients receiving prolotherapy injections have reported generally mild side effects, including mild pain and irritation at the injection site (often within 72 hours of the injection), numbness at the injection site, or mild bleeding. Pain from prolotherapy injections is temporary and is often treated with acetaminophen or, in rare cases, opioid medications. 
NSAIDs are not usually recommended because they counteract the prolotherapy-induced inflammation, but they are occasionally used in patients with pain refractory to other methods of pain control. Theoretical adverse events of prolotherapy injection include lightheadedness, allergic reactions to the agent used, bruising, infection, or nerve damage. Allergic reactions to sodium morrhuate are rare. Rare cases of back pain, neck pain, spinal cord irritation, pneumothorax, and disc injury have been reported at a rate comparable to that of other spinal injection procedures. Technique Prolotherapy involves the injection of an irritant solution into a joint space, weakened ligament, or tendon insertion to relieve pain. Most commonly, hyperosmolar dextrose (a sugar) is the solution used; glycerine, lidocaine (a commonly used local anesthetic), phenol, and sodium morrhuate (a derivative of cod liver oil extract) are other commonly used agents. The injection is administered at joints, ligaments, or tendons where they connect to bone. Prolotherapy treatment sessions are generally given every two to six weeks for several months, in a series ranging from three to six or more treatments. Many patients receive treatment at less frequent intervals until treatments are rarely required, if at all. Terminology and mechanism The term originated with George S. Hackett, MD, in 1956 in a publication titled "The rehabilitation of an incompetent structure by the generation of new cellular tissue". He formed the term prolotherapy from "proli" (Latin), meaning offspring, and "proliferate", meaning to produce new cells in rapid succession. Although the erroneous term "sclerotherapy" was used by some in the past to describe this treatment, it is now clear that prolotherapy does not cause scarring. The mechanism of prolotherapy is not fully understood and is expected to involve a number of processes. Criticism Some major medical insurance policies view prolotherapy as an investigational or experimental therapy with an inconclusive evidence base; consequently, they currently do not provide coverage for prolotherapy procedures. Medicare reviewers in 1999 determined that practitioners had not provided "any scientific evidence on which to base a [different] coverage decision," and so retained Medicare's coverage policy of not covering prolotherapy injections for chronic low back pain, but expressed willingness to reconsider if presented with results of "further studies on the benefits of prolotherapy." History The concept of creating irritation or injury to stimulate healing has been recorded as early as Roman times, when hot needles were poked into the shoulders of injured gladiators. In 1840, French surgeon Alfred-Armand-Louis-Marie Velpeau published a paper detailing how he had injected an iodine solution into a hernia in order to create beneficial inflammation. American surgeon Joseph Pancoast later wrote that he had been performing this procedure (using either iodine or cantharides) since 1836. Another early American practitioner of this method was George Heaton. After World War I, sclerotherapy came to be a common treatment for malformations of blood vessels and the lymphatic system. This involved injecting a therapeutic liquid to shrink them. By the late 1920s, this method was used to treat hernias. By the late 1930s, it was also used to treat ligamentous laxity. In the 1950s, George S. 
Hackett, a general surgeon in the United States, began performing injections of irritant solutions in an effort to repair joints and hernias. In 1955, Gustav Anders Hemwall became acquainted with George Hackett at an American Medical Association meeting and started practicing the technique. Hackett coined the term "prolotherapy" for the practice, a very early appearance being in his 1956 book Ligament and Tendon Relaxation (Skeletal Disability) Treated by Prolotherapy (Fibro-Osseus Proliferation). References Musculoskeletal system Orthopedic surgical procedures Regenerative biomedicine Alternative medical treatments
Prolotherapy
[ "Biology" ]
1,629
[ "Organ systems", "Musculoskeletal system" ]
1,497,328
https://en.wikipedia.org/wiki/Lexical%20aspect
In linguistics, the lexical aspect or Aktionsart (, plural Aktionsarten ) of a verb is part of the way in which that verb is structured in relation to time. For example, the English verbs arrive and run differ in their lexical aspect since the former describes an event which has a natural endpoint while the latter does not. Lexical aspect differs from grammatical aspect in that it is an inherent semantic property of a predicate, while grammatical aspect is a syntactic or morphological property. Although lexical aspect need not be marked morphologically, it has downstream grammatical effects, for instance that arrive can be modified by "in an hour" while believe cannot. Theories of aspectual class Although all theories of lexical aspect recognize that verbs divide into different classes, the details of the classification differ. An early attempt by Vendler recognized four classes, a classification that has since been modified several times. Vendler's classification Zeno Vendler classified verbs into four categories according to whether they express "activity", "accomplishment", "achievement" or "state". Activities and accomplishments are distinguished from achievements and states in that the first two allow the use of continuous and progressive aspects. Activities and accomplishments are distinguished from each other by boundedness. Activities do not have a terminal point (a point before which the activity has taken place and after which it cannot continue: "John drew a circle"), but accomplishments have one. Of achievements and states, achievements are instantaneous, but states are durative. Achievements and accomplishments are distinguished from one another in that achievements take place immediately (such as in "recognise" or "find"), but accomplishments approach an endpoint incrementally (as in "paint a picture" or "build a house"). Comrie's classification In his discussion of lexical aspect, Bernard Comrie included the category semelfactive or punctual events such as "sneeze". His divisions of the categories were as follows: states, activities, and accomplishments are durative, but semelfactives and achievements are punctual. Of the durative verbs, states are unique as they involve no change, and activities are atelic (that is, have no "terminal point") whereas accomplishments are telic. Of the punctual verbs, semelfactives are atelic, and achievements are telic. The following table shows examples of lexical aspect in English that involve change (an example of a state is 'know'):
Class | Durative | Telic | Example
Activity | yes | no | "run"
Accomplishment | yes | yes | "paint a picture", "build a house"
Semelfactive | no | no | "sneeze"
Achievement | no | yes | "recognise", "find"
Moens and Steedman's classification Another classification is proposed by Moens and Steedman, based on the idea of the event nucleus. Syntactic analyses of event structure Aspectual classes can be analyzed as differing in their event structure, and this has led to the development of syntactic analyses of event structure, with each aspectual class treated as having a distinct syntactic structure. See also Predicate Syntax–semantics interface References Grammar Time in linguistics Syntax–semantics interface
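Comrie's two binary features, durative versus punctual and telic versus atelic, amount to a small feature matrix, and the sketch below encodes it as a lookup table. This is an illustration written for this article, not a tool from the linguistic literature; the example verbs are the ones used in the text above.

# Comrie's four non-stative aspectual classes as (durative, telic) pairs.
CLASSES = {
    (True, False): "activity",         # e.g. "run"
    (True, True): "accomplishment",    # e.g. "paint a picture"
    (False, False): "semelfactive",    # e.g. "sneeze"
    (False, True): "achievement",      # e.g. "find"
}

def classify(durative, telic):
    return CLASSES[(durative, telic)]

print(classify(durative=True, telic=True))  # accomplishment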
Lexical aspect
[ "Physics" ]
617
[ "Spacetime", "Time in linguistics", "Physical quantities", "Time" ]
1,497,356
https://en.wikipedia.org/wiki/MCF%20Employees%27%20Union
MCF Employees' Union is a trade union at the Mangalore Chemicals and Fertilisers company in Karnataka, India. MCFEU is affiliated to the Hind Mazdoor Sabha. Trade unions in India Hind Mazdoor Sabha-affiliated unions Trade unions at the Mangalore Chemicals and Fertilisers Chemical industry trade unions Organizations with year of establishment missing
MCF Employees' Union
[ "Chemistry" ]
73
[ "Chemical industry trade unions" ]
1,497,363
https://en.wikipedia.org/wiki/MCF%20Mazdoor%20Sangh
MCF Mazdoor Sangh is a trade union at the Mangalore Chemicals and Fertilisers company in Karnataka, India. MCFMS is affiliated to the Bharatiya Mazdoor Sangh. Trade unions in India Bharatiya Mazdoor Sangh-affiliated unions Trade unions at the Mangalore Chemicals and Fertilisers Chemical industry trade unions Organizations with year of establishment missing
MCF Mazdoor Sangh
[ "Chemistry" ]
77
[ "Chemical industry trade unions" ]
1,497,460
https://en.wikipedia.org/wiki/Windows%20Neptune
Neptune was the codename for a version of Microsoft Windows under development in 1999. Based on Windows 2000, it was originally to replace the Windows 9x series and was scheduled to be the first home consumer-oriented version of Windows built on Windows NT code. Internally, the project's name was capitalized as NepTune. History Neptune largely resembled Windows 2000, but some new features were introduced. Neptune included a logon screen similar to that later used in Windows XP. A firewall new to Neptune was later integrated into Windows XP as the Windows Firewall. Neptune also experimented with a new HTML and Win32-based user interface originally intended for Windows Me, called Activity Centers, for task-centered operations. Only one alpha build of Neptune, 5111, was released to testers under a non-disclosure agreement, and later made its way to various beta collectors' sites and virtual museums in 2000. Other builds of Neptune are known to exist due to information in beta builds of Windows Me and Windows XP. In November 2015, a build 5111.6 disk was shown in a Microsoft Channel 9 video; version 5111 was the last build of Neptune that was sent to external testers, as the .1 or .6 after the build number stood for the variant, not the compile. It is the only build of Neptune that made its way to the public. Build 5111 included Activity Centers, a new task-based user interface that featured individual "pages" focusing on daily tasks with facilities that include (but are not limited to) browsing the Internet, communication, document management and entertainment. User management was also improved in Neptune with the introduction of several new user types as well as a dedicated full-screen user interface. The new interfaces were primarily implemented using Internet Explorer's web technology, often using the then-new Mars framework. A key focus of the Neptune project was to experiment with user experiences that did not require manually saving previous work; some of this effort is visible in the only available build, which enables hibernation by default and requires the user to take extra steps to fully shut down the device. The Activity Centers could be installed by copying ACCORE.DLL from the installation disk to the hard drive and then running regsvr32 on ACCORE.DLL. The centers contained traces of Windows Me, then code-named Millennium, but were broken due to JavaScript errors and missing links and executables for the Game, Photo, and Music Centers. In response, some Windows enthusiasts have spent years restoring the Activity Centers in build 5111 to something close to what Microsoft intended. In early 2000, Microsoft merged the team working on Neptune with that developing Odyssey, the successor to Windows 2000 for business customers. The combined team worked on a new project codenamed Whistler, which was released at the end of 2001 as Windows XP. In the meantime, Microsoft released Windows Me in 2000 as their final 9x series installment. Early development builds of Whistler feature an improved version of the logon screen found in Neptune build 5111. Triton Neptune was intended to have a successor named Triton, which was to be a minor update with very few user interface changes; service packs were additionally planned for it. Triton was slated for a spring 2002 release (coinciding with Microsoft's final fiscal quarter of 2001). 
Triton was devised back in 1998 alongside Neptune; the only details of it within Microsoft's internal planning documentation that year relate to a deadline for added hardware support by December 2001. According to Paul Thurrott, the timeline of releases was Windows NT 5.0 (the codename for Windows 2000) for high-end workstations and Windows 98 for entry-level and mid-range PCs from 1998 to 1999; followed by Neptune in 2000 and 2001 for both workstations and consumer PCs; followed by Triton for the same target audience. However, according to Charlie Kindel, Triton was to be a version of Neptune centered on home server usage. The project's codename refers to Neptune's largest moon, Triton. Legacy The touch-oriented Metro design language introduced as part of Windows 8, released in 2012, shared a large number of common goals with the Neptune project, including the unimplemented Activity Centers' focus on typography as well as dedicated full-screen applications for common tasks. In addition, Windows 8 introduced hybrid boot, a feature that takes advantage of hibernation to capture the initial states of necessary system applications and boot drivers, largely similar in principle to the Boot Accelerator feature that would have been included as part of Neptune. See also List of Microsoft codenames Development of Windows XP References Neptune
Windows Neptune
[ "Technology" ]
948
[ "Computing platforms", "Microsoft Windows" ]
1,497,463
https://en.wikipedia.org/wiki/Conformable%20matrix
In mathematics, a matrix is conformable if its dimensions are suitable for defining some operation (e.g. addition, multiplication, etc.). Examples If two matrices have the same dimensions (number of rows and number of columns), they are conformable for addition. Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. That is, if A is an m × n matrix and B is a p × q matrix, then n needs to be equal to p for the matrix product AB to be defined. In this case, we say that A and B are conformable for multiplication (in that sequence). Since squaring a matrix involves multiplying it by itself (A^2 = AA), a matrix must be m × m (that is, it must be a square matrix) to be conformable for squaring. Thus for example only a square matrix can be idempotent. Only a square matrix is conformable for matrix inversion. However, the Moore–Penrose pseudoinverse and other generalized inverses do not have this requirement. Only a square matrix is conformable for matrix exponentiation. See also Linear algebra References Linear algebra Matrices
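A quick numerical illustration of the multiplication rule, using NumPy (the matrices here are arbitrary examples chosen for this article):

import numpy as np

A = np.ones((2, 3))  # a 2 x 3 matrix
B = np.ones((3, 4))  # a 3 x 4 matrix

# A and B are conformable for multiplication: A has 3 columns, B has 3 rows.
print((A @ B).shape)  # (2, 4)

# In the other order they are not conformable, and NumPy raises an error:
try:
    B @ A
except ValueError as e:
    print("not conformable:", e)

def conformable_for_product(X, Y):
    # True when the product X @ Y is defined for two-dimensional arrays.
    return X.shape[1] == Y.shape[0]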
Conformable matrix
[ "Mathematics" ]
239
[ "Mathematical objects", "Matrices (mathematics)", "Matrix stubs", "Linear algebra", "Algebra" ]
1,497,491
https://en.wikipedia.org/wiki/Pedestal
A pedestal or plinth is a support at the bottom of a statue, vase, column, or certain altars. Smaller pedestals, especially if round in shape, may be called socles. In civil engineering, it is also called a basement. The minimum height of the plinth is usually kept at 45 cm (for buildings). It transmits loads from the superstructure to the substructure and acts as the retaining wall for the filling inside the plinth or raised floor. In sculpting, the terms base, plinth, and pedestal are defined according to their subtle differences. A base is defined as a large mass that supports the sculpture from below. A plinth is defined as a flat and planar support which separates the sculpture from the environment. A pedestal, on the other hand, is defined as a shaft-like form that raises the sculpture and separates it from the base. An elevated pedestal or plinth that bears a statue, and which is raised from the substructure supporting it (typically roofs or corniches), is sometimes called an acropodium. The term is from Greek ἄκρος ákros 'topmost' and πούς poús (root ποδ- pod-) 'foot'. Architecture Although in Syria, Asia Minor and Tunisia the Romans occasionally raised the columns of their temples or propylaea on square pedestals, in Rome itself they were employed only to give greater importance to isolated columns, such as those of Trajan and Antoninus, or as a podium to the columns employed decoratively in the Roman triumphal arches. The architects of the Italian Renaissance, however, conceived the idea that no order was complete without a pedestal, and as the orders were by them employed to divide up and decorate a building in several stories, the cornice of the pedestal was carried through and formed the sills of their windows, or, in open arcades, round a court, the balustrade of the arcade. They also would seem to have considered that the height of the pedestal should correspond in its proportion with that of the column or pilaster it supported; thus in the church of Saint John Lateran, where the applied order is of considerable dimensions, the pedestal is 13 feet high instead of the ordinary height of 3 to 5 feet. Asia In Asian art a lotus throne is a stylized lotus flower used as the seat or base for a figure. It is the normal pedestal for divine figures in Buddhist art and Hindu art, and often seen in Jain art. Originating in Indian art, it followed Indian religions to East Asia in particular. In imperial China, a stone tortoise called bixi was traditionally used as the pedestal for important steles, especially those associated with emperors. According to the 1396 version of the regulations issued by the Ming Dynasty founder, the Hongwu Emperor, the highest nobility (those of the gong and hou ranks) and the officials of the top 3 ranks were eligible for bixi-based funerary tablets, while lower-level mandarins' steles were to stand on simple rectangular pedestals. See also Pedestal desk Pedestal table, a table with a single central leg Tray Ozen An (Shinto) Notes References Architectural elements Sculpture terms
Pedestal
[ "Technology", "Engineering" ]
663
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
1,497,553
https://en.wikipedia.org/wiki/Reverberatory%20furnace
A reverberatory furnace is a metallurgical or process furnace that isolates the material being processed from contact with the fuel, but not from contact with combustion gases. The term reverberation is used here in a generic sense of rebounding or reflecting, not in the acoustic sense of echoing. Operation Chemistry determines the optimum relationship between the fuel and the material, among other variables. The reverberatory furnace can be contrasted on the one hand with the blast furnace, in which fuel and material are mixed in a single chamber, and, on the other hand, with crucible, muffling, or retort furnaces, in which the subject material is isolated from the fuel and all of the products of combustion, including gases and flying ash. There are, however, a great many furnace designs, and the terminology of metallurgy has not been very consistently defined, so it is difficult to categorically contradict other views. The applications of these devices fall into two general categories: metallurgical melting furnaces, and lower temperature processing furnaces typically used for metallic ores and other minerals. A reverberatory furnace is at a disadvantage from the standpoint of efficiency compared to a blast furnace due to the separation of the burning fuel and the subject material, and it is necessary to effectively utilize both reflected radiant heat and direct contact with the exhaust gases (convection) to maximize heat transfer. Historically these furnaces have used solid fuel, and bituminous coal has proven to be the best choice. The brightly visible flames, due to the substantial volatile component, give more radiant heat transfer than anthracite coal or charcoal. Contact with the products of combustion, which may add undesirable elements to the subject material, is used to advantage in some processes. Control of the fuel/air balance can alter the exhaust gas chemistry toward either an oxidizing or a reducing mixture, and thus alter the chemistry of the material being processed. For example, cast iron can be puddled in an oxidizing atmosphere to convert it to the lower-carbon mild steel or bar iron. The Siemens-Martin oven in open hearth steelmaking is also a reverberatory furnace. Reverberatory furnaces (in this context, usually called air furnaces) were formerly also used for melting brass, bronze, and pig iron for foundry work. They were also, for the first 75 years of the 20th century, the dominant smelting furnace used in copper production, treating either roasted calcine or raw copper sulfide concentrate. While they have been supplanted in this role, first by flash furnaces and more recently also by the Ausmelt and ISASMELT furnaces, they are very effective at producing slags with low copper losses. History The first reverberatory furnaces were perhaps those of the medieval period, used for melting bronze for casting bells. The earliest known detailed description was provided by Biringuccio. Reverberatory furnaces were first applied to smelting metals in the late 17th century. Sir Clement Clerke and his son Talbot built cupolas or reverberatory furnaces in the Avon Gorge below Bristol in about 1678. In 1687, while obstructed from smelting lead (by litigation), they moved on to copper. In the following decades, reverberatory furnaces were widely adopted for smelting these metals and also tin. They had the advantage over older methods that the fuel was mineral coal, not charcoal or 'white coal' (chopped dried wood). 
In the 1690s, they (or associates) applied the reverberatory furnace (in this case known as an air furnace) to melting pig iron for foundry purposes. This was used at Coalbrookdale and various other places, but became obsolete at the end of the 18th century with the introduction of the foundry cupola furnace, which was a kind of small blast furnace, and a quite different species from the reverberatory furnace. The puddling furnace, introduced by Henry Cort in the 1780s to replace the older finery process, was also a variety of reverberatory furnace. Aluminium melting Reverberatory furnaces are widely used to melt secondary aluminium scrap for eventual use by die-casting industries. The simplest reverberatory furnace is nothing more than a steel box lined with alumina refractory brick, with a flue at one end and a vertically lifting door at the other. Conventional oil or gas burners are usually placed on either side of the furnace to heat the brick, and the eventual bath of molten metal is then poured into a casting machine to produce ingots. See also Open-hearth furnace Puddling furnace References Bibliography Encyclopædia Britannica, 14th ed. J. Day & R. F. Tylecote (eds.), The Industrial Revolution in Metals (1991) P. W. King, "The Cupola at Bristol", Somerset Archaeology and Natural History 140 (for 1997), 37–52 P. W. King, "Sir Clement Clerke and the Adoption of Coal in Metallurgy", Transactions of the Newcomen Society 73(1) (2001–2), 33–53 External links Metallurgical furnaces
Reverberatory furnace
[ "Chemistry", "Materials_science" ]
1,077
[ "Metallurgy", "Metallurgical furnaces" ]
1,497,558
https://en.wikipedia.org/wiki/Tambour
In classical architecture, a tambour (Fr.: "drum") is the inverted bell of the Corinthian capital, around which acanthus leaves are carved for decoration. The term also applies to the wall of a circular structure, whether on the ground or raised aloft on pendentives and carrying a dome (also known as a tholobate), and to the drum-shaped segments of a column, which is built up in several courses. A cover made of strips of wood connected with fabric, such as that of a roll-top desk, is also called a tambour. This usage has been extended to describe an office cupboard designed with doors that retract into the cabinet when opened, also known as roller-shutters. See also Tholobate Notes References Columns and entablature Furniture
Tambour
[ "Technology" ]
167
[ "Structural system", "Columns and entablature" ]
1,497,620
https://en.wikipedia.org/wiki/Chinese%20Character%20Code%20for%20Information%20Interchange
The Chinese Character Code for Information Interchange () or CCCII is a character set developed by the Chinese Character Analysis Group in Taiwan. It was first published in 1980, and significantly expanded in 1982 and 1987. It is used mostly by library systems. It is one of the earliest established and most sophisticated encodings for traditional Chinese (predating the establishment of Big5 in 1984 and CNS 11643 in 1986). It is distinguished by its unique system for encoding simplified versions and other variants of its main set of hanzi characters. A variant of an earlier version of CCCII is used by the Library of Congress as part of MARC-8, under the name East Asian Character Code (EACC, ANSI/NISO Z39.64), where it comprises part of MARC 21's JACKPHY support. However, EACC contains fewer characters than the most recent versions of CCCII. Work at Apple based on Research Libraries Group's CJK Thesaurus, which was used to maintain EACC, was one of the direct predecessors of Unicode's Unihan set. Design Byte ranges CCCII is designed as a 94^n set, as defined by ISO/IEC 2022. Each Chinese character is represented by a 3-byte code in which each byte is 7-bit, between 0x21 and 0x7E inclusive. Thus, the maximum number of Chinese characters representable in CCCII is 94×94×94 = 830584. In practice the number of characters encodable by CCCII would be less than this number, because variant characters are encoded in related ISO 2022 planes under CCCII, so most of the code points would have to be reserved for variants. In practice, however, bytes outside of these ranges are sometimes used. The code 0x212320 is used by some implementations as an ideographic space. A CCCII specification used by libraries in Hong Kong uses codes starting with 0x2120 for punctuation and symbols. The first byte 0x7F is used by some variants to encode codes for some otherwise unavailable Unified Repertoire and Ordering or CJK Unified Ideographs Extension A hanzi (e.g. 0x7F3449 for U+3449 or 0x7F796E for U+796E; notice how the continuation bytes match the UCS-2BE code), and this may include bytes outside of the 0x21–0x7E or even 0x20–0x7F range, e.g. 0x7F551C for U+551C, 0x7F5AA4 for U+5AA4 or 0x7F8EDA for U+8EDA. Interaction with ISO 2022 CCCII/EACC is not registered in the International Registry of Coded Character Sets to be Used with Escape Sequences, and as such, does not have a standard designation escape for use with ISO 2022. MARC-8 assigns EACC the private-use final byte 0x31 in its implementation of ANSI X3.41 (ISO 2022). Layers and variant characters The 94 ISO 2022 planes are grouped into 16 layers of 6 planes each (except for layer 16, which contains the four planes 91–94). Layer 1 contains both non-hanzi and hanzi characters, with the non-hanzi and most frequently used hanzi being placed in plane 1, and with the remaining five planes consisting of less common hanzi. Layer 2 contains simplified Chinese characters, with their row and cell numbers being the same as their traditional Chinese equivalents in layer 1. Layers 3 through 12 contain further variant forms, at row and cell numbers homologous to the first two layers. The last four layers are used for other purposes. Specifically, layer 13 contains additional characters for Japanese language support (kana and Japanese kokuji), and layer 14 contains additional characters for Korean language support (hangul). Layer 15 is unused (reserved), while layer 16 is used for other characters.
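The plane/row/cell arithmetic and layer grouping described above are mechanical enough to sketch in code. The following Python sketch is not part of the standard: it assumes the grouping just described (planes 1–6 form layer 1, planes 7–12 form layer 2, and so on), and it assumes a variant keeps its plane position within its layer as well as its row and cell, which is how the homologous layout for layers 1–12 is described.

```python
# Hypothetical sketch of CCCII code-point arithmetic -- not from the standard.

def decode(code: int):
    """Split a 3-byte CCCII code (e.g. 0x212F21) into plane, row and cell."""
    b1, b2, b3 = (code >> 16) & 0xFF, (code >> 8) & 0xFF, code & 0xFF
    for b in (b1, b2, b3):
        if not 0x21 <= b <= 0x7E:
            raise ValueError(f"byte 0x{b:02X} outside the 94-value range")
    return b1 - 0x20, b2 - 0x20, b3 - 0x20  # each in 1..94

def layer_of(plane: int) -> int:
    """Planes 1-90 form layers 1-15 of 6 planes each; planes 91-94 are layer 16."""
    return 16 if plane > 90 else (plane - 1) // 6 + 1

def homologous(code: int, layer: int) -> int:
    """Map a code to the same plane-within-layer, row and cell in another layer."""
    plane, row, cell = decode(code)
    offset = (plane - 1) % 6  # position of the plane inside its layer
    new_plane = (layer - 1) * 6 + offset + 1
    return ((new_plane + 0x20) << 16) | ((row + 0x20) << 8) | (cell + 0x20)

assert decode(0x212F21) == (1, 15, 1)  # plane 1, row 15: numerals and bopomofo
assert layer_of(1) == 1 and layer_of(7) == 2 and layer_of(91) == 16
print(hex(homologous(0x212F21, 2)))  # 0x272f21, the layer-2 (simplified) slot
```

A real implementation would also need the exceptional cases noted in this article, such as the Hong Kong codes starting with 0x2120 and the 0x7F-prefixed extensions, which fall outside this regular scheme.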
This distinctive design has been criticized by Christian Wittern of the International Research Institute for Zen Buddhism at Hanazono University, who asserts that the relationship of character variants "is very complex and can not be expressed in a fixed, one-dimensional, hard-wired codetable". Ken Lunde describes it as "one of the most well thought-out character set standards from Taiwan", describing its structure as "to be truly admired", but concluding that OpenType variant form substitution can provide the same level of functionality. CCCII defines roughly 53940 code points as of its 1987 edition, although a more recent draft from 1989 extends this to 75684 code points (comprising 44167 unique characters and 31517 variants). EACC, the variant used by the Library of Congress, includes only a smaller set of 15686 characters. Adoption As of 1995, CCCII or EACC was used mostly in libraries in the United States, Hong Kong and Taiwan. Although CCCII promised pan-CJK coverage, its support was limited to specialized hardware; difficulty ascertaining when the root versus variant character should be used, exacerbated by a lack of firmly established reference glyphs, further limited its adoption, resulting in Big5 being more commonly used for Chinese in those territories outside of library use (since Unicode had yet to become widely adopted at the time). EACC is still in extensive use for specialized bibliographic purposes. It was also an important precursor to Unicode: work at Apple on a CJK character cross-reference database based on Research Libraries Group's CJK Thesaurus, used to maintain EACC, was directly incorporated into the development of Unicode's Unihan set. Unicode hanzi characters are referenced to their corresponding CCCII and EACC codes in the Unihan database, in the keys kCCCII and kEACC; however, since Unicode's character unification criteria (based on those used by the Japanese JIS X 0208 and on those developed by the Association for a Common Chinese Code in China) differ from those used by CCCII, not all variant characters are individually mapped. Mapping tables for hanzi, hangul, kana and punctuation between EACC and Unicode are available from the Library of Congress. Punctuation, symbol, kana and jamo charts Following are charts for punctuation, symbols, kana and Hangul jamo, showing the characters and giving possible Unicode mappings. Where possible, these are referenced against published mapping data. Unicode mappings for Hangul syllables are omitted below for brevity, but are documented by the Library of Congress. CCCII hanzi number in the tens of thousands and are not shown below (except where they are also included in the non-hanzi range, as radicals or numerals), but mappings to Unicode are available from the Unihan database and from elsewhere. Character set 0x2120 (plane 1, row 0: Hong Kong punctuation) Although CCCII is usually a 94^n set, and therefore does not usually use codes starting with 0x2120, the following layout is used by a variant used by libraries in Hong Kong: Character set 0x2121 (plane 1, row 1: reserved for controls) No characters are assigned in plane 1 row 1, which is reserved for control codes. Character set 0x2122 (plane 1, row 2: mathematical operators) This row contains mathematical operators. EACC leaves this row empty. The following table is referenced against sources from Taiwan.
The following table is referenced against CCCII data provided by the Hong Kong Innovative Users Group, a group of libraries in Hong Kong, and hosted by the University of Hong Kong. It uses an entirely different layout in this row: Character set 0x2123 (plane 1, row 3: Roman and punctuation) This row includes punctuation, western Arabic numerals and Roman letters. Compare row 3 of Wansung code and row 3 of GB 2312. Different variants encode the ideographic space (U+3000) at 0x212320 (which the MARC specification acknowledges), 0x212321 (which is listed in the ANSI standard, and is also acknowledged by MARC), or 0x21635F. EACC includes only the hyphen-minus, parentheses and ideographic space in this set. Character set 0x212A (plane 1, row 10: internal IME characters and geta mark) In EACC, this row includes several Private Use Area mapped characters used internally to represent character components by the RLIN input method, which is used by the Library of Congress for non-Roman cataloging. These component characters should only be used internally by an IME and, if encountered elsewhere, may be replaced with the geta mark (U+3013), which this row also includes at 0x212A46. This row is unassigned in CCCII, but the geta mark is also listed at that location in some mappings for CCCII. Character set 0x212B (plane 1, row 11: punctuation) This row contains various punctuation marks used in Chinese, in addition to other symbols. CCCII includes a set of 35 punctuation marks in this row. EACC includes only 13 characters in this row (shown boxed below). Character sets 0x212C–0x212E (plane 1, rows 12–14: radicals and ordinals) These rows contain Chinese radicals, Roman numerals, celestial stems and terrestrial branches. Character set 0x212F (plane 1, row 15: Chinese numerals and bopomofo) This row includes Chinese numerals and bopomofo characters. EACC includes only the ideographic zero (〇). Character set 0x272B (plane 7, row 11: reference mark) This row contains the reference mark (kome jirushi). Character set 0x272E–0x272F (plane 7, rows 14–15: alternative bopomofo) A variant used by libraries in Hong Kong does not include bopomofo characters in plane 1 row 15, but includes them in a different layout in plane 7. Character set 0x6921 (plane 73, row 1: Japanese punctuation) This row is in plane 73, the first plane of layer 13, which contains characters included for Japanese language support. It contains punctuation. Compare row 1 of JIS X 0208, whose layout this row tends to follow for the characters it includes. Character set 0x6924 (plane 73, row 4: hiragana) This row contains hiragana. Compare row 4 of JIS X 0208. Character set 0x6925 (plane 73, row 5: katakana) This row contains katakana. Compare row 5 of JIS X 0208, to which this row corresponds apart from the addition of the separate dakuten and handakuten. Character set 0x6F24–0x6F25 (plane 79, rows 4–5: jamo) These rows contain Korean jamo. Character set 0x6F76 (plane 79, row 86: archaic Hangul) This row contains several historic Hangul characters no longer in regular use. Several of these are mapped to the Private Use Area. Character set 0x7B25 (plane 91, row 5: supplementary Katakana) This row contains additional katakana used to write foreign phonemes. See also Chinese character IT Chinese characters Footnotes References Some information on this page is based on the information on the CNS official website.
External links CNS 11643 official web site (English version of pages available) has information about the CCCII character set in the "Chinese Information Code" section Full mapping of EACC to Unicode, from Library of Congress Computer-related introductions in 1980 1980 establishments in Taiwan Taiwanese inventions Character encoding Encodings of Asian languages Chinese-language computing
Chinese Character Code for Information Interchange
[ "Technology" ]
2,501
[ "Natural language and computing", "Character encoding" ]
1,497,703
https://en.wikipedia.org/wiki/Sylvania%20Electric%20Products
Sylvania Electric Products Inc. was an American manufacturer of diverse electrical equipment, including at various times radio transceivers, vacuum tubes, semiconductors, and mainframe computers such as MOBIDIC. They were one of the companies involved in the development of the COBOL programming language. History The Hygrade Sylvania Corporation was formed when NILCO, Sylvania and Hygrade Lamp Company merged into one company in 1931. In 1939, Hygrade Sylvania started preliminary research on fluorescent technology, and later that year, demonstrated the first linear, or tubular, fluorescent lamp. It was featured at the 1939 New York World's Fair. Sylvania was also a manufacturer of both vacuum tubes and transistors. In 1942, the company changed its name to Sylvania Electric Products Inc. During World War II, Sylvania was chosen from among several competing companies to manufacture the miniature vacuum tubes used in proximity fuze shells due to its quality standards and mass production capabilities. In 1959, Sylvania Electric Products merged with General Telephone to form General Telephone and Electronics (GTE). Sylvania developed the earliest flash cubes for still cameras, later selling the technology to Eastman Kodak Company, and later a 10-flash unit called FlipFlash, as well as a line of household electric light bulbs, which continued during GTE's ownership, later sold off to the German manufacturer Osram, and is today marketed as Osram Sylvania. In June 1964, Sylvania unveiled a color TV picture tube in which europium-bearing phosphor was used for a much brighter, truer red than was possible before. Through mergers and acquisitions, the company became a significant, but never dominating supplier of electrical distribution equipment, including transformers and switchgear, residential and commercial load centers and breakers, pushbuttons, indicator lights, and other hard-wired devices. All were manufactured and distributed under the brand name GTE Sylvania, with the name Challenger used for its light commercial and residential product lines. GTE Sylvania contributed to the technological advancement of electrical distribution products in the late 1970s with several interesting product features. At the time, they were the leading supplier of vacuum cast coil transformers, manufactured in their Hampton, Virginia plant. Their transformers featured aluminum primary windings and were cast using relatively inexpensive molds, allowing them to produce cast coil transformers in a variety of KVA capacities, primary and secondary voltages and physical coil sizes, including low profile coils for mining and other specialty applications. They also developed the first medium voltage 3 phase panel that could survive a dead short across two phases. Their patented design used bus bar encapsulated in a thin coating of epoxy and then bolted together across all three phases, using special non-conductive fittings. By 1981 GTE had made the decision to exit the electrical distribution equipment market and began selling off its product lines and manufacturing facilities. The Challenger line, mostly manufactured at the time in Jackson, Mississippi, was sold to a former officer of GTE, who used the Challenger name as the name of his new company. Challenger flourished, and was eventually sold to Westinghouse, and later Eaton Corporation. By the mid-1980s, the GTE Sylvania electrical equipment product line and name was no more. In 1993 GTE exited the lighting business to concentrate on its core telecoms operations.
The European, Asian and Latin American operations are now under the ownership of Havells Sylvania. With the acquisition of the North American division by Osram GmbH in January 1993, Osram Sylvania Inc. was established. Brand name In 1981, GTE Sylvania sold the rights to the names Sylvania and Philco, for use on consumer electronics equipment only, to the Netherlands' NV Philips. Philips wanted the Philco name because the similar Philco trademark had precluded it from selling products under its own name in the United States. This marked the end of Sylvania's TV production in Batavia, New York, USA, and Smithfield, North Carolina, USA. The Sylvania Smithfield plant later became Channel Master. The rights to the Sylvania name in many countries are held by the U.S. subsidiary of the German company Osram. The Sylvania brand name is owned worldwide, apart from Australia, Canada, Mexico, Thailand, New Zealand, Puerto Rico and the USA, by Havells Sylvania, headquartered in London. Osram Sylvania Osram Sylvania manufactures and markets a wide range of lighting products for homes, business, and vehicles and holds a leading share of the North American lighting market. In fiscal year 2008, the company achieved sales of about 1.75 billion euros, which comprised about 38% of Osram's total sales at the time. Osram's worldwide lighting businesses employed about 9,000 people at the time. In 2016, Osram spun off the general lighting business, which included the North American Osram Sylvania unit, into an independent company called LEDVANCE headquartered in Garching, Germany. In 2017, LEDVANCE was sold to a consortium of Chinese investment companies and the Chinese lighting manufacturer MLS, continuing under the LEDVANCE name. The North American headquarters of LEDVANCE, previously referred to as Osram Sylvania and located in Danvers, Massachusetts, was relocated in 2015 to Wilmington, Massachusetts, a town north of Boston. LEDVANCE continues to use the well known Osram and Sylvania brand names in their corresponding and representative markets throughout the world. Advertising From 1951 until 1956, Sylvania sponsored the game show Beat the Clock. The grand prizes on the show would be Sylvania television sets, and some consolation prizes would be Sylvania radios. Sylvania "Blue Dot (tm) for sure shot" flashbulbs would be used to take a photograph of the contestants in awkward outfits or messy stunts. One of Sylvania's heavily advertised TV features was a lighted perimeter mask of adjustable brightness called "HALOLIGHT", which was purported to ease the optical transition if a viewer glanced from a dark background to the bright TV screen. Today Philips markets an Ambilight feature, lighting the wall behind a flat display to soften the viewing experience. HALOLIGHT could not be adapted for color TV, because color TV white balance (aka tracking from low to high brightness) was unpredictable. Since the white color temperature of the HALOLIGHT and the illuminated color screen could not be made equivalent, HALOLIGHT was withdrawn. Osram Sylvania sponsored the It's a Small World ride at Disneyland in California with a twelve-year agreement starting in 2009. In 2014, the sponsor logo at the attraction's entrance changed to that of Siemens, the parent company of Sylvania. Accidents The Sylvania Electric Products explosion, which involved scrap thorium, occurred on July 2, 1956, at their facility in Bayside, Queens, New York City. The incident injured nine people; one employee subsequently died of his injuries.
References External links LEDVANCE Sylvania General Lighting site Feilo Sylvania Group web site Defunct manufacturing companies of the United States Defunct semiconductor companies of the United States Guitar amplification tubes Siemens Vacuum tubes Academy Award for Technical Achievement winners American companies established in 1931 American companies disestablished in 1959
Sylvania Electric Products
[ "Physics" ]
1,496
[ "Vacuum tubes", "Vacuum", "Matter" ]
1,497,720
https://en.wikipedia.org/wiki/Artur%20Ekert
Artur Konrad Ekert (born 19 September 1961) is a Polish professor of quantum physics at the Mathematical Institute, University of Oxford, professorial fellow in quantum physics and cryptography at Merton College, Oxford, Lee Kong Chian Centennial Professor at the National University of Singapore and the founding director of the Centre for Quantum Technologies (CQT). His research interests extend over most aspects of information processing in quantum-mechanical systems, with a focus on quantum communication and quantum computation. He is best known as one of the pioneers of quantum cryptography. Early life Ekert was born in Wrocław, and studied physics at the Jagiellonian University in Kraków and at the University of Oxford. Between 1987 and 1991 he was a graduate student at Wolfson College, Oxford. In his doctoral thesis he showed how quantum entanglement and non-locality can be used to distribute cryptographic keys with perfect security. Career In 1991 he was elected a junior research fellow and subsequently (1994) a research fellow at Merton College, Oxford. At the time he established the first research group in quantum cryptography and computation, based in the Clarendon Laboratory, Oxford. Subsequently, it evolved into the Centre for Quantum Computation, now based at DAMTP in Cambridge. Between 1993 and 2000 he held a Royal Society Howe Fellowship. In 1998 he was appointed a professor of physics at the University of Oxford and a fellow and tutor in physics at Keble College, Oxford. From 2002 until 2006 he was the Leigh-Trapnell Professor of Quantum Physics at the Department of Applied Mathematics and Theoretical Physics, Cambridge University and a professorial fellow of King's College, Cambridge. Since 2006 he has been professor of quantum physics at the Mathematical Institute, University of Oxford. Also in 2006 he was appointed a Lee Kong Chian Centennial Professor at the National University of Singapore and became the founding director of the Centre for Quantum Technologies (CQT). After stepping down as director in 2020 he remains a Distinguished Fellow at CQT. In 2020 he joined the Okinawa Institute of Science and Technology as adjunct professor. He has worked with and advised several companies and government agencies, served on various professional advisory boards, and is the Vice Chairman of The Noel Croucher Foundation. Research Ekert's research extends over most aspects of information processing in quantum-mechanical systems, with a focus on quantum cryptography and quantum computation. Building on the idea of quantum non-locality and Bell's inequalities he introduced entanglement-based quantum key distribution. His 1991 paper generated a spate of new research that established a vigorously active new area of physics and cryptography. It is one of the most cited papers in the field and was chosen by the editors of the Physical Review Letters as one of their "milestone letters", i.e. papers that made important contributions to physics, announced significant discoveries, or started new areas of research. His subsequent work with John Rarity and Paul Tapster, from the Defence Research Agency (DRA) in Malvern, resulted in a proof-of-principle experimental demonstration of quantum key distribution, introducing parametric down-conversion, phase encoding and quantum interferometry into the repertoire of cryptography. He and collaborators were the first to develop the concept of a security proof based on entanglement purification.
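The security test at the heart of entanglement-based key distribution is a Bell–CHSH measurement: the legitimate users check that their correlations violate the classical bound of 2, since eavesdropping disturbs the entanglement and pulls the statistic back toward the classical range. The following NumPy sketch is purely illustrative and is not taken from Ekert's paper; the measurement angles are the standard textbook choice that maximises the violation for a singlet pair.

```python
import numpy as np

# Pauli operators and the two-qubit singlet state |psi-> = (|01> - |10>)/sqrt(2)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta: float) -> np.ndarray:
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def correlation(theta_a: float, theta_b: float) -> float:
    """E(a, b) = <psi-| A(a) (x) B(b) |psi->; analytically equal to -cos(a - b)."""
    op = np.kron(spin(theta_a), spin(theta_b))
    return float(np.real(singlet.conj() @ op @ singlet))

# Measurement settings that maximise the quantum violation
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = correlation(a1, b1) - correlation(a1, b2) + correlation(a2, b1) + correlation(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2), above the local-realistic bound of 2
```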
Ekert and colleagues have made a number of contributions to both theoretical aspects of quantum computation and proposals for its experimental realisations. These include proving that almost any quantum logic gate operating on two quantum bits is universal, proposing one of the first realistic implementations of quantum computation, e.g. using the induced dipole-dipole coupling in an optically driven array of quantum dots, introducing more stable geometric quantum logic gates, and proposing "noiseless encoding", which later became known as decoherence-free subspaces. His other notable contributions include work on quantum state swapping, optimal quantum state estimation and quantum state transfer. With some of the same collaborators, he has written on connections between the notion of mathematical proofs and the laws of physics. He has also contributed semi-popular writing on the history of science. Honours and awards For his discovery of quantum cryptography he was awarded the 1995 Maxwell Medal and Prize by the Institute of Physics, the 2007 Hughes Medal by the Royal Society, the 2019 Micius Quantum Prize and the 2024 Royal Society Milner Award. He is also a co-recipient of the 2004 European Union Descartes Prize. In 2016 he was elected a Fellow of the Royal Society. He is a fellow of the Singapore National Academy of Science and a recipient of the 2017 Singapore Public Administration Medal (Silver) Pingat Pentadbiran Awam. He is a foreign member of the Polish Academy of Arts and Sciences. See also List of Polish physicists References External links "Cryptoreality (Part I): From Ancient Ciphers to Quantum Computers – A Conversation with Artur Ekert" , Ideas Roadshow, 2017 "Cryptoreality (Part II): Applied Foundational Physics – A Conversation with Artur Ekert" , Ideas Roadshow, 2017 1961 births Living people Scientists from Wrocław Jagiellonian University alumni Alumni of Wolfson College, Oxford Quantum physicists 21st-century Polish physicists 21st-century British physicists Academic staff of the National University of Singapore Fellows of Merton College, Oxford Maxwell Medal and Prize recipients Fellows of the Royal Society Recipients of the Pingat Pentadbiran Awam Quantum information scientists
Artur Ekert
[ "Physics" ]
1,099
[ "Quantum physicists", "Quantum mechanics" ]
1,497,743
https://en.wikipedia.org/wiki/Atrato%20River
The Atrato River () is a river of northwestern Colombia. It rises in the slopes of the Western Cordillera and flows almost due north to the Gulf of Urabá (or Gulf of Darién), where it forms a large, swampy delta. Its course crosses the Chocó Department, forming that department's border with neighboring Antioquia in two places. Its total length is about , and it is navigable as far as Quibdó (400 km / 250 mi), the capital of the department. In 2016, the Constitutional Court of Colombia granted the river legal rights of personhood after years of degradation of the river basin from large-scale mining and illegal logging practices, which severely impacted the traditional ways of life for Afro-Colombians and Indigenous people. Drainage area The river's total length is about , and it is navigable as far as Quibdó (400 km / 250 mi), the capital of the department. The basin occupies an area of and has an average annual precipitation of >5,000 mm/year that reaches up to 12,000 mm/year in the upper basin. Flowing through a narrow valley between the Cordillera and coastal range, it has only short tributaries, the principal ones being the Truandó, the Sucio, and the Murrí rivers. The gold and platinum mines of Chocó line some of its tributaries, and the river sands are auriferous. Mining and its toxic leavings have adversely affected river and environmental quality, damaging habitat for many species and affecting the ethnic groups, the predominantly Afro-Colombian and Native American indigenous peoples who live along the river. The river is one of the few ways to move around in the Chocó region. Wildlife Northwestern Colombia encompasses an area of great diversity in wildlife. During the Pleistocene era, at the height of the Atrato River where it intersected the Cauca-Magdalena, the area was covered by a sea. It is proposed that this created a geographic barrier that may have caused many species to diverge through the process of allopatric speciation. For example, Philip Hershkovitz proposed that the cotton-top tamarin (Saguinus oedipus) and the white-footed tamarin (Saguinus leucopus) diverged because of the rise of the Atrato, and today they are principally separated by the river. Fish Andinoacara biseriatus – a cichlid. History In the 19th and early 20th centuries, the San Juan and the Atrato rivers attracted considerable attention as part of a feasible route for a trans-isthmian canal in Colombia. William Kennish, an engineer and inventor from the Isle of Man and a Royal Navy veteran, proposed an aqueduct making use of the Atrato River and its tributary, the Truando River, to cross the Colombian isthmus. After publishing a report in 1855 on this proposal for a New York firm, he was chosen to guide a US military expedition to explore and survey the proposed project in Colombia. In 1901, the United States government's Isthmian Canal Commission determined that the Atrato River was not suitable for a canal, due to the length of the route (over 100 miles) and the large amount of silt carried by the river, and recommended Nicaragua and Panama as preferable sites. In November 2016, the Constitutional Court of Colombia declared the Atrato River a legal person possessing the rights to "protection, conservation, maintenance, and restoration". While the Colombian Constitution does not explicitly recognize Rights of Nature [RoN], the Court ruled that there is a set of "biocultural rights" that can be inferred from guarantees in the constitution for biodiversity, cultural, and humanitarian protections.
The "biocultural rights" claim emphasized that the cultural rights of Colombian Indigenous and Afro-Colombian citizens, and the biological rights of the Atrato River, are inextricably linked. As a result, Judge Palacio ruled that the biocultural rights should support the conservation, restoration, and sustainable development of the Atrato River. The ruling arose from the degradation of the river basin from large-scale mining and illegal logging practices, which severely impacted the traditional ways of life for Afro-Colombians and Indigenous people. Illegal logging changed the flow of the river, and illicit mining increased the level of toxic chemicals [i.e., mercury and cyanide] entering the river system, causing a threat to the biodiversity of the area, and adversely impacting the health of the vulnerable people of these societies, including children. The court referred to New Zealand's Te Awa Tupua Act (Whanganui River Claims Settlement) and cited New Zealand's recognition of the Whanganui River's legal personhood as precedent. Following that example, the court ordered the creation of a guardian body – the Commission of the Guardians of the Atrato River – to represent the interests of the river and manage the river's resources in a sustainable way that is consistent with the river's legal personhood status. Initially, the commission would include government representatives and one community representative. However, civil society rejected the idea of just one community representative and instead requested fourteen council members to serve on the council. The request was approved, and the council was formed in May 2018. References Environmental personhood Protected areas of Colombia Rivers of Colombia
Atrato River
[ "Environmental_science" ]
1,106
[ "Environmental personhood", "Environmental ethics" ]
1,497,849
https://en.wikipedia.org/wiki/List%20of%20Google%20products
The following is a list of products, services, and apps provided by Google. Active, soon-to-be discontinued, and discontinued products, services, tools, hardware, and other applications are broken out into designated sections. Web-based products Search tools Google Search – a web search engine and Google's core product. Google Alerts – an email notification service that sends alerts based on chosen search terms whenever it finds new results. Alerts include web results, Google Groups results, news and videos. Google Assistant – a virtual assistant. Gemini – a conversational generative artificial intelligence chatbot. Google Books – a search engine for books. Google Dataset Search – allows searching for datasets in data repositories and local and national government websites. Google Flights – a search engine for flight tickets. Google Images – a search engine for images online. Google Shopping – a search engine to search for products across online shops. Google Travel – a trip planner service. Google Videos – a search engine for videos. Google Lens – an image recognition technology. Groupings of articles, creative works, documents, or media Google Arts & Culture – an online platform to view artworks and cultural artifacts. Google Books – a website that lists published books and hosts a large, searchable selection of scanned books. Google Finance – searchable US business news, opinions, and financial data. Google News – automated news compilation service and search engine for news in more than 20 languages. Google Patents – a search engine to search through millions of patents, each result with its own page, including drawings, claims and citations. Google Scholar – a search engine for the full text of scholarly literature across an array of publishing formats and scholarly fields. Includes virtually all peer-reviewed journals. YouTube – a video hosting website. Advertising services Google Ads – an online advertising platform. AdMob – a mobile advertising network. Google AdSense – a contextual advertising program for web publishers that delivers text-based advertisements that are relevant to site content pages. Google Ad Manager – an advertisement exchange platform. Google Marketing Platform – an online advertising and analytics platform. Google Tag Manager – a tag management system to manage JavaScript and HTML tags, including web beacons, for web tracking and analytics. Local Services Ads – an online advertising platform for lead generation that provides local businesses with a Google Guaranteed green check mark. Communication and publishing tools Blogger – a weblog publishing tool. FeedBurner – a web feed management tool, including feed traffic analysis and advertising facilities. Google Chat – instant messaging software with the capability of creating multi-user "rooms". Google Saved – a collections app. Google Classroom – a content management system for schools that aids in the distribution and grading of assignments and provides in-class communication. Google Fonts – a webfont hosting service. Google Groups – an online discussion service that also offers Usenet access. Google Meet – a video conferencing platform. Google Voice – a VoIP system that provides a phone number that can be forwarded to actual phone lines. Productivity tools Google products and services for productivity software. Gmail – an email service. Google Account – controls how a user appears and presents themselves on Google products.
Google Calendar – an online calendar with Gmail integration, calendar sharing and a "quick add" function to create events using natural language. Google Charts – interactive, web-based chart image generation from user-supplied JavaScript. Google Docs Editors – a productivity office suite with document collaboration and publishing capabilities. Tightly integrated with Google Drive. Google Docs – document editing software. Google Sheets – spreadsheet editing software. Google Slides – presentation editing software. Google Drawings – diagramming software. Google Forms – survey software. Google Sites – a webpage creation and publication tool. Google Keep – a note-taking service. Google Drive – a file hosting service with synchronisation option; tightly integrated with Google Docs Editors. Google Translate – a service that provides machine translation of any text or web page between pairs of languages. Map-related products Google Maps – a mapping service that indexes streets and displays satellite and street-level imagery, providing directions and local business search. Google My Maps – a social custom map making tool based on Google Maps. Google Earth – a virtual 3D globe that uses satellite imagery, aerial photography, GIS from Google's repository. Google Mars – imagery of Mars using the Google Maps interface. Elevation, visible imagery and infrared imagery can be shown. Google Moon – NASA imagery of the moon through the Google Maps interface. Google Street View – provides interactive panoramas from positions along many streets in the world. Google Sky – view planets, stars and galaxies. Google Santa Tracker – simulates tracking Santa Claus on Christmas Eve. Statistical tools Google Analytics – a traffic statistics generator for defined websites, with Google Ads integration. Webmasters can optimize ad campaigns, based on the statistics. Analytics is based on the Urchin software. Google Ngram Viewer – charts year-by-year frequencies of any set of comma-delimited strings in Google's text corpora. Google Public Data Explorer – public data and forecasts from international organizations and academic institutions including the World Bank, OECD, Eurostat and the University of Denver. TensorFlow – a machine learning library for designing and training neural networks. Google Trends – a graphing application for Web Search statistics, showing the popularity of particular search terms over time. Multiple terms can be shown at once, and results can be displayed by city, region or language. Related news stories are shown. Has a sub-section that shows popularity of websites over time. Google Activity Report – a monthly report including statistics about a user's Google usage, such as sign-in, third party authentication changes, Gmail usage, calendar, search history and YouTube. Looker Studio – an online tool for converting data into customizable informative reports and dashboards. Business-oriented products Google Workspace – a suite of web applications for businesses, education providers and nonprofits that include customizable versions of several Google products accessible through a custom domain name. Services include, but are not limited to, Gmail, Google Contacts, Google Calendar, Google Docs Editors, Google Sites, Google Meet, Google Chat, Google Cloud Search, and more. One Google Workspace-exclusive product is Google Vault.
Google Business Profile – a listing service that allows business owners to create and verify their own business data including address, phone number, business category and photos. Google Tables (beta) – Business workflow automation tool. Healthcare related products Google ARDA project – stands for automated retinal disease assessment. It is an AI tool to help doctors detect retinal disease. Google Care Studio – tool for clinicians to search, browse and see highlights across a patient's broader electronic health record. Google Fit – health-tracking platform. Health Connect (beta) – Android platform which helps health and fitness apps to use the same on-device data, within a unified ecosystem. Developer tools Accelerated Mobile Pages (AMP) – an open-source project and service to accelerate content on mobile devices. AMP provides a JavaScript library for developers and restricts the use of third-party JS. Google App Engine – write and run web applications. Google Developers – open source code and lists of API services. Provided project hosting for free and open source software until 2016. Dart – a structured web programming language. Flutter – a mobile cross-platform development tool for Android and iOS. Go – a compiled, concurrent programming language. OpenSocial – APIs for building social applications on many websites. Google PageSpeed Tools – optimize webpage performance. Google Web Toolkit – an open source Java software development framework that allows web developers to create Ajax applications in Java. Google Search Console – Sitemap submission and analysis for the Sitemaps protocol. GN – meta-build system generating Ninja build configurations. Replaced GYP in Chromium. Gerrit – a code collaboration tool. Googletest – testing framework in C++. Bazel – a build system. FlatBuffers – a serialization library. Protocol Buffers – a serialization library similar to FlatBuffers. Shaderc – tools and library for compiling HLSL or GLSL into SPIR-V. American fuzzy lop – a security-oriented fuzzer. Google Guava – core libraries for Java. Google Closure Tools – JavaScript tools. Google Collaboratory – write Python code using a Jupyter notebook. Security tools reCAPTCHA – a user-dialogue system used to prevent bots from accessing websites. Google Safe Browsing – a blacklist service for web resources that contain malware or phishing content. Titan – a security hardware chip. Titan Security Key – a U2F security token. Titan M – used in Pixel smartphones starting with the Pixel 3. Titan M2 – successor starting with the Pixel 6, based on RISC-V. Titan C – used in Google-made Chromebooks such as the Pixel Slate. Operating systems Android – a Linux-based operating system for mobile devices such as smartphones and tablet computers by Google and the Open Handset Alliance. Wear OS – a version of Android designed for smartwatches and other wearable items. Android Auto – a version of Android made for automobiles by Google. Android TV – a version of Android made for smart TVs. Google Cast – an embedded operating system that powers Chromecast and some Google Nest devices. ChromeOS – a Linux-based operating system for web applications. Fitbit OS – an operating system for Fitbit devices. Fuchsia – an operating system based on the Zircon kernel. Desktop applications AdWords Editor – desktop application to manage a Google AdWords account; lets users make changes to their account and advertising campaigns before synchronizing with the online service.
Drive File Stream – file synchronisation software that works with the business edition of Google Drive. Google Chrome – a web browser. Google IME – input method editor that allows users to enter text in one of the supported languages using a Roman keyboard. Google Japanese Input – Japanese input method editor. Android Studio – integrated development environment for Android. Google Web Designer – WYSIWYG editor for making rich HTML5 pages and ads intended to run on multiple devices. Backup and Sync – client software to synchronize files between the user's computer and Google Drive storage. Tilt Brush – a 3D painting application for the Vive and Oculus Rift. Google Trends Screensaver – a macOS screensaver showing Google Trends in a customizable colorful grid. Chrome Remote Desktop – desktop and browser application to remotely access another Windows, Mac, or Linux system. Mobile applications Hardware Product families Google Pixel – smartphones, tablets, laptops, earbuds, and other accessories. Google Nest – smart home products including smart speakers, smart displays, digital media players, smart doorbells, smart thermostats, smoke detectors, and wireless routers. Fitbit – activity trackers and smartwatches. Stadia Controller – game controller for Stadia. Models Nexus One – 3.7" phone running Android 2.3 "Gingerbread" Nexus S – 4" phone running Android 4.1 "Jelly Bean" Nest Learning Thermostat (first generation) – smart thermostat Galaxy Nexus – 4.7" phone running Android 4.3 "Jelly Bean" Nexus Q – media streaming entertainment device in the Google Nexus product family Nexus 7 (2012) – 7" tablet running Android 5.1 "Lollipop" Nexus 10 – 10" tablet running Android 5.1 "Lollipop" Nest Learning Thermostat (second generation) – smart thermostat Nexus 4 – 4.7" phone running Android 5.1 "Lollipop" Chromebook Pixel (2013) – laptop running ChromeOS Nexus 7 (2013) – 7" tablet running Android 6.0 "Marshmallow" Nexus 5 – 4.95" phone running Android 6.0 "Marshmallow" Nest Protect (first generation) – smoke alarm Nexus 6 – 5.96" phone running Android 7.1.1 "Nougat" Nexus 9 – 9" tablet running Android 7.1 "Nougat" Nexus Player – streaming media player running Android 8.0 "Oreo" Chromebook Pixel (2015) – laptop running ChromeOS Nest Cam Indoor – security camera Nest Protect (second generation) – smoke alarm Nest Learning Thermostat (third generation) – smart thermostat Nexus 5X – 5" phone running Android 8.1 "Oreo" Nexus 6P – 5.7" phone running Android 8.1 "Oreo" Pixel C – 10.2" convertible tablet running Android 8.1 "Oreo" Nest Cam Outdoor – security camera Pixel – 5" smartphone running Android 10 Pixel XL – 5.5" smartphone running Android 10 Daydream View (first generation) – virtual reality headset for smartphones Google Home – smart speaker Google Wifi – wireless router Nest Cam IQ Indoor – security camera Nest Thermostat E – smart thermostat Nest Hello – smart video doorbell Nest Cam IQ Outdoor – security camera Nest × Yale – smart lock Pixel 2 – 5" smartphone running Android 11 Pixel 2 XL – 6" smartphone running Android 11 Daydream View (second generation) – virtual reality headset for smartphones Home Mini – smart speaker Home Max – smart speaker Pixel Buds (first generation) – wireless earbuds Pixelbook – laptop running ChromeOS Pixel 3 – 5.5" smartphone running Android 11 Pixel 3 XL – 6.3" smartphone running Android 11 Pixel Slate – 2-in-1 PC running ChromeOS Pixel Stand – wireless charger Nest Hub – smart display Stadia Controller – gaming controller for Stadia Pixel 3a – 5.6" smartphone running
Android 11 Pixel 3a XL – 6" smartphone running Android 11 Nest Hub Max – smart display Pixel 4 – 5.7" smartphone running Android 11 Pixel 4 XL – 6.3" smartphone running Android 11 Pixelbook Go – laptop running ChromeOS Nest Mini – smart speaker Nest Wifi – wireless router Pixel Buds (second generation) – wireless earbuds Pixel 4a – 5.8" smartphone running Android 11 Pixel 4a (5G) – 6.2" smartphone running Android 11 Pixel 5 – 6" smartphone running Android 11 Nest Audio – smart speaker Nest Thermostat – smart thermostat Pixel Buds A-Series – wireless earbuds Pixel 5a – 6.3" smartphone running Android 11 Pixel 6 – 6.4" smartphone running Android 12 Pixel 6 Pro – 6.7" smartphone running Android 12 Pixel 6a – 6.1" smartphone running Android 12 Pixel 7 – 6.3" smartphone running Android 13 Pixel 7 Pro – 6.7" smartphone running Android 13 Pixel Watch – smartwatch Pixel Tablet – tablet Pixel 7a – 6.1" smartphone running Android 13 Pixel Fold – foldable smartphone Pixel 8 – 6.2" smartphone running Android 14 Pixel 8 Pro – 6.7" smartphone running Android 14 Pixel Watch 2 – smartwatch Pixel 8a – 6.1" smartphone running Android 14 Pixel 9 – 6.3" smartphone running Android 14 Pixel 9 Pro – 6.3" smartphone running Android 14 Pixel 9 Pro XL – 6.8" smartphone running Android 14 Pixel 9 Pro Fold – foldable smartphone running Android 14 Pixel Watch 3 – smartwatch Pixel Buds Pro 2 – wireless earbuds Processors Pixel Visual Core (2017, Pixel 2) Titan M (2018, Pixel 3) Pixel Neural Core (2019, Pixel 4) Titan C (2019, Pixelbook Go) Titan M2 (2021, Pixel 6) Google Tensor (2021, Pixel 6) Google Tensor G2 (2022, Pixel 7) Google Tensor G3 (2023, Pixel 8) Google Tensor G4 (2024, Pixel 9) Services Google Cloud Platform – a suite of modular cloud-based services for software development. Google Crisis Response – a public project that covers disasters, turmoil and other emergencies and alerts. Google Fi Wireless – an MVNO aimed at simple plans and pricing. Google Get Your Business Online – helps increase the web presence of small businesses and cities, with advice on search engine optimization and help for business owners in keeping their business profiles up to date. Google Public DNS – a publicly accessible DNS server. Google Person Finder – an open-source tool that helps people reconnect with others in the aftermath of a disaster. Google Firebase – a real time database that provides an API that allows developers to store and sync data across multiple clients. Google Cast – displays entertainment and apps from a phone, tablet or laptop right on a TV or speakers. Google Pay – a digital wallet platform and online payment system. YouTube TV – an over-the-top internet television service that offers live TV. Scheduled to be discontinued Applications that are no longer in development and scheduled to be discontinued in the future: 2025 Google URL Shortener – URL shortening service. Started to turn down support on March 30, 2018, was discontinued on March 30, 2019, and will stop working on August 25, 2025. Firebase Dynamic Links – URL shortening service. Will shut down on August 25, 2025. Google Fit API – Will drop support on June 30, 2025. Discontinued products and services Google has retired many offerings, either because of obsolescence, integration into other Google products, or lack of interest. Google's discontinued offerings are colloquially referred to as Google Graveyard. 2024 Jamboard – Discontinued on December 31. Stack – An app that allowed users to scan and organize documents and receipts on their mobile devices. Shut down on September 23.
Chromecast – Discontinued on August 6. Replaced by Google TV Streamer. VPN by Google One – Shut down on June 20, citing low usage. Google Pay (for US only) – Payment app developed by Google. Shut down on June 4 and replaced by Google Wallet. People Cards – New profiles could no longer be created after April 7, but users were given the option to download or save content until the cards were removed the following month. Google said the feature was not as useful as intended. Dropcam – Shut down on April 8. Nest Secure – Pulled from Google Store in October 2020. Shut down on April 8. Google Podcasts – Shut down on April 2 and replaced by YouTube Music. Keen – Shut down on March 24 and the website is no longer accessible. Google Search's Cache link – Discontinued in February, being considered no longer necessary. Google Earth View – Website with a collection of satellite-captured wallpapers and Chromecast backgrounds. Shut down in mid-January. Basic HTML View on Gmail – Discontinued in January. 2023 Google Optimize – freemium web analytics and testing tool. Shut down on September 30. Google Glass (Enterprise Edition) – wearable computer with an optical head-mounted display and camera that allows the wearer to interact with various applications and the Internet via natural language voice commands. Discontinued on September 15. Google Duo – Free high-quality video calling service for mobiles and desktops. Google Domains – Shut down on September 7 after migration to Squarespace. Google Pixel Pass – Discontinued on August 29. Google Cloud IoT Core Service – Shut down on August 16. Google Album Archive – Discontinued on July 19. Google Code Competitions – Discontinued on July 1. Google Universal Analytics – Shut down on July 1 and replaced by Google Analytics 4. Conversational Actions – Extended the functionality of Google Assistant by allowing 3rd party developers to create custom experiences, or conversations, for users of Google Assistant. Shut down in June. Grasshopper – Shut down on June 15. Google Now Launcher – Discontinued in May. Jacquard – Shut down on April 24. Google Currents – internal enterprise communication tool, formerly Google+ for G Suite. Shut down in March, with users migrated to Spaces in Google Chat. Google Street View (standalone app) – Shut down on March 21. The Street View Studio app and the ability to use Street View in the main Google Maps app rendered the Street View app redundant; however, a 360° camera must now be purchased to contribute to Street View, whereas the app allowed users to create photospheres with any supported smartphone camera. The "Photo Paths" feature, which allowed any smartphone to create a 2D capture of any road not yet covered by Street View, was completely removed, requiring users to either purchase a 360° camera or migrate to a third-party service such as Mapillary. Stadia – Shut down on January 18. 2022 YouTube Originals – discontinued on December 31. Google OnHub – stopped receiving support on December 19. Google Hangouts – discontinued on November 1, after migrating all users to Google Chat. Google Surveys – a survey app shut down in favor of Google Forms. YouTube for Wii U – Shut down in October. YouTube Go – An app aimed at making YouTube easier to access on mobile devices in emerging markets through special features like downloading video on WiFi for viewing later. It was shut down in August. Google My Business – An app that allowed businesses to manage their Google Maps Business profiles. It was shut down in July.
Kormo Jobs – An app that helped users, primarily in India, Indonesia, and Bangladesh, find jobs nearby that matched their skills and interests. It was shut down in July. Android Auto for phone screens – An app that allowed the screen of the phone to be used as an Android Auto interface while driving, intended for vehicles that did not have a compatible screen built in. It was shut down in July. Google Chrome Apps – hosted or packaged web applications that ran on the Google Chrome browser. Support for Windows and other operating systems was dropped in June, and support on ChromeOS ended in January 2025. For ChromeOS devices enrolled in the LTS channel, Chrome apps will be supported until October 2028. G Suite (Legacy Free Edition) – A free tier offering some of the services included in Google's productivity suite. Google Assistant Snapshot – The successor to Google Now that provided predictive cards with information and daily updates in the Google app for Android and iOS. Cameos on Google – Allowed celebrities, models and public figures to record video-based Q&As. Shut down on February 16. Android Things – A part of Google's Internet of Things (IoT) platform. Shut down on January 5. 2021 AngularJS – Open source web application framework. Shut down on December 31. Google Clips – a miniature clip-on camera device. Pulled from Google Store on October 15, 2019. Discontinued on December 31. Google Toolbar – web browser toolbar with features such as a Google Search box, pop-up blocker and ability for website owners to create buttons. Shut down on December 12. My Maps – an Android app that enabled users to create custom maps for personal use or sharing on their mobile device. Shut down on October 15 and users were asked to migrate to the mobile web version of the app. Google Bookmarks – Online bookmarking service. Discontinued on September 30. Tour Builder – allowed users to create and share interactive tours inside Google Earth. Shut down in July, replaced by new creation tools in Google Earth. Poly – a service to browse, share and download 3D models. Shut down on June 30. Google Expeditions – virtual reality (VR) platform designed for educational institutions. Discontinued on June 30. The majority of Expeditions' tours were migrated to Google Arts & Culture. Tour Creator – allowed users to create immersive, 360° guided tours in the Expeditions app that could be viewed with VR devices. Shut down on June 30. Timely – an Android app that provided alarm, stopwatch and timer functions with synchronization across devices. Timely servers were shut down on May 31. Google Go Links – a URL shortening service that also supported custom domains for customers of Google Workspace. Discontinued on April 1. Google Public Alerts – an online notification service that sent safety alerts to various countries. Shut down on March 31 and functions moved to Google Search and Google Maps. Google Crisis Map – a service that visualized crisis and weather-related data. Shut down March 30. Improvements to Google Search and Maps rendered this service redundant. Google App Maker – allowed users to develop apps for businesses. Shut down on January 19 due to Google's acquisition of AppSheet. 2020 Google Cloud Print – a cloud-based printing solution that had been in beta since 2010. Discontinued on December 31. Google Play Music – Google's music streaming service. Discontinued on December 3 and replaced by YouTube Music and Google Podcasts. Google Station – service that allowed users to spread Wi-Fi hotspots.
Shut down on September 30. Hire by Google – applicant tracking system and recruiting software. Shut down on September 1. Password Checkup – an extension that warned of breached third-party logins. Shut down in July after it had been integrated with Chrome. Google Photos Print – a subscription service that automatically selected the best ten photos from the last thirty days which were then mailed to users' homes. Shut down in June. Shoelace – an app used to find group activities with others who share your interests. Shut down in May. Neighbourly – an experimental mobile app designed to help you learn about your neighborhood by asking other residents. Shut down on May 12. Fabric – modular SDK platform launched by Crashlytics in 2014. Google acquired Crashlytics in 2017 and announced plans to migrate all of its features to Firebase. It was shut down on May 4. Material Theme Editor – plugin for Sketch app which allowed you to create a material-based design system for your app. Discontinued in March. YouTube for Wii U Browser. Fiber TV – an IPTV service bundled with Google Fiber. Discontinued on February 4. One Today – an app that allowed users to donate $1 to different organizations and discover how their donation would be used. Discontinued in January. Androidify – allowed users to create a custom Android avatar. Discontinued in January. 2019 YouTube Annotations – annotations that were displayed over videos on YouTube. On January 15, all existing annotations were removed from YouTube. Google Pinyin – Discontinued in March. Mr. Jingles – Google's notifications widget. Discontinued on March 7. Google Tasks canvas – A full-screen interface of Google Tasks that was discontinued in April. Google Allo – Google's instant messaging app. Discontinued on March 12. Google Image Charts – a chart-making service that provided images of rendered chart data, accessed with REST calls. The service was deprecated in 2012, temporarily disabled in February 2019 and discontinued on March 18, 2019. Inbox by Gmail – an email application for Android, iOS, and web platform that organized and automated to-do lists using email content. As of April 2, accessing the Inbox subdomain redirects to Gmail proper. Google+ – The consumer edition of Google's social media platform. As of April 2, users receive a message stating that "Google+ is no longer available for consumer (personal) and brand accounts". Google Jump – cloud-based video stitching service. Discontinued June 28. Works with Nest – the smart home platform of Google's Nest brand. Users were asked to migrate to the Google Assistant platform. Support ended on August 31. YouTube for Nintendo 3DS – official app for Nintendo 3DS. Discontinued on September 3. YouTube Messages – direct messages on YouTube – discontinued after September 18. YouTube Leanback – a web application for control with a remote, intended for use with smart TVs and other similar devices. Discontinued on October 2. Google Daydream View – Google's VR headset (first-gen in late 2016, second-gen in late 2017) was discontinued just after the "Made by Google" event in October. The Google Daydream platform itself was retired as well. Touring Bird – Travel website which facilitated booking tours, tickets and activities in top locations. The service was shut down on November 17. Google Bulletin – "Hyperlocal" news service which allowed users to post news from their neighborhood. It was shut down on November 22. Google Fusion Tables – A service for managing and visualizing data. The service was shut down on December 3.
Google Translator Toolkit – An online computer-assisted translation tool designed to allow translators to edit the translations that are automatically generated by Google Translate. It was shut down on December 4, citing declining usage and the availability of other similar tools. Google Correlate – finds search patterns which correspond with real-world trends. It was shut down on December 15, as a result of low usage. Google Search Appliance – A rack-mounted device used to index documents. Hardware sales ended in 2017 and initial shutdown occurred in 2018, and it was ultimately shut down on December 31, 2019. Google Native Client (NaCl/PNaCl) – sandboxing technology for running a subset of native code. It was discontinued on December 31. Datally – Lets users save mobile data. Removed from Play Store in October. Build with Chrome – an initiative between Lego and Google to build the world using Lego. It was discontinued in March. Google Game Builder – A prototype program that could develop video games in real time and was released on Steam for Windows and MacOS. It used card-based virtual programming and could import models from Google Poly. The source code was released for free on GitHub. 2018 Blogger Web Comments (Firefox only) – displays related comments from other Blogger users. City Tours – overlay to Maps that shows interesting tours within a city. Dashboard Widgets for Mac (Mac OS X Dashboard Widgets) – suite of mini-applications including Gmail, Blogger and Search History. Joga Bonito – soccer community site. Local – Local listings service, merged with Google Maps. MK-14 – 4U rack-mounted server for Google Radio Automation system. Google sold its Google Radio Automation business to WideOrbit Inc. Google Music Trends – music ranking of songs played with iTunes, Winamp, Windows Media Player and Yahoo Music. Trends were generated by Google Talk's "share your music status" feature. Google Personalized Search – search results personalization, merged with Google Accounts and Web History. Photos Screensaver – slideshow screensaver as part of Google Pack, which displays images sourced from a hard disk, or through RSS and Atom Web feeds. Rebang (Google China) – search trend site, similar to Google Zeitgeist; part of Google Labs. Spreadsheets – spreadsheet management application, before it was integrated with Writely to form Google Docs & Spreadsheets. University Search – search engine listing for university websites. U.S. Government Search – search engine and personalized homepage that exclusively draws from sites with a .gov TLD. Discontinued June 2006. Video Player – view videos from Google Video. Voice Search – automated voice system for web search using the telephone. Became Google Voice Local Search and integrated on the Google Mobile web site. Google X – redesigned Google search homepage. It appeared in Google Labs, but disappeared the following day for undisclosed reasons. Accessible Search – search engine for the visually impaired. Quick Search Box – search box, based on Quicksilver, easing access to installed applications and online searches. Visigami – image search application screen saver that searches files from Google Images, Picasa and Flickr. Wireless access – VPN client for Google WiFi users, whose equipment does not support WPA or 802.1X protocols. Google Play Newsstand – News publication and magazine store. Replaced by Google News on May 15, removed from Google Play on November 5, and magazines were no longer available on Google News as of January 2020. 
Google News & Weather – News publication app. Merged into Google News on May 15. Google Global Market Finder. QPX Express API – flight search API. Google Contact Lens – a smart contact lens project capable of monitoring the user's glucose level in tears. On November 16, Verily announced it had discontinued the project because of the lack of correlation between tear glucose and blood glucose. 2017 Google Gesture Search – an Android app that let users find a contact, application or setting on their device by drawing the letters of its name on the touchscreen. Google Nexus – Smartphone lineup, replaced by Google Pixel on October 4. Trendalyzer – data trend viewing platform. Discontinued in September. Google Swiffy – convert Adobe Flash files (SWF) into HTML5. Discontinued on July 1. Glass OS – an operating system for Google Glass. Discontinued on June 20. Google Spaces – group discussions and messaging. Discontinued on April 17. Google Map Maker – map editor with browser interface. Discontinued on April 1, replaced by Google Maps and Google Local Guides. Google Hands Free – retail checkout without using your phone or watch. Pilot started in the Bay Area in March 2016, but discontinued on February 8. Google Maps Engine – develop geospatial applications. Discontinued on February 1. Free Search – embed site/web search into a user's website. Replaced by Google Custom Search. Google Maps for PS Vita – version of Google Maps for the PS Vita. Discontinued in January 2015. YouTube for PS Vita – discontinued in January 2015. 2016 Google Code – Open source code hosting. Discontinued on January 25 and renamed to Google Developers. Picasa – photo organization and editing application. Closed March 15 and replaced by Google Photos. Google Compare – comparison-shopping site for auto insurance, credit cards and mortgages. Google Showtimes – movie showtime search engine. Discontinued on November 1. MyTracks – GPS logging. Shut down April 30. Project Ara – an "initiative to build a phone with interchangeable modules for various components like cameras and batteries" was suspended on September 2 to "streamline the company's seemingly disorganized product lineup". Panoramio – geolocation-oriented photo sharing website. Discontinued on November 4. Google's Local Guides program as well as photo upload tools in Google Maps rendered Panoramio redundant. Google Feed API – download public Atom or RSS feeds using JavaScript. Deactivated on December 15. 2015 Wildfire by Google – social media marketing software. Google Earth Plugin – an application service used to customize Google Earth. Discontinued on December 15. Google Flu Trends – a web service to help predict outbreaks of flu activity. Discontinued on August 9. Google Moderator – rank user-submitted questions, suggestions and ideas via crowdsourcing. Discontinued on June 30. Google Helpouts – Hangout-based live video chat with experts. Discontinued on April 20. Google Earth Enterprise – Google Earth for enterprise use. Discontinued on March 20. BebaPay – prepaid ticket payment system. Discontinued on March 15. Google Glass (Consumer Edition) – wearable computer with an optical head-mounted display and camera that allows the wearer to interact with various applications and the Internet via natural language voice commands. Discontinued on January 19. Speak To Tweet – telephone service created in 2011 in collaboration with Twitter and SayNow allowing users to phone a specific number and leave a voicemail; a tweet was automatically posted on X. Discontinued sometime during 2015. 
2014 Freebase – an open, Creative Commons attribution-licensed collection of structured data, and a Freebase platform for accessing and manipulating that data via the Freebase API. Discontinued on December 16. Google Questions and Answers – community-driven knowledge market website. Discontinued on December 1. Orkut – social networking website. Discontinued on September 30. Google's "discussion search" option. Discontinued in July. Quickoffice – productivity suite for mobile devices. Discontinued in June, merged into Google Drive. Google TV – smart TV platform based on Android. Discontinued and replaced by Android TV in June. Google Offers – service offering discounts and coupons. Shut down on March 31. Google Chrome Frame – plugin for Internet Explorer that allowed web pages to be viewed using WebKit and the V8 JavaScript engine. Discontinued on February 25. Google Schemer – social search to find local activities. Discontinued on February 7. YouTube My Speed – discontinued in January, replaced by Google Video Quality Report. Google Notifier – alerted users to new messages in their Gmail account. Discontinued on January 31. 2013 My Maps – GIS tools for Google Maps. Google Currents – Magazine app. Merged into Google Play Newsstand on November 20. Google Checkout – online payment processing service, aimed at simplifying the process of paying for online purchases. Discontinued on November 20, merged into Google Wallet. iGoogle – customisable homepage, which can contain web feeds and Google Gadgets. Discontinued on November 1. Google Latitude – mobile geolocation tool that lets friends know where users are. Discontinued on August 9, with some functionality moved to Google+. Google Reader – web-based news aggregator, capable of reading Atom and RSS feeds. Discontinued on July 1. Meebo – a social networking website, discontinued on June 6. Google Building Maker – web-based building and editing tool to create 3D buildings for Google Earth. Discontinued on June 4. Google Talk – instant messaging service that provided both text and voice communication. Replaced May 15 by Google Hangouts. The service was discontinued by 2017 on all platforms. SMS Search – mobile phone short message service. Discontinued on May 10. Google Cloud Connect – Microsoft Office plugin for automatically backing up Office documents upon saving onto Google Docs. Discontinued on April 30, in favor of Google Drive. Picnik – online photo editor. Discontinued on April 19 and moved to the Google+ photo manager. Google Calendar Sync – sync Microsoft Outlook email and calendar with Gmail and Google Calendar. Synchronization for existing installations stopped on August 1. Replaced with Google Sync, which does not synchronize Outlook calendars, but can sync email using IMAP or POP3. Also, Google Apps for Business, Education, and Government customers can use Google Apps Sync for Microsoft Outlook. 2012 Picasa Web Albums Uploader – upload images to the "Picasa Web Albums" service. It consisted of an iPhoto plug-in and a stand-alone application. Google Chart API – interactive Web-based chart image generator, deprecated in 2012 with service commitment to 2015 and turned off in 2019. Google promotes JavaScript-based Google Charts as a replacement, which is not backwards-compatible with the Google Chart API's HTTP methods. Google Apps Standard Edition – Discontinued on December 6. Nexus Q – digital media player. Discontinued in November. Google Refine – data cleansing and processing. 
It was spun off from Google on October 2, becoming open source; it is now OpenRefine. TV Ads – Method to place advertising on TV networks. Discontinued on August 30, with all remaining active campaigns ending December 16. Knol – write authoritative articles related to various topics. Discontinued October 1. Yinyue (Music) (Google China) – site linking to a large archive of Chinese pop-music (principally Cantopop and Mandopop), including audio streaming over Google's own player, legal lyric downloads, and in most cases legal MP3 downloads. The archive was provided by Top100.cn (i.e., this service does not search the whole Internet) and was available in mainland China only. Discontinued in September, users were given the option to download playlists until October 19. Google Insights for Search – insights into Google search term usage. Discontinued September 27, merged into Google Trends. Listen – subscribe to and stream podcasts and Web audio. Discontinued in August. BumpTop – physics-based desktop application. Discontinued in August. Google Video – a free video hosting service. Shut down and migrated to YouTube (which Google acquired in 2006) on August 20. Google Notebook – online note-taking and web-clipping application. Discontinued in July. Google Website Optimizer – testing and optimization tool. Discontinued on August 1. Google Mini – reduced capacity, lower-cost version of the Google Search Appliance. Discontinued on July 31. Google Wave – online communication and collaborative real-time editor tool that bridged email and chat. Support ended on April 30. Slide.com – Discontinued on March 6. Google Friend Connect – add social features to websites. Discontinued on March 1, replaced by Google+'s pages and off-site Page badges. Jaiku – social networking, microblogging and lifestreaming service comparable to Twitter. Shut down January 15. Google Code Search – software search engine. Discontinued on January 15. Google Health – store, manage, and share personal health information in one place. Development ceased June 24, 2011; accessible until January 1, 2012; data available for download until January 1, 2013. 2011 Google Buzz – social networking service integrated with Gmail allowing users to share content immediately and start conversations. Discontinued in December and replaced by Google+. Google Sidewiki – browser sidebar and service that allowed contributing and reading helpful information alongside any web page. Discontinued in December. Gears – browser extension that enabled offline functionality and other features in web applications. Removed from all platforms by November. Squared – creates tables of information about a subject from unstructured data. Discontinued in September. Aardvark – social search utility that allowed people to ask and answer questions within their social networks. It used people's claimed expertise to match 'askers' with good 'answerers'. Discontinued on September 30. Google PowerMeter – view building energy consumption. Discontinued on September 16. Desktop – desktop search application that indexed emails, documents, music, photos, chats, Web history and other files. Discontinued on September 14. Google Fast Flip – online news aggregator. Discontinued September 6. Google Pack – application suite. Discontinued on September 2. Google Directory – collection of links arranged into hierarchical subcategories. The links and their categorization were from the Open Directory Project, sorted using PageRank. Discontinued on July 20. Google Blog Search – weblog search engine. Discontinued in July. 
Google Labs – test and demonstrate new Google products. Discontinued in July. Image Swirl – an enhancement for an image-search tool in Google Labs. It was built on top of image search by grouping images with similar visual and semantic qualities. Shut down in July due to discontinuation of Google Labs. Google Sets – generates a list of items when users enter a few examples. For example, entering "Green, Purple, Red" emits the list "Green, Purple, Red, Blue, Black, White, Brown". Discontinued mid-year. Directory – navigation directory, specifically for Chinese users. Hotpot – local recommendation engine that allowed people to rate restaurants, hotels etc. and share them with friends. Moved to Google Places service in April. Real Estate – place real estate listings in Google Maps. Discontinued February 10. 2010 Google Base – submission database that enabled content owners to submit content, have it hosted and made searchable. Information was organized using attributes. Discontinued on December 17, replaced with Google Shopping APIs. GOOG-411 (also known as Voice Local Search) – directory assistance service. Discontinued on November 12. Google SearchWiki – annotate and re-order search results. Discontinued March 3, replaced by Google Stars. Marratech e-Meeting – web conferencing software, used internally by Google's employees. Discontinued on February 19. Living Stories – collaboration with The New York Times and The Washington Post for presenting news. Discontinued in February. 2009 Audio Ads – radio advertising program for US businesses. Discontinued on February 12. Catalogs – search engine for over 6,600 print catalogs, acquired through optical character recognition. Discontinued in January. Dodgeball – social networking service. Users could text their location to the service, which would then notify them of nearby people or events of interest. Replaced by Google Latitude. Google Mashup Editor – web mashup creation with publishing, syntax highlighting, debugging. Discontinued in July; migrated to Google App Engine. Google Ride Finder – taxi and shuttle search service, using real-time position of vehicles in 14 U.S. cities. Used the Google Maps interface and cooperated with any car service that wished to participate. Discontinued in October. Shared Stuff – web page sharing system, incorporating a bookmarklet to share pages, and a page to view the most popular shared items. Pages could be shared through third-party applications such as Delicious or Facebook. Discontinued on March 30. 2008 Google Lively – 3D animated chat. Discontinued December 31. SearchMash – search engine to "test innovative user interfaces". Discontinued on November 24. Google Page Creator – webpage publishing program that could be used to create pages and to host them on Google servers. Discontinued on September 9, with all existing content gradually transferring to Google Sites through 2009. Send to Phone – allowed users to send links and other information from Firefox to their phone by text message. Discontinued on August 28, replaced by Google Chrome to Phone. Google Browser Sync (Mozilla Firefox) – allowed Firefox users to synchronize settings across multiple computers. Discontinued in June. Hello – send images across the Internet and publish them to blogs. Discontinued on May 15. Web Accelerator – increased load speed of web pages. No longer available from, or supported by, Google as of January 20. 2007 Google Video Player – a video player that played back files in Google's own .gvi format and supported playlists in .gvp format. 
Shut down on August 17, 2007, due to Google's acquisition of YouTube. Google Video Marketplace – discontinued on August 15. Google Click-to-Call – allowed a user to speak directly over the phone without charge to businesses found on Google search results pages. Discontinued on July 20. Related Links – links to information related to a website's content. Discontinued on April 30. Public Service Search – non-commercial organization service, which included Google Site Search, traffic reports and unlimited search queries. Discontinued on February 13, replaced by Google Custom Search. 2006 Google Answers – online knowledge market that allowed users to post bounties for well-researched answers to their queries. Discontinued on November 28; still accessible (read-only). Writely – web-based word processor. On October 10, Writely was merged into Google Docs & Spreadsheets. Google Deskbar – desktop bar with a built-in mini browser. Replaced by a similar feature in Google Desktop. Discontinued May 8. See also Outline of Google History of Google List of mergers and acquisitions by Alphabet Google's hoaxes X Development Google.org References External links List of products on the Google corporate site List of products on Google Developers Google Google services Mobile software Computing-related lists Products
List of Google products
[ "Technology" ]
9,555
[ "Computing-related lists", "Lists of mobile computers", "Google lists" ]
1,497,900
https://en.wikipedia.org/wiki/John%20Theophilus%20Desaguliers
John Theophilus Desaguliers (12 March 1683 – 29 February 1744) was a French-born British natural philosopher, clergyman, engineer and freemason who was elected to the Royal Society in 1714 as experimental assistant to Isaac Newton. He had studied at Oxford and later popularized Newtonian theories and their practical applications in public lectures. Desaguliers's most important patron was James Brydges, 1st Duke of Chandos. As a Freemason, Desaguliers was instrumental in the success of the first Grand Lodge in London in the early 1720s and served as its third Grand Master. Biography Early life and education Desaguliers was born in La Rochelle, several months after his father Jean Desaguliers, a Protestant minister, had been exiled as a Huguenot by the French government. Jean Desaguliers was ordained as an Anglican by Bishop Henry Compton of London, and sent to Guernsey. Meanwhile, the baby was baptised Jean Théophile Desaguliers in the Protestant Temple in La Rochelle, and he and his mother then escaped to join Jean in Guernsey. In 1692, the family moved to London where Jean Desaguliers later set up a French school in Islington. He died in 1699. His son, who now used the anglicised name John Theophilus, attended Bishop Vesey's Grammar School in Sutton Coldfield until 1705 when he entered Christ Church, Oxford and followed the usual classical curriculum and graduated BA in 1709. He also attended lectures by John Keill, who used innovative demonstrations to illustrate difficult concepts of Newtonian natural philosophy. When Keill left Oxford in 1709 Desaguliers continued giving the lectures at Hart Hall, the forerunner of Hertford College, Oxford. He obtained a master's degree there in 1712. In 1719, Oxford granted him the honorary degree of Doctor of Civil Laws, after which he was often referred to as Dr Desaguliers. His doctorate was incorporated by Cambridge University in 1726. Desaguliers was ordained as a deacon in 1710, at Fulham Palace, and as a priest in 1717, at Ely Palace in London. Lecturer and promoter of Newtonian experimental philosophy In 1712, Desaguliers moved back to London and advertised courses of public lectures in Experimental Philosophy. He was not the first to do this, but became the most successful, offering to speak in English, French or Latin. By the time of his death, he had given over 140 courses of some 20 lectures each on mechanics, hydrostatics, pneumatics, optics and astronomy. He kept his lectures up to date, published notes for his auditors, and designed his own apparatus, including a renowned planetarium to demonstrate the solar system, and a machine to explain tidal motion. In 1717, Desaguliers lodged at Hampton Court and lectured in French to King George I and his family. Demonstrator at the Royal Society In 1714, Isaac Newton, President of the Royal Society, invited Desaguliers to replace Francis Hauksbee (1660–1713) as demonstrator at the Society's weekly meetings; he was soon thereafter made a Fellow of the Royal Society. Desaguliers promoted Newton's ideas and maintained the scientific nature of the meetings when Hans Sloane took over the Presidency after Newton died in 1727. Desaguliers contributed over 60 articles to the Philosophical Transactions of the Royal Society. He received the Society's prestigious Copley Medal in 1734, 1736 and 1741. The last award was for his summary of knowledge to date on the phenomenon of electricity. He had worked on this with Stephen Gray, who at one time lodged at the Desaguliers' home. 
Desaguliers's "Dissertation concerning Electricity" (1742), in which he coined the terms conductor and insulator, was awarded a gold medal by the Bordeaux Academy of Sciences. Patronage of the Duke of Chandos James Brydges, 1st Duke of Chandos appointed Desaguliers as his chaplain in 1716, but probably as much for his scientific expertise as his ecclesiastic duties. He was also gifted the living of St Lawrence Church, Little Stanmore, which was close to the Duke's mansion called Cannons, then under construction at nearby Edgware. The church was rebuilt in the baroque style in 1715. As the chapel at Cannons was not completed until 1720, the church was the location of first performances of the so-called Chandos Anthems by George Frideric Handel who was, in 1717/18, like Desaguliers, a member of the Duke's household. The Cannons estate benefited from Desaguliers' scientific expertise which was applied to the elaborate water garden there. He was also technical adviser to an enterprise in which Chandos had invested, the York Buildings Company, which used steam-power to extract water from the Thames. In 1718, Desaguliers dedicated to the Duke his translation of Edme Mariotte's treatise on the motion of water. It is perhaps no coincidence that in the summer of 1718 Handel composed his opera Acis and Galatea for performance at Cannons. In this work the hero Acis is turned into a fountain, and since, by tradition, the work was first performed outside on the terraces overlooking the garden, a connection with Desaguliers' new water works seems probable. Desaguliers advised the Duke of Chandos on many projects and appears to have been distracted from his parochial duties by his other interests. The Duke once complained that there were unreasonable delays in burying the dead but this was attributed to the curate who was left in charge of the church. Engineering interests Desaguliers applied his knowledge to practical applications. As well as his interest in steam engines and hydraulic engineering (in 1721 he cured a problem in the Edinburgh city water supply) he developed expertise in ventilation. He devised a more efficient fireplace which was used in the House of Lords and also invented the blowing wheel which removed stale air from the House of Commons for many years. Desaguliers studied the movements made by the human body when working as a machine. He befriended the strong man, Thomas Topham, and although there is no firm evidence that he used Topham as a body guard, Desaguliers recorded several of the feats that he performed. Desaguliers was a parliamentary adviser to the board concerned with the first Westminster Bridge. This much-needed second crossing of the Thames was not completed until 1750, after his death, but construction work resulted in the demolition of Desaguliers's home in Channel Row. Desaguliers also made significant contributions to the field of tribology. He was the first to recognise the possible role of adhesion in the friction process. For this contribution, he was named by Duncan Dowson as one of the 23 "Men of Tribology". Freemasonry Desaguliers was a member of the lodge which met at the Rummer & Grapes Tavern in Channel Row, Westminster, although that lodge later moved to the Horn Tavern in New Palace Yard. According to Rev. James Anderson, this lodge joined with three other lodges on 24 June 1717 to form what would become the Premier Grand Lodge of England. 
The new grand lodge grew rapidly as more lodges joined, and Desaguliers is remembered as being instrumental in its early success. He became the third Grand Master in 1719 and was later three times Deputy Grand Master. He helped James Anderson draw up the rules in the "Constitutions of the Freemasons", published in 1723, and he was active in the establishment of masonic charity. During a lecture trip to the Netherlands in 1731, Desaguliers initiated into Freemasonry Francis, Duke of Lorraine (1708–65), who later became Holy Roman Emperor. Desaguliers also presided when Frederick, Prince of Wales became a Freemason in 1731, and he additionally became a chaplain to the Prince. Family On 14 October 1712, John Theophilus Desaguliers married Joanna Pudsey, daughter of William and Anne Pudsey of Kidlington, near Oxford. For most of their married life, the couple lived at Channel Row, Westminster, where Desaguliers gave the majority of his lectures. When forced to leave due to work on Westminster Bridge, they separated and John Theophilus took lodgings at the Bedford Coffee House in Covent Garden and carried on his lectures there. The Desaguliers had four sons and three daughters, for most of whom they acquired aristocratic godparents, but only two children survived beyond infancy: John Theophilus jnr (1718–1751) graduated from Oxford, became a clergyman, and died childless, while Thomas (1721–1780) had a distinguished military career in the Royal Artillery, rising to the rank of General. He became chief firemaster at the Arsenal, Woolwich, and seems to have been the first to be employed by the English army to apply scientific principles to the production of cannon and the powers of gunnery, for which he was elected a Fellow of the Royal Society. It was Thomas Desaguliers who in part designed and supervised the fireworks for the first performance of Handel's Music for the Royal Fireworks in Green Park. He later became an equerry to King George III. Final years John Theophilus Desaguliers had long suffered from gout. He died at his lodgings in the Bedford Coffee House on 29 February 1744 and was buried on 6 March 1744 in a prestigious location within the Savoy Chapel in London. The chapel was probably chosen for its Huguenot associations and in memory of Desaguliers's origins. The press announcements of his death referred to him as 'a gentleman universally known and esteemed'. In his will Desaguliers left his estate to his elder son, who organised the publication of the second edition of his "Course of Experimental Philosophy". Although never a wealthy man, he did not die in poverty, as suggested by the oft-quoted but inaccurate lines of the poet James Cawthorn:
How poor neglected Desaguiliers fell!
How he who taught two gracious kings to view
All Boyle ennobled, and all Bacon knew,
Died in a cell, without a friend to save,
Without a guinea, and without a grave.
These are taken from a long poem entitled "The Vanity of Human Enjoyment" (1749) in which the poet attempted to draw attention to the general lack of funding for men of science and not Desaguliers in particular. Portraits There are two known engravings, by Peter Pelham and by James Tookey, taken from a lost portrait of Desaguliers painted in about 1725 by Hans Hysing, and an engraving by R. Scaddon of a Thomas Frye painting, also apparently lost, which showed the subject as an old man in 1743. An engraving by Etienne-Jehandier Desrochers was almost certainly made in 1735 when Desaguliers was on his only visit to Paris. 
There is also an oil attributed to Jonathan Richardson. Publications Desaguliers wrote on many topics for the Philosophical Transactions of the Royal Society, produced several editions of notes for the auditors of his lectures and wrote occasional poetry. He translated technical works from French and Latin into English, often adding his own comments. His own Course of Experimental Philosophy was translated into Dutch and French. Some original works
A Sermon Preach'd before the King at Hampton Court (London, 1717)
The Newtonian System of the World, the Best Model of Government: An Allegorical Poem (Westminster, 1728)
A Course of Experimental Philosophy, 1st edition, Vol I (London, 1734) and Vol II (London, 1744)
A Dissertation Concerning Electricity (London, 1742)
Some translations
Ozanam, Jacques, A Treatise of Fortification (Oxford, 1711)
Ozanam, Jacques, A Treatise of Gnomonicks, or Dialling (Oxford, 1712)
Gauger, Nicolas, Fires Improv'd: Being a New Method of Building Chimneys (London, 1st ed., 1715; 2nd ed., 1736)
Mariotte, Edmé, The Motion of Water and other Fluids, being a Treatise on Hydrostaticks (London, 1718)
'sGravesande, Willem, Mathematical Elements of Natural Philosophy Confirmed by Experiment, or an Introduction to Sir Isaac Newton's Philosophy (London, 1720)
Pitcairn, Archibald, The Whole Works of Dr Archibald Pitcairn (treatise on physic translated from Latin in collaboration with George Sewell) (2nd ed., London, 1727)
Vaucanson, Jacques, An Account of the Mechanism of an Automaton (London, 1742)
See also Electric charge Development of a practical steam engine Direct bonding Dynamometer Ventilation (architecture) References Further reading
Baker, C. H. Collins, and Baker, Muriel (1949) James Brydges First Duke of Chandos. Oxford: Clarendon Press.
Berman, Ric (2012) Foundations of Modern Freemasonry: The Grand Architects: Political Change & the Scientific Enlightenment, 1714–1740 (Sussex Academic Press), Chapter 2.
Campbell, James W.P. (2020), "The Significance of John Theophilus Desaguliers's Course of Experimental Philosophy to the History of Hydraulics and what it reveals about the first Pump-Driven Fountains", pp. 331–347 in James W.P. Campbell, Nina Baker, Karey Draper, Michael Driver, Michael Heaton, Yiting Pan, Natcha Ruamsanitwong and David Yeomans (eds.), Iron, Steel and Buildings: Studies in the History of Construction: The Proceedings of the Seventh Conference of the Construction History Society, Cambridge: Construction History Society.
Carpenter, A. T. (2011), John Theophilus Desaguliers: A Natural Philosopher, Engineer and Freemason in Newtonian England (London: Continuum/Bloomsbury).
Mackey, Albert G. (1966), An Encyclopedia of Freemasonry, reprint edition (Chicago: The Masonic History Company).
Priestley, Joseph (1769), The History and Present State of Electricity: With Original Experiments (Google eBook), pp. 61–67; accessed 12 May 2014.
Stephens, H. M. (2004) Desaguliers, Thomas (1721–1780), rev. 
Jonathan Spain, Oxford Dictionary of National Biography, Oxford University Press; online edn, Jan 2014, accessed 12 May 2014. External links Masonic biography British Journal of Psychology 1683 births 1744 deaths People from La Rochelle Alumni of Christ Church, Oxford English chaplains 18th-century British inventors English physicists English scientists French–English translators Fellows of Hertford College, Oxford Fellows of the Royal Society Recipients of the Copley Medal English male writers Grand masters of the Premier Grand Lodge of England Freemasons of the Premier Grand Lodge of England French emigrants to England People from the Bailiwick of Guernsey People from Islington (district) Tribologists
John Theophilus Desaguliers
[ "Materials_science" ]
3,054
[ "Tribology", "Tribologists" ]
1,498,040
https://en.wikipedia.org/wiki/Rydberg%20atom
A Rydberg atom is an excited atom with one or more electrons that have a very high principal quantum number, n. The higher the value of n, the farther the electron is from the nucleus, on average. Rydberg atoms have a number of peculiar properties including an exaggerated response to electric and magnetic fields, long decay periods and electron wavefunctions that approximate, under some conditions, classical orbits of electrons about the nuclei. The core electrons shield the outer electron from the electric field of the nucleus such that, from a distance, the electric potential looks identical to that experienced by the electron in a hydrogen atom. In spite of its shortcomings, the Bohr model of the atom is useful in explaining these properties. Classically, an electron in a circular orbit of radius r, about a hydrogen nucleus of charge +e, obeys Newton's second law: ke²/r² = mv²/r, where k = 1/(4πε0) and m is the electron mass. Orbital angular momentum is quantized in units of ħ: mvr = nħ. Combining these two equations leads to Bohr's expression for the orbital radius in terms of the principal quantum number, n: r = n²ħ²/(kme²) = a0n², where a0 is the Bohr radius. It is now apparent why Rydberg atoms have such peculiar properties: the radius of the orbit scales as n² (the n = 137 state of hydrogen has an atomic radius ~1 μm) and the geometric cross-section as n⁴. Thus, Rydberg atoms are extremely large, with loosely bound valence electrons, easily perturbed or ionized by collisions or external fields. Because the binding energy of a Rydberg electron is proportional to 1/r and hence falls off like 1/n², the energy level spacing falls off like 1/n³, leading to ever more closely spaced levels converging on the first ionization energy. These closely spaced Rydberg states form what is commonly referred to as the Rydberg series. Figure 2 shows some of the energy levels of the lowest three values of orbital angular momentum in lithium. History The existence of the Rydberg series was first demonstrated in 1885 when Johann Balmer discovered a simple empirical formula for the wavelengths of light associated with transitions in atomic hydrogen. Three years later, the Swedish physicist Johannes Rydberg presented a generalized and more intuitive version of Balmer's formula that came to be known as the Rydberg formula. This formula indicated the existence of an infinite series of ever more closely spaced discrete energy levels converging on a finite limit. This series was qualitatively explained in 1913 by Niels Bohr with his semiclassical model of the hydrogen atom in which quantized values of angular momentum lead to the observed discrete energy levels. A full quantitative derivation of the observed spectrum was given by Wolfgang Pauli in 1926, following the development of quantum mechanics by Werner Heisenberg and others. Methods of production The only truly stable state of a hydrogen-like atom is the ground state with n = 1. The study of Rydberg states requires a reliable technique for exciting ground state atoms to states with a large value of n. Electron impact excitation Much early experimental work on Rydberg atoms relied on the use of collimated beams of fast electrons incident on ground-state atoms. Inelastic scattering processes can use the electron kinetic energy to increase the atoms' internal energy, exciting them to a broad range of different states including many high-lying Rydberg states. Because the electron can retain any arbitrary amount of its initial kinetic energy, this process results in a population with a broad spread of different energies. 
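Whichever production method is used, the size of the resulting atom and the energy that must be supplied to reach a given level follow directly from the hydrogenic scaling laws quoted above. The short Python sketch below is an illustration added here (it is not drawn from the sources; the constants are standard CODATA values) that evaluates the radius, geometric cross-section, binding energy and ground-state excitation energy for a few values of n:

import math

A0 = 5.29177210903e-11   # Bohr radius, in metres
RY = 13.605693122994     # Rydberg energy, in eV

def radius(n):
    """Bohr-model orbital radius a0 * n**2, in metres."""
    return A0 * n ** 2

def cross_section(n):
    """Geometric cross-section pi * r**2, in m**2 (scales as n**4)."""
    return math.pi * radius(n) ** 2

def binding_energy(n):
    """Hydrogenic binding energy Ry / n**2, in eV."""
    return RY / n ** 2

def excitation_energy(n):
    """Energy needed to promote ground-state hydrogen to level n, in eV."""
    return RY * (1.0 - 1.0 / n ** 2)

for n in (1, 10, 100, 137):
    print(f"n={n:3d}  r={radius(n):.3e} m  sigma={cross_section(n):.3e} m^2  "
          f"E_b={binding_energy(n):.3e} eV  E_exc={excitation_energy(n):.3f} eV")

For n = 137 the radius comes out at about 9.9 × 10⁻⁷ m, the micrometre-sized atom mentioned above, while the excitation energy approaches the full 13.6 eV ionization energy; this is why reaching Rydberg states of hydrogen optically requires vacuum-ultraviolet photons, a point taken up in the discussion of optical excitation below.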
Charge exchange excitation Another mainstay of early Rydberg atom experiments relied on charge exchange between a beam of ions and a population of neutral atoms of another species, resulting in the formation of a beam of highly excited atoms. Again, because the kinetic energy of the interaction can contribute to the final internal energies of the constituents, this technique populates a broad range of energy levels. Optical excitation The arrival of tunable dye lasers in the 1970s allowed a much greater level of control over populations of excited atoms. In optical excitation, the incident photon is absorbed by the target atom, resulting in a precise final state energy. The problem of producing single state, mono-energetic populations of Rydberg atoms thus becomes the somewhat simpler problem of precisely controlling the frequency of the laser output. This form of direct optical excitation is generally limited to experiments with the alkali metals, because the ground state binding energy in other species is generally too high to be accessible with most laser systems. For atoms with a large valence electron binding energy (equivalent to a large first ionization energy), the excited states of the Rydberg series are inaccessible with conventional laser systems. Initial collisional excitation can make up the energy shortfall, allowing optical excitation to be used to select the final state. Although the initial step excites to a broad range of intermediate states, the precision inherent in the optical excitation process means that the laser light only interacts with a specific subset of atoms in a particular state, exciting to the chosen final state. Hydrogenic potential An atom in a Rydberg state has a valence electron in a large orbit far from the ion core; in such an orbit, the outermost electron feels an almost hydrogenic Coulomb potential, UC, from a compact ion core consisting of a nucleus with Z protons and the lower electron shells filled with Z−1 electrons. An electron in the spherically symmetric Coulomb potential has potential energy: UC = −ke²/r. The similarity of the effective potential "seen" by the outer electron to the hydrogen potential is a defining characteristic of Rydberg states and explains why the electron wavefunctions approximate to classical orbits in the limit of the correspondence principle. In other words, the electron's orbit resembles the orbit of planets inside a solar system, similar to what was seen in the obsolete but visually useful Bohr and Rutherford models of the atom. There are three notable exceptions that can be characterized by the additional term added to the potential energy: An atom may have two (or more) electrons in highly excited states with comparable orbital radii. In this case, the electron-electron interaction gives rise to a significant deviation from the hydrogen potential. For an atom in a multiple Rydberg state, the additional term, Uee, includes a summation over each pair of highly excited electrons: Uee = Σi<j ke²/|ri − rj|. If the valence electron has very low angular momentum (interpreted classically as an extremely eccentric elliptical orbit), then it may pass close enough to polarise the ion core, giving rise to a 1/r⁴ core polarization term in the potential: Upol = −αdk²e²/(2r⁴), where αd is the dipole polarizability. The interaction between an induced dipole and the charge that produces it is always attractive, so this contribution is always negative. Figure 3 shows how the polarization term modifies the potential close to the nucleus. 
If the outer electron penetrates the inner electron shells, it will “see” more of the charge of the nucleus and hence experience a greater force. In general, the modification to the potential energy is not simple to calculate and must be based on knowledge of the geometry of the ion core. Quantum-mechanical details Quantum-mechanically, a state with abnormally high n refers to an atom in which the valence electron(s) have been excited into a formerly unpopulated electron orbital with higher energy and lower binding energy. In hydrogen the binding energy is given by: E = −Ry/n², where Ry = 13.6 eV is the Rydberg constant. The low binding energy at high values of n explains why Rydberg states are susceptible to ionization. Additional terms in the potential energy expression for a Rydberg state, on top of the hydrogenic Coulomb potential energy, require the introduction of a quantum defect, δℓ, into the expression for the binding energy: E = −Ry/(n − δℓ)². Electron wavefunctions The long lifetimes of Rydberg states with high orbital angular momentum can be explained in terms of the overlapping of wavefunctions. The wavefunction of an electron in a high ℓ state (high angular momentum, “circular orbit”) has very little overlap with the wavefunctions of the inner electrons and hence remains relatively unperturbed. The three exceptions to the definition of a Rydberg atom as an atom with a hydrogenic potential have an alternative, quantum mechanical description that can be characterized by the additional term(s) in the atomic Hamiltonian: If a second electron is excited into a state nᵢ, energetically close to the state of the outer electron nₒ, then its wavefunction becomes almost as large as the first (a double Rydberg state). This occurs as nᵢ approaches nₒ and leads to a condition where the sizes of the two electrons' orbits are related; a condition sometimes referred to as radial correlation. An electron-electron repulsion term must be included in the atomic Hamiltonian. Polarization of the ion core produces an anisotropic potential that causes an angular correlation between the motions of the two outermost electrons. This can be thought of as a tidal locking effect due to a non-spherically symmetric potential. A core polarization term must be included in the atomic Hamiltonian. The wavefunction of the outer electron in states with low orbital angular momentum ℓ is periodically localised within the shells of inner electrons and interacts with the full charge of the nucleus. Figure 4 shows a semi-classical interpretation of angular momentum states in an electron orbital, illustrating that low-ℓ states pass closer to the nucleus, potentially penetrating the ion core. A core penetration term must be added to the atomic Hamiltonian. In external fields The large separation between the electron and ion-core in a Rydberg atom makes possible an extremely large electric dipole moment, d. There is an energy associated with the presence of an electric dipole in an electric field, F, known in atomic physics as a Stark shift: ΔE = −d·F. Depending on the sign of the projection of the dipole moment onto the local electric field vector, a state may have energy that increases or decreases with field strength (low-field and high-field seeking states respectively). The narrow spacing between adjacent n-levels in the Rydberg series means that states can approach degeneracy even for relatively modest field strengths. 
The theoretical field strength at which a crossing would occur assuming no coupling between the states is given by the Inglis–Teller limit: F = e/(12πε0a0²n⁵). In the hydrogen atom, the pure 1/r Coulomb potential does not couple Stark states from adjacent n-manifolds, resulting in real crossings, as shown in figure 5. The presence of additional terms in the potential energy can lead to coupling, resulting in avoided crossings, as shown for lithium in figure 6. Applications and further research Precision measurements of trapped Rydberg atoms The radiative lifetimes for the decay of atoms in metastable states to the ground state are important for understanding astrophysical observations and tests of the standard model. Investigating diamagnetic effects The large sizes and low binding energies of Rydberg atoms lead to a high magnetic susceptibility, χ. As diamagnetic effects scale with the area of the orbit and the area is proportional to the radius squared (A ∝ n⁴), effects impossible to detect in ground state atoms become obvious in Rydberg atoms, which demonstrate very large diamagnetic shifts. Rydberg atoms exhibit strong electric-dipole coupling to electromagnetic fields and have been used to detect radio communications. In plasmas Rydberg atoms form commonly in plasmas due to the recombination of electrons and positive ions; low-energy recombination results in fairly stable Rydberg atoms, while recombination of electrons and positive ions with high kinetic energy often forms autoionising Rydberg states. Rydberg atoms' large sizes and susceptibility to perturbation and ionisation by electric and magnetic fields are an important factor determining the properties of plasmas. Condensation of Rydberg atoms forms Rydberg matter, most often observed in the form of long-lived clusters. The de-excitation is significantly impeded in Rydberg matter by exchange-correlation effects in the non-uniform electron liquid formed on condensation by the collective valence electrons, which causes the extended lifetime of clusters. In astrophysics (radio recombination lines) Rydberg atoms occur in space due to the dynamic equilibrium between photoionization by hot stars and recombination with electrons, which at these very low densities usually proceeds via the electron re-joining the atom in a very high n state, and then gradually dropping through the energy levels to the ground state, giving rise to a sequence of recombination spectral lines spread across the electromagnetic spectrum. The very small differences in energy between Rydberg states differing in n by one or a few means that photons emitted in transitions between such states have low frequencies and long wavelengths, even up to radio waves. The first detection of such a radio recombination line (RRL) was by Soviet radio astronomers in 1964; the line, designated H90α, was emitted by hydrogen atoms in the n = 90 state. Today, Rydberg atoms of hydrogen, helium and carbon in space are routinely observed via RRLs, the brightest of which are the Hnα lines corresponding to transitions from n+1 to n. Weaker lines, Hnβ and Hnγ, with Δn = 2 and 3, are also observed. Corresponding lines for helium and carbon are Henα, Cnα, and so on. The discovery of lines with n > 100 was surprising, as even in the very low densities of interstellar space, many orders of magnitude lower than the best laboratory vacuums attainable on Earth, it had been expected that such highly-excited atoms would be frequently destroyed by collisions, rendering the lines unobservable. 
Improved theoretical analysis showed that this effect had been overestimated, although collisional broadening does eventually limit detectability of the lines at very high n. The record wavelength for hydrogen is λ = 73 cm for H253α, implying atomic diameters of a few microns, and for carbon, λ = 18 metres, from C732α, from atoms with a diameter of 57 microns. RRLs from hydrogen and helium are produced in highly ionized regions (H II regions and the Warm Ionised Medium). Carbon has a lower ionization energy than hydrogen, and so singly-ionized carbon atoms, and the corresponding recombining Rydberg states, exist further from the ionizing stars, in so-called C II regions which form thick shells around H II regions. The larger volume partially compensates for the low abundance of C compared to H, making the carbon RRLs detectable. In the absence of collisional broadening, the wavelengths of RRLs are modified only by the Doppler effect, so the measured wavelength, λ, is usually converted to radial velocity, v = c(λ − λ0)/λ0, where λ0 is the rest-frame wavelength. H II regions in our Galaxy can have radial velocities up to ±150 km/s, due to their motion relative to Earth as both orbit the centre of the Galaxy. These motions are regular enough that the measured velocity can be used to estimate the position of the H II region on the line of sight and so its 3D position in the Galaxy. Because all astrophysical Rydberg atoms are hydrogenic, the frequencies of transitions for H, He, and C are given by the same formula, except for the slightly different reduced mass of the valence electron for each element. This gives helium and carbon lines apparent Doppler shifts of −100 and −140 km/s, respectively, relative to the corresponding hydrogen line. RRLs are used to detect ionized gas in distant regions of our Galaxy, and also in external galaxies, because the radio photons are not absorbed by interstellar dust, which blocks photons from the more familiar optical transitions. They are also used to measure the temperature of the ionized gas, via the ratio of line intensity to the continuum bremsstrahlung emission from the plasma. Since the temperature of H II regions is regulated by line emission from heavier elements such as C, N, and O, recombination lines also indirectly measure their abundance (metallicity). RRLs are spread across the radio spectrum with relatively small intervals in wavelength between them, so they frequently occur in radio spectral observations primarily targeted at other spectral lines. For instance, H166α, H167α, and H168α are very close in wavelength to the 21-cm line from neutral hydrogen. This allows radio astronomers to study both the neutral and the ionized interstellar medium from the same set of observations. Since RRLs are numerous and weak, common practice is to average the velocity spectra of several neighbouring lines, to improve sensitivity. There are a variety of other potential applications of Rydberg atoms in cosmology and astrophysics. Strongly interacting systems Due to their large size, Rydberg atoms can exhibit very large electric dipole moments. Calculations using perturbation theory show that this results in strong interactions between two close Rydberg atoms. Coherent control of these interactions combined with their relatively long lifetime makes them a suitable candidate to realize a quantum computer. In 2010 two-qubit gates were achieved experimentally. Strongly interacting Rydberg atoms also feature quantum critical behavior, which makes them interesting to study on their own. 
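An order-of-magnitude sketch in Python (added here for illustration; it is not a calculation from the sources) shows why these interactions are so strong. Taking the characteristic Rydberg dipole moment as d ≈ n²ea0, the resonant dipole-dipole energy between two atoms separated by a distance R is roughly d²/(4πε0R³):

E_CHARGE = 1.602176634e-19   # elementary charge, C
A0 = 5.29177210903e-11       # Bohr radius, m
K_COULOMB = 8.9875517923e9   # 1/(4*pi*eps0), N m^2 C^-2
H_PLANCK = 6.62607015e-34    # Planck constant, J s

def dipole_moment(n):
    """Characteristic Rydberg dipole moment ~ n**2 * e * a0, in C m."""
    return n ** 2 * E_CHARGE * A0

def dd_interaction_hz(n, r):
    """Rough resonant dipole-dipole energy d**2/(4 pi eps0 r**3), in Hz."""
    d = dipole_moment(n)
    return K_COULOMB * d ** 2 / r ** 3 / H_PLANCK

for n in (30, 70, 100):
    print(f"n={n:3d}  V/h = {dd_interaction_hz(n, 5e-6):.3e} Hz")

Even at a separation of 5 μm, enormous by atomic standards, this estimate ranges from a few MHz at n = 30 to nearly a GHz at n = 100, far exceeding the linewidths of narrow excitation lasers; interactions of this strength underlie the blockade mechanism exploited in the two-qubit gate experiments mentioned above.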
Current research directions Since the 2000s, Rydberg atom research has encompassed broadly five directions: sensing, quantum optics, quantum computation, quantum simulation and Rydberg states of matter. High electric dipole moments between Rydberg atomic states are used for radio frequency and terahertz sensing and imaging, including non-demolition measurements of individual microwave photons. Electromagnetically induced transparency was used in combination with strong interactions between two atoms excited to a Rydberg state to provide a medium that exhibits strongly nonlinear behaviour at the level of individual optical photons. The tunable interaction between Rydberg states also enabled the first quantum simulation experiments. In October 2018, the United States Army Research Laboratory publicly discussed efforts to develop a super wideband radio receiver using Rydberg atoms. In March 2020, the laboratory announced that its scientists analysed the Rydberg sensor's sensitivity to oscillating electric fields over an enormous range of frequencies—from 0 to 10¹² hertz (the spectrum down to 0.3 mm wavelength). The Rydberg sensor can reliably detect signals over the entire spectrum and compare favourably with other established electric field sensor technologies, such as electro-optic crystals and dipole antenna-coupled passive electronics. Classical simulation A simple 1/r potential results in a closed Keplerian elliptical orbit. In the presence of an external electric field, Rydberg atoms can obtain very large electric dipole moments, making them extremely susceptible to perturbation by the field. Figure 7 shows how application of an external electric field (known in atomic physics as a Stark field) changes the geometry of the potential, dramatically changing the behaviour of the electron. A Coulombic potential does not apply any torque, as the force is always antiparallel to the position vector (always pointing along a line running between the electron and the nucleus): τ = r × F = 0. With the application of a static electric field, the electron feels a continuously changing torque. The resulting trajectory becomes progressively more distorted over time, eventually going through the full range of angular momentum from L = Lmax, to a straight line (L = 0), to the initial orbit in the opposite sense (L = −Lmax). The time period of the oscillation in angular momentum (the time to complete the trajectory in figure 8) almost exactly matches the quantum-mechanically predicted period for the wavefunction to return to its initial state, demonstrating the classical nature of the Rydberg atom. See also Heavy Rydberg system Old quantum theory Quantum chaos Rydberg molecule Rydberg polaron References Atoms
Rydberg atom
[ "Physics" ]
4,005
[ "Atoms", "Matter" ]
1,498,076
https://en.wikipedia.org/wiki/Unit%20in%20the%20last%20place
In computer science and numerical analysis, unit in the last place or unit of least precision (ulp) is the spacing between two consecutive floating-point numbers, i.e., the value the least significant digit (rightmost digit) represents if it is 1. It is used as a measure of accuracy in numeric calculations. Definition The most common definition is: In radix b with precision p, if b^e ≤ |x| < b^(e+1), then ulp(x) = b^(max(e, emin) − p + 1), where emin is the minimal exponent of the normal numbers. In particular, ulp(x) = b^(e−p+1) for normal numbers, and ulp(x) = b^(emin−p+1) for subnormals. Another definition, suggested by John Harrison, is slightly different: ulp(x) is the distance between the two closest straddling floating-point numbers a and b (i.e., satisfying a ≤ x ≤ b and a ≠ b), assuming that the exponent range is not upper-bounded. These definitions differ only at signed powers of the radix. The IEEE 754 specification—followed by all modern floating-point hardware—requires that the result of an elementary arithmetic operation (addition, subtraction, multiplication, division, and square root since 1985, and FMA since 2008) be correctly rounded, which implies that in rounding to nearest, the rounded result is within 0.5 ulp of the mathematically exact result, using John Harrison's definition; conversely, this property implies that the distance between the rounded result and the mathematically exact result is minimized (but for the halfway cases, it is satisfied by two consecutive floating-point numbers). Reputable numeric libraries compute the basic transcendental functions to between 0.5 and about 1 ulp. Only a few libraries compute them within 0.5 ulp, this problem being complex due to the Table-maker's dilemma. Since the 2010s, advances in floating-point mathematics have allowed correctly rounded functions to be almost as fast on average as these earlier, less accurate functions, which theoretically would produce only one incorrect rounding out of 1000 random floating-point inputs. A correctly rounded function would also be fully reproducible. Examples Example 1 Let x be a positive floating-point number and assume that the active rounding mode is round to nearest, ties to even, denoted RN. If ulp(x) ≤ 1, then RN(x + 1) > x. Otherwise, RN(x + 1) = x or RN(x + 1) = x + ulp(x), depending on the value of the least significant digit and the exponent of x. This is demonstrated in the following Haskell code typed at an interactive prompt:
> until (\x -> x == x+1) (+1) 0 :: Float
1.6777216e7
> it-1
1.6777215e7
> it+1
1.6777216e7
Here we start with 0 in single precision (binary32) and repeatedly add 1 until the operation does not change the value. Since the significand for a single-precision number contains 24 bits, the first integer that is not exactly representable is 2²⁴+1, and this value rounds to 2²⁴ in round to nearest, ties to even. Thus the result is equal to 2²⁴. Example 2 The following example in Java approximates π as a floating-point value by finding the two double values bracketing it: p0 < π < p1.
// π with 20 decimal digits
BigDecimal π = new BigDecimal("3.14159265358979323846");
// truncate to a double floating point
double p0 = π.doubleValue();
// -> 3.141592653589793 (hex: 0x1.921fb54442d18p1)
// p0 is smaller than π, so find next number representable as double
double p1 = Math.nextUp(p0);
// -> 3.1415926535897936 (hex: 0x1.921fb54442d19p1)
Then ulp(π) is determined as p1 − p0: 
// ulp(π) is the difference between p1 and p0 BigDecimal ulp = new BigDecimal(p1).subtract(new BigDecimal(p0)); // -> 4.44089209850062616169452667236328125E-16 // (this is precisely 2**(-51)) // same result when using the standard library function double ulpMath = Math.ulp(p0); // -> 4.440892098500626E-16 (hex: 0x1.0p-51) Example 3 Another example, in Python, also typed at an interactive prompt, is: >>> x = 1.0 >>> p = 0 >>> while x != x + 1: ... x = x * 2 ... p = p + 1 ... >>> x 9007199254740992.0 >>> p 53 >>> x + 2 + 1 9007199254740996.0 In this case, we start with x = 1 and repeatedly double it until x = x + 1. Similarly to Example 1, the result is x = 2^53 because the double-precision floating-point format uses a 53-bit significand. Language support The Boost C++ libraries provide the functions boost::math::float_next, boost::math::float_prior, boost::math::nextafter and boost::math::float_advance to obtain nearby (and distant) floating-point values, and boost::math::float_distance(a, b) to calculate the floating-point distance between two doubles. The C language library provides functions to calculate the next floating-point number in some given direction: nextafterf and nexttowardf for float, nextafter and nexttoward for double, nextafterl and nexttowardl for long double, declared in <math.h>. It also provides the macros FLT_EPSILON, DBL_EPSILON, LDBL_EPSILON, which represent the positive difference between 1.0 and the next greater representable number in the corresponding type (i.e. the ulp of one). The Java standard library provides the functions Math.ulp(double) and Math.ulp(float). They were introduced with Java 1.5. The Swift standard library provides access to the next floating-point number in some given direction via the instance properties nextDown and nextUp. It also provides the instance property ulp and the type property ulpOfOne (which corresponds to C macros like FLT_EPSILON) for Swift's floating-point types. See also IEEE 754 ISO/IEC 10967, part 1 requires an ulp function Least significant bit (LSB) Machine epsilon Round-off error References Bibliography Goldberg, David (March 1991). "Rounding Error", in "What Every Computer Scientist Should Know About Floating-Point Arithmetic", Computing Surveys, ACM. Retrieved from http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html#689. Computer arithmetic Floating point
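The examples above are easy to reproduce with the standard library. A minimal sketch, assuming Python 3.9 or later for math.ulp and math.nextafter; all printed values are standard IEEE 754 binary64 results:

import math
import sys

# The ulp of 1.0 is the binary64 machine epsilon, 2**-52.
print(math.ulp(1.0) == 2.0 ** -52 == sys.float_info.epsilon)   # True

# Reproduce Example 2: the next double above pi, and the one-ulp gap.
p0 = math.pi
p1 = math.nextafter(p0, math.inf)   # next representable double above p0
print(p1 - p0 == math.ulp(p0))       # True: the gap is exactly one ulp

# Reproduce Example 3's observation: at x = 2**53 the spacing is 2,
# so adding 1 no longer changes the value.
x = 2.0 ** 53
print(math.ulp(x))                   # 2.0
print(x + 1 == x)                    # True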
Unit in the last place
[ "Mathematics" ]
1,489
[ "Computer arithmetic", "Arithmetic" ]
1,498,080
https://en.wikipedia.org/wiki/Picotechnology
The term picotechnology is a portmanteau of picometre and technology, intended to parallel the term nanotechnology. It is a hypothetical future level of technological manipulation of matter, on the scale of trillionths of a metre or picoscale (10^−12 m). This is three orders of magnitude smaller than a nanometre (and thus most nanotechnology) and two orders of magnitude smaller than most chemistry transformations and measurements. Picotechnology would involve the manipulation of matter at the atomic level. A further hypothetical development, femtotechnology, would involve working with matter at the subatomic level. Applications Picoscience is a term used by some futurists to refer to structuring of matter on a true picometre scale. Picotechnology was described as involving the alteration of the structure and chemical properties of individual atoms, typically through the manipulation of energy states of electrons within an atom to produce metastable (or otherwise stabilized) states with unusual properties, producing some form of exotic atom. Analogous transformations known to exist in the real world are redox chemistry, which can manipulate the oxidation states of atoms; excitation of electrons to metastable excited states as with lasers and some forms of saturable absorption; and the manipulation of the states of excited electrons in Rydberg atoms to encode information. However, none of these processes produces the types of exotic atoms described by futurists. Alternatively, picotechnology is used by some researchers in nanotechnology to refer to the fabrication of structures where atoms and devices are positioned with sub-nanometre accuracy. This is important where interaction with a single atom or molecule is desired, because of the strength of the interaction between two atoms which are very close. For example, the force between an atom in an atomic force microscope probe tip and an atom in a sample being studied varies exponentially with separation distance, and is sensitive to changes in position on the order of 50 to 100 picometres (due to Pauli exclusion at short ranges and van der Waals forces at long ranges). In popular culture The Chinese science fiction novel The Three-Body Problem features a plot point in which an advanced alien civilization imbues individual protons with supercomputing powers and subsequently manipulates said protons via quantum entanglement (the fictional name for these proton-sized supercomputers is "sophons"). See also Femtotechnology IBM in atoms, a 1989 demonstration by IBM of a technology capable of manipulating individual atoms Technological singularity "There's Plenty of Room at the Bottom", a 1959 lecture by physicist Richard Feynman on the direct manipulation of individual atoms References External links Picotechnology at the Nanosciences group at CEMES, France. Nanotechnology
Picotechnology
[ "Materials_science", "Engineering" ]
577
[ "Nanotechnology", "Materials science" ]
1,498,102
https://en.wikipedia.org/wiki/Femtotechnology
Femtotechnology is a term used in reference to the hypothetical manipulation of matter on the scale of a femtometer, or 10^−15 m. This is three orders of magnitude lower than picotechnology, at the scale of 10^−12 m, and six orders of magnitude lower than nanotechnology, at the scale of 10^−9 m. Theory Work in the femtometer range involves manipulation of excited energy states within atomic nuclei, specifically nuclear isomers, to produce metastable (or otherwise stabilized) states with unusual properties. In the extreme case, excited states of the individual nucleons that make up the atomic nucleus (protons and neutrons) are considered, ostensibly to tailor the behavioral properties of these particles. The most advanced form of molecular nanotechnology is often imagined to involve self-replicating molecular machines, and there has been some speculation suggesting something similar might be possible with analogues of molecules composed of nucleons rather than atoms. For example, the astrophysicist Frank Drake once speculated about the possibility of self-replicating organisms composed of such nuclear molecules living on the surface of a neutron star, a suggestion taken up in the science fiction novel Dragon's Egg by the physicist Robert Forward. It is thought by physicists that nuclear molecules may be possible, but they would be very short-lived, and whether they could actually be made to perform complex tasks such as self-replication, or what type of technology could be used to manipulate them, is unknown. Applications Practical applications of femtotechnology are currently considered to be unlikely. The spacings between nuclear energy levels require equipment capable of efficiently generating and processing gamma rays without equipment degradation. The nature of the strong interaction is such that excited nuclear states tend to be very unstable (unlike the excited electron states in Rydberg atoms), and there are a finite number of excited states below the nuclear binding energy, unlike the (in principle) infinite number of bound states available to an atom's electrons. Similarly, what is known about the excited states of individual nucleons seems to indicate that these do not produce behavior that in any way makes nucleons easier to use or manipulate, and indicates instead that these excited states are even less stable and fewer in number than the excited states of atomic nuclei. In fiction Femtotechnology plays a critical role in the 2005 science-fiction novel Pushing Ice. It also features in various stories by Greg Egan such as Riding the Crocodile, where he proposes the idea of a "strong bullet" which overcomes the instability of high atomic weight femto-structures by being accelerated to near light speed, letting it travel interstellar distances before impacting a target and constructing a stable nano-scale structure as it decays. See also Attophysics Femtochemistry Mode-locking, a laser technique producing pulses in the femtosecond range Ultrashort pulse References External links Femtotech? (Sub)Nuclear Scale Engineering and Computation There’s Plenty More Room at the Bottom: Beyond Nanotech to Femtotech Femtocomputing Hypothetical technology Nanotechnology
Femtotechnology
[ "Materials_science", "Engineering" ]
641
[ "Nanotechnology", "Materials science" ]
1,498,226
https://en.wikipedia.org/wiki/Consortium%20for%20the%20Barcode%20of%20Life
The Consortium for the Barcode of Life (CBOL) was an international initiative dedicated to supporting the development of DNA barcoding as a global standard for species identification. CBOL's Secretariat Office is hosted by the National Museum of Natural History, Smithsonian Institution, in Washington, DC. Barcoding was proposed in 2003 by Prof. Paul Hebert of the University of Guelph in Ontario as a way of distinguishing and identifying species with a short standardized gene sequence. Hebert proposed the 658 bases of the Folmer region of the mitochondrial gene cytochrome-C oxidase-1 as the standard barcode region. Hebert is the Director of the Biodiversity Institute of Ontario, the Canadian Centre for DNA Barcoding, and the International Barcode of Life Project (iBOL), all headquartered at the University of Guelph. The Barcode of Life Data Systems (BOLD) is also located at the University of Guelph. CBOL was created in May 2004 with support of the Alfred P. Sloan Foundation, following two meetings in 2003, also funded by the Sloan Foundation, at the Banbury Center, Cold Spring Harbor Laboratory. Since then, more than 200 organizations from more than 50 countries have joined CBOL and agreed to put their barcode data in a public database. CBOL promotes DNA barcoding through workshops, working groups, international conferences, outreach meetings to developing countries, planning meetings for barcoding projects, and production of outreach material to raise awareness of barcoding. CBOL's Database Working Group developed the data standard that GenBank, the European Bioinformatics Institute, and the DNA Data Bank of Japan have endorsed. CBOL's Plant Working Group proposed matK and rbcL as the standard barcode regions for land plants; CBOL approved this proposal in late 2005. The Fungal Working Group has identified ITS as the best barcode region for fungi, and CBOL's Protist Working Group is analyzing candidate regions for protistan groups. CBOL helped to plan and launch the global campaigns to barcode all species of fish and birds, and socioeconomically important groups like fruitflies. One of CBOL's primary contributions to the success of barcoding was its outreach efforts to government agencies (agriculture, environment, conservation, and others) and international organizations (CITES, Convention on Biological Diversity, Food and Agriculture Organization) that could benefit from barcoding. References External links Consortium for the Barcode of Life (CBOL) International Barcode of Life Project (iBOL) Barcode of Life Data Systems Taxonomy (biology) organizations Organizations established in 2007 Genetics organizations Smithsonian Institution International medical and health organizations DNA barcoding
Consortium for the Barcode of Life
[ "Biology" ]
547
[ "Genetics techniques", "Taxonomy (biology)", "DNA barcoding", "Taxonomy (biology) organizations", "Molecular genetics", "Phylogenetics" ]
1,498,475
https://en.wikipedia.org/wiki/List%20of%20honeydew%20sources
This is a list of honeydew sources. Honeydew is a sugary excretion from plant-sap-sucking insects such as aphids or scales. There are many trees that are hosts to aphids and scale insects that produce honeydew. Honeydew sources References Apidologie 33 (2002) 353–354 accessed Feb 2005 Some Ohio Nectar and Pollen Producing Plants Dr. James E. Tew Ohio State University Extension Fact Sheet 1998; accessed Feb 2005 New Zealand Honey; accessed May 2005 All about honey; accessed Feb 2005 New Zealand Honey accessed April 2007 Spanish unifloral honeys; accessed Feb 2016 accessed March 2016 Lists of plants Beekeeping Insect ecology Gardening lists Agriculture-related lists Garden pests
List of honeydew sources
[ "Biology" ]
149
[ "Lists of plants", "Garden pests", "Plants", "Lists of biota", "Pests (organism)" ]
1,498,615
https://en.wikipedia.org/wiki/Rudolf%20Haag
Rudolf Haag (17 August 1922 – 5 January 2016) was a German theoretical physicist who mainly dealt with fundamental questions of quantum field theory. He was one of the founders of the modern formulation of quantum field theory and he identified the formal structure in terms of the principle of locality and local observables. He also made important advances in the foundations of quantum statistical mechanics. Biography Rudolf Haag was born on 17 August 1922, in Tübingen, a university town in the middle of Baden-Württemberg. His family belonged to the cultured middle class. Haag's mother was the writer and politician Anna Haag. His father, Albert Haag, was a teacher of mathematics at a Gymnasium. After finishing high school in 1939, he visited his sister in London shortly before the beginning of World War II. He was interned as an enemy alien and spent the war in a camp for German civilians in Manitoba. There he used his spare time after the daily compulsory labour to study physics and mathematics as an autodidact. After the war, Haag returned to Germany and enrolled at the Technical University of Stuttgart in 1946, where he graduated as a physicist in 1948. In 1951, he received his doctorate at the University of Munich under the supervision of Fritz Bopp and became his assistant until 1956. In April 1953, he joined the CERN theoretical study group in Copenhagen directed by Niels Bohr. After a year, he returned to his assistant position in Munich and completed the German habilitation in 1954. From 1956 to 1957 he worked with Werner Heisenberg at the Max Planck Institute for Physics in Göttingen. From 1957 to 1959, he was a visiting professor at Princeton University and from 1959 to 1960 he worked at the University of Marseille. He became a professor of physics at the University of Illinois Urbana-Champaign in 1960. In 1965, he and Res Jost founded the journal Communications in Mathematical Physics. Haag remained the first editor-in-chief until 1973. In 1966, he accepted a professorship for theoretical physics at the University of Hamburg, where he stayed until he retired in 1987. After retirement, he worked on the concept of the quantum physical event. Haag developed an interest in music at an early age. He began learning the violin, but later preferred the piano, which he played almost every day. In 1948, Haag married Käthe Fues, with whom he had four children, Albert, Friedrich, Elisabeth, and Ulrich. After retirement, he moved together with his second wife Barbara Klie to Schliersee, a pastoral village in the Bavarian mountains. He died on 5 January 2016, in Fischhausen-Neuhaus, in southern Bavaria. Scientific career At the beginning of his career, Haag contributed significantly to the concepts of quantum field theory, including Haag's theorem, from which it follows that the interaction picture of quantum mechanics does not exist in quantum field theory. A new approach to the description of scattering processes of particles became necessary. In the following years Haag developed what is known as Haag–Ruelle scattering theory. During this work, he realized that the rigid relationship between fields and particles that had been postulated up to that point did not exist, and that the particle interpretation should be based on Albert Einstein's principle of locality, which assigns operators to regions of spacetime. These insights found their final formulation in the Haag–Kastler axioms for local observables of quantum field theories.
This framework uses elements of the theory of operator algebras and is therefore referred to as algebraic quantum field theory or, from the physical point of view, as local quantum physics. This concept proved fruitful for understanding the fundamental properties of any theory in four-dimensional Minkowski space. Without making assumptions about non-observable charge-changing fields, Haag, in collaboration with Sergio Doplicher and John E. Roberts, elucidated the possible structure of the superselection sectors of the observables in theories with short-range forces. Sectors can always be composed with one another; each sector satisfies either para-Bose or para-Fermi statistics, and for each sector there is a conjugate sector. These insights correspond to the additivity of charges in the particle interpretation, to the Bose–Fermi alternative for particle statistics, and to the existence of antiparticles. In the special case of simple sectors, a global gauge group and charge-carrying fields, which can generate all sectors from the vacuum state, were reconstructed from the observables. These results were later generalized for arbitrary sectors in the Doplicher–Roberts duality theorem. The application of these methods to theories in low-dimensional spaces also led to an understanding of the occurrence of braid group statistics and quantum groups. In quantum statistical mechanics, Haag, together with Nicolaas M. Hugenholtz and Marinus Winnink, succeeded in generalizing the Gibbs–von Neumann characterization of thermal equilibrium states using the KMS condition (named after Ryogo Kubo, Paul C. Martin, and Julian Schwinger) in such a way that it extends to infinite systems in the thermodynamic limit. It turned out that this condition also plays a prominent role in the theory of von Neumann algebras and resulted in the Tomita–Takesaki theory. This theory has proven to be a central element in structural analysis and recently also in the construction of concrete quantum field theoretical models. Together with Daniel Kastler and Ewa Trych-Pohlmeyer, Haag also succeeded in deriving the KMS condition from the stability properties of thermal equilibrium states. Together with Huzihiro Araki, Daniel Kastler, and Masamichi Takesaki, he also developed a theory of chemical potential in this context. The framework created by Haag and Kastler for studying quantum field theories in Minkowski space can be transferred to theories in curved spacetime. By working with Klaus Fredenhagen, Heide Narnhofer, and Ulrich Stein, Haag made important contributions to the understanding of the Unruh effect and Hawking radiation. Haag had a certain mistrust towards what he viewed as speculative developments in theoretical physics but occasionally dealt with such questions. His best-known contribution is the Haag–Łopuszański–Sohnius theorem, which classifies the possible supersymmetries of the S-matrix that are not covered by the Coleman–Mandula theorem. Honors and awards In 1970 Haag received the Max Planck Medal for outstanding achievements in theoretical physics and in 1997 the Henri Poincaré Prize for his fundamental contributions to quantum field theory as one of the founders of the modern formulation. From 1980 Haag was a member of the German National Academy of Sciences Leopoldina and from 1981 of the Göttingen Academy of Sciences. From 1979 he was a corresponding member of the Bavarian Academy of Sciences and from 1987 of the Austrian Academy of Sciences.
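For orientation, the KMS condition mentioned above can be stated compactly in one standard modern formulation (textbook notation, not a quotation from Haag's papers): a state $\omega$ on the observable algebra with time evolution $\tau_t$ satisfies the KMS condition at inverse temperature $\beta$ if, for a suitable dense set of observables $A$ and $B$,

$$ \omega\bigl( A \, \tau_{i\beta}(B) \bigr) = \omega(B A), $$

i.e., the function $t \mapsto \omega(A\,\tau_t(B))$ extends analytically to the strip $0 < \operatorname{Im} t < \beta$ with boundary value $\omega(\tau_t(B)\,A)$.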
Publications Textbook Selected scientific works (Haag's theorem.) (Haag–Ruelle scattering theory.) (Haag–Kastler axioms.) (Doplicher-Haag-Roberts analysis of the superselection structure.) (KMS condition.) (Stability and KMS condition.) (KMS condition and chemical potential.) (Unruh effect.) (Hawking radiation.) (Classification of Supersymmetry.) (Concept of Event.) Others See also Axiomatic quantum field theory Communications in Mathematical Physics Constructive quantum field theory Haag–Łopuszański–Sohnius theorem Haag–Ruelle scattering theory Haag's theorem Hilbert's sixth problem Local quantum physics Principle of locality Quantum field theory Quantum field theory in curved spacetime Notes References Further reading (With photo). (With photo). External links . . . . Theoretical physicists Mathematical physicists German theoretical physicists 20th-century German physicists 21st-century German physicists Academic staff of the University of Hamburg Winners of the Max Planck Medal Members of the Austrian Academy of Sciences Members of the Bavarian Academy of Sciences Members of the German National Academy of Sciences Leopoldina 1922 births 2016 deaths People associated with CERN
Rudolf Haag
[ "Physics" ]
1,675
[ "Theoretical physics", "Theoretical physicists" ]
1,498,625
https://en.wikipedia.org/wiki/Mellin%20inversion%20theorem
In mathematics, the Mellin inversion formula (named after Hjalmar Mellin) tells us conditions under which the inverse Mellin transform, or equivalently the inverse two-sided Laplace transform, is defined and recovers the transformed function. Method If is analytic in the strip , and if it tends to zero uniformly as for any real value c between a and b, with its integral along such a line converging absolutely, then if we have that Conversely, suppose is piecewise continuous on the positive real numbers, taking a value halfway between the limit values at any jump discontinuities, and suppose the integral is absolutely convergent when . Then is recoverable via the inverse Mellin transform from its Mellin transform . These results can be obtained by relating the Mellin transform to the Fourier transform by a change of variables and then applying an appropriate version of the Fourier inversion theorem. Boundedness condition The boundedness condition on can be strengthened if is continuous. If is analytic in the strip , and if , where K is a positive constant, then as defined by the inversion integral exists and is continuous; moreover the Mellin transform of is for at least . On the other hand, if we are willing to accept an original which is a generalized function, we may relax the boundedness condition on to simply make it of polynomial growth in any closed strip contained in the open strip . We may also define a Banach space version of this theorem. If we denote by the weighted Lp space of complex valued functions on the positive reals such that where ν and p are fixed real numbers with , then if is in with , then belongs to with and Here, functions that are identical everywhere except on a set of measure zero are identified. Since the two-sided Laplace transform can be defined as these theorems can be immediately applied to it also. See also Mellin transform Nachbin's theorem References External links Tables of Integral Transforms at EqWorld: The World of Mathematical Equations. Integral transforms Theorems in complex analysis Laplace transforms
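For concreteness, the transform pair in question has the standard textbook form (with $c$ any real number in the strip $a < c < b$):

$$ \varphi(s) = \int_0^{\infty} x^{s-1} f(x) \, dx , \qquad f(x) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} x^{-s} \, \varphi(s) \, ds . $$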
Mellin inversion theorem
[ "Mathematics" ]
410
[ "Theorems in mathematical analysis", "Theorems in complex analysis" ]
1,498,679
https://en.wikipedia.org/wiki/Kemble%27s%20Cascade
Kemble's Cascade (designated Kemble 1) is an asterism located in the constellation Camelopardalis. It is an apparent straight line of more than 20 colourful 5th to 10th magnitude stars over a distance of approximately 3 degrees (five moon diameters) of the night sky. It appears to "flow" into the compact open cluster NGC 1502, which can be found at one end. Discovery The asterism was named by Walter Scott Houston in honour of Father Lucian Kemble (1922–1999), a Franciscan friar and amateur astronomer who wrote a letter to Houston about the asterism, describing it as "a beautiful cascade of faint stars tumbling from the northwest down to the open cluster NGC 1502" that he had discovered while sweeping the sky with a pair of 7×35 binoculars. Houston was so impressed that he wrote an article on the asterism that appeared in his Deep Sky Wonders column in the astronomy magazine Sky & Telescope in 1980, in which he named it Kemble's Cascade. Father Lucian Kemble was also associated with two other asterisms, Kemble 2 (an asterism in the constellation of Draco that resembles a small version of Cassiopeia) and Kemble's Kite (an asterism that resembles a kite with a tail which is also in the constellation of Camelopardalis). In addition, an asteroid, 78431 Kemble, was named in his honour. References External links Kemble's Cascade Astronomy Picture of the Day, 2000-08-14 Kemble's Cascade Astronomy Picture of the Day, 2010-01-28 Kemble 2 Kemble's Kite Asterisms (astronomy) Camelopardalis
Kemble's Cascade
[ "Astronomy" ]
350
[ "Astronomy stubs", "Constellations", "Camelopardalis", "Sky regions", "Asterisms (astronomy)" ]
1,498,680
https://en.wikipedia.org/wiki/It%C3%B4%20calculus
Itô calculus, named after Kiyosi Itô, extends the methods of calculus to stochastic processes such as Brownian motion (see Wiener process). It has important applications in mathematical finance and stochastic differential equations. The central concept is the Itô stochastic integral, a stochastic generalization of the Riemann–Stieltjes integral in analysis. The integrands and the integrators are now stochastic processes: where is a locally square-integrable process adapted to the filtration generated by , which is a Brownian motion or, more generally, a semimartingale. The result of the integration is then another stochastic process. Concretely, the integral from 0 to any particular is a random variable, defined as a limit of a certain sequence of random variables. The paths of Brownian motion fail to satisfy the requirements to be able to apply the standard techniques of calculus. So with the integrand a stochastic process, the Itô stochastic integral amounts to an integral with respect to a function which is not differentiable at any point and has infinite variation over every time interval. The main insight is that the integral can be defined as long as the integrand is adapted, which loosely speaking means that its value at time can only depend on information available up until this time. Roughly speaking, one chooses a sequence of partitions of the interval from 0 to and constructs Riemann sums. Every time we are computing a Riemann sum, we are using a particular instantiation of the integrator. It is crucial which point in each of the small intervals is used to compute the value of the function. The limit is then taken in probability as the mesh of the partition is going to zero. Numerous technical details have to be taken care of to show that this limit exists and is independent of the particular sequence of partitions. Typically, the left end of the interval is used. Important results of Itô calculus include the integration by parts formula and Itô's lemma, which is a change of variables formula. These differ from the formulas of standard calculus due to quadratic variation terms. In mathematical finance, this evaluation strategy for the integral is conceptualized as first deciding what to do, then observing the change in the prices. The integrand is how much stock we hold, the integrator represents the movement of the prices, and the integral is how much money we have in total including what our stock is worth, at any given moment. The prices of stocks and other traded financial assets can be modeled by stochastic processes such as Brownian motion or, more often, geometric Brownian motion (see Black–Scholes). Then, the Itô stochastic integral represents the payoff of a continuous-time trading strategy consisting of holding an amount H_t of the stock at time t. In this situation, the condition that is adapted corresponds to the necessary restriction that the trading strategy can only make use of the available information at any time. This prevents the possibility of unlimited gains through clairvoyance: buying the stock just before each uptick in the market and selling before each downtick. Similarly, the condition that is adapted implies that the stochastic integral will not diverge when calculated as a limit of Riemann sums. Notation The process defined before as is itself a stochastic process with time parameter t, which is also sometimes written as . Alternatively, the integral is often written in differential form , which is equivalent to .
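Written out explicitly (a standard formulation consistent with the construction sketched above, with B the Brownian integrator and H the adapted integrand):

$$ Y_t = \int_0^t H_s \, dB_s = \lim_{n \to \infty} \sum_{i=1}^{k_n} H_{t_{i-1}} \bigl( B_{t_i} - B_{t_{i-1}} \bigr), \qquad 0 = t_0 < t_1 < \cdots < t_{k_n} = t, $$

where the limit is taken in probability as the mesh of the partition goes to zero and, as noted, the left endpoint $t_{i-1}$ is used to evaluate the integrand. In differential shorthand this is written $dY_t = H_t \, dB_t$.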
As Itô calculus is concerned with continuous-time stochastic processes, it is assumed that an underlying filtered probability space is given. The σ-algebra represents the information available up until time , and a process is adapted if is -measurable. A Brownian motion is understood to be an -Brownian motion, which is just a standard Brownian motion with the properties that is -measurable and that is independent of for all . Integration with respect to Brownian motion The Itô integral can be defined in a manner similar to the Riemann–Stieltjes integral, that is as a limit in probability of Riemann sums; such a limit does not necessarily exist pathwise. Suppose that is a Wiener process (Brownian motion) and that is a right-continuous (càdlàg), adapted and locally bounded process. If is a sequence of partitions of with mesh width going to zero, then the Itô integral of with respect to up to time is a random variable It can be shown that this limit converges in probability. For some applications, such as martingale representation theorems and local times, the integral is needed for processes that are not continuous. The predictable processes form the smallest class that is closed under taking limits of sequences and contains all adapted left-continuous processes. If is any predictable process such that for every then the integral of with respect to can be defined, and is said to be -integrable. Any such process can be approximated by a sequence H_n of left-continuous, adapted and locally bounded processes, in the sense that in probability. Then, the Itô integral is where, again, the limit can be shown to converge in probability. The stochastic integral satisfies the Itô isometry which holds when is bounded or, more generally, when the integral on the right hand side is finite. Itô processes An Itô process is defined to be an adapted stochastic process that can be expressed as the sum of an integral with respect to Brownian motion and an integral with respect to time, Here, is a Brownian motion and it is required that σ is a predictable -integrable process, and μ is predictable and (Lebesgue) integrable. That is, for each . The stochastic integral can be extended to such Itô processes, This is defined for all locally bounded and predictable integrands. More generally, it is required that be -integrable and be Lebesgue integrable, so that Such predictable processes are called -integrable. An important result for the study of Itô processes is Itô's lemma. In its simplest form, for any twice continuously differentiable function on the reals and Itô process as described above, it states that is itself an Itô process satisfying This is the stochastic calculus version of the change of variables formula and chain rule. It differs from the standard result due to the additional term involving the second derivative of , which comes from the property that Brownian motion has non-zero quadratic variation. Semimartingales as integrators The Itô integral is defined with respect to a semimartingale . These are processes which can be decomposed as for a local martingale and finite variation process . Important examples of such processes include Brownian motion, which is a martingale, and Lévy processes. For a left continuous, locally bounded and adapted process the integral exists, and can be calculated as a limit of Riemann sums. Let be a sequence of partitions of with mesh going to zero. This limit converges in probability.
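For reference, the Itô isometry and the simplest form of Itô's lemma discussed above read, in standard notation (with $X$ an Itô process satisfying $dX_t = \mu_t \, dt + \sigma_t \, dB_t$ and $f$ twice continuously differentiable):

$$ \mathbb{E}\!\left[ \left( \int_0^t H_s \, dB_s \right)^{\!2} \right] = \mathbb{E}\!\left[ \int_0^t H_s^2 \, ds \right], \qquad df(X_t) = \Bigl( f'(X_t)\,\mu_t + \tfrac{1}{2} f''(X_t)\,\sigma_t^2 \Bigr) dt + f'(X_t)\,\sigma_t \, dB_t . $$

The $\tfrac{1}{2} f''\,\sigma^2\,dt$ term is exactly the quadratic-variation correction that distinguishes Itô's lemma from the ordinary chain rule.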
The stochastic integral of left-continuous processes is general enough for studying much of stochastic calculus. For example, it is sufficient for applications of Itô's Lemma, changes of measure via Girsanov's theorem, and for the study of stochastic differential equations. However, it is inadequate for other important topics such as martingale representation theorems and local times. The integral extends to all predictable and locally bounded integrands, in a unique way, such that the dominated convergence theorem holds. That is, if and for a locally bounded process , then in probability. The uniqueness of the extension from left-continuous to predictable integrands is a result of the monotone class lemma. In general, the stochastic integral can be defined even in cases where the predictable process is not locally bounded. If then and are bounded. Associativity of stochastic integration implies that is -integrable, with integral , if and only if and . The set of -integrable processes is denoted by . Properties The following properties can be found in works such as and : The stochastic integral is a càdlàg process. Furthermore, it is a semimartingale. The discontinuities of the stochastic integral are given by the jumps of the integrator multiplied by the integrand. The jump of a càdlàg process at a time is , and is often denoted by . With this notation, . A particular consequence of this is that integrals with respect to a continuous process are always themselves continuous. Associativity. Let , be predictable processes, and be -integrable. Then, is integrable if and only if is -integrable, in which case Dominated convergence. Suppose that and , where is an -integrable process. Then . Convergence is in probability at each time . In fact, it converges uniformly on compact sets in probability. The stochastic integral commutes with the operation of taking quadratic covariations. If and are semimartingales then any -integrable process will also be -integrable, and . A consequence of this is that the quadratic variation process of a stochastic integral is equal to an integral of a quadratic variation process, Integration by parts As with ordinary calculus, integration by parts is an important result in stochastic calculus. The integration by parts formula for the Itô integral differs from the standard result due to the inclusion of a quadratic covariation term. This term comes from the fact that Itô calculus deals with processes with non-zero quadratic variation, which only occurs for infinite variation processes (such as Brownian motion). If and are semimartingales then where is the quadratic covariation process. The result is similar to the integration by parts theorem for the Riemann–Stieltjes integral but has an additional quadratic variation term. Itô's lemma Itô's lemma is the version of the chain rule or change of variables formula which applies to the Itô integral. It is one of the most powerful and frequently used theorems in stochastic calculus. For a continuous -dimensional semimartingale and twice continuously differentiable function from to , it states that is a semimartingale and, This differs from the chain rule used in standard calculus due to the term involving the quadratic covariation . The formula can be generalized to include an explicit time-dependence in and in other ways (see Itô's lemma). Martingale integrators Local martingales An important property of the Itô integral is that it preserves the local martingale property.
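Written out, the integration by parts formula just described takes the standard form

$$ X_t Y_t = X_0 Y_0 + \int_0^t X_{s-} \, dY_s + \int_0^t Y_{s-} \, dX_s + [X, Y]_t , $$

where $[X, Y]$ is the quadratic covariation; the left limits $X_{s-}$, $Y_{s-}$ matter only when the semimartingales jump, and for continuous processes they can be replaced by the processes themselves.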
If is a local martingale and is a locally bounded predictable process then is also a local martingale. For integrands which are not locally bounded, there are examples where is not a local martingale. However, this can only occur when is not continuous. If is a continuous local martingale then a predictable process is -integrable if and only if for each , and is always a local martingale. The most general statement for a discontinuous local martingale is that if is locally integrable then exists and is a local martingale. Square integrable martingales For bounded integrands, the Itô stochastic integral preserves the space of square integrable martingales, which is the set of càdlàg martingales such that is finite for all . For any such square integrable martingale , the quadratic variation process is integrable, and the Itô isometry states that This equality holds more generally for any martingale such that is integrable. The Itô isometry is often used as an important step in the construction of the stochastic integral, by defining to be the unique extension of this isometry from a certain class of simple integrands to all bounded and predictable processes. p-Integrable martingales For any , and bounded predictable integrand, the stochastic integral preserves the space of -integrable martingales. These are càdlàg martingales such that is finite for all . However, this is not always true in the case where . There are examples of integrals of bounded predictable processes with respect to martingales which are not themselves martingales. The maximum process of a càdlàg process is written as . For any and bounded predictable integrand, the stochastic integral preserves the space of càdlàg martingales such that is finite for all . If then this is the same as the space of -integrable martingales, by Doob's inequalities. The Burkholder–Davis–Gundy inequalities state that, for any given , there exist positive constants, depending on the exponent but not on the martingale or the time, such that for all càdlàg local martingales . These are used to show that if is integrable and is a bounded predictable process then and, consequently, is a -integrable martingale. More generally, this statement is true whenever is integrable. Existence of the integral Proofs that the Itô integral is well defined typically proceed by first looking at very simple integrands, such as piecewise constant, left continuous and adapted processes where the integral can be written explicitly. Such simple predictable processes are linear combinations of terms of the form for stopping times and -measurable random variables , for which the integral is This is extended to all simple predictable processes by the linearity of in . For a Brownian motion , the property that it has independent increments with zero mean and variance can be used to prove the Itô isometry for simple predictable integrands, By a continuous linear extension, the integral extends uniquely to all predictable integrands satisfying in such a way that the Itô isometry still holds. It can then be extended to all -integrable processes by localization. This method allows the integral to be defined with respect to any Itô process. For a general semimartingale , the decomposition into a local martingale plus a finite variation process can be used. Then, the integral can be shown to exist separately with respect to and and combined using linearity, , to get the integral with respect to X.
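The Burkholder–Davis–Gundy inequalities mentioned above take the standard form

$$ c_p \, \mathbb{E}\!\left[ [M]_t^{p/2} \right] \;\le\; \mathbb{E}\!\left[ (M_t^*)^p \right] \;\le\; C_p \, \mathbb{E}\!\left[ [M]_t^{p/2} \right], \qquad M_t^* = \sup_{s \le t} |M_s|, $$

for local martingales $M$ with $M_0 = 0$, with positive constants $c_p$, $C_p$ depending only on the exponent $p$.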
The standard Lebesgue–Stieltjes integral allows integration to be defined with respect to finite variation processes, so the existence of the Itô integral for semimartingales will follow from any construction for local martingales. For a càdlàg square integrable martingale , a generalized form of the Itô isometry can be used. First, the Doob–Meyer decomposition theorem is used to show that a decomposition exists, where is a martingale and is a right-continuous, increasing and predictable process starting at zero. This uniquely defines , which is referred to as the predictable quadratic variation of . The Itô isometry for square integrable martingales is then which can be proved directly for simple predictable integrands. As with the case above for Brownian motion, a continuous linear extension can be used to uniquely extend to all predictable integrands satisfying . This method can be extended to all local square integrable martingales by localization. Finally, the Doob–Meyer decomposition can be used to decompose any local martingale into the sum of a local square integrable martingale and a finite variation process, allowing the Itô integral to be constructed with respect to any semimartingale. Many other proofs exist which apply similar methods but which avoid the need to use the Doob–Meyer decomposition theorem, such as the use of the quadratic variation [M] in the Itô isometry, the use of the Doléans measure for submartingales, or the use of the Burkholder–Davis–Gundy inequalities instead of the Itô isometry. The latter applies directly to local martingales without having to first deal with the square integrable martingale case. Alternative proofs exist that make use only of the fact that is càdlàg, adapted, and the set {H · X_t : H simple previsible, |H| ≤ 1} is bounded in probability for each time , which is an alternative definition for to be a semimartingale. A continuous linear extension can be used to construct the integral for all left-continuous and adapted integrands with right limits everywhere (caglad or L-processes). This is general enough to be able to apply techniques such as Itô's lemma. Also, a Khintchine inequality can be used to prove the dominated convergence theorem and extend the integral to general predictable integrands. Differentiation in Itô calculus The Itô calculus is first and foremost defined as an integral calculus as outlined above. However, there are also different notions of "derivative" with respect to Brownian motion: Malliavin derivative Malliavin calculus provides a theory of differentiation for random variables defined over Wiener space, including an integration by parts formula. Martingale representation The following result allows martingales to be expressed as Itô integrals: if is a square-integrable martingale on a time interval with respect to the filtration generated by a Brownian motion , then there is a unique adapted square integrable process on such that almost surely, and for all . This representation theorem can be interpreted formally as saying that α is the "time derivative" of with respect to Brownian motion , since α is precisely the process that must be integrated up to time to obtain , as in deterministic calculus. Itô calculus for physicists In physics, stochastic differential equations (SDEs), such as Langevin equations, are usually used rather than stochastic integrals. Here an Itô stochastic differential equation (SDE) is often formulated via where is Gaussian white noise with and Einstein's summation convention is used.
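Such an SDE is integrated numerically in practice. Here is a minimal Euler–Maruyama sketch for a one-dimensional Itô SDE, using the Ornstein–Uhlenbeck process as an example; the drift, noise strength, and step parameters are illustrative choices, not taken from the text:

import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama for the Ito SDE  dx = -theta * x * dt + sigma * dW
# (an Ornstein-Uhlenbeck process; parameter values are illustrative).
theta, sigma = 1.0, 0.5
T, n = 10.0, 10_000
dt = T / n

x = np.empty(n + 1)
x[0] = 1.0
dW = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments ~ N(0, dt)
for i in range(n):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * dW[i]

# The empirical variance over the second half of the path should be
# roughly the stationary value sigma**2 / (2 * theta) = 0.125.
print(x[n // 2:].var())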
If is a function of the , then Itô's lemma has to be used: An Itô SDE as above also corresponds to a Stratonovich SDE which reads SDEs frequently occur in physics in Stratonovich form, as limits of stochastic differential equations driven by colored noise if the correlation time of the noise term approaches zero. For a recent treatment of different interpretations of stochastic differential equations see for example . See also Stochastic calculus Itô's lemma Otto calculus Stratonovich integral Semimartingale Wiener process References Hagen Kleinert (2004). Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, World Scientific (Singapore); Paperback . Fifth edition available online: PDF-files, with generalizations of Itô's lemma for non-Gaussian processes. Mathematical Finance Programming in TI-Basic, which implements Ito calculus for TI-calculators. Definitions of mathematical integration Stochastic calculus Mathematical finance Integral calculus
Itô calculus
[ "Mathematics" ]
3,911
[ "Applied mathematics", "Mathematical finance", "Integral calculus", "Calculus" ]
1,498,759
https://en.wikipedia.org/wiki/Ramsey%20problem
The Ramsey problem, or Ramsey pricing, or Ramsey–Boiteux pricing, is a second-best policy problem concerning what prices a public monopoly should charge for the various products it sells in order to maximize social welfare (the sum of producer and consumer surplus) while earning enough revenue to cover its fixed costs. Under Ramsey pricing, the price markup over marginal cost is inverse to the price elasticity of demand and the price elasticity of supply: the more elastic the product's demand or supply, the smaller the markup. Frank P. Ramsey found this in 1927 in the context of optimal taxation: the more elastic the demand or supply, the smaller the optimal tax. The rule was later applied by Marcel Boiteux (1956) to natural monopolies (industries with decreasing average cost). A natural monopoly earns negative profits if it sets price equal to marginal cost, so it must set prices for some or all of the products it sells above marginal cost if it is to be viable without government subsidies. Ramsey pricing says to apply the largest markups to the goods with the least elastic (that is, least price-sensitive) demand or supply. Description In a first-best world, without the need to earn enough revenue to cover fixed costs, the optimal solution would be to set the price for each product equal to its marginal cost. If, however, the average cost curve is declining where the demand curve crosses it, as happens when the fixed cost is large, this would result in a price less than average cost, and the firm could not survive without subsidy. The Ramsey problem is to decide exactly how much to raise each product's price above its marginal cost so the firm's revenue equals its total cost. If there is just one product, the problem is simple: raise the price to where it equals average cost. If there are two products, there is leeway to raise one product's price more and the other's less, so long as the firm can break even overall. The principle is applicable to pricing of goods that the government is the sole supplier of (public utilities) or regulation of natural monopolies, such as telecommunications firms, where it is efficient for only one firm to operate but the government regulates its prices so it does not earn above-market profits. In practice, government regulators are concerned with more than maximizing the sum of producer and consumer surplus. They may wish to put more weight on the surplus of politically powerful consumers, or they may wish to help the poor by putting more weight on their surplus. Moreover, many people will see Ramsey pricing as unfair, especially if they do not understand why it maximizes total surplus. In some contexts, Ramsey pricing is a form of price discrimination because the two products with different elasticities of demand are one physically identical product sold to two different groups of customers, e.g., electricity to residential customers and to commercial customers. Ramsey pricing says to charge whichever group has less elastic demand a higher price in order to maximize overall social welfare. Customers sometimes object to it on that basis, since they care about their own individual welfare, not social welfare. Customers who are charged more may consider this unfair, especially since they, with less elastic demand, would say they "need" the good more. In such situations regulators may further limit an operator’s ability to adopt Ramsey prices.
Formal presentation and solution Consider the problem of a regulator seeking to set prices for a multiproduct monopolist with cost function C(q), where qi is the output of good i and pi is its price. Suppose that the products are sold in separate markets so demands are independent, and demand for good i is with inverse demand function Total revenue is Total welfare is given by The problem is to maximize by choice of the prices, subject to the requirement that profit equal some fixed value . Typically, the fixed value is zero, which is to say that the regulator wants to maximize welfare subject to the constraint that the firm not lose money. The constraint can be stated generally as: This problem may be solved using the Lagrange multiplier technique to yield the optimal output values, and backing out the optimal prices. The first order conditions on are where is a Lagrange multiplier, Ci(q) is the partial derivative of C(q) with respect to qi, evaluated at q, and is the elasticity of demand for good i. Dividing by and rearranging yields where . That is, the price margin compared to marginal cost for good is again inversely proportional to the elasticity of demand. Note that the Ramsey mark-up is smaller than the ordinary monopoly markup of the Lerner rule, since the required profit level (here zero) is below the unconstrained monopoly profit. The Ramsey-price setting monopoly is in a second-best equilibrium, between ordinary monopoly and perfect competition. Ramsey condition An easier way to solve this problem in a two-output context is the Ramsey condition. According to Ramsey, in order to minimize deadweight losses, one must increase prices on goods with rigid and with elastic demands/supplies in the same proportion, relative to the prices that would be charged at the first-best solution (price equal to marginal cost). See also Amoroso–Robinson relation Lerner index References Economic policy Monopoly (economics) Mathematical economics Tax
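The inverse-elasticity rule can be checked numerically. The following is a minimal sketch, not from the source: two goods with constant-elasticity demands, a common constant marginal cost, and an illustrative fixed cost; every number here is an assumption chosen so that break-even is feasible.

import numpy as np
from scipy.optimize import minimize

# Two goods with constant-elasticity demands q_i = a_i * p_i**(-eps_i).
a = np.array([100.0, 100.0])
eps = np.array([1.5, 3.0])   # good 0 has the less elastic demand
c = 1.0                      # common constant marginal cost
F = 30.0                     # fixed cost the markups must recover

def profit(p):
    q = a * p ** -eps
    return np.sum((p - c) * q) - F

def consumer_surplus(p):
    # CS(p) = integral from p to infinity of a * x**(-eps) dx,
    # finite here because both elasticities exceed 1.
    return np.sum(a * p ** (1 - eps) / (eps - 1))

# Maximizing welfare subject to break-even is equivalent to maximizing
# consumer surplus subject to profit == 0.
res = minimize(lambda p: -consumer_surplus(p),
               x0=np.array([2.0, 2.0]),
               bounds=[(1.001, 10.0)] * 2,
               constraints={"type": "eq", "fun": profit})
p = res.x
markup = (p - c) / p
print("Ramsey prices:", p)                    # less elastic good is priced higher
print("markup * elasticity:", markup * eps)   # approximately equal across goods

The last line exhibits the Ramsey rule: markup times elasticity comes out (approximately) the same for both goods, even though the prices themselves differ.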
Ramsey problem
[ "Mathematics" ]
1,058
[ "Applied mathematics", "Mathematical economics" ]
1,498,828
https://en.wikipedia.org/wiki/Amobarbital
Amobarbital (formerly known as amylobarbitone; the soluble sodium salt is known as sodium amytal) is a drug that is a barbiturate derivative. It has sedative-hypnotic properties. It is a white crystalline powder with no odor and a slightly bitter taste. It was first synthesized in Germany in 1923. It is considered a short- to intermediate-acting barbiturate. If amobarbital is taken for extended periods of time, physiological and psychological dependence can develop. Amobarbital withdrawal mimics delirium tremens and may be life-threatening. Amobarbital was manufactured by Eli Lilly and Company in the United States under the brand name Amytal in bright blue bullet-shaped capsules (known as Pulvules) or pink tablets (known as Diskets) containing 50, 100, or 200 milligrams of the drug. The drug was also manufactured generically. Amobarbital was widely misused, known as "Blue Heavens" on the street. Amytal, as well as Tuinal, a combination drug containing equal quantities of secobarbital and amobarbital, were both manufactured by Eli Lilly until the late 1990s. However, as the popularity of benzodiazepines increased, prescriptions for these medications became increasingly rare beginning in the mid to late 1980s. Pharmacology In an in vitro study in rat thalamic slices, amobarbital worked by activating GABA-A receptors, which decreased input resistance, depressed burst and tonic firing, especially in ventrobasal and intralaminar neurons, while at the same time increasing burst duration and mean conductance at individual chloride channels; this increased both the amplitude and decay time of inhibitory postsynaptic currents. Amobarbital has been used in a study to inhibit mitochondrial electron transport in the rat heart in an attempt to preserve mitochondrial function following reperfusion. A 1988 study found that amobarbital increases benzodiazepine receptor binding in vivo with less potency than secobarbital and pentobarbital (in descending order), but greater than phenobarbital and barbital (in descending order). (Secobarbital > pentobarbital > amobarbital > phenobarbital > barbital) It has an LD50 in mice of 212 mg/kg s.c. Metabolism Amobarbital undergoes both hydroxylation to form 3'-hydroxyamobarbital, and N-glucosidation to form 1-(beta-D-glucopyranosyl)-amobarbital. Indications Approved Anxiety Epilepsy Insomnia Wada test Unapproved/off-label When given slowly by an intravenous route, sodium amobarbital has a reputation for acting as a so-called truth serum. Under the influence, a person will divulge information that under normal circumstances they would block. This is most likely due to loss of inhibition. As such, the drug was first employed clinically by William Bleckwenn at the University of Wisconsin to circumvent inhibitions in psychiatric patients. The use of amobarbital as a truth serum has lost credibility due to the discovery that a subject can be coerced into having a "false memory" of the event. The drug may be used intravenously to interview patients with catatonic mutism, sometimes combined with caffeine to prevent sleep. It was used by the United States armed forces during World War II in an attempt to treat shell shock and return soldiers to front-line duties. This use has since been discontinued as the powerful sedation, cognitive impairment, and incoordination induced by the drug greatly reduced soldiers' usefulness in the field.
Contraindications The following drugs should be avoided when taking amobarbital: Antiarrhythmics, such as verapamil and digoxin Antiepileptics, such as phenobarbital or carbamazepine Antihistamines, such as doxylamine and clemastine Antihypertensives, such as atenolol and propranolol Ethanol Benzodiazepines, such as diazepam, clonazepam, nitrazepam, alprazolam, or lorazepam Chloramphenicol Chlorpromazine Cyclophosphamide Ciclosporin Digitoxin Doxorubicin Doxycycline Methoxyflurane Metronidazole Narcotic analgesics, such as morphine and oxycodone Quinine Steroids, such as prednisone and cortisone Theophylline Warfarin Interactions Amobarbital has been known to decrease the effects of hormonal birth control. Overdose Some side effects of overdose include confusion (severe); decrease in or loss of reflexes; drowsiness (severe); fever; irritability (continuing); low body temperature; poor judgment; shortness of breath or slow or troubled breathing; slow heartbeat; slurred speech; staggering; trouble in sleeping; unusual movements of the eyes; weakness (severe). Severe overdose may result in death without intervention. Chemistry Amobarbital (5-ethyl-5-isoamylbarbituric acid), like all barbiturates, is synthesized by reacting malonic acid derivatives with urea derivatives. In particular, in order to make amobarbital, α-ethyl-α-isoamylmalonic ester is reacted with urea (in the presence of sodium ethoxide). Society and culture The drug has been used to gain confessions from murderers such as Andres English-Howard, who strangled his girlfriend to death but claimed innocence. He was surreptitiously administered the drug by his lawyer, and under the influence of it he revealed why he strangled her and under what circumstances. On the night of August 28, 1951, the housekeeper of actor Robert Walker found him to be in an emotional state. She called Walker's psychiatrist who arrived and administered amobarbital for sedation. Walker was allegedly drinking prior to his emotional outburst, and it is believed the combination of amobarbital and alcohol resulted in a severe reaction. As a result, he passed out and stopped breathing, and all efforts to resuscitate him failed. Walker died at 32 years old. The British actor and comedian Tony Hancock killed himself in Australia in 1968 using the drug in combination with alcohol. Eli Lilly manufactured amobarbital under the brand name Amytal until it was discontinued in the 1980s and replaced largely by the benzodiazepine family of drugs. Amytal was also widely abused. Street names for amobarbital include "blues", "blue angels", "blue birds", "blue devils", and "blue heavens" due to their blue capsule. Cultural references In Len Deighton's 1967 novel An Expensive Place to Die, a combination of amytal and LSD is used to make the unnamed protagonist respond to questioning about his activities. In Thomas Pynchon's 1973 novel Gravity's Rainbow, sodium amytal is used by a military intelligence unit as some kind of truth serum to extract Tyrone Slothrop's (the novel's protagonist) ideas on racism of white Americans against Afro-Americans during the 1930s in his home state of Massachusetts (Chapter 1). In the 1975 Columbo episode "A Deadly State of Mind" the victim Nadia Donner has Amobarbital found in her blood according to the autopsy report. In the 1994 comedic action thriller True Lies, the protagonist Harry Tasker (portrayed by Arnold Schwarzenegger) is given sodium amytal as a truth serum, when questioned by terrorists. 
In the 2001 Law & Order: Special Victims Unit episode "Repression" (season 3, episode 1), a therapist (Shirley Knight) treats her patient with "sodium amytal". The character Dr. George Huang (BD Wong) claims that the drug leaves a patient so susceptible to suggestion that the therapist is able to implant false memories in the patient. In "Tourist Season" (2003), season 3, episode 7 of the animated series Sealab 2021, Dr. Quentin Q. Quinn, voiced by Brett Butler, uses "sodium amytal" to induce amnesia in a group of tourists, in order to prevent them from taking action against Sealab as a result of a hare-brained scheme by Capt. Murphy. He gives the dose in a free alcoholic beverage as a parting gift before the tourists leave, and in quantities that, he says, could "make an elephant forget". In the 2005 movie Hellraiser: Hellworld, the main antagonist uses "sodium amytal" to secretly anesthetize the people he believes were responsible for his son's death, and induce extreme hallucinations in them. Hannibal s2e4. In episode 14 of the 2016 television drama A Fist Within Four Walls, Man Zeun (portrayed by Princeton Lock) uses sodium amytal in an attempt to force gambling addict Lee Fat into paying debts. In 2022, in "Diophantine Pseudonym", Episode 06 of Dan Brown's The Lost Symbol, sodium amytal is used by the CIA as a truth serum. In the 2024 television series SAS Rogue Heroes Series 2 Episode 04, sodium amytal is offered to a soldier suffering from PTSD-related insomnia. See also Blue 88 Depressant Tuinal Notes Hypnotics Barbiturates German inventions Drugs developed by Eli Lilly and Company Substances discovered in the 1920s
Amobarbital
[ "Biology" ]
2,023
[ "Hypnotics", "Behavior", "Sleep" ]
1,498,886
https://en.wikipedia.org/wiki/Triumph%20%28comics%29
Triumph (William MacIntyre) is a fictional superhero in the DC Comics universe whose first full appearance was in Justice League America #92 (August 1994). He was created by Brian Augustyn, Mark Waid, and Howard Porter, though the character is primarily associated with writer Christopher Priest. Years after Triumph's initial appearance, Priest revealed that the character was partially based on DC Comics creative director Neal Pozner: "His shtick was: Triumph was always right... it was what made him so annoying to his fellow heroes... He was. At the end of the day, Neal would be proven right. That fact, more than anything else, annoyed many staffers beyond reason". Fictional character biography Triumph is a member of the Justice League who enters a dimensional limbo while saving Earth, erasing the world's memory of him. He later returns to Earth and rejoins the Justice League. However, Triumph's arrogant nature causes him to be expelled from the group. During this time, he loses his soul to the demon Neron. In JLA, Triumph becomes destitute and resorts to selling stolen League items to supervillains to pay rent. He is influenced by the fifth-dimensional imp Lkz before the Justice League and Justice Society stop him. The Spectre later transforms Triumph into an ice statue that is stored in the Watchtower and later destroyed by Prometheus. Several years after Triumph's death, it is revealed that he has a son named Jonathan. He battles the Teen Titans before Raven convinces him to stand down. In Trinity, Triumph is resurrected in an alternate reality before sacrificing himself to save Tomorrow Woman. Powers and abilities Triumph can manipulate and sense the electromagnetic spectrum, enabling him to control electricity and sense television and radio signals. He can also generate force fields and change the density of other objects. References External links Digital Priest archive of the script for Triumph #1 Christopher J. Priest essay on Triumph (archived) Triumph at the DC Comics wiki Characters created by Mark Waid Comics based on real people Comics characters introduced in 1994 DC Comics characters with superhuman strength DC Comics LGBTQ superheroes DC Comics LGBTQ supervillains DC Comics male superheroes DC Comics male supervillains DC Comics metahumans DC Comics titles Fictional characters with density control abilities Fictional characters with electric or magnetic abilities Fictional gay men
Triumph (comics)
[ "Physics" ]
482
[ "Density", "Fictional characters with density control abilities", "Physical quantities" ]
1,498,921
https://en.wikipedia.org/wiki/Blitzen%20%28computer%29
The Blitzen was a miniaturized SIMD (single instruction, multiple data) computer system designed for NASA in the late 1980s by a team of researchers at Duke University, North Carolina State University and the Microelectronics Center of North Carolina. The Blitzen was composed of a control unit and a set of simple processors connected in a grid topology. The machine influenced, to some extent, the design of the MasPar MP-1 computer. Applications of the Blitzen machine include high-speed image processing, where each processor operates on a pixel of the input image and communicates with its grid neighbours to apply image-processing filters to the image. References Classes of computers SIMD computing
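To make the SIMD grid-processing model concrete, here is a minimal sketch in Python/NumPy. This is not Blitzen code (the machine ran its own instruction set); it only mimics the idea that every "processor" (one per pixel) executes the same instruction at once, using values from its grid neighbours. The five-point averaging filter, the function name, and the edge handling are illustrative assumptions.

```python
import numpy as np

def simd_blur(image: np.ndarray) -> np.ndarray:
    """One SIMD step: every pixel simultaneously averages itself
    with its four grid neighbours (borders handled by replication)."""
    padded = np.pad(image, 1, mode="edge")   # replicate border pixels
    north  = padded[:-2, 1:-1]
    south  = padded[2:,  1:-1]
    west   = padded[1:-1, :-2]
    east   = padded[1:-1, 2:]
    centre = padded[1:-1, 1:-1]
    # The same operation is applied to all "processors" at once,
    # which is the essence of the SIMD grid model.
    return (north + south + west + east + centre) / 5.0

if __name__ == "__main__":
    img = np.random.rand(8, 8)    # stand-in for an input image
    print(simd_blur(img).shape)   # (8, 8)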
Blitzen (computer)
[ "Technology" ]
140
[ "Computer hardware stubs", "Computer systems", "Computing stubs", "Computers", "Classes of computers" ]
1,499,013
https://en.wikipedia.org/wiki/Kenkichi%20Iwasawa
Kenkichi Iwasawa (Iwasawa Kenkichi, September 11, 1917 – October 26, 1998) was a Japanese mathematician who is known for his influence on algebraic number theory. Biography Iwasawa was born in Shinshuku-mura, a town near Kiryū, in Gunma Prefecture. He attended elementary school there, but later moved to Tokyo to attend Musashi High School. From 1937 to 1940 Iwasawa studied as an undergraduate at the University of Tokyo, after which he entered graduate school at the same institution and became an assistant in the Department of Mathematics. In 1945 he was awarded a Doctor of Science degree. However, that same year Iwasawa became sick with pleurisy, and was unable to return to his position at the university until April 1947. From 1949 to 1955 he worked as an assistant professor at the University of Tokyo. In 1950, Iwasawa was invited to Cambridge, Massachusetts, to give a lecture at the International Congress of Mathematicians on his method of studying Dedekind zeta functions using integration over ideles and duality of adeles; this method was also independently obtained by John Tate and is sometimes called Iwasawa–Tate theory. Iwasawa spent the next two years at the Institute for Advanced Study in Princeton, and in the spring of 1952 he was offered a position at the Massachusetts Institute of Technology, where he worked until 1967. From 1967 until his retirement in 1986, Iwasawa served as Professor of Mathematics at Princeton. He returned to Tokyo with his wife in 1987. Among Iwasawa's most famous students are Robert F. Coleman, Bruce Ferrero, Ralph Greenberg, Gustave Solomon, Larry Washington, and Eugene M. Luks. Research Iwasawa is known for introducing what is now called Iwasawa theory, which developed from his research on cyclotomic fields in the late 1950s. Before that he worked on Lie groups and Lie algebras, introducing the general Iwasawa decomposition. List of books available in English Lectures on p-adic L-functions / by Kenkichi Iwasawa (1972) Local class field theory / Kenkichi Iwasawa (1986) Algebraic functions / Kenkichi Iwasawa; translated by Goro Kato (1993) See also Iwasawa group Anabelian geometry Fermat's Last Theorem References Sources External links 1917 births 1998 deaths People from Gunma Prefecture 20th-century Japanese mathematicians Number theorists Institute for Advanced Study visiting scholars Massachusetts Institute of Technology faculty Princeton University faculty University of Tokyo alumni
Kenkichi Iwasawa
[ "Mathematics" ]
511
[ "Number theorists", "Number theory" ]
1,499,029
https://en.wikipedia.org/wiki/Crash%20cart
A crash cart, code cart, crash trolley or "MAX cart" is a set of trays/drawers/shelves on wheels used in hospitals for the transportation and dispensing of emergency medication and equipment at the site of a medical or surgical emergency for life support protocols (ACLS/ALS), to potentially save someone's life. The cart carries instruments for cardiopulmonary resuscitation and other medical supplies while also functioning as a support litter for the patient. The crash cart was originally designed and patented by ECRI Institute founder Joel J. Nobel, M.D., while a surgical resident at Philadelphia's Pennsylvania Hospital in 1965. MAX helped enhance a hospital's efficiency in emergencies by enabling doctors and nurses to save time, thereby increasing the chances of saving a life. The contents and organization of a crash cart vary from hospital to hospital, country to country, and specialty to specialty, but typically include the tools and drugs needed to treat a person in or near cardiac arrest or another life-threatening condition. These include but are not limited to: Monitor/defibrillators, suction devices, and bag valve masks (BVMs) of different sizes Advanced cardiac life support (ACLS) drugs such as epinephrine, atropine, amiodarone, lidocaine, sodium bicarbonate, dopamine, and vasopressin First line drugs for treatment of common problems such as: adenosine, dextrose, epinephrine for IM use, naloxone, nitroglycerin, and others Drugs for rapid sequence intubation: succinylcholine or another paralytic, and a sedative such as etomidate, propofol or midazolam; endotracheal tubes and other intubating equipment Drugs for peripheral and central venous access Electronic metronome to provide standardized auditory cadence cues during CPR Pediatric equipment (common pediatric drugs, intubation equipment, etc.) Other drugs and equipment as chosen by the facility Hospitals typically have internal intercom codes used for situations when someone has suffered a cardiac arrest or a similar potentially fatal condition outside of the emergency department or intensive care unit (where such conditions already happen frequently and do not require special announcements). When such codes are given, hospital staff and volunteers are expected to clear the corridors and to direct visitors to stand aside, as the crash cart and a team of physicians, pharmacists and nurses may come through at any moment. (See Code Blue.) History in the United States Another version of the cardiac crash cart was created in 1962 at Bethany Medical Center in Kansas City, Kansas, home to the first cardiac care unit in the country. This version of the crash cart was designed by a nurse and fabricated by the father of one of the doctors. It contained an Ambu bag, defibrillator paddles, a bed board and endotracheal tubes. An emergency department nurse, Anita Dorr, developed a prototype crash cart in 1967 that looked and worked like the crash carts used today. Dorr was supervising the Emergency Department of Meyer Memorial Hospital (now Erie County Medical Center) in Buffalo, New York. She found that it took too long to gather supplies for cardiac and respiratory arrests. Dorr, with her fellow nurses, put together a list of emergency response supplies, medications and equipment, and built an "Emergency Nursing Crisis Cart" with her spouse in their garage in 1967. Dorr was not able to patent her cart design, but she went on to co-found the Emergency Department Nurses Association (now the Emergency Nurses Association) in 1968.
Before Dorr built her prototype, Joel J. Nobel had already invented the MAX cart, which was essentially a table for patients to lie on, with drawers for medical supplies and other equipment such as a cardiac compressor and an electrocardiograph machine; the table also had equipment to record ECG readouts. Dorr's crash cart, by contrast, looked and functioned like the crash carts in use today, whereas Nobel's version remained primarily a patient table with drawers of supplies and equipment. In computing In the computer industry, the term crash cart is used, by analogy to its original meaning in medicine, to mean a cart that can be connected to a server that is malfunctioning so badly that remote access to it is impossible, the intention being to "resuscitate" the server to the point where remote administration works again. Crash carts most commonly include a keyboard, mouse, and monitor, because most servers in a modern high-density environment do not have user input/output devices. Crash carts are a method of last resort in data centers which employ various forms of out-of-band management. In those cases they are used for equipment which does not support the requisite out-of-band infrastructure (OOBI) features, or in cases where the OOBI devices (concentrators, switches, terminal servers, etc.) or services themselves have failed. The term "crash cart" can also refer to a bootable removable medium containing an operating system and any relevant software used to recover computer equipment (such as a server or PC) from a state of failure. This is done when such recovery is not possible using the computer's existing operating system and software. Crash carts in this sense are historically tape cartridges ("carts") or, more recently, external or removable hard drives. References Notes External links 360-degree view of a crash cart and all its contents Crash Cart Inventory Checklist (PDF) Max, The Lifesaver; Life Medical equipment Out-of-band management
Crash cart
[ "Biology" ]
1,165
[ "Medical equipment", "Medical technology" ]
1,499,089
https://en.wikipedia.org/wiki/Hardy%27s%20theorem
In mathematics, Hardy's theorem is a result in complex analysis describing the behavior of holomorphic functions. Let $f$ be a holomorphic function on the open ball centered at zero and of radius $R$ in the complex plane, and assume that $f$ is not a constant function. If one defines
$$I(r) = \frac{1}{2\pi} \int_0^{2\pi} \left| f\left(re^{i\theta}\right) \right| \, d\theta$$
for $0 < r < R$, then this function is strictly increasing and $\log I(r)$ is a convex function of $\log r$. See also Maximum principle Hadamard three-circle theorem References John B. Conway. (1978) Functions of One Complex Variable I. Springer-Verlag, New York, New York. Theorems in complex analysis
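As a quick sanity check of the statement above, the following Python sketch approximates $I(r)$ numerically for a sample non-constant holomorphic function and verifies both the strict increase of $I(r)$ and the convexity of $\log I(r)$ in $\log r$. The choice of test function ($f(z) = e^z$, which is zero-free, so the integrand stays smooth), the sample counts, and the tolerance are illustrative assumptions, not part of the theorem.

```python
import numpy as np

def I(f, r, n=4096):
    """Approximate I(r) = (1/2pi) * integral of |f(r e^{i theta})| over [0, 2pi)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean(np.abs(f(r * np.exp(1j * theta))))

f = lambda z: np.exp(z)                        # a non-constant holomorphic test function
log_r = np.linspace(np.log(0.1), np.log(2.0), 25)
log_I = np.log([I(f, r) for r in np.exp(log_r)])

assert np.all(np.diff(log_I) > 0)              # I(r) strictly increasing
assert np.all(np.diff(log_I, 2) >= -1e-8)      # log I convex in log r (up to numerical noise)
print("Hardy's theorem checks pass for this sample f.")
```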
Hardy's theorem
[ "Mathematics" ]
115
[ "Theorems in mathematical analysis", "Theorems in complex analysis" ]
1,499,165
https://en.wikipedia.org/wiki/Vertex%20separator
In graph theory, a vertex subset $S$ is a vertex separator (or vertex cut, separating set) for nonadjacent vertices $a$ and $b$ if the removal of $S$ from the graph $G$ separates $a$ and $b$ into distinct connected components. Examples Consider a grid graph with $r$ rows and $c$ columns; the total number $n$ of vertices is $r \cdot c$. If $r$ is odd, there is a single central row, and otherwise there are two rows equally close to the center; similarly, if $c$ is odd, there is a single central column, and otherwise there are two columns equally close to the center. Choosing $S$ to be any of these central rows or columns, and removing $S$ from the graph, partitions the graph into two smaller connected subgraphs $A$ and $B$, each of which has at most $n/2$ vertices. If $r \le c$, then choosing a central column will give a separator $S$ with $r \le \sqrt{n}$ vertices, and similarly if $c \le r$ then choosing a central row will give a separator with at most $\sqrt{n}$ vertices. Thus, every grid graph has a separator $S$ of size at most $\sqrt{n}$, the removal of which partitions it into two connected components, each of size at most $n/2$. To give another class of examples, every free tree $T$ has a separator $S$ consisting of a single vertex, the removal of which partitions $T$ into two or more connected components, each of size at most $n/2$. More precisely, there is always exactly one or exactly two vertices which amount to such a separator, depending on whether the tree is centered or bicentered. As opposed to these examples, not all vertex separators are balanced, but that property is most useful for applications in computer science, such as the planar separator theorem. Minimal separators Let $S$ be an $(a,b)$-separator, that is, a vertex subset that separates two nonadjacent vertices $a$ and $b$. Then $S$ is a minimal $(a,b)$-separator if no proper subset of $S$ separates $a$ and $b$. More generally, $S$ is called a minimal separator if it is a minimal separator for some pair $(a,b)$ of nonadjacent vertices. Notice that this is different from a minimal separating set, which says that no proper subset of $S$ is a minimal $(u,v)$-separator for any pair of vertices $(u,v)$. The following is a well-known result characterizing the minimal separators: Lemma. A vertex separator $S$ in $G$ is minimal if and only if the graph $G - S$, obtained by removing $S$ from $G$, has two connected components $C_1$ and $C_2$ such that each vertex in $S$ is both adjacent to some vertex in $C_1$ and to some vertex in $C_2$. The minimal $(a,b)$-separators also form an algebraic structure: For two fixed vertices $a$ and $b$ of a given graph $G$, an $(a,b)$-separator $S$ can be regarded as a predecessor of another $(a,b)$-separator $T$, if every path from $a$ to $b$ meets $S$ before it meets $T$. More rigorously, the predecessor relation is defined as follows: Let $S$ and $T$ be two $(a,b)$-separators in $G$. Then $S$ is a predecessor of $T$, in symbols $S \sqsubseteq T$, if for each $x \in S \setminus T$, every path connecting $x$ to $b$ meets $T$. It follows from the definition that the predecessor relation yields a preorder on the set of all $(a,b)$-separators. Furthermore, it has been proved that the predecessor relation gives rise to a complete lattice when restricted to the set of minimal $(a,b)$-separators in $G$. See also Chordal graph, a graph in which every minimal separator is a clique. k-vertex-connected graph Notes References Graph connectivity
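To make the definitions concrete, here is a small Python sketch (using the networkx library) that tests whether a vertex set is an $(a,b)$-separator and whether it is minimal, by applying the definition directly and brute-forcing all proper subsets. The function names and the exhaustive minimality check are illustrative choices for small graphs, not a standard API.

```python
import itertools
import networkx as nx

def is_separator(G, S, a, b):
    """S separates a and b if removing S leaves no a-b path (assumes a, b not in S)."""
    H = G.copy()
    H.remove_nodes_from(S)
    return not nx.has_path(H, a, b)

def is_minimal_separator(G, S, a, b):
    """Minimal (a,b)-separator: S separates a and b, but no proper subset does."""
    if not is_separator(G, S, a, b):
        return False
    return all(
        not is_separator(G, T, a, b)
        for k in range(len(S))
        for T in itertools.combinations(S, k)
    )

# Example: in a 3x4 grid graph, a central column separates the two sides,
# matching the grid-graph example above.
G = nx.grid_2d_graph(3, 4)              # vertices are (row, col) pairs
S = {(r, 1) for r in range(3)}          # a central column
a, b = (0, 0), (2, 3)
print(is_separator(G, S, a, b))         # True
print(is_minimal_separator(G, S, a, b)) # True
```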
Vertex separator
[ "Mathematics" ]
677
[ "Mathematical relations", "Graph theory", "Graph connectivity" ]
1,499,203
https://en.wikipedia.org/wiki/Transocean
Transocean Ltd. is an American drilling company. It is the world's largest offshore drilling contractor based on revenue and is based in Steinhausen, Switzerland. The company has offices in 20 countries, including Canada, the United States, Norway, the United Kingdom, India, Brazil, Singapore, Indonesia, and Malaysia. In 2010, Transocean was found partially responsible (30% of total liability) for the Deepwater Horizon oil spill resulting from the explosion of one of its oil rigs in the Gulf of Mexico. The primary business of Transocean is contracts with other large companies in the oil and gas industry. In 2019, Royal Dutch Shell accounted for 26% of the company's revenues, Equinor for 21%, and Chevron for 17%. History Transocean was formed as a result of the merger of Southern Natural Gas Company, later Sonat, with many smaller drilling companies. In 1953, the Birmingham, Alabama-based Southern Natural Gas Company created The Offshore Company after acquiring the joint drilling operation DeLong-McDermott from DeLong Engineering and J. Ray McDermott. In 1954, the company launched Rig 51, the first mobile jackup rig, in the Gulf of Mexico. In 1967, The Offshore Company went public. In 1978, SNG turned it into a wholly owned subsidiary. In 1982, it was renamed Sonat Offshore Drilling Inc., reflecting a change in its parent's name. William C. O'Malley, an executive at Sonat's headquarters in Birmingham, was named the company's first chief executive officer in 1992. In 1993, Sonat spun off the majority of its ownership in the company. Sonat sold its remaining 40% stake in the company during a secondary public offering in late 1995. In 1996, the company acquired the Norwegian group Transocean ASA for US$1.5 billion. Transocean started in the 1970s as a whaling company and expanded through a series of mergers. The new company was called Transocean Offshore. It began building massive drilling operations, with drills capable of reaching 10,000 feet (as opposed to 3,000 feet at the time) and the ability to run two drilling operations from the same ship. Its first ship, Discoverer Enterprise, cost nearly US$430 million. The Enterprise class drillship is the largest of the company's drilling ships. In 1999, Transocean merged with Sedco Forex, the offshore drilling subsidiary of Schlumberger, in a $3.2 billion stock transaction in which Schlumberger shareholders received shares of Transocean. Sedco Forex had been formed from a merger of two drilling companies: the Southeastern Drilling Company (Sedco), founded in 1947 by Bill Clements and acquired by Schlumberger in 1985 for $1 billion, and the French drilling company Forages et Exploitations Pétrolières (Forex), founded in 1942 in German-occupied France for drilling in North Africa. Schlumberger first got a foothold in Forex in 1959, assumed total control in 1964, and renamed it Forex Neptune Drilling Company. In 2000, Transocean acquired R&B Falcon Corporation, owner of 115 drilling rigs, in a deal valued at $17.7 billion. With the acquisition, Transocean gained control of what was at the time the world's largest offshore drilling operation. Among R&B Falcon's assets was the Deepwater Horizon. R&B Falcon had acquired Cliffs Drilling Company in 1998. In 2005, the company's Discoverer Spirit rig set a world record for the deepest offshore oil and gas well.
In 2007, the US Department of Justice and the Securities and Exchange Commission filed a case against Transocean, alleging violations of the Foreign Corrupt Practices Act. The case alleged that Transocean paid bribes through its freight forwarding agents to Nigerian customs officials. Transocean later admitted to approving the bribes and agreed to pay US$13,440,000 to settle the matter. In 2007, the company merged with GlobalSantaFe Corporation in a transaction that created a company with an enterprise value of $53 billion. Shareholders of GlobalSantaFe Corporation received $15 billion in cash as well as stock in the new company for their shares. Robert E. Rose, who was non-executive chairman of GlobalSantaFe, was made Transocean's chairman. Rose had been chairman of Global Marine prior to its 2001 merger with Santa Fe International Corporation. In 2008, the company moved its headquarters to Switzerland, resulting in a significantly lower tax rate. In September 2009, its Deepwater Horizon rig drilled the deepest well in history – more than 5,000 feet deeper than the rig's stated design specification. In 2010, Transocean was implicated in the Deepwater Horizon oil spill resulting from the explosion of one of its oil rigs in the Gulf of Mexico that was leased to BP. In 2011, the company acquired Aker Drilling, which owned four harsh-environment rigs used for drilling near Norway. In 2012, the company sold 38 shallow-water rigs and narrowed its focus to high-specification deepwater rigs. In 2013, the company was added to the S&P 500 index. In February 2015, CEO Steven Newman quit following a $2.2 billion quarterly loss. Effective on 30 March 2016, the company delisted its shares from the SIX Swiss Exchange, at which time its shares were removed from the Swiss Market Index. Effective on January 30, 2018, the company completed its acquisition of Songa Offshore. In December 2018, the company acquired Ocean Rig. Controversies Accidents and incidents Transocean was rated as a leader in its industry for many years. However, since the company's 2007 merger with GlobalSantaFe, Transocean's reputation has suffered considerably, according to EnergyPoint Research, an independent oil service industry rating firm. From 2004 to 2007, Transocean was the leader or near the top among deep-water drillers in "job quality" and "overall satisfaction." In 2008 and 2009, surveys ranked Transocean last among deep-water drillers for "job quality" and next to last in "overall satisfaction." In 2008 and 2009, Transocean ranked first for in-house safety and environmental policies, and in the middle of the pack for perceived environmental and safety record. The Deepwater Horizon explosion and massive oil spill, starting in April 2010, further hurt its reputation. "Transocean is dominant, but the accident has definitely tarnished its reputation for worker safety and for being able to manage and deliver on extraordinarily complex deepwater projects," said Christopher Ruppel, an energy expert and managing director of capital markets at Execution Noble, an investment bank. Transocean Leader accident (2002) On 2 March 2002, a Scottish man was killed in an accident aboard the Transocean Leader drilling rig, operated for BP and located about 138 kilometers (86 miles) west of Shetland, Scotland. Galveston Bay explosion (2003) On 17 June 2003, one worker was killed, four others were hospitalised and 21 were evacuated after an explosion on a Transocean gas drilling rig in Galveston Bay, Texas.
Maintenance citation on Transocean Rather (2005) On 24 August 2005, the UK Health and Safety Executive issued a notice to Transocean saying that it had failed to maintain its "remote blowout preventor control panel … in an efficient state, efficient working order and in good repair." On 21 November 2005, Transocean was found to be in compliance on this matter. Sinking of Bourbon Dolphin supply boat and Transocean Rather accident (2007) On 12 April 2007, the Bourbon Dolphin supply boat sank off the coast of Scotland while servicing the Transocean Rather drilling rig, killing eight people. The Norwegian Ministry of Justice established a Commission of Inquiry to investigate the incident, and the commission's report found that a series of "unfortunate circumstances" led to the accident, "with many of them linked to Bourbon Offshore and Transocean." 2008 fatalities In 2008, two Transocean workers were reportedly killed on the company's vessels. Deepwater Horizon drilling rig explosion (2010) On 20 April 2010, a fire was reported on a Transocean-owned semisubmersible drilling rig, Deepwater Horizon. Deepwater Horizon was an RBS8D design of Reading & Bates Falcon, a firm that was acquired by Transocean in 2001. The fire broke out at 10:00 p.m. CDT (UTC−5) in US waters of Mississippi Canyon Block 252 in the Gulf of Mexico. The rig was off the Louisiana coast. The US Coast Guard launched a rescue operation after the explosion, which killed 11 workers and critically injured seven of the 126-member crew. Deepwater Horizon was completely destroyed and subsequently sank. As the Deepwater Horizon sank, the riser pipe that connected the wellhead to the rig was severed. As a result, oil began to spill into the Gulf of Mexico. Estimates of the leak were about 80,000 barrels per day, over a span of 87 days. Louisiana Governor Bobby Jindal declared a state of emergency on 29 April, as the oil slick grew and headed toward some of the most important and most sensitive wetlands in North America, threatening to destroy wildlife and the livelihood of thousands of fishermen. The head of BP Group told CNN's Brian Todd on 28 April that the accident could have been prevented, and focused blame on Transocean, which owned and partly manned the rig. Transocean came under fire from lawyers representing the fishing and tourism businesses that were hit by the oil spill, and from the United States Department of Justice, for seeking to use the Limitation of Liability Act of 1851 to restrict its liability for economic damages to $26.7 million. During Congressional testimony, Transocean and BP blamed each other for the disaster. It emerged that a "heated argument" broke out on the platform 11 hours before the accident, in which Transocean and BP personnel disagreed on an engineering decision related to the closing of the well. On 14 May 2010, US President Barack Obama commented, "I did not appreciate what I considered to be a ridiculous spectacle… executives of BP and Transocean and Halliburton [the firm responsible for cementing the well] falling over each other to point the finger of blame at somebody else. The American people could not have been impressed with that display, and I certainly wasn't." Transocean later claimed that 2010, the year in which the disaster occurred, was "the best year in safety performance in our company’s history".
In a regulatory filing, Transocean said, "Notwithstanding the tragic loss of life in the Gulf of Mexico, we achieved an exemplary statistical safety record as measured by our total recordable incident rate and total potential severity rate." It used this justification to award employees about two-thirds of the maximum possible safety bonuses. In response to broad criticism, including from Interior Secretary Ken Salazar, the company announced that its executives would donate the safety portion of the bonuses to a fund supporting the victims' families. Offshore drilling leak off the Brazilian coast (2011) In November 2011, a well being drilled by the offshore rig "Sedco 706", operated by Transocean under contract from Chevron, began to leak while working the "Frade" oil field. Oil leaked from the seabed at a depth of approximately 1,100 to 1,200 m. Damage included an oil slick (oil floating on the ocean surface) covering an area of approximately 80 km2 and growing. This put the oil at a distance of about 370 km from Rio de Janeiro, though other beaches lay much closer (an estimated 140 km). The Brazilian government sued Transocean and attempted to force the company to cease operations in Brazil, but a settlement was reached without a finding of fault or liability. Transocean Winner grounding on the Isle of Lewis, Scotland (2016) In the early hours of Monday 8 August 2016, the semi-submersible drilling rig Transocean Winner ran aground near Dalmore in the Carloway district of the Isle of Lewis in the Outer Hebrides, Scotland. The rig had been under tow by the tug Alp Forward in gale-force winds when the tow line broke. The rig subsequently drifted ashore at Dalmore and became stuck fast on rocks at 07:30 BST. Continuing poor weather made a damage inspection by salvors practically impossible at first, as personnel had to be airlifted onto the rig in spite of it being close to the shore. The rig was carrying approximately 280 tons of diesel to power its generators, of which 53 tons are thought to have leaked into the sea and dispersed or evaporated in the rough conditions. Environmental monitoring of plant and animal life is ongoing, particularly in view of the economically important fish farming operations in nearby Loch Ròg. See also List of oilfield service companies List of Texas companies (T) References External links Subsidiaries of Transocean Ltd. worldwide (as of December 31, 2018) (U.S. Securities and Exchange Commission) Companies listed on the New York Stock Exchange Service companies of Switzerland Drilling rig operators Swiss companies established in 1973 Energy engineering and contractor companies Norwegian companies established in 1973 Vernier, Switzerland Tax inversions Energy companies established in 1973
Transocean
[ "Engineering" ]
2,719
[ "Energy engineering and contractor companies", "Engineering companies" ]
1,499,388
https://en.wikipedia.org/wiki/Hadamard%20three-circle%20theorem
In complex analysis, a branch of mathematics, the Hadamard three-circle theorem is a result about the behavior of holomorphic functions. Statement Hadamard three-circle theorem: Let $f(z)$ be a holomorphic function on the annulus $r_1 \le |z| \le r_3$. Let $M(r)$ be the maximum of $|f(z)|$ on the circle $|z| = r$. Then $\log M(r)$ is a convex function of the logarithm $\log r$. Moreover, if $f(z)$ is not of the form $c z^{\lambda}$ for some constants $\lambda$ and $c$, then $\log M(r)$ is strictly convex as a function of $\log r$. The conclusion of the theorem can be restated as
$$\log\left(\frac{r_3}{r_1}\right) \log M(r_2) \le \log\left(\frac{r_3}{r_2}\right) \log M(r_1) + \log\left(\frac{r_2}{r_1}\right) \log M(r_3)$$
for any three concentric circles of radii $r_1 < r_2 < r_3$. Proof The three circles theorem follows from the fact that for any real $a$, the function $\operatorname{Re} \log\left(z^{a} f(z)\right)$ is harmonic between two circles, and therefore takes its maximum value on one of the circles. The theorem follows by choosing the constant $a$ so that this harmonic function has the same maximum value on both circles. The theorem can also be deduced directly from Hadamard's three-line theorem. History A statement and proof for the theorem was given by J.E. Littlewood in 1912, but he attributes it to no one in particular, stating it as a known theorem. Harald Bohr and Edmund Landau attribute the theorem to Jacques Hadamard, writing in 1896; Hadamard published no proof. See also Maximum principle Logarithmically convex function Hardy's theorem Hadamard three-line theorem Borel–Carathéodory theorem Phragmén–Lindelöf principle Notes References E. C. Titchmarsh, The theory of the Riemann Zeta-Function, (1951) Oxford at the Clarendon Press, Oxford. (See chapter 14) External links "proof of Hadamard three-circle theorem" Inequalities Theorems in complex analysis
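As with Hardy's theorem above, the restated inequality lends itself to a quick numerical sanity check. The Python sketch below uses an arbitrary test function ($f(z) = e^z$) and a simple grid maximum over each circle; both are illustrative assumptions, and the check is of course not a proof.

```python
import numpy as np

def M(f, r, n=4096):
    """Maximum of |f| over the circle |z| = r, approximated on n sample points."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.max(np.abs(f(r * np.exp(1j * theta))))

f = lambda z: np.exp(z)            # test function; any holomorphic f on the annulus works
r1, r2, r3 = 0.5, 1.0, 2.0         # three concentric radii with r1 < r2 < r3

lhs = np.log(r3 / r1) * np.log(M(f, r2))
rhs = np.log(r3 / r2) * np.log(M(f, r1)) + np.log(r2 / r1) * np.log(M(f, r3))
assert lhs <= rhs + 1e-12          # the three-circle inequality
print(f"{lhs:.6f} <= {rhs:.6f}")
```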
Hadamard three-circle theorem
[ "Mathematics" ]
362
[ "Theorems in mathematical analysis", "Theorems in complex analysis", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
1,499,523
https://en.wikipedia.org/wiki/Shophouse
A shophouse is a building type serving both as a residence and a commercial business. It is defined in the dictionary as a building type found in Southeast Asia that is "a shop opening on to the pavement and also used as the owner's residence", and the term has been in common use since the 1950s. Variations of the shophouse may also be found in other parts of Asia; in Southern China, Hong Kong, and Macau, it is found in a building type known as Tong lau, and in towns and cities in Sri Lanka. Shophouses stand in a terraced house configuration, often fronted with arcades or colonnades, which present a unique townscape in Southeast Asia, Sri Lanka, and South China. Design and features Site and plan: Shophouses were a convenient design for urban settlers, providing both a residence and a small business venue. Shophouses were often designed to be narrow and deep so that many businesses could be accommodated along a street. Each building's footprint was narrow in width and long in depth. The front area along the street was formal space for customers, while the rear areas were informal spaces for family members, toilets, bathrooms, kitchens, and infrastructure. Veranda: Merchandise was displayed in front of the house and was protected by a veranda from rain and sunshine. The veranda also served as a reception area for customers, and was thus an important space for both the house owner and customers. Unless there was a communal arrangement, verandas may not have been connected to each other to form continuous colonnades. Where the colonnades are present by design, they form the five foot way. Courtyard and upper floor: Traditional shophouses may have between one and three floors. The shophouse was usually built between parallel masonry party walls. The upper part of the house was used as living quarters. To ensure air circulation, an inner "courtyard" (air-well) was placed midway between the front and rear of the house. Covered walkways In 1822, instructions were issued by Sir Stamford Raffles for the Town Plan of Singapore which specified that each house had to provide a "verandah of a certain depth, open at all times as a continued and covered passage on each side of the street". Raffles' instructions created a regular and uniform townscape in Singapore, with arcades or colonnades forming continuous public pathways. Later in other Straits Settlements, the "continued covered passage", known as the "five foot way", was also mandated, and it became a distinctive feature of "Straits Settlement Style" buildings. This feature also spread after the mid-19th century to other Southeast Asian countries, such as Thailand and the Philippines, as well as to some East Asian countries. Covered walkways are found in a building type called qilou in Southern China, Taiwan and Hong Kong that developed under the influence of Singaporean shophouses. Similar regulations, mandating a wider space, were applied in Taipei at the end of the Qing dynasty period, in Taiwan under Japanese rule, and in Southern China under the Republic of China. In 1876, the Hong Kong colonial authority allowed the lease holder to build overhangs above the verandah (the public sidewalk in the Hong Kong colony) to provide more living space, with no intention of creating regular and uniform townscapes. Facade design The facades of the building and sometimes the pillars may be decorated. The facade ornamentation draws inspiration from the Chinese, European, and Malay traditions, but with the European elements dominant.
European neo-classical motifs include egg-and-dart moldings and Ionic or Corinthian capitals on decorative pilasters. The degree of a shophouse's ornamentation depended on the prosperity of its owner and the surrounding area; shophouse facades in cities and (former) boom towns are generally more elaborate than spartan rural shophouses. Masonry-heavy Art Deco and Streamline Moderne styles eventually prevailed between the 1930s and 1950s. Modern variations from the 1950s up to the 1980s were devoid of ornamental decoration and tended to be designed with imposing geometrical and utilitarian forms inspired by the International and Brutalist styles. Beginning in the 1990s, buildings began to adopt postmodern and revival styles. Function The front of the shop on the ground floor is in most cases used for commercial purposes, while the upper floors are intended for residential use. The ground floor may serve as a food and drink shop, office, shop, or workshop. If the ground floor includes living spaces (usually located at the back), they may be used as a reception area, guestrooms, and formal family rooms with ancestor altars. As settlements prospered and populations increased, some front shops were put to professional uses such as clinics, drugstores, law offices, pawnshops, and travel agencies. Food and drink shops usually served economical selections, such as a variety of ready-cooked food in the Chinese, Padang (Halal), or Siamese styles. Cooking stalls rented a portion of space from the shop owner and served specific foods such as fried noodles, fried rice, Indian pancakes, or noodle soup. A variety of drinks was served by a different stall, sometimes by the shop owner. Such stalls have since been replaced by food courts. Street corners were prized as the best locations for food and drink shops. Modern construction Modern shophouses are made of reinforced concrete. Loads are carried by beams and piers, built on a grid system. The spacing of the piers is determined by economic factors: wider beams require larger amounts of steel. A plot of land that measures 40 m wide and 12 m deep could be used to create 10 shophouses, each measuring 4 m x 12 m, or eight shophouses measuring 5 m x 12 m, or something in between. Walls are infill, which means that a row of shophouses can easily be reconfigured to allow a business to occupy two or more shophouses, by simply removing the dividing walls. A row of shophouses can be built in stages by exposing around 50–60 cm of rebar in the left-right beams at each end of the row. When continuing construction, new rebar is tied to the existing rebar to allow the beam to be continued, thereby removing the need for new structural piers. Singapore shophouses The shophouses of Singapore evolved from the early 19th century during the colonial era. The type was first introduced by Stamford Raffles, who specified in his Town Plan for Singapore the uniformity and regularity of the buildings, the materials used, and features such as a covered passageway. After the colonial era, shophouses became old and dilapidated, leading to a fraction of them being abandoned or razed (by demolition work or, on occasion, fire). In Singapore, the Land Acquisition Act for urban development, passed during the early 1960s and amended in 1973, affected owners of shophouses and worked a significant compensatory unfairness upon them when their shophouses were seized to satisfy redevelopment efforts.
Over the decades, entire blocks of historical shophouses in the urban centre were leveled for high-density developments or governmental facilities. Owners and occupants of colonial shophouses in Malaysia underwent different experiences, shaped by a series of rent control laws put in place between 1956 and 1966. Under the most recent of these, the 1966 Control of Rent Act, privately owned buildings constructed before 1948, including scores of shophouses, were subjected to rent price controls to alleviate housing shortages, with the intent of providing the increasingly urbanised population with sufficient affordable housing. In the decades following the introduction of the act in 1966, development of the sites that the shophouses rest on was often unprofitable due to poor rental takings, leading to historical urban districts stagnating but being effectively preserved, although entire blocks of shophouses were known to be demolished for a variety of reasons during economic upswings (from government acquisitions to destruction by fire). With the repeal of the act in 1997, landowners were eventually granted the authority to determine rent levels and were enticed to develop or sell off pre-1948 shophouses; as a result, poorer tenants were priced out and many of the buildings were extensively altered or demolished for redevelopment over the course of the 2000s and 2010s. Shophouses have also been documented as being illegally sealed and used to cultivate and harvest edible bird's nests, leading to long-term internal damage to the buildings. Many shophouses in Singapore that escaped the effects of the Land Acquisition Act have now undergone a revival of sorts, with some restored and renovated as budget hotels, tea houses, and cinemas. Some shophouses are now considered architectural landmarks and have substantially increased in value. In 2011 in Singapore, two of every three shophouse units sold for between S$1.7–5.5 million (US$1.4–4.4 million), while larger units sold for between S$10–12.5 million (US$8–10 million), a sharp increase from 2010; average per-square-foot prices increased 21% from 2010. The median price in Singapore in 2011 was 74% higher than in 2007. Heritage shophouses in Malaysia While the preservation of historic shophouses has suffered substantially in heavily developed states like Johor, Kuala Lumpur, Negeri Sembilan, Perak, and Selangor, shophouses in Malacca and Penang (whose state capitals, Malacca Town and George Town, were gazetted as UNESCO World Heritage Sites in 2008) received more care and attention due to emerging historical preservation movements in both states, experiencing levels of rejuvenation similar to those in Singapore. However, the gentrification of both cities has led to older tenants of shophouses being driven out by the rising costs of renting or buying properties within historical districts. In 2012, the cost of buying a pre-World War II shophouse in George Town reached RM2,000 per square foot (US$660), equivalent to the price of the most expensive Kuala Lumpur city centre condominium units. Indonesian shophouses Shophouses have been very popular since the Dutch colonial period, particularly in pecinan ('Chinese quarters'). Traditional shophouses are now being replaced by modern ones, called ruko (rumah toko).
See also Ancestral houses of the Philippines Architecture of Portugal Architecture of Singapore Bahay na Bato Bruges merchant houses Chinese architecture Lingnan culture Malay houses Medieval Merchant's House in Southampton Nipa hut Rumah adat Sino-Portuguese architecture Strip mall in North America Terraced house Tong Lau, in Hong Kong and southern China References Further reading Chang, TC & Teo, P, "The shophouse hotel: vernacular heritage in a creative city", Urban Studies 46(2), 2009, 341–367. Chua Beng Huat (Chua, B.H.), "The Golden Shoe: Building Singapore's Financial District". Singapore: Urban Redevelopment Authority, 1989. Davis, Howard, Living Over the Store: Architecture and Local Urban Life, Routledge, 2012. Goh, Robbie & Yeoh, Brenda, International Conference on the City, Theorizing The Southeast Asian City As Text: Urban Landscapes, Cultural Documents, And Interpretative Experiences, World Scientific Pub Co Inc., 2003. Retrieved 2012-3-30. Web article with photographs. Lee Ho Yin, "The Singapore Shophouse: An Anglo-Chinese Urban Vernacular", in Asia's Old Dwellings: Tradition, Resilience, and Change, ed. Ronald G. Knapp (New York: Oxford University Press), 2003, 115-134. Lee Kip Lim. "The Singapore House, 1849-1942". Singapore: Times, 1988. Ongsavangchai Nawit & Funo Shuji, "Spatial Formation And Transformation of Shophouse in the Old Chinese Quarter of Pattani, Thailand", Journal of Architecture and Planning, Transactions of AIJ, V.598, pp. 1–9, 2005. ISSN 1340-4210 Ongsavangchai Nawit, "Formation and Transformation of Shophouses in Khlong Suan Market Town", Proceedings, Architectural Institute of Korea, 2006. Phuong, D. Q. & Groves, D., "Sense of Place in Hanoi's Shop-House: The Influences of Local Belief on Interior Architecture", Journal of Interior Design, 36: 1–20, 2010. doi: 10.1111/j.1939-1668.2010.01045.x Yeoh, Brenda, Contesting Space: Power Relationships and the Urban Built Environment in Colonial Singapore (South-East Asian Social Science Monographs), Oxford University Press, USA, 1996. ; Singapore University Press, 2003. External links Vernacular architecture Commercial buildings Buildings and structures in Asia Architectural design Urban studies and planning terminology House types
Shophouse
[ "Engineering" ]
2,594
[ "Design", "Architectural design", "Architecture" ]
1,499,625
https://en.wikipedia.org/wiki/Hyperbolic%20motion%20%28relativity%29
Hyperbolic motion is the motion of an object with constant proper acceleration in special relativity. It is called hyperbolic motion because the equation describing the path of the object through spacetime is a hyperbola, as can be seen when graphed on a Minkowski diagram whose coordinates represent a suitable inertial (non-accelerated) frame. This motion has several interesting features, among them that it is possible to outrun a photon if given a sufficient head start, as may be concluded from the diagram. History Hermann Minkowski (1908) showed the relation between a point on a worldline and the magnitude of four-acceleration and a "curvature hyperbola". In the context of Born rigidity, Max Born (1909) subsequently coined the term "hyperbolic motion" for the case of constant magnitude of four-acceleration, then provided a detailed description for charged particles in hyperbolic motion, and introduced the corresponding "hyperbolically accelerated reference system". Born's formulas were simplified and extended by Arnold Sommerfeld (1910). For early reviews see the textbooks by Max von Laue (1911, 1921) or Wolfgang Pauli (1921). See also Galeriu (2015) or Gourgoulhon (2013), and Acceleration (special relativity)#History. Worldline The proper acceleration $\alpha$ of a particle is defined as the acceleration that the particle "feels" as it accelerates from one inertial reference frame to another. If the proper acceleration is directed parallel to the line of motion, it is related to the ordinary three-acceleration $dv/dT$ in special relativity by
$$\alpha = \gamma^{3}\,\frac{dv}{dT} = \frac{1}{\left(1 - v^{2}/c^{2}\right)^{3/2}}\,\frac{dv}{dT},$$
where $v$ is the instantaneous speed of the particle, $\gamma$ the Lorentz factor, $c$ the speed of light, and $T$ the coordinate time. Solving this equation of motion gives the desired formulas, which can be expressed in terms of coordinate time $T$ as well as proper time $\tau$. For simplification, all initial values for time, location, and velocity can be set to 0, thus:
$$v(T) = \frac{\alpha T}{\sqrt{1 + \left(\alpha T/c\right)^{2}}}, \qquad x(T) = \frac{c^{2}}{\alpha}\left(\sqrt{1 + \left(\alpha T/c\right)^{2}} - 1\right),$$
$$v(\tau) = c \tanh\frac{\alpha\tau}{c}, \qquad x(\tau) = \frac{c^{2}}{\alpha}\left(\cosh\frac{\alpha\tau}{c} - 1\right), \qquad T(\tau) = \frac{c}{\alpha}\sinh\frac{\alpha\tau}{c}.$$
This gives $\left(x + c^{2}/\alpha\right)^{2} - c^{2}T^{2} = c^{4}/\alpha^{2}$, which is a hyperbola in time $T$ and the spatial location variable $x$. In this case, the accelerated object is located at $x = 0$ at time $T = 0$. If instead there are initial values different from zero, the formulas for hyperbolic motion take an analogous, though lengthier, form. Rapidity The worldline for hyperbolic motion (which from now on will be written as a function of proper time) can be simplified in several ways. For instance, the expression for $x(\tau)$ can be subjected to a spatial shift of amount $c^{2}/\alpha$, thus
$$x = \frac{c^{2}}{\alpha}\cosh\frac{\alpha\tau}{c},$$
by which the observer is at position $x = c^{2}/\alpha$ at time $T = 0$. Furthermore, by setting $X = c^{2}/\alpha$ and introducing the rapidity $\eta = \alpha\tau/c = \operatorname{artanh}(v/c)$, the equations for hyperbolic motion reduce to
$$cT = X \sinh\eta, \qquad x = X \cosh\eta,$$
with the hyperbola $x^{2} - (cT)^{2} = X^{2}$. Charged particles in hyperbolic motion Born (1909), Sommerfeld (1910), von Laue (1911), and Pauli (1921) also formulated the equations for the electromagnetic field of charged particles in hyperbolic motion. This was extended by Hermann Bondi & Thomas Gold (1955) and Fulton & Rohrlich (1960). This is related to the controversially discussed question of whether charges in perpetual hyperbolic motion radiate or not, and whether this is consistent with the equivalence principle – even though it concerns an ideal situation, because perpetual hyperbolic motion is not possible. While early authors such as Born (1909) or Pauli (1921) argued that no radiation arises, later authors such as Bondi & Gold and Fulton & Rohrlich showed that radiation does indeed arise. Proper reference frame In the rapidity form of the worldline above, the expression $X$ was constant, whereas the rapidity $\eta$ was variable. However, as pointed out by Sommerfeld, one can instead define $X$ as a variable while making $\eta$ constant.
This means that the equations become transformations indicating the simultaneous rest shape of an accelerated body with hyperbolic coordinates $(X, \eta)$ as seen by a comoving observer:
$$cT = X \sinh\eta, \qquad x = X \cosh\eta.$$
By means of this transformation, the proper time becomes the time of the hyperbolically accelerated frame. These coordinates, which are commonly called Rindler coordinates (similar variants are called Kottler-Møller coordinates or Lass coordinates), can be seen as a special case of Fermi coordinates or proper coordinates, and are often used in connection with the Unruh effect. Using these coordinates, it turns out that observers in hyperbolic motion possess an apparent event horizon, beyond which no signal can reach them. Special conformal transformation A lesser known method for defining a reference frame in hyperbolic motion is the employment of the special conformal transformation, consisting of an inversion, a translation, and another inversion. It is commonly interpreted as a gauge transformation in Minkowski space, though some authors alternatively use it as an acceleration transformation (see Kastrup for a critical historical survey). It has the form
$$x^{\mu} \to \frac{x^{\mu} - b^{\mu} x^{2}}{1 - 2\, b \cdot x + b^{2} x^{2}}.$$
Restricting attention to one spatial dimension and choosing the constant vector $b^{\mu}$ in terms of the acceleration, the transformation again produces a worldline of hyperbolic form. It turns out that the transformation becomes singular at a finite time, to which Fulton & Rohrlich & Witten remark that one has to stay away from this limit, while Kastrup (who is very critical of the acceleration interpretation) remarks that this is one of the strange results of this interpretation. Notes References Ludwik Silberstein (1914): The Theory of Relativity, page 190. Naber, Gregory L., The Geometry of Minkowski Spacetime, Springer-Verlag, New York, 1992. (hardcover), (Dover paperback edition). pp 58–60. External links Physics FAQ: The Relativistic Rocket Mathpages: Accelerated Travels, Does A Uniformly Accelerating Charge Radiate? Theory of relativity Special relativity General relativity Acceleration
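The closed-form worldline above is easy to evaluate numerically. The following Python sketch (a minimal illustration; the chosen acceleration of about 1 g and the unit handling are assumptions for the example) tabulates coordinate time, position, and velocity as functions of proper time for a uniformly accelerated observer, the same formulas that underlie the well-known "relativistic rocket" calculations linked above.

```python
import math

C = 299_792_458.0     # speed of light, m/s
ALPHA = 9.81          # proper acceleration, m/s^2 (roughly 1 g; an example value)

def worldline(tau: float) -> tuple[float, float, float]:
    """Return (T, x, v) in the initial inertial frame at proper time tau, using
    T = (c/a) sinh(a tau / c), x = (c^2/a)(cosh(a tau / c) - 1), v = c tanh(a tau / c)."""
    eta = ALPHA * tau / C                     # rapidity accumulated so far
    T = (C / ALPHA) * math.sinh(eta)
    x = (C ** 2 / ALPHA) * (math.cosh(eta) - 1.0)
    v = C * math.tanh(eta)
    return T, x, v

YEAR = 365.25 * 24 * 3600.0                   # seconds per Julian year
LY = 9.4607e15                                # meters per light year
for years in (1, 2, 5, 10):
    T, x, v = worldline(years * YEAR)
    print(f"tau = {years:2d} yr: T = {T / YEAR:7.2f} yr, "
          f"x = {x / LY:8.2f} ly, v/c = {v / C:.6f}")
```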
Hyperbolic motion (relativity)
[ "Physics", "Mathematics" ]
1,132
[ "Physical quantities", "Acceleration", "Quantity", "General relativity", "Special relativity", "Theory of relativity", "Wikipedia categories named after physical quantities" ]
1,499,654
https://en.wikipedia.org/wiki/SN%201181
The supernova now known as SN 1181 was first observed between August 4 and August 6, 1181, and was recorded by Chinese and Japanese astronomers in eight separate texts. One of only five supernovae in the Milky Way confidently identified in pre-telescopic records, it appeared in the constellation Cassiopeia and was visible and motionless against the fixed stars for 185 days. F. R. Stephenson first recognized that the 1181 AD "guest star" must have been a supernova, because such a bright transient that lasts for 185 days and does not move in the sky can only be a galactic supernova. Pa 30 Pa 30 was discovered in 2013 by American amateur astronomer Dana Patchick while searching for planetary nebulae in WISE infrared data. It was the 30th nebula discovered by his searches, and as a result it is designated Pa 30. Pa 30 appeared as a nearly round nebula roughly 171 × 156 arc-seconds in size, with an extremely blue central star. Pa 30 refers to both the nebula (originally catalogued as IRAS 00500+6713) and the central star (designated WD J005311). The shell is bright in the infrared, but very faint in the optical, at first visible only by light in the [O III] band. In 2019, optical spectroscopy of the central star revealed a very hot star with an intense stellar wind expanding at a very high velocity of 16,000 km/s and a composition mainly of carbon, oxygen, and neon (with no hydrogen or helium). Such a speed could only arise from a supernova or an event of similar magnitude, more specifically from a merger of two white dwarfs. X-ray spectroscopy studies of the shell also revealed a very hot nebula containing carbon-burning ashes, which can only be produced in a supernova. However, the remnant star of Pa 30 is a white dwarf, not one of the conventional supernova remnants (neutron stars or black holes). It has been suggested that Pa 30 is the remnant of a rare class of supernovae known as sub-luminous Type Iax supernovae, and that a merger of a CO white dwarf and an ONe white dwarf produced the remnant shell along with its supermassive white dwarf remnant. More recent observations in the [S II] band also revealed fine filamentary structures within the shell that had not previously been seen. A 2021 study measured an expansion velocity of ~1,100 km/s for the nebula from optical spectroscopy of the [S II] doublet. Together with the angular size of Pa 30 and the Gaia distance of 2.3 kpc, the age of the nebula could be estimated at approximately 1,000 years. This made Pa 30 the new prime candidate for the remnant of the SN 1181 event. Furthermore, the expansion velocity of the nebula and the inferred absolute brightness of the 1181 event are consistent with a Type Iax supernova, making Pa 30 the only SN Iax remnant in our galaxy and the only one which can be studied in detail. Observations with the Keck Cosmic Web Imager spectrograph were published in 2024. The study showed that the expansion of Pa 30 constrains the explosion date to a year consistent with SN 1181. The observations also revealed that the explosion was likely asymmetric, because redshifted filaments are brighter than blueshifted filaments in Pa 30. The observations also confirmed the presence of a cavity at which the filaments end. The filamentary shell has an inner radius of 0.6 parsec and an outer radius of 1.0 parsec. The filaments have velocities consistent with ballistic expansion. With a temperature near 200,000 K, WD J005311 is among the hottest stars known.
The extreme properties of the central star are powered by the residual radioactive decay of 56Ni, where the usual half-life of 6.0 days from electron capture is increased to many centuries because the nickel is completely ionized. 3C 58 Before 2013, the only plausible conventional supernova remnant in the historical area reported for the supernova was the remnant 3C 58. This remnant has a radio and X-ray pulsar that rotates about 15 times per second. Historically, therefore, SN 1181 had been associated with 3C 58 and its pulsar, although many researchers noted that this association was problematic. For example, if the supernova and pulsar were associated, then the star would still be rotating about as quickly as it did when it first formed. This is in contrast to the Crab pulsar, known to be the remnant of the supernova SN 1054, which has lost two-thirds of its rotational energy in essentially the same span of time. The age of the 3C 58 remnant has been estimated by many measures. Most directly, the proper motion of the expanding shell of 3C 58 has been measured three times, resulting in a distance-independent estimated age of around 3,500 years. The measures of the decline rate of the radio flux have substantial variability and uncertainty, so they are not useful for estimating the remnant's age. Age estimates involving the remnant's energy and the swept-up mass are likewise not useful, due to large uncertainties in the distance as well as in the presumed energetics and densities. The pulsar is offset from the center of 3C 58, implying an age of ~3,700 years, although it could be substantially younger if its transverse velocity happens to be high. The pulsar spin-down age is 5,380 years. The neutron star cooling age is >5,000 years. With these age estimates, 3C 58 is much too old a remnant to be associated with SN 1181. The possible sky position of the 1181 supernova has been revised to include additional information on the proximity of the "guest star" to adjacent Chinese constellations, resulting in a much smaller error region. This improved region does not contain 3C 58, whose position is inconsistent with the guest star's reported proximity to two constellations. Rather, this new small region contains Pa 30, which is independently known to be a ~800-year-old supernova remnant. See also Guest star (astronomy) List of supernovae List of supernova remnants SN 1054 References Supernova remnants Cassiopeia (constellation) 12th century in science 1181 Historical supernovae
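The kinematic age quoted above follows from a simple proportionality: the angular radius converted to a physical radius at the Gaia distance, divided by the measured expansion velocity. A rough back-of-the-envelope check in Python, using the approximate figures given above (taking ~85 arc-seconds as half of the nebula's quoted 171-arc-second major axis; all values are illustrative):

```python
import math

KPC_IN_KM = 3.0857e16          # kilometers per kiloparsec
YEAR_IN_S = 3.1557e7           # seconds per Julian year

distance_kpc = 2.3             # Gaia distance to Pa 30
angular_radius_arcsec = 85.0   # ~half of the 171-arcsec major axis
v_expansion_km_s = 1100.0      # measured shell expansion velocity

# Physical radius via the small-angle approximation: theta (radians) * distance.
theta_rad = angular_radius_arcsec * math.pi / (180.0 * 3600.0)
radius_km = theta_rad * distance_kpc * KPC_IN_KM

age_years = radius_km / v_expansion_km_s / YEAR_IN_S
print(f"Kinematic age ~ {age_years:.0f} years")   # on the order of 1,000 years
```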
SN 1181
[ "Astronomy" ]
1,340
[ "Cassiopeia (constellation)", "Historical supernovae", "Constellations", "History of astronomy" ]
1,499,717
https://en.wikipedia.org/wiki/Myth%20of%20redemptive%20violence
The Myth of Redemptive Violence is an archetypal plot in literature, especially in imperial cultures. At its core, the myth is the story of the victory of order over chaos by means of violence. One of the oldest versions of this story is the creation myth of Babylon (the Enûma Elish) from around 1250 B.C. Walter Wink coined the term as part of an analysis of its impact on modern culture and its role in maintaining oppressive power structures in his book The Powers That Be. The story that the rulers of domination societies told each other and their subordinates is what we today might call the Myth of Redemptive Violence. It enshrines the belief that violence saves, that war brings peace, that might makes right. It is one of the oldest continuously repeated stories in the world. Wink argues that it is the dominant religion in the world, found in everything from children's cartoons to ancient myths from Syria, Phoenicia, Egypt, Greece, Rome, Germany, Ireland, India, and China. Much of modern entertainment relies on this myth, including spy thrillers, westerns, cop shows, and combat programs. According to the myth, life is combat, any form of order is preferable to chaos, and conquest is thereby justified as an ideology. See also Joseph Campbell Carl Jung Richard Dawkins René Girard Liberation Theology Dominator culture Tale of Two Brothers References External links The Powers That Be: Theology for a New Millennium. New York: Doubleday Publishing, 1999. Facing the Myth of Redemptive Violence article by Walter Wink, August 2006. Narratology Violence
Myth of redemptive violence
[ "Biology" ]
332
[ "Behavior", "Aggression", "Human behavior", "Violence" ]
1,499,722
https://en.wikipedia.org/wiki/Instant%20replay
Instant replay or action replay is a video reproduction of something that recently occurred, both shot and broadcast live. After being shown live, the video is replayed so viewers can see it again and analyze what just happened. Sports—such as American football, association football, badminton, cricket, and tennis—allow officiating calls to be overturned after a play review. Instant replay is most commonly used in sports but is also used in other fields of live TV. While the first near-instant replay system was developed and used in Canada, the first instant replay was developed and deployed in the United States. Apart from live-action sports, instant replay is also used to cover large pageants or processions involving prominent dignitaries (e.g., monarchs, religious leaders such as the Catholic Pope, revolutionary leaders with mass appeal), political debates, legal proceedings (e.g., the O.J. Simpson murder case), royal weddings, red carpet events at significant award ceremonies (e.g., the Oscars), grandiose opening ceremonies (e.g., the 2022 Winter Olympics opening ceremony), or live feeds of acts of terrorism in progress. Instant replay is used because the events are too large to cover from a single camera angle or too fast-moving to capture all the nuance on the first viewing. In media studies, the timing and length of the replay clips, as well as the selection of camera angles, are forms of editorial content that have a large impact on how the audience perceives the events covered. Because of the origin of television as a broadcast technology, a "channel" of coverage is traditionally a single video feed consumed in the same way by all viewers. In the age of streaming media, live current events can be accessed by the final viewer with multiple streams of the same content playing concurrently in different windows or on various devices, often with direct end-user control over rewinding to a past moment, as well as an ability to select accelerated, slow-motion, or stop-action replay speed. History During a 1955 Hockey Night in Canada broadcast on CBC Television, producer George Retzlaff used a "wet-film" (kinescope) replay, which aired several minutes later. Videotape was introduced in 1956 with the Ampex Quadruplex system. However, it could not display slow motion, instant replay, or freeze-frames, and it was not easy to rewind and set index points. The end of the March 24, 1962, boxing match between Benny Paret and Emile Griffith was reviewed a few minutes after the bout ended, in slow motion, by Griffith and commentator Don Dunphy. In hindsight, this has been cited as the first known use of slow-motion replay in television history. CBS Sports Director Tony Verna invented a system that enabled a standard videotape machine to replay instantly; it debuted on December 7, 1963, during the network's coverage of the US military's Army–Navy Game. The instant replay machine was bulky and heavy. After technical hitches, the only replay broadcast was Rollie Stichweh's touchdown. It was replayed at the original speed, with commentator Lindsey Nelson advising viewers, "Ladies and gentlemen, Army did not score again!" The problem with older technology was finding the desired starting point; Verna's system activated audio tones as the exciting events unfolded, which technicians could hear during the rewinding process. CBS tried out replay from analog disk storage in 1965, and the Ampex HS-100, which had a 30-second capacity and freeze-frame capability, was commercialized in 1967.
Instant replay has been credited as a primary factor in the rise of televised American football, although it was popular on television even before then. Typically, one camera was set up to show the overall "live" action; other cameras, linked to a separate videotape machine, framed close-ups of key players. Within a few seconds of a crucial play, the videotape machine would replay the action from various close-up angles in slow motion. Before instant replay, it was almost impossible to portray the essence of an American football game on television. Viewers struggled to assimilate the action from a wide shot of the field on a small black-and-white television screen. However, as Erik Barnouw says in his book Tube of Plenty: The Evolution of American Television, "With replay technology, brutal collisions became ballets, and end runs and forward passes became miracles of human coordination." Thanks largely to instant replay, televised football became evening entertainment. ABC-TV's Monday Night Football perfected it, and it was enjoyed by a wide audience. Marshall McLuhan, the noted communication theorist, famously said that any new medium contains all prior media. McLuhan gave Tony Verna's invention of instant replay as a good example. "Until the advent of the instant replay, televised football had served simply as a substitute for physically attending the game; the advent of instant replay – which is possible only with the television – marks a post-convergent moment in the medium of television." In sports production for television During the live television transmission of sports events, instant replay is often used to show again a passage of play that was especially important or remarkable, or that was unclear at first viewing. Replays are typically shown during a break or lull in the action; in modern broadcasts, this is at the next break in play, although older systems were sometimes less instant. The replay may be slow-motion or feature shots from multiple camera angles. With their advanced technology, video servers have allowed for more complex replays, such as freeze frame, frame-by-frame review, replay at variable speeds, overlaying of virtual graphics, and instant analysis tools such as ball speed or immediate distance calculation. Sports commentators analyze the replay footage when it is being played rather than describing the concurrent live action. Instant replays are used today in broadcasting extreme sports, where the speed of the action is too high to be easily interpreted by the naked eye. Broadcasters use combinations of advanced technologies such as video servers and high-speed cameras recording at up to several thousand frames per second. Sports production facilities often dedicate one or more cameras to cover star players or key players likely to make a big play in a specific context (e.g., on last down and long in North American football, production crews will often isolate a wide receiver with sure hands in a crowd and/or superior foot speed). These cameras are sometimes called isolation, isolated, or iso-cams for short. Production equipment EVS Broadcast Equipment is a leading manufacturer of replay production servers used by major broadcasters for large events such as the FIFA World Cup, Olympics, Super Bowl, MLB Playoffs, and NBA Playoffs. A 2019 Sports Video Group survey revealed that 213 of 257 HD mobile production trucks were using some form of EVS replay gear.
Evertz Microsystems' DreamCatcher replay system is also widely used by college and pro sports clubs, including teams in the NBA, MLB, and NHL. Use by officials Some sports organizations allow referees or other officials to consult replay footage before making or revising a decision about an unclear or dubious play; this is variously called video assistant referee (VAR), video referee, video umpire, instant replay official, television match official, third umpire, or challenge. Other organizations allow video evidence only after the end of the contest, for example, to penalize a player for misconduct not noticed by the officials during play. The role of the video referee differs; often, they can only be called upon to adjudicate on specific events. When instant replay does not provide conclusive proof, rules may specify whether the original call stands or whether a particular default ruling must be made (most usually no score). Leagues using instant replay in official decision-making include the National Hockey League, National Football League, Canadian Football League, National Basketball Association, and Major League Baseball. It is also used internationally in field hockey and rugby union. Since 2017, some association football competitions have employed a "Video Assistant Referee" (aka "VAR"). Due to the cost of television cameras and other equipment needed for a video referee to function, most sports only employ them at a professional or top-class level. Baseball In Major League Baseball, instant replay has been introduced to address "boundary calls," which include questions on whether a hit should be considered a home run (HR). Among reviewable plays are Fair Ball-HR, Foul Ball, Ball Clearing Wall-HR, Ball Staying in Play-Live Ball, Ball Leaving Field of Play-HR, and Ball or Player interfered with by spectators (called Spectator Interference). The latest MLB collective bargaining agreement expands instant replay to include fair/foul calls along the foul lines and whether a ball was caught for an out or trapped against the ground or wall. It expands interference calls to all walls regardless of whether they are "boundary calls" or not. In Little League Baseball, instant replay was initially adopted for the Little League World Series only but was later expanded to include the qualifying regional tournaments. It covers all "boundary call" plays reviewable at the Major League level and adds review of plays involving force outs, tag plays on the base paths, hit batters, and defensive appeals regarding whether a runner missed touching a base. Basketball In NBA basketball, the officials must watch an instant replay of a potential buzzer beater to determine if the shot was released before time expired. Since 2002, the NBA has mandated the installation of LED light strips on both the backboard and the scorer's table that illuminate when time expires, to assist with any potential review. Instant replay first came to the NBA in the 2002–03 season. In Game 4 of the 2002 Western Conference Finals, Los Angeles Lakers forward Samaki Walker made a three-point field goal from half-court at the end of the second quarter.
However, the replay showed that Walker's shot was late and the ball was still in his hand when the clock expired. The use of instant replay was instituted afterward. Beginning with the 2007–08 season, replay can also be used to determine whether players should be ejected from contests involving brawls or flagrant fouls. Since the 2008–09 season, replay may also be used to correctly determine whether a scored field goal is worth two or three points. It may also be used to determine the correct number of free throws to award after a foul on a missed field goal attempt. It may also be used in cases where the game clock malfunctions and play continues, to decide how much time to take off the clock. In 2014, the NBA consolidated its replay work in a remote instant replay center to support officials in multiple games. In college basketball, the same procedure may also be used to determine if a shot was released before time expired in either half or an overtime period. In addition, NCAA rules allow the officials to use instant replay to determine if a field goal is worth two or three points, which player is to attempt a free throw, whether a fight occurred, and who participated in a fight. The officials may also check if the shot was made before the expiration of the shot clock, but only when such a situation occurs at the end of a half or an overtime period. Such rules have required the NCAA to write new rules stating that, when looking at instant replay video, the zeros on the clock, not the horn or red light, determine the end of the game. In Italy, host broadcaster Sky agreed with Serie A to adopt instant replay for special tournaments and playoff games, and in 2005, for the entire season. Instant replay would be used automatically in situations similar to the NCAA, but coaches, as in the NFL, have one coach's challenge, which may be used to dispute a two- or three-point shot call. Officials may determine who last touched the ball in out-of-bounds or back-court violations. The adoption of instant replay was crucial in the 2005 Serie A championship between Armani Jeans Milano and Climamio Bologna. Bologna led the best-of-five series, 2–1, with Game 4 in Milan; with the home team leading 65–64, Climamio's Ruben Douglas connected on a three-point basket at the end of the game to win the Serie A championship. With 12,000 fans in the arena and the fate of the series resting on their call, the officials watched replays of the shot before determining whether it was valid. EuroLeague Basketball adopted instant replay for the 2006 EuroLeague Final Four. It changed its rules so that the lights on the backboard, not the horn, end a period, which assists with instant replay. On April 6, 2006, FIBA announced that instant replay for last-second shots would be legal in its competitions: "The referee may use technical equipment to determine whether the ball has or has not left the player's hand(s) within the playing time on a last shot made at the end of each period or extra period." In 2019, FIBA updated its IRS (Instant Replay System) manual to summarize the accepted workflows and methods for video review. Before the beginning of the 2013-2014 NBA season, new instant replay rules were put into effect, allowing instant replay to be used on block/charge plays and to determine whether an off-ball foul occurred before or after a shooting motion began on a successful shot attempt, or before or after the ball was released on a throw-in. Officials also began to use instant replay to determine correct penalties for flagrant fouls. Cricket Cricket also uses instant replay.
It is used for run-outs, stumpings, doubtful catches, and boundary decisions, such as whether the ball cleared the boundary on the full for a six or fell short of it for a four. The International Cricket Council decided to trial a referral system during the Indian tour of Sri Lanka through late July and August 2008. This new referral system allows players to seek reviews, by the third umpire, of decisions by the on-field umpires on whether or not a batsman has been dismissed. Each team can make two unsuccessful requests per innings, which must be made within a few seconds of the ball becoming dead; once made, the requests cannot be withdrawn. Only the batsman involved in a dismissal can ask for a review of an "out" decision; for a "not out" decision, only the captain or acting captain of the fielding team may do so. Players can consult on-field teammates in both cases, but signals from off the field are not permitted. A player makes a review request with a 'T' sign; the umpire will then consult the TV umpire, who will review TV coverage of the incident before relaying back fact-based information. The field umpire can either reverse his decision or stand by it; he indicates "out" with a raised finger and "not out" by crossing his hands horizontally from side to side in front of and above his waist three times. The TV umpire can use regular slow-motion or high-speed camera angles (usually called ultra-motion) or super-slow replays, the mat, sound from the stump mics, and approved ball tracking technology; the latter refers to Hawk-Eye technology that only shows the TV umpire where the ball pitched and where it hit the batsman's leg, and it is not to be used for predicting the height or the direction of the ball. Snicko and Hot Spot can also be used. Fencing Video refereeing is compulsory at World Championships, Grand Prix competitions, and the Olympic Games. It is used when the referee cannot decide whether a touch is to be awarded, at the request of a fencer (although only two incorrect video appeals are allowed per fencer in individual competitions), or when the score is tied at the deciding point and both lights turn on. An assistant official, a "video referee," watches the live match and helps the referee decide through a slow-motion replay on a monitor close to the piste. It is used to determine the right of way in foil and sabre. To appeal, a fencer gestures a rectangle (representing the monitor) to the referee. In individual matches, a fencer who has appealed twice incorrectly may not appeal again. Association football In association football, FIFA did not formally permit video evidence during matches until the 2018 FIFA World Cup, although it had been on trial in various competitions beforehand, and it was permitted for subsequent disciplinary sanctions. The 1970 meeting of the International Football Association Board "agreed to request the television authorities to refrain from any slow-motion play-back which reflected, or might reflect, adversely on any decision of the referee". In 2005, Urs Linsi, general secretary of FIFA, said: Players, coaches and referees all make mistakes. It's part of the game. It's what I would call the "first match". What you see after the fact on video simply doesn't come into it; that's the "second match", if you like. Video evidence is useful for disciplinary sanctions, but that's all. As we've always emphasised at FIFA, football's human element must be retained. It mirrors life itself and we have to protect it.
There have been allegations that referees had made or changed decisions on the advice of a fourth official who had seen the in-stadium replay of an incident. This was denied by FIFA in relation to the Zidane headbutt of Materazzi in the 2006 World Cup final, and in relation to the 2009 FIFA Confederations Cup match between Brazil and Egypt, in which Howard Webb initially signaled for a corner kick but then awarded a penalty kick. It has been said that instant replay is needed given the difficulty of tracking the activities of 22 players on such a large field. FIFA officials approached researchers at the University of Glasgow in Scotland for help, but came up with nothing that could satisfy the governing body's stringent requirements. Opponents of instant replay, like former FIFA President Sepp Blatter, argue that refereeing mistakes add to the "fascination and popularity of football." It has been proposed that instant replay be limited to use in penalty incidents, fouls which lead to bookings or red cards, and whether the ball has crossed the goal line, since those events are more likely than others to be game-changing. In 2007, FIFA authorized tests of two systems, one involving an implanted chip in the ball and the other using a modified version of tennis's Hawk-Eye system, to assist referees in deciding whether a ball had crossed over the goal line. The following year, however, the IFAB and FIFA halted testing of all goal-line technology, fearing that its success would lead to its possible expansion to other parts of the game. Sepp Blatter claimed the technologies were flawed and too expensive to be implemented on a widespread basis, adding, "Let it be as it is and let's leave (soccer) with errors. The television companies will have the right to say (the referee) was right or wrong, but still the referee makes the decision — a man, not a machine". This sudden change of course surprised and angered Paul Hawkins, the inventor of the Hawk-Eye system, who had invested a great deal of money into adapting the Hawk-Eye technology to football. In 2009, Hawkins sent an open letter to Blatter refuting the FIFA president's assertion that the Hawk-Eye goal line technology was flawed and arguing that Hawk-Eye met all of the criteria established by the IFAB for a suitable goal line technology system. The controversy over goal line technology was re-ignited in 2009 after Brazil had a potential equalizing goal disallowed during the 2009 Confederations Cup Final, and during the 2010 FIFA World Cup after a shot by England's Frank Lampard off the underside of the crossbar during a 4–1 defeat against Germany was not ruled a goal, despite replays clearly showing that the ball landed about 60 centimeters over the line. In July 2012, the International Football Association Board voted unanimously to officially amend the Laws of the Game to permit (but not require) goal-line technology. The technology was used at the 2014 FIFA World Cup. In April 2016, it was announced that Serie A had been selected by the International Football Association Board to test video replays, beginning with private (offline) trials during the 2016–17 season, followed by a live pilot phase, with replay assistance implemented in the 2017–18 season. On the decision, FIGC President Carlo Tavecchio said, "We were among the first supporters of using technology on the pitch and we believe we have everything required to offer our contribution to this important experiment".
In September 2016, the video review system known as the Video Assistant Referee (VAR) was first used in an international friendly between Italy and France. The system was implemented at a FIFA World Cup for the first time at the 2018 FIFA World Cup. Major League Soccer in the United States introduced VAR in competitive matches during its 2017 season, after the 2017 MLS All-Star Game on 2 August 2017. Gridiron football codes In American and Canadian football, instant replay can take place in the event of a close or otherwise controversial call, either at the request of a team's head coach (with limitations) or the officials themselves. There are restrictions on what types of plays can be reviewed. In general, most penalty calls or lack thereof cannot be reviewed, nor can a play that is whistled dead by the officials before the play could come to its rightful end. American and Canadian football leagues vary in their application and use of instant replay review. In the National Football League, each coach is allowed two opportunities per game to make a coach's challenge, and receives a third challenge if both of the original two challenges were successful. A challenge can only be made on certain reviewable calls, on plays that begin before the two-minute warning, and only when a team has at least one timeout remaining in the half. The Canadian Football League uses rules similar to the NFL's, except that the game has a three-minute warning near the end of each half instead of two. In NCAA football, each team has only one challenge per game, and gets a second challenge if the first one is successful. In all three rules codes, the challenging team is charged with a timeout if its challenge is unsuccessful. Before 2019, U.S. high school rules prohibited the use of replay review, even if the venue had equipment that allowed the practice; that year, the National Federation of State High School Associations (NFHS) gave its member associations the option to allow its use in postseason games only. In Texas, where high schools have always based their rules on those of the NCAA, the University Interscholastic League, which governs public-school sports, allows its use only in state championship finals. The main governing body for Texas private schools, the Texas Association of Private and Parochial Schools, follows pre-2019 NFHS practice by banning replay review. Field hockey In field hockey, the International Hockey Federation allows the match umpire to request the opinion of a video umpire as to whether or not a goal has been validly scored, and whether there was a violation in the build-up to a goal. The video umpire can advise on whether the ball crossed the line and whether there was a violation. Ordinarily, teams are not allowed to make such a request or to press the match umpire to do so. On a trial basis, the 2009 Men's Champions Trophy allowed for "team referral" by each team captain, to query a goal, penalty stroke, or penalty corner decision. A team retained the right to a referral as long as its previous referrals were upheld. Ice hockey The video goal judge reviews replays of disputed goals. As the referee does not have access to television monitors, the video goal judge's decision in disputed goals is taken as final.
In the NHL, goals may only be reviewed in the following situations: the puck crossing the goal line completely and before time expired, the puck in the net prior to the goal frame being dislodged, the puck being directed into the net by hand or foot, the puck deflected into the net off an official, and the puck deflected into the goal by a high stick (stick above the height of the crossbar) by an attacking player. The video goal judge also reviews replays to establish the correct time on the game clock. All NHL goals and time remaining on the game clock are subject to review, and although most arenas have a video goal judge, often officials in the Situation Room (also known as the "War Room") at the NHL office in Toronto make the final decision. Review challenges Beginning in the 2015-16 NHL season, instant replay reviews were expanded to include a coach's challenge. Each coach receives one challenge per game, which requires the use of a timeout. Coaches may only challenge whether a goal should have been disallowed because of goaltender interference or an offside, or whether a goal that was disallowed because of goaltender interference should be allowed instead. The challenging team retains its timeout and its challenge after every goaltender interference call that has been overturned. Two outcomes are possible when the original call is upheld (that is, when the challenge is unsuccessful): if an offside challenge is unsuccessful, the challenging team receives a minor penalty for delay of game; if a goaltender interference challenge is unsuccessful, the challenging team loses its timeout. Challenges are not allowed during the final minute of regulation or at any point during overtime. In these situations, officials in the Situation Room review all instances where the puck entered the net and then determine the final ruling. However, for reviews that take place during coach's challenges, the on-ice officials determine the final ruling. Junior Hockey Similar to the National Hockey League, junior hockey leagues, such as the CJHL, also use instant replay, with a Video Goal Judge to initiate and be responsible for the review of all goals. The Video Goal Judge may also be asked to verify the time during a game. Referees utilize instant replay for on-ice review of Major Penalties as well as Match Penalties, in which they look to confirm or modify their original call on the ice. Video review on a play involving a goal must be done immediately after the play has concluded, and before the puck is dropped again. On-ice calls cannot be overturned once the puck is dropped again and play has resumed. Motorsports In international motorsport championships, race stewards often use instant replay to decide whether to penalize competitors for avoidable contact or pitlane infractions. NASCAR utilizes instant replay to supplement its electronic scoring system. Video replays are used to review rules infractions and scoring disputes. Video replay supplements electronic scoring at the finish line (particularly for the race winner) in "photo finish" situations. Video replay supplements electronic scoring to determine the final positions when a race ends under the safety car on either the last lap (both in regulation and in a green-white-checker finish) or when it is evident the race will reach the expiration of time without a subsequent restart. Starting in 2022, if the safety car situation subsequently results in the race being concluded because of weather, a curfew, or another situation, video replay can be used to determine the final positions.
Previously, if a race was prematurely ended, the result was determined by the last completed scoring loop. This change came as a result of the October 2021 Talladega Superspeedway Sparks 300 Xfinity Series race that was curtailed because of darkness, where NASCAR determined results based on the last completed scoring loop. Video replay is used to determine if a car has crossed the pit entrance before pit road was closed when a safety car situation begins. It also determines if drivers are following pit road speed limits. Video replay supplements electronic scoring to determine the positions in which cars exit the pits (during the safety car). IndyCar also utilizes instant replay for similar reasons. The most notable use of replay in recent years occurred during the 2008 Peak 300 at Chicagoland Speedway. On the final lap, Scott Dixon and Hélio Castroneves crossed the finish line side by side, with computer scoring showing Dixon the winner by a margin of 0.0010 seconds. However, video replay evidence clearly showed that the nose of Castroneves' car touched the line first. Castroneves was officially declared the winner by 0.0033 seconds, or 12⅛ inches, in the second-closest finish in the twelve-year history of the series (a worked conversion of this margin appears after the article text below). It was later determined that the deliberate improper installation of Dixon's scoring transponder was the source of the scoring error. Video replay was also used extensively in the aftermath of the controversial 2002 Indianapolis 500. However, fully conclusive evidence was lacking. Broadcast stations utilize replays to show viewers a crash in greater detail. Rodeo The Professional Bull Riders, beginning with the 2006–07 season, has instituted an instant replay review system. A bull rider, a fellow competitor, or a judge may request a replay review by filing a protest to the replay official within 30 seconds of any decision. Any competitor (it does not have to be the rider who is riding the bull in question, as fellow riders can observe the action and spot fouls by bull or rider) may file the complaint to the replay official by sounding a signal at the arena and explaining to the replay official why he is filing the request. The designated replay official (one of the four officials in the arena) may request different angles and/or slow motion, as well as freeze particular frames. The replay judge will use all available technology to assess the call in question and supply his ruling. This includes using his own hand-held stopwatch to time bull rides in case of a clock malfunction, as well as a graphic overlay of the official eight-second clock used in PBR competition that starts when the bull exits the bucking chute. The replay will be used to evaluate timing issues, fouls against the rider for touching the bull or ground with his free hand or using the fence to stay on the bull, or fouls by the bull, such as dragging the rider across the fence. If an appeal is successful, the decision will be overturned and there will be no charge to the individual filing the protest. If the appeal is unsuccessful, a $500 charge is levied against the protester, which is donated to PBR charities such as the Western Sports Foundation to assist injured bull riders and western sports athletes. Rugby league Since being introduced by the Super League in 1996, video referees have been adopted in Australasia's National Rugby League and in international competition as well. In rugby league the video referee can be called upon by the match official to determine the outcome of a possible try.
The "video ref" can make judgements on knock-ons, offside, obstructions, hold-ups and whether or not a player has gone dead, but cannot rule on a forward pass. If a forward pass has gone un-noticed by the on-field officials it must be disregarded by the video ref, as such judgements cannot reliably be made due to camera angle effects. Rugby union Use of video referee by referees was introduced to rugby union in 2001. The laws of the game allow for "an official who uses technological devices" to be consulted by the referee in decisions relating to scoring a try or a kick at goal. The decision to call on the video referee (now called "Television Match Official (TMO)") is made by the referee following discussion with the assistant referees/touch judges and cannot be instigated by the players or coaches of either team. When concerning an act of foul play, the TMO may alert the referee and initiate the replay process. In a possible try/no try situation, the referee shall signal his initial on-field decision (the "soft signal") and request the TMO to review all available footage and provide "advice and recommendations" to the on field referee. The referee should only change their decision where there is "clear and obvious" evidence that it was incorrect. In stadia with screens, the TMO may show footage directly to the referee. As per Law 6.5 A of the Laws of Rugby Union, "The referee is the sole judge of fact and of law during a match". Once a final decision is made, it is to be signaled by the referee. Tennis In tennis, systems such as Hawk-Eye and MacCAM calculate the trajectory of the ball by processing the input of several video cameras. They can play a computer rendering of the path and determine whether the ball landed in or out. Players can appeal to have the system's calculation used to override a disputed call by the umpire. In March 2008, the International Tennis Federation, Association of Tennis Professionals, Women's Tennis Association and Grand Slam Committee agreed unified challenge rules: a player can make up to three unsuccessful challenges per set, and a fourth in a tie-break. Television broadcasts may use the footage to replay points even when not challenged by a player. See also Photo finish Multicam (LSM), remote controller used for instant replays with XT3 servers. References Telecommunications-related introductions in 1955 Canadian inventions Sports television technology Sports officiating technology Rules of basketball National Hockey League on television Major League Baseball on television College football on television Slow motion Film and video technology
Instant replay
[ "Physics" ]
6,818
[ "Spacetime", "Slow motion", "Physical quantities", "Time" ]
1,499,906
https://en.wikipedia.org/wiki/Glass%20electrode
A glass electrode is a type of ion-selective electrode made of a doped glass membrane that is sensitive to a specific ion. The most common application of ion-selective glass electrodes is for the measurement of pH. The pH electrode is an example of a glass electrode that is sensitive to hydrogen ions. Glass electrodes play an important part in the instrumentation for chemical analysis and physicochemical studies. The voltage of the glass electrode, relative to some reference value, is sensitive to changes in the activity of a certain type of ion. History The first studies of glass electrodes (GE) found that different glasses have different sensitivities to changes in the medium's acidity (pH), due to the effects of alkali metal ions. In 1906, M. Cremer, the father of Erika Cremer, determined that the electric potential that arises between parts of a fluid located on opposite sides of a glass membrane is proportional to the acid concentration (hydrogen ion concentration). In 1909, S. P. L. Sørensen introduced the concept of pH, and in the same year F. Haber and Z. Klemensiewicz reported results of their research on the glass electrode in The Society of Chemistry in Karlsruhe. In 1922, W. S. Hughes showed that the alkali-silicate glass electrodes are similar to hydrogen electrodes, reversible with respect to H+. In 1925, P. M. Tookey Kerridge developed the first glass electrode for analysis of blood samples and highlighted some of the practical problems with the equipment, such as the high resistance of glass (50–150 MΩ). During her PhD, Kerridge developed a glass electrode designed to measure small volumes of solution. Her clever and careful design was pioneering work in the making of glass electrodes. Applications Glass electrodes are commonly used for pH measurements. There are also specialized ion-sensitive glass electrodes used for the determination of the concentration of lithium, sodium, ammonium, and other ions. Glass electrodes find a wide diversity of uses in a large range of applications, including research labs, control of industrial processes, analysis of foods and cosmetics, monitoring of environmental pollution, and soil acidity measurements. Micro-electrodes are specifically designed for pH measurements on very small volumes of fluid, for direct measurements in geochemical micro-environments, or for biochemical studies such as determining the electrical potential of a cell membrane. Heavy-duty electrodes withstanding several tens of bars of hydraulic pressure also allow measurements in water wells in deep aquifers, or direct in situ determination of the pH of pore water in deep clay formations. For long-term in situ measurements, it is critical to minimize the KCl leak from the reference electrode compartment, and to use glycerol-free electrodes to avoid fuelling microbial growth and to prevent unexpected but severe perturbations related to bacterial activity (pH decrease due to sulfate-reducing bacteria, or even methanogenic bacteria). Types All commercial electrodes respond to singly charged ions, such as H+, Na+, Ag+. The most common glass electrode is the pH electrode. Only a few chalcogenide glass electrodes are presently known to be sensitive to doubly charged ions, such as Pb2+, Cd2+, and some other divalent cations.
There are two main types of glass-forming systems: the most common, a silicate matrix based on an amorphous molecular network of silicon dioxide (SiO2, the network former) with additions of other metal oxides (network modifiers), such as those of Na, K, Li, Al, B, or Ca; and a less used one, a chalcogenide matrix based on a molecular network of AsS, AsSe, or AsTe. Interfering ions Because of the ion-exchange nature of the glass membrane, it is possible for some other ions to concurrently interact with ion-exchange sites of the glass and distort the linear dependence of the measured electrode potential on pH or other electrode functions. In some cases, it is possible to change the electrode function from one ion to another. For example, some silicate pNa electrodes can be changed to pAg function by soaking in a silver salt solution. Interference effects are commonly described by the semi-empirical Nicolsky-Shultz-Eisenman equation (also known as the Nikolsky-Shultz-Eisenman equation), an extension of the Nernst equation. It is given by
$E = E^0 + \frac{RT}{z_i F}\,\ln\!\left[a_i + \sum_{j} k_{ij}\,(a_j)^{z_i/z_j}\right]$
where E is the electromotive force (emf), E0 the standard electrode potential, R the gas constant, T the absolute temperature, z the ionic valency including the sign, a the activity, i the ion of interest, j the interfering ions, and kij the selectivity coefficient quantifying the ion-exchange equilibrium between the ions i and j. The smaller the selectivity coefficient, the less the interference by j. To see the interfering effect of Na+ on a pH electrode, set zi = zj = 1:
$E = E^0 + \frac{RT}{F}\,\ln\!\left(a_{\mathrm{H^+}} + k_{\mathrm{H^+,Na^+}}\,a_{\mathrm{Na^+}}\right)$
(a numerical illustration of this alkali error is given below, after the article text). Range of a pH glass electrode The pH range at constant concentration can be divided into 3 parts: Undisturbed electrode function, where the potential depends linearly on pH,
$E = E^0 - \frac{2.303\,RT}{F}\,\mathrm{pH},$
realizing an ion-selective electrode for hydronium, where F is Faraday's constant (see Nernst equation). Alkali error range – at low concentrations of hydrogen ions (high values of pH), the contributions of interfering alkali metal ions (such as Li+, Na+, K+) are comparable with that of the hydrogen ions. In this situation, the dependence of the potential on pH becomes non-linear. The effect is usually noticeable at pH > 12, and at concentrations of lithium or sodium ions of 0.1 mol/L or more. Acidic error range – at a very high concentration of hydrogen ions (low values of pH), the dependence of the electrode on pH becomes non-linear, and the influence of the anions in the solution also becomes noticeable. These effects usually become noticeable at pH < -1. Special electrodes exist for working in extreme pH ranges. Construction A typical modern pH probe is a combination electrode, which combines both the glass and reference electrodes into one body. The combination electrode consists of the following parts: (1) a sensing part of the electrode, a bulb made from a specific glass; (2) an internal electrode, usually a silver chloride electrode or calomel electrode; (3) an internal solution, usually a pH = 7 buffered solution of 0.1 mol/L KCl for pH electrodes or 0.1 mol/L MCl for pM electrodes (when using the silver chloride electrode, a small amount of AgCl can precipitate inside the glass electrode); (4) a reference electrode, usually the same type as (2); (5) a reference internal solution, usually 3.0 mol/L KCl; (6) a junction with the studied solution, usually made from ceramics or a capillary with asbestos or quartz fiber; (7) the body of the electrode, made from non-conductive glass or plastics. The bottom of a pH electrode balloons out into a round thin glass bulb. The pH electrode is best thought of as a tube within a tube.
The inner tube contains an unchanging 1×10−7 mol/L HCl solution. Also inside the inner tube is the cathode terminus of the reference probe. The anodic terminus wraps itself around the outside of the inner tube and ends with the same sort of reference probe as was on the inside of the inner tube. It is filled with a reference solution of KCl and has contact with the solution on the outside of the pH probe by way of a porous plug that serves as a salt bridge. Galvanic cell schematic representation This section describes the functioning of two distinct types of electrodes as one unit which combines both the glass electrode and the reference electrode into one body. It deserves some explanation. This device is essentially a galvanic cell that can be schematically represented as:
Internal electrode | Internal buffer solution || Test solution || Reference solution | Reference electrode
Ag(s) | AgCl(s) | 0.1 M KCl(aq), 1×10−7 M H+ solution || Test solution || KCl(aq) | AgCl(s) | Ag(s)
The double "pipe symbols" (||) indicate diffusive barriers – the glass membrane and the ceramic junction. The barriers prevent (glass membrane), or slow down (ceramic junction), the mixing of the different solutions. In this schematic representation of the galvanic cell, one will note the symmetry between the left and the right members as seen from the center of the row occupied by the "Test solution" (the solution whose pH must be measured). In other words, the glass membrane and the ceramic junction both occupy the same relative places in each electrode. By using the same electrodes on the left and right, any potentials generated at the interfaces cancel each other (in principle), resulting in the system voltage being dependent only on the interaction of the glass membrane and the test solution. The measuring part of the electrode, the glass bulb on the bottom, is coated both inside and out with a ~10 nm layer of a hydrated gel. These two layers are separated by a layer of dry glass. The silica glass structure (that is, the conformation of its atomic structure) is shaped so that it allows Na+ ions some mobility. The metal cations (Na+) in the hydrated gel diffuse out of the glass and into solution, while H+ from solution can diffuse into the hydrated gel. It is the hydrated gel which makes the pH electrode an ion-selective electrode. H+ does not cross through the glass membrane of the pH electrode; it is the Na+ which crosses and leads to a change in free energy. When an ion diffuses from a region of one activity to a region of a different activity, there is a free energy change, and this is what the pH meter actually measures. The hydrated gel membrane is connected by Na+ transport, and thus the concentration of H+ on the outside of the membrane is 'relayed' to the inside of the membrane by Na+. All glass pH electrodes have extremely high electric resistance, from 50 to 500 MΩ. Therefore, the glass electrode can be used only with a high input-impedance measuring device like a pH meter, or, more generically, a high input-impedance voltmeter, which is called an electrometer. Limitations The glass electrode has some inherent limitations due to the nature of its construction. Acid and alkaline errors are discussed above. An important limitation results from the existence of asymmetry potentials that are present at glass/liquid interfaces. The existence of these phenomena means that glass electrodes must always be calibrated before use; a common method of calibration involves the use of standard buffer solutions.
Also, there is a slow deterioration due to diffusion into and out of the internal solution. These effects are masked when the electrode is calibrated against buffer solutions, but deviations from ideal response are easily observed by means of a Gran plot. Typically, the slope of the electrode response decreases over a period of months. Storage Between measurements, glass and membrane electrodes should be kept in a solution of their own ion. It is necessary to prevent the glass membrane from drying out because the performance is dependent on the existence of a hydrated layer, which forms slowly. See also Potentiometry Ion-selective electrodes ISFET pH electrode Chalcogenide glass Quinhydrone electrode Solid State Electrode References Further reading Nikol'skii, E. P., Schul'tz, M. M., et al. (1963). Vestn. Leningr. Univ., Ser. Fiz. i Khim., 18, No. 4, 73–186 (this series of articles summarizes Russian works on the effect of varying the glass composition on electrode properties and chemical stability of a great variety of glasses). External links pH electrode practical/theoretical information Titration with the glass electrode and pH calculation - freeware Electrodes Glass applications
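The alkali-error behavior described in the "Range of a pH glass electrode" section follows directly from the Nikolsky-Shultz-Eisenman equation given above. A minimal Python sketch of that calculation, assuming an arbitrary standard potential and an illustrative selectivity coefficient for Na+ (neither value comes from the article):

import math

# Nikolsky-Shultz-Eisenman potential for a pH glass electrode with Na+
# interference. E0 and k_H_Na are illustrative assumptions.
R, T, F = 8.314, 298.15, 96485.0   # J/(mol*K), K, C/mol
E0 = 0.0                           # standard potential, V (arbitrary)
k_H_Na = 1e-11                     # assumed selectivity coefficient for Na+

def electrode_potential(pH, a_Na):
    a_H = 10.0 ** (-pH)
    return E0 + (R * T / F) * math.log(a_H + k_H_Na * a_Na)

for pH in (7.0, 12.0, 13.0):
    ideal = electrode_potential(pH, a_Na=0.0)
    real = electrode_potential(pH, a_Na=0.1)   # 0.1 mol/L Na+
    err_mV = (real - ideal) * 1000.0
    print(f"pH {pH:4.1f}: alkali error = {err_mV:+.2f} mV")

With these assumed values, the error is negligible at pH 7 but grows to tens of millivolts above pH 12, reproducing the onset condition stated in the text (pH > 12 at 0.1 mol/L sodium).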
Glass electrode
[ "Chemistry" ]
2,507
[ "Electrochemistry", "Electrodes" ]
2,145,741
https://en.wikipedia.org/wiki/Jan%20Burgers
Johannes (Jan) Martinus Burgers (January 13, 1895 – June 7, 1981) was a Dutch physicist and the brother of the physicist Wilhelm G. Burgers. Burgers studied in Leiden under Paul Ehrenfest, where he obtained his PhD in 1918. He is known for Burgers' equation, the Burgers vector in dislocation theory, and the Burgers material in viscoelasticity. Jan Burgers was one of the co-founders of the International Union of Theoretical and Applied Mechanics (IUTAM) in 1946, and was its secretary-general from 1946 until 1952. In 1931 he became a member of the Royal Netherlands Academy of Arts and Sciences; in 1955 he became a foreign member. Early life and education Burgers was born in Arnhem, Netherlands. There he attended both primary and secondary school. He attended Leiden University from 1914 until 1917. Burgers became a Doctor of Mathematical and Physical Sciences in 1918, writing a thesis entitled "Het Atoommodel van Rutherford-Bohr" (The Model of the Atom according to Rutherford and Bohr). Career Jan Burgers took his first position out of graduate school as Conservator at the Physical Laboratory of the Teyler's Foundation. From September 1918 until October 1955, Dr. Burgers was professor of Aerodynamics and Hydrodynamics at the Delft University of Technology. He was also secretary of the Department of Mechanical Engineering and Shipbuilding (1921-1924) and later the department's chairman (1929-1931). Burgers also worked with scientists including Theodore von Karman, L. Prandtl, R. von Mises, G.I. Taylor, W.F. Durand, and Paul Ehrenfest. Jan Burgers researched fluid dynamics, worked on the theory of turbulence, and explored what came to be known as Burgers' equation. He also studied crystallography with his brother Willy Gerard Burgers. Burgers and his wife, Anna, immigrated to the United States in 1955, where Burgers accepted a position as research professor at the Institute for Fluid Dynamics and Applied Mathematics (now the Institute for Physical Science and Technology) at the University of Maryland, College Park. Burgers continued his interest in fluid dynamics while at the University of Maryland, and was recognized for his studies in gas dynamics, plasma physics, shock waves, and related phenomena. Burgers retired from the University of Maryland in 1965. Notes References External links A.J.Q. Alkemade, Burgers, Johannes Martinus (1895–1981), in Biografisch Woordenboek van Nederland. "Johannes Martinus Burgers; 13 January 1895 to 7 June 1981", biography at the University of Maryland JM Burgers Centrum The Burgers program for fluid dynamics at the University of Maryland Oral History interview transcript with Johannes Burgers on 9 June 1962, American Institute of Physics, Niels Bohr Library & Archives, 32nd Gibbs lecture delivered by Burgers at Philadelphia, Tuesday, 20 January 1959 20th-century Dutch physicists 1895 births 1981 deaths People from Arnhem Academic staff of the Delft University of Technology Leiden University alumni Members of the Royal Netherlands Academy of Arts and Sciences ASME Medal recipients University of Maryland, College Park faculty Fluid dynamicists 20th-century American engineers Fellows of the American Physical Society
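The article mentions Burgers' equation without stating it; in one dimension, its viscous form is u_t + u·u_x = ν·u_xx. A minimal illustrative Python finite-difference sketch (grid size, viscosity, and time step are arbitrary choices for demonstration, not values associated with Burgers' own work):

import numpy as np

# Viscous Burgers' equation u_t + u u_x = nu u_xx on [0, 2*pi) with
# periodic boundaries: explicit scheme with first-order upwind
# advection (valid here since u stays positive) and central diffusion.
nx, nu, dt = 200, 0.07, 1e-4
dx = 2 * np.pi / nx
x = np.arange(nx) * dx
u = np.sin(x) + 1.5          # smooth, strictly positive initial profile

for _ in range(5000):        # advance 0.5 time units
    um = np.roll(u, 1)       # u[i-1]
    up = np.roll(u, -1)      # u[i+1]
    adv = u * (u - um) / dx                 # upwind nonlinear advection
    diff = nu * (up - 2 * u + um) / dx**2   # viscous diffusion
    u = u + dt * (diff - adv)

print(f"max(u) = {u.max():.3f}, min(u) = {u.min():.3f}")

The nonlinear advection term steepens the profile into a shock-like front, while the viscosity ν keeps it smooth, which is the balance that made the equation a standard model problem in turbulence and gas dynamics.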
Jan Burgers
[ "Chemistry" ]
674
[ "Fluid dynamicists", "Fluid dynamics" ]
2,145,763
https://en.wikipedia.org/wiki/Burgers%20material
A Burgers material is a viscoelastic material having the properties both of elasticity and viscosity. It is named after the Dutch physicist Johannes Martinus Burgers. Overview Maxwell representation Given that one Maxwell material has an elasticity $E_1$ and viscosity $\eta_1$, and the other Maxwell material has an elasticity $E_2$ and viscosity $\eta_2$, the Burgers model has the constitutive equation
$\sigma + \left(\frac{\eta_1}{E_1} + \frac{\eta_2}{E_2}\right)\dot{\sigma} + \frac{\eta_1\eta_2}{E_1 E_2}\,\ddot{\sigma} = (\eta_1 + \eta_2)\,\dot{\varepsilon} + \frac{\eta_1\eta_2\,(E_1 + E_2)}{E_1 E_2}\,\ddot{\varepsilon}$
where $\sigma$ is the stress and $\varepsilon$ is the strain. Kelvin representation Given that the Kelvin material has an elasticity $E_1$ and viscosity $\eta_1$, the spring has an elasticity $E_2$ and the dashpot has a viscosity $\eta_2$, the Burgers model has the constitutive equation
$\sigma + \left(\frac{\eta_1}{E_1} + \frac{\eta_2}{E_1} + \frac{\eta_2}{E_2}\right)\dot{\sigma} + \frac{\eta_1\eta_2}{E_1 E_2}\,\ddot{\sigma} = \eta_2\,\dot{\varepsilon} + \frac{\eta_1\eta_2}{E_1}\,\ddot{\varepsilon}$
where $\sigma$ is the stress and $\varepsilon$ is the strain. Model characteristics This model incorporates viscous flow into the standard linear solid model, giving a linearly increasing asymptote for strain under fixed loading conditions. See also Generalized Maxwell model Kelvin–Voigt material Maxwell material Standard linear solid model References External links Creep and Stress Relaxation for Four-Element Viscoelastic Solids and Liquids, Wolfram Demonstrations Project Non-Newtonian fluids
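Under a constant stress σ0 applied at t = 0, the Kelvin representation above yields the standard Burgers creep compliance J(t) = ε(t)/σ0 = 1/E2 + t/η2 + (1 − exp(−E1·t/η1))/E1, whose linear growth at large t is the viscous-flow behavior noted in the last sentence. A small Python sketch with illustrative parameter values (not reference data for any real material):

import numpy as np

# Creep compliance of the Burgers model, Kelvin representation:
# free spring E2, free dashpot eta2, Kelvin element (E1, eta1).
E1, eta1 = 5.0e6, 1.0e8      # Kelvin element: Pa, Pa*s (assumed)
E2, eta2 = 2.0e7, 5.0e9      # series spring (Pa) and dashpot (Pa*s), assumed

def creep_compliance(t):
    """Instantaneous elastic + viscous flow + retarded elastic terms."""
    return 1.0 / E2 + t / eta2 + (1.0 - np.exp(-E1 * t / eta1)) / E1

t = np.linspace(0.0, 200.0, 5)   # seconds
for ti, J in zip(t, creep_compliance(t)):
    print(f"t = {ti:6.1f} s   J(t) = {J:.3e} 1/Pa")

At t = 0 only the instantaneous spring responds (J = 1/E2); the Kelvin element then relaxes with time constant η1/E1, after which the dashpot term t/η2 dominates, giving the linearly increasing strain asymptote.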
Burgers material
[ "Physics" ]
215
[ "Materials stubs", "Materials", "Matter" ]
2,145,845
https://en.wikipedia.org/wiki/Speech%20Synthesis%20Markup%20Language
Speech Synthesis Markup Language (SSML) is an XML-based markup language for speech synthesis applications. It is a recommendation of the W3C's Voice Browser Working Group. SSML is often embedded in VoiceXML scripts to drive interactive telephony systems. However, it also may be used alone, such as for creating audio books. For desktop applications, other markup languages are popular, including Apple's embedded speech commands and Microsoft's SAPI text-to-speech (TTS) markup, also an XML language. It is also used to produce sounds via Azure Cognitive Services' Text to Speech API or when writing third-party skills for Google Assistant or Amazon Alexa. SSML is based on the Java Speech Markup Language (JSML) developed by Sun Microsystems, although the current recommendation was developed mostly by speech synthesis vendors. It covers virtually all aspects of synthesis, although some areas have been left unspecified, so each vendor accepts a different variant of the language. Also, in the absence of markup, the synthesizer is expected to do its own interpretation of the text. Example Here is an example of an SSML document: <?xml version="1.0"?> <speak xmlns="http://www.w3.org/2001/10/synthesis" xmlns:dc="http://purl.org/dc/elements/1.1/" version="1.0"> <metadata> <dc:title xml:lang="en">Telephone Menu: Level 1</dc:title> </metadata> <p> <s xml:lang="en-US"> <voice name="David" gender="male" age="25"> For English, press <emphasis>one</emphasis>. </voice> </s> <s xml:lang="es-MX"> <voice name="Miguel" gender="male" age="25"> Para español, oprima el <emphasis>dos</emphasis>. </voice> </s> </p> </speak> Features SSML specifies a fair amount of markup for prosody, which is not included in the above example. This includes markup for pitch, contour, pitch range, rate, duration, and volume. See also Pronunciation Lexicon Specification (PLS) Speech Recognition Grammar Specification (SRGS) Semantic Interpretation for Speech Recognition (SISR) SABLE speech synthesis markup language, intended to combine SSML, STML, and JSML References External links W3C SSML 1.1 Recommendation W3C SSML 1.0 Recommendation Introduction to SSML on XML.com Speech synthesis XML-based standards World Wide Web Consortium standards Markup languages 2004 introductions
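As an illustration of the prosody markup listed under Features, here is a minimal SSML 1.0 fragment using the <prosody> element; the attribute values are arbitrary examples chosen for demonstration, not recommendations from the specification:

<?xml version="1.0"?>
<speak xmlns="http://www.w3.org/2001/10/synthesis" version="1.0" xml:lang="en-US">
  <p>
    <s>
      <!-- prosody attributes cover pitch, contour, range, rate, duration, volume -->
      <prosody pitch="+10%" rate="slow" volume="loud">
        Please hold while we connect your call.
      </prosody>
      <prosody pitch="low" range="x-low" rate="80%">
        Thank you for waiting.
      </prosody>
    </s>
  </p>
</speak>

As with the telephone-menu example above, a synthesizer that does not support a given attribute is free to approximate or ignore it, which is one source of the vendor-to-vendor variation the article describes.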
Speech Synthesis Markup Language
[ "Technology" ]
582
[ "Computer standards", "XML-based standards" ]
2,145,881
https://en.wikipedia.org/wiki/Heptachlor
Heptachlor is an organochlorine compound that was used as an insecticide. Usually sold as a white or tan powder, heptachlor is one of the cyclodiene insecticides. In 1962, Rachel Carson's Silent Spring questioned the safety of heptachlor and other chlorinated insecticides. Due to its highly stable structure, heptachlor can persist in the environment for decades. In the United States, the Environmental Protection Agency has limited the sale of heptachlor products to the specific application of fire ant control in underground transformers. The amount that can be present in different foods is regulated. Synthesis Analogous to the synthesis of other cyclodienes, heptachlor is produced via the Diels-Alder reaction of hexachlorocyclopentadiene and cyclopentadiene. The resulting adduct is chlorinated, followed by treatment with hydrogen chloride in nitromethane in the presence of aluminum trichloride, or with iodine monochloride. Compared to chlordane, it is about 3–5 times more active as an insecticide, but more inert chemically, being resistant to water and caustic alkalies. Metabolism Soil microorganisms transform heptachlor by epoxidation, hydrolysis, and reduction. When the compound was incubated with a mixed culture of organisms, chlordene (heptachlor's precursor, the adduct of hexachlorocyclopentadiene and cyclopentadiene) formed, which was further metabolized to chlordene epoxide. Other metabolites include 1-hydroxychlordene, 1-hydroxy-2,3-epoxychlordene, and heptachlor epoxide. Soil microorganisms hydrolyze heptachlor to give ketochlordene. Rats metabolize heptachlor to the epoxide 1-exo-1-hydroxyheptachlor epoxide and 1,2-dihydrooxydihydrochlordene. When heptachlor epoxide was incubated with microsomal preparations from the liver of pigs and from houseflies, the products found were the diol and 1-hydroxy-2,3-epoxychlordene. The metabolic scheme in rats shows two pathways converging on the same metabolite. The first involves the following scheme: heptachlor → heptachlor epoxide → dehydrogenated derivative of 1-exo-hydroxy-2,3-exo-epoxychlordene → 1,2-dihydrooxydihydrochlordene. The second involves: heptachlor → 1-exo-hydroxychlordene → 1-exo-hydroxy-2,3-exo-epoxychlordene → 1,2-dihydrooxydihydrochlordene. Environmental impact Heptachlor is a persistent organic pollutant (POP). It has a half-life of ~1.3-4.2 days (air), ~0.03-0.11 years (water), and ~0.11-0.34 years (soil). One study described its half-life in soil as 2 years and claimed that its residues could be found in soil 14 years after its initial application. Like other POPs, heptachlor is lipophilic and poorly soluble in water (0.056 mg/L at 25 °C), so it tends to accumulate in the body fat of humans and animals. Heptachlor epoxide is more likely to be found in the environment than its parent compound. The epoxide also dissolves more easily in water than its parent compound and is more persistent. Heptachlor and its epoxide adsorb to soil particles and evaporate. Toxicity of heptachlor and related derivatives Oral rat LD50 values range from 40 mg/kg to 162 mg/kg. Daily oral doses of heptachlor at 50 and 100 mg/kg were found to be lethal to rats after 10 days. For heptachlor epoxide, oral LD50 values range from 46.5 to 60 mg/kg. With a rat oral LD50 of 47 mg/kg, heptachlor epoxide is the more toxic of the two. A product of hydrogenation of heptachlor, β-dihydroheptachlor, has high insecticidal activity and low mammalian toxicity, with a rat oral LD50 > 5,000 mg/kg. Human impact Humans may be exposed to heptachlor through drinking water and foods, including breast milk.
Heptachlor epoxide is derived from a pesticide that was banned in the U.S. in the 1980s. It is still found in soil and water supplies and can turn up in food. It can be passed along in breast milk. The International Agency for Research on Cancer and the EPA have classified the compound as a possible human carcinogen. Animals exposed to heptachlor epoxide during gestation and infancy are found to have changes in nervous system and immune function. Exposure to higher doses of heptachlor in newborn animals leads to decreased body weight and death. The U.S. EPA MCL for drinking water is 0.0004 mg/L for heptachlor and 0.0002 mg/L for heptachlor epoxide. The U.S. FDA limit on food crops is 0.01 ppm, in milk 0.1 ppm, and on edible seafoods 0.3 ppm. The Occupational Safety and Health Administration has a limit of 0.5 mg/m3 (milligrams per cubic meter of workplace air) for 8-hour shifts and 40-hour work weeks. An ATSDR report in 1993 found no studies with respect to death in humans after oral exposure to heptachlor or heptachlor epoxide. Chemical properties The octanol-water partition coefficient (Kow) of heptachlor is ~10^5.27 (log Kow ≈ 5.27). Henry's Law constant is 2.3 · 10−3 atm·m3/mol, and the vapor pressure is 3 · 10−4 mmHg at 20 °C. References External links ATSDR ToxFAQs for Heptachlor CDC - NIOSH Pocket Guide to Chemical Hazards Obsolete pesticides Organochloride insecticides IARC Group 2B carcinogens Endocrine disruptors Persistent organic pollutants under the Stockholm Convention Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution
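The persistence figures in the Environmental impact section can be tied together with simple first-order decay arithmetic: with the ~2-year soil half-life cited there, the fraction surviving after 14 years is 0.5^(14/2), or about 0.8 percent, small but plausibly detectable. A Python sketch of that calculation, using only the numbers quoted in the text:

import math

# First-order decay: fraction remaining after time t for a substance
# with half-life t_half. Values are the ~2-year soil half-life and the
# 14-year observation window cited in the article.
def fraction_remaining(t_years, t_half_years):
    return 0.5 ** (t_years / t_half_years)

f = fraction_remaining(14.0, 2.0)
print(f"fraction remaining after 14 years: {f:.4f} (~{100 * f:.2f}%)")
# ~0.78% of the applied amount: small, but consistent with residues
# still being detectable 14 years after application.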
Heptachlor
[ "Chemistry" ]
1,355
[ "Endocrine disruptors", "Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution", "Persistent organic pollutants under the Stockholm Convention" ]
2,145,921
https://en.wikipedia.org/wiki/Coherent%20file%20distribution%20protocol
Coherent File Distribution Protocol (CFDP) is an IETF-documented experimental protocol intended for high-speed one-to-many file transfers. Class 1 is assured delivery, class 2 is blind unassured delivery. References Network file systems
Coherent file distribution protocol
[ "Technology" ]
50
[ "Computing stubs", "Computer network stubs" ]
2,146,034
https://en.wikipedia.org/wiki/CRISPR
CRISPR (/ˈkrɪspər/) (an acronym for clustered regularly interspaced short palindromic repeats) is a family of DNA sequences found in the genomes of prokaryotic organisms such as bacteria and archaea. Each sequence within an individual prokaryotic cell is derived from a DNA fragment of a bacteriophage that had previously infected the prokaryote or one of its ancestors. These sequences are used to detect and destroy DNA from similar bacteriophages during subsequent infections. Hence these sequences play a key role in the antiviral (i.e. anti-phage) defense system of prokaryotes and provide a form of heritable, acquired immunity. CRISPR is found in approximately 50% of sequenced bacterial genomes and nearly 90% of sequenced archaea. Cas9 (or "CRISPR-associated protein 9") is an enzyme that uses CRISPR sequences as a guide to recognize and open up specific strands of DNA that are complementary to the CRISPR sequence. Cas9 enzymes together with CRISPR sequences form the basis of a technology known as CRISPR-Cas9 that can be used to edit genes within living organisms. This editing process has a wide variety of applications including basic biological research, development of biotechnological products, and treatment of diseases. The development of the CRISPR-Cas9 genome editing technique was recognized by the Nobel Prize in Chemistry in 2020 awarded to Emmanuelle Charpentier and Jennifer Doudna. History Repeated sequences The discovery of clustered DNA repeats took place independently in three parts of the world. The first description of what would later be called CRISPR is from Osaka University researcher Yoshizumi Ishino and his colleagues in 1987. They accidentally cloned part of a CRISPR sequence together with the "iap" gene (isozyme conversion of alkaline phosphatase) from their target genome, that of Escherichia coli. The organization of the repeats was unusual. Repeated sequences are typically arranged consecutively, without interspersing different sequences. They did not know the function of the interrupted clustered repeats. In 1993, researchers of Mycobacterium tuberculosis in the Netherlands published two articles about a cluster of interrupted direct repeats (DR) in that bacterium. They recognized the diversity of the sequences that intervened in the direct repeats among different strains of M. tuberculosis and used this property to design a typing method called spoligotyping, still in use today. Francisco Mojica at the University of Alicante in Spain studied the function of repeats in the archaeal species Haloferax and Haloarcula. Mojica's supervisor surmised that the clustered repeats had a role in correctly segregating replicated DNA into daughter cells during cell division, because plasmids and chromosomes with identical repeat arrays could not coexist in Haloferax volcanii. Transcription of the interrupted repeats was also noted for the first time; this was the first full characterization of CRISPR. By 2000, Mojica and his students, after an automated search of published genomes, identified interrupted repeats in 20 species of microbes as belonging to the same family. Because those sequences were interspaced, Mojica initially called these sequences "short regularly spaced repeats" (SRSR). In 2001, Mojica and Ruud Jansen, who were searching for additional interrupted repeats, proposed the acronym CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) to unify the numerous acronyms used to describe these sequences. In 2002, Tang et al.
showed evidence that CRISPR repeat regions from the genome of Archaeoglobus fulgidus were transcribed into long RNA molecules that were subsequently processed into unit-length small RNAs, plus some longer forms of 2, 3, or more spacer-repeat units. In 2005, yogurt researcher Rodolphe Barrangou discovered that Streptococcus thermophilus, after iterative phage infection challenges, develops increased phage resistance due to the incorporation of additional CRISPR spacer sequences. Barrangou's employer, the Danish food company Danisco, then developed phage-resistant S. thermophilus strains for yogurt production. Danisco was later bought by DuPont, which owns about 50 percent of the global dairy culture market, and the technology spread widely. CRISPR-associated systems A major advance in understanding CRISPR came with Jansen's observation that the prokaryote repeat cluster was accompanied by four homologous genes that make up CRISPR-associated systems, cas 1–4. The Cas proteins showed helicase and nuclease motifs, suggesting a role in the dynamic structure of the CRISPR loci. In this publication, the acronym CRISPR was used as the universal name of this pattern, but its function remained enigmatic. In 2005, three independent research groups showed that some CRISPR spacers are derived from phage DNA and extrachromosomal DNA such as plasmids. In effect, the spacers are fragments of DNA gathered from viruses that previously attacked the cell. The source of the spacers was a sign that the CRISPR-cas system could have a role in adaptive immunity in bacteria. All three studies proposing this idea were initially rejected by high-profile journals, but eventually appeared in other journals. The first publication proposing a role of CRISPR-Cas in microbial immunity, by Mojica and collaborators at the University of Alicante, predicted a role for the RNA transcript of spacers on target recognition in a mechanism that could be analogous to the RNA interference system used by eukaryotic cells. Koonin and colleagues extended this RNA interference hypothesis by proposing mechanisms of action for the different CRISPR-Cas subtypes according to the predicted function of their proteins. Experimental work by several groups revealed the basic mechanisms of CRISPR-Cas immunity. In 2007, the first experimental evidence that CRISPR was an adaptive immune system was published. A CRISPR region in Streptococcus thermophilus acquired spacers from the DNA of an infecting bacteriophage. The researchers manipulated the resistance of S. thermophilus to different types of phages by adding and deleting spacers whose sequence matched those found in the tested phages. In 2008, Brouns and Van der Oost identified a complex of Cas proteins called Cascade that, in E. coli, cuts the CRISPR RNA precursor within the repeats into mature spacer-containing RNA molecules called CRISPR RNA (crRNA), which remain bound to the protein complex. Moreover, it was found that Cascade, crRNA and a helicase/nuclease (Cas3) were required to provide a bacterial host with immunity against infection by a DNA virus. By designing an anti-virus CRISPR, they demonstrated that two orientations of the crRNA (sense/antisense) provided immunity, indicating that the crRNA guides were targeting dsDNA. That year Marraffini and Sontheimer confirmed that a CRISPR sequence of S. epidermidis targeted DNA and not RNA to prevent conjugation.
This finding was at odds with the proposed RNA-interference-like mechanism of CRISPR-Cas immunity, although a CRISPR-Cas system that targets foreign RNA was later found in Pyrococcus furiosus. A 2010 study showed that CRISPR-Cas cuts strands of both phage and plasmid DNA in S. thermophilus. Cas9 A simpler CRISPR system from Streptococcus pyogenes relies on the protein Cas9. The Cas9 endonuclease is a four-component system that includes two small RNA molecules: crRNA and trans-activating CRISPR RNA (tracrRNA). In 2012, Jennifer Doudna and Emmanuelle Charpentier re-engineered the Cas9 endonuclease into a more manageable two-component system by fusing the two RNA molecules into a "single-guide RNA" that, when combined with Cas9, could find and cut the DNA target specified by the guide RNA. This contribution was so significant that it was recognized by the Nobel Prize in Chemistry in 2020. By manipulating the nucleotide sequence of the guide RNA, the artificial Cas9 system could be programmed to target any DNA sequence for cleavage. Another collaboration comprising Virginijus Šikšnys, Gasiūnas, Barrangou, and Horvath showed that Cas9 from the S. thermophilus CRISPR system can also be reprogrammed to target a site of their choosing by changing the sequence of its crRNA. These advances fueled efforts to edit genomes with the modified CRISPR-Cas9 system. Groups led by Feng Zhang and George Church simultaneously published descriptions of genome editing in human cell cultures using CRISPR-Cas9 for the first time. It has since been used in a wide range of organisms, including baker's yeast (Saccharomyces cerevisiae), the opportunistic pathogen Candida albicans, zebrafish (Danio rerio), fruit flies (Drosophila melanogaster), ants (Harpegnathos saltator and Ooceraea biroi), mosquitoes (Aedes aegypti), nematodes (Caenorhabditis elegans), plants, mice (Mus musculus domesticus), monkeys and human embryos. CRISPR has been modified to make programmable transcription factors that allow activation or silencing of targeted genes. The CRISPR-Cas9 system has been shown to make effective gene edits in human tripronuclear zygotes, as first described in a 2015 paper by Chinese scientists P. Liang and Y. Xu. The system made a successful cleavage of mutant beta-hemoglobin (HBB) in 28 out of 54 embryos. Four out of the 28 embryos were successfully recombined using a donor template. The scientists showed that during DNA recombination of the cleaved strand, the homologous endogenous sequence HBD competes with the exogenous donor template. DNA repair in human embryos is much more complicated and particular than in derived stem cells. Cas12a In 2015, the nuclease Cas12a (formerly called Cpf1) was characterized in the CRISPR-Cpf1 system of the bacterium Francisella novicida. Its original name, from a TIGRFAMs protein family definition built in 2012, reflects the prevalence of its CRISPR-Cas subtype in the Prevotella and Francisella lineages. Cas12a showed several key differences from Cas9 including: causing a 'staggered' cut in double stranded DNA as opposed to the 'blunt' cut produced by Cas9, relying on a 'T rich' PAM (providing alternative targeting sites to Cas9), and requiring only a CRISPR RNA (crRNA) for successful targeting. By contrast, Cas9 requires both crRNA and a trans-activating crRNA (tracrRNA). These differences may give Cas12a some advantages over Cas9. For example, Cas12a's small crRNAs are ideal for multiplexed genome editing, as more of them can be packaged in one vector than can Cas9's sgRNAs.
The sticky 5′ overhangs left by Cas12a can also be used for DNA assembly that is much more target-specific than traditional restriction enzyme cloning. Finally, Cas12a cleaves DNA 18–23 base pairs downstream from the PAM site. This means there is no disruption to the recognition sequence after repair, and so Cas12a enables multiple rounds of DNA cleavage. By contrast, since Cas9 cuts only 3 base pairs upstream of the PAM site, the NHEJ pathway results in indel mutations that destroy the recognition sequence, thereby preventing further rounds of cutting. In theory, repeated rounds of DNA cleavage should cause an increased opportunity for the desired genomic editing to occur. A distinctive feature of Cas12a, as compared to Cas9, is that after cutting its target, Cas12a remains bound to the target and then cleaves other ssDNA molecules non-discriminately. This property is called "collateral cleavage" or "trans-cleavage" activity and has been exploited for the development of various diagnostic technologies. Cas13 In 2016, the nuclease Cas13 (formerly known as C2c2) from the bacterium Leptotrichia shahii was characterized. Cas13 is an RNA-guided RNA endonuclease, which means that it does not cleave DNA, but only single-stranded RNA. Cas13 is guided by its crRNA to a ssRNA target and binds and cleaves the target. Similar to Cas12a, the Cas13 remains bound to the target and then cleaves other ssRNA molecules non-discriminately. This collateral cleavage property has been exploited for the development of various diagnostic technologies. In 2021, Hui Yang characterized novel miniature Cas13 protein (mCas13) variants, Cas13X and Cas13Y. Characterization of mCas13 using a small portion of the N gene sequence from SARS-CoV-2 as a target revealed the sensitivity and specificity of mCas13 coupled with RT-LAMP for detection of SARS-CoV-2 in both synthetic and clinical samples, comparable to other available standard tests like RT-qPCR (1 copy/μL). Locus structure Repeats and spacers The CRISPR array is made up of an AT-rich leader sequence followed by short repeats that are separated by unique spacers. CRISPR repeats typically range in size from 28 to 37 base pairs (bps), though there can be as few as 23 bp and as many as 55 bp. Some show dyad symmetry, implying the formation of a secondary structure such as a stem-loop ('hairpin') in the RNA, while others appear to be unstructured. The size of spacers in different CRISPR arrays is typically 32 to 38 bp (range 21 to 72 bp). New spacers can appear rapidly as part of the immune response to phage infection. There are usually fewer than 50 units of the repeat-spacer sequence in a CRISPR array. CRISPR RNA structures Cas genes and CRISPR subtypes Small clusters of cas genes are often located next to CRISPR repeat-spacer arrays. Collectively the 93 cas genes are grouped into 35 families based on sequence similarity of the encoded proteins. 11 of the 35 families form the cas core, which includes the protein families Cas1 through Cas9. A complete CRISPR-Cas locus has at least one gene belonging to the cas core. CRISPR-Cas systems fall into two classes. Class 1 systems use a complex of multiple Cas proteins to degrade foreign nucleic acids. Class 2 systems use a single large Cas protein for the same purpose. Class 1 is divided into types I, III, and IV; class 2 is divided into types II, V, and VI. The 6 system types are divided into 33 subtypes. Each type and most subtypes are characterized by a "signature gene" found almost exclusively in the category.
Classification is also based on the complement of cas genes that are present. Most CRISPR-Cas systems have a Cas1 protein. The phylogeny of Cas1 proteins generally agrees with the classification system, but exceptions exist due to module shuffling. Many organisms contain multiple CRISPR-Cas systems suggesting that they are compatible and may share components. The sporadic distribution of the CRISPR-Cas subtypes suggests that the CRISPR-Cas system is subject to horizontal gene transfer during microbial evolution. Mechanism CRISPR-Cas immunity is a natural process of bacteria and archaea. CRISPR-Cas prevents bacteriophage infection, conjugation and natural transformation by degrading foreign nucleic acids that enter the cell. Spacer acquisition When a microbe is invaded by a bacteriophage, the first stage of the immune response is to capture phage DNA and insert it into a CRISPR locus in the form of a spacer. Cas1 and Cas2 are found in both types of CRISPR-Cas immune systems, which indicates that they are involved in spacer acquisition. Mutation studies confirmed this hypothesis, showing that removal of Cas1 or Cas2 stopped spacer acquisition, without affecting CRISPR immune response. Multiple Cas1 proteins have been characterised and their structures resolved. Cas1 proteins have diverse amino acid sequences. However, their crystal structures are similar and all purified Cas1 proteins are metal-dependent nucleases/integrases that bind to DNA in a sequence-independent manner. Representative Cas2 proteins have been characterised and possess either (single strand) ssRNA- or (double strand) dsDNA-specific endoribonuclease activity. In the I-E system of E. coli, Cas1 and Cas2 form a complex where a Cas2 dimer bridges two Cas1 dimers. In this complex Cas2 performs a non-enzymatic scaffolding role, binding double-stranded fragments of invading DNA, while Cas1 binds the single-stranded flanks of the DNA and catalyses their integration into CRISPR arrays. New spacers are usually added at the beginning of the CRISPR next to the leader sequence, creating a chronological record of viral infections. In E. coli, a histone-like protein called integration host factor (IHF), which binds to the leader sequence, is responsible for the accuracy of this integration. IHF also enhances integration efficiency in the type I-F system of Pectobacterium atrosepticum, but in other systems different host factors may be required. Protospacer adjacent motifs (PAM) Bioinformatic analysis of regions of phage genomes that were excised as spacers (termed protospacers) revealed that they were not randomly selected but instead were found adjacent to short (3–5 bp) DNA sequences termed protospacer adjacent motifs (PAM). Analysis of CRISPR-Cas systems showed PAMs to be important for type I and type II, but not type III systems during acquisition. In type I and type II systems, protospacers are excised at positions adjacent to a PAM sequence, with the other end of the spacer cut using a ruler mechanism, thus maintaining the regularity of the spacer size in the CRISPR array. The conservation of the PAM sequence differs between CRISPR-Cas systems and appears to be evolutionarily linked to Cas1 and the leader sequence. New spacers are added to a CRISPR array in a directional manner, occurring preferentially, but not exclusively, adjacent to the leader sequence. Analysis of the type I-E system from E.
coli demonstrated that the first direct repeat adjacent to the leader sequence is copied, with the newly acquired spacer inserted between the first and second direct repeats. The PAM sequence appears to be important during spacer insertion in type I-E systems. That sequence contains a strongly conserved final nucleotide (nt) adjacent to the first nt of the protospacer. This nt becomes the final base in the first direct repeat. This suggests that the spacer acquisition machinery generates single-stranded overhangs in the second-to-last position of the direct repeat and in the PAM during spacer insertion. However, not all CRISPR-Cas systems appear to share this mechanism as PAMs in other organisms do not show the same level of conservation in the final position. It is likely that in those systems, a blunt end is generated at the very end of the direct repeat and the protospacer during acquisition. Insertion variants Analysis of Sulfolobus solfataricus CRISPRs revealed further complexities to the canonical model of spacer insertion, as one of its six CRISPR loci inserted new spacers randomly throughout its CRISPR array, as opposed to inserting closest to the leader sequence. Multiple CRISPRs contain many spacers to the same phage. The mechanism that causes this phenomenon was discovered in the type I-E system of E. coli. A significant enhancement in spacer acquisition was detected where spacers already target the phage, even with mismatches to the protospacer. This 'priming' requires the Cas proteins involved in both acquisition and interference to interact with each other. Newly acquired spacers that result from the priming mechanism are always found on the same strand as the priming spacer. This observation led to the hypothesis that the acquisition machinery slides along the foreign DNA after priming to find a new protospacer. Biogenesis CRISPR-RNA (crRNA), which later guides the Cas nuclease to the target during the interference step, must be generated from the CRISPR sequence. The crRNA is initially transcribed as part of a single long transcript encompassing much of the CRISPR array. This transcript is then cleaved by Cas proteins to form crRNAs. The mechanism to produce crRNAs differs among CRISPR-Cas systems. In type I-E and type I-F systems, the proteins Cas6e and Cas6f, respectively, recognise stem-loops created by the pairing of identical repeats that flank the crRNA. These Cas proteins cleave the longer transcript at the edge of the paired region, leaving a single crRNA along with a small remnant of the paired repeat region. Type III systems also use Cas6; however, their repeats do not produce stem-loops. Cleavage instead occurs by the longer transcript wrapping around the Cas6 to allow cleavage just upstream of the repeat sequence. Type II systems lack the Cas6 gene and instead utilize RNaseIII for cleavage. Functional type II systems encode an extra small RNA that is complementary to the repeat sequence, known as a trans-activating crRNA (tracrRNA). Transcription of the tracrRNA and the primary CRISPR transcript results in base pairing and the formation of dsRNA at the repeat sequence, which is subsequently targeted by RNaseIII to produce crRNAs. Unlike the other two systems, the crRNA does not contain the full spacer, which is instead truncated at one end. CrRNAs associate with Cas proteins to form ribonucleotide complexes that recognize foreign nucleic acids.
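Before continuing to interference, the PAM concept above can be made concrete with a minimal, illustrative Python sketch. The function and parameter names are my own, and the NGG motif is the well-known PAM of S. pyogenes Cas9, used here purely as an example; the real acquisition and interference machinery is protein-based, not string matching.

```python
import re

def find_protospacers(dna, spacer_len=20, pam_pattern=r"[ACGT]GG"):
    """Return (position, protospacer, PAM) for each site where a
    spacer_len stretch is immediately followed by a PAM match."""
    hits = []
    for m in re.finditer(pam_pattern, dna):
        start = m.start() - spacer_len
        if start >= 0:
            hits.append((start, dna[start:m.start()], m.group()))
    return hits

phage = "ATGCGTACGTTAGCCGATTACGGATCGATTACAGGCTAGCTAAGG"
for pos, proto, pam in find_protospacers(phage, spacer_len=10):
    print(pos, proto, pam)
```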
CrRNAs show no preference between the coding and non-coding strands, which is indicative of an RNA-guided DNA-targeting system. The type I-E complex (commonly referred to as Cascade) requires five Cas proteins bound to a single crRNA. Interference During the interference stage in type I systems, the PAM sequence is recognized on the crRNA-complementary strand and is required along with crRNA annealing. In type I systems, correct base pairing between the crRNA and the protospacer signals a conformational change in Cascade that recruits Cas3 for DNA degradation. Type II systems rely on a single multifunctional protein, Cas9, for the interference step. Cas9 requires both the crRNA and the tracrRNA to function and cleave DNA using its dual HNH and RuvC/RNaseH-like endonuclease domains. Basepairing between the PAM and the phage genome is required in type II systems. However, the PAM is recognized on the same strand as the crRNA (the opposite strand to type I systems). Type III systems, like type I, require six or seven Cas proteins binding to crRNAs. The type III systems analysed from S. solfataricus and P. furiosus both target the mRNA of phages rather than the phage DNA genome, which may make these systems uniquely capable of targeting RNA-based phage genomes. Type III systems were also found to target DNA in addition to RNA using a different Cas protein in the complex, Cas10. The DNA cleavage was shown to be transcription dependent. The mechanism for distinguishing self from foreign DNA during interference is built into the crRNAs and is therefore likely common to all three systems. Throughout the distinctive maturation process of each major type, all crRNAs contain a spacer sequence and some portion of the repeat at one or both ends. It is the partial repeat sequence that prevents the CRISPR-Cas system from targeting the chromosome, as base pairing beyond the spacer sequence signals self and prevents DNA cleavage. RNA-guided CRISPR enzymes are classified as type V restriction enzymes. Evolution The cas genes in the adaptor and effector modules of the CRISPR-Cas system are believed to have evolved from two different ancestral modules. A transposon-like element called casposon encoding the Cas1-like integrase and potentially other components of the adaptation module was inserted next to the ancestral effector module, which likely functioned as an independent innate immune system. The highly conserved cas1 and cas2 genes of the adaptor module evolved from the ancestral module while a variety of class 1 effector cas genes evolved from the ancestral effector module. The evolution of these various class 1 effector module cas genes was guided by various mechanisms, such as duplication events. On the other hand, each type of class 2 effector module arose from subsequent independent insertions of mobile genetic elements. These mobile genetic elements took the place of the multiple gene effector modules to create single gene effector modules that produce large proteins which perform all the necessary tasks of the effector module. The spacer regions of CRISPR-Cas systems are taken directly from foreign mobile genetic elements and thus their long-term evolution is hard to trace. The non-random evolution of these spacer regions has been found to be highly dependent on the environment and the particular foreign mobile genetic elements it contains. CRISPR-Cas can immunize bacteria against certain phages and thus halt transmission.
For this reason, Koonin described CRISPR-Cas as a Lamarckian inheritance mechanism. However, this was disputed by a critic who noted, "We should remember [Lamarck] for the good he contributed to science, not for things that resemble his theory only superficially. Indeed, thinking of CRISPR and other phenomena as Lamarckian only obscures the simple and elegant way evolution really works". But as more recent studies have been conducted, it has become apparent that the acquired spacer regions of CRISPR-Cas systems are indeed a form of Lamarckian evolution because they are genetic mutations that are acquired and then passed on. On the other hand, the Cas gene machinery that facilitates the system evolves through classic Darwinian evolution. Coevolution Analysis of CRISPR sequences revealed coevolution of host and viral genomes. The basic model of CRISPR evolution is newly incorporated spacers driving phages to mutate their genomes to avoid the bacterial immune response, creating diversity in both the phage and host populations. To resist a phage infection, the sequence of the CRISPR spacer must correspond perfectly to the sequence of the target phage gene. Phages can continue to infect their hosts given point mutations in the spacer. Similar stringency is required in the PAM, or the bacterial strain remains phage-sensitive. Rates A study of 124 S. thermophilus strains showed that 26% of all spacers were unique and that different CRISPR loci showed different rates of spacer acquisition. Some CRISPR loci evolve more rapidly than others, which allowed the strains' phylogenetic relationships to be determined. A comparative genomic analysis showed that E. coli and S. enterica evolve much more slowly than S. thermophilus. The latter's strains that diverged 250,000 years ago still contained the same spacer complement. Metagenomic analysis of two acid-mine-drainage biofilms showed that one of the analyzed CRISPRs contained extensive deletions and spacer additions versus the other biofilm, suggesting a higher phage activity/prevalence in one community than the other. In the oral cavity, a temporal study determined that 7–22% of spacers were shared over 17 months within an individual while less than 2% were shared across individuals. From the same environment, a single strain was tracked using PCR primers specific to its CRISPR system. Broad-level results of spacer presence/absence showed significant diversity. However, this CRISPR added three spacers over 17 months, suggesting that even in an environment with significant CRISPR diversity some loci evolve slowly. CRISPRs were analysed from the metagenomes produced for the Human Microbiome Project. Although most were body-site specific, some within a body site are widely shared among individuals. One of these loci originated from streptococcal species and contained ≈15,000 spacers, 50% of which were unique. Similar to the targeted studies of the oral cavity, some showed little evolution over time. CRISPR evolution was studied in chemostats using S. thermophilus to directly examine spacer acquisition rates. In one week, S. thermophilus strains acquired up to three spacers when challenged with a single phage. During the same interval, the phage developed single-nucleotide polymorphisms that became fixed in the population, suggesting that targeting had prevented phage replication absent these mutations. Another S. thermophilus experiment showed that phages can infect and replicate in hosts that have only one targeting spacer.
Yet another showed that sensitive hosts can exist in environments with high phage titres. The chemostat and observational studies suggest many nuances to CRISPR and phage (co)evolution. Identification CRISPRs are widely distributed among bacteria and archaea and show some sequence similarities. Their most notable characteristic is their repeating spacers and direct repeats. This characteristic makes CRISPRs easily identifiable in long sequences of DNA, since the number of repeats decreases the likelihood of a false positive match. Analysis of CRISPRs in metagenomic data is more challenging, as CRISPR loci do not typically assemble, due to their repetitive nature or through strain variation, which confuses assembly algorithms. Where many reference genomes are available, polymerase chain reaction (PCR) can be used to amplify CRISPR arrays and analyse spacer content. However, this approach yields information only for specifically targeted CRISPRs and for organisms with sufficient representation in public databases to design reliable PCR primers. Degenerate repeat-specific primers can be used to amplify CRISPR spacers directly from environmental samples; amplicons containing two or three spacers can then be computationally assembled to reconstruct long CRISPR arrays. The alternative is to extract and reconstruct CRISPR arrays from shotgun metagenomic data. This is computationally more difficult, particularly with second generation sequencing technologies (e.g. 454, Illumina), as the short read lengths prevent more than two or three repeat units appearing in a single read. CRISPR identification in raw reads has been achieved using purely de novo identification or by using direct repeat sequences in partially assembled CRISPR arrays from contigs (overlapping DNA segments that together represent a consensus region of DNA) and direct repeat sequences from published genomes as a hook for identifying direct repeats in individual reads. Use by phages Another way for bacteria to defend against phage infection is by having chromosomal islands. A subtype of chromosomal islands called phage-inducible chromosomal island (PICI) is excised from a bacterial chromosome upon phage infection and can inhibit phage replication. PICIs are induced, excised, replicated, and finally packaged into small capsids by certain staphylococcal temperate phages. PICIs use several mechanisms to block phage reproduction. In the first mechanism, PICI-encoded Ppi differentially blocks phage maturation by binding or interacting specifically with phage TerS, hence blocking phage TerS/TerL complex formation responsible for phage DNA packaging. In the second mechanism, PICI CpmAB redirects the phage capsid morphogenetic protein to make 95% of capsids SaPI-sized; phages can package only a third of their genome in these small capsids and hence become nonviable. The third mechanism involves two proteins, PtiA and PtiB, that target the LtrC, which is responsible for the production of virion and lysis proteins. This interference mechanism is modulated by a modulatory protein, PtiM, which binds to one of the interference-mediating proteins, PtiA, and hence achieves the required level of interference. One study showed that lytic ICP1 phage, which specifically targets Vibrio cholerae serogroup O1, has acquired a CRISPR-Cas system that targets a V. cholerae PICI-like element. The system has 2 CRISPR loci and 9 Cas genes. It seems to be homologous to the I-F system found in Yersinia pestis.
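Stepping back to the identification approach described earlier in this section, here is a toy Python sketch of the idea that a fixed-size repeat recurring at regular, spacer-sized intervals is easy to spot. The names, thresholds, and exact-match assumption are all my own illustrative simplifications; real tools tolerate mismatches and partial repeats. The default sizes echo the repeat (28–37 bp) and spacer (21–72 bp) ranges given in the locus-structure section above.

```python
def find_repeat_arrays(dna, k=28, min_gap=21, max_gap=72, min_copies=3):
    """Look for a k-mer recurring with spacer-sized gaps: the
    repeat-spacer-repeat signature of a candidate CRISPR array."""
    positions = {}
    for i in range(len(dna) - k + 1):
        positions.setdefault(dna[i:i + k], []).append(i)
    arrays = []
    for kmer, pos in positions.items():
        if len(pos) < min_copies:
            continue
        gaps = [b - a - k for a, b in zip(pos, pos[1:])]
        if all(min_gap <= g <= max_gap for g in gaps):
            arrays.append((kmer, pos))
    return arrays
```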
Moreover, like the bacterial CRISPR-Cas system, the ICP1 CRISPR-Cas system can acquire new sequences, which allows phage and host to co-evolve. Certain archaeal viruses were shown to carry mini-CRISPR arrays containing one or two spacers. It has been shown that spacers within the virus-borne CRISPR arrays target other viruses and plasmids, suggesting that mini-CRISPR arrays represent a mechanism of heterotypic superinfection exclusion and participate in interviral conflicts. Applications CRISPR gene editing is a revolutionary technology that allows for precise, targeted modifications to the DNA of living organisms. Developed from a natural defense mechanism found in bacteria, CRISPR-Cas9 is the most commonly used system; it allows DNA to be "cut" at specific locations so that genetic material can be deleted, modified, or inserted. This technology has transformed fields such as genetics, medicine, and agriculture, offering potential treatments for genetic disorders, advancements in crop engineering, and research into the fundamental workings of life. However, its ethical implications and potential unintended consequences have sparked significant debate. See also CRISPR activation Anti-CRISPR CRISPR/Cas Tools CRISPR gene editing The CRISPR Journal "Designer baby" DRACO Gene knockout Genome-wide CRISPR-Cas9 knockout screens Glossary of genetics Human germline engineering Human Nature (2019 documentary film) MAGESTIC New eugenics Prime editing RNAi SiRNA Surveyor nuclease assay Synthetic biology Zinc finger Notes References Further reading External links Protein Data Bank 1987 in biotechnology 2015 in biotechnology Biological engineering Biotechnology Genetic engineering Genome editing Jennifer Doudna Molecular biology Non-coding RNA Repetitive DNA sequences Immune system Prokaryote genes
CRISPR
[ "Chemistry", "Engineering", "Biology" ]
7,130
[ "Genetics techniques", "Biological engineering", "Prokaryote genes", "Genome editing", "Immune system", "Prokaryotes", "Genetic engineering", "Biotechnology", "Organ systems", "Molecular genetics", "Repetitive DNA sequences", "nan", "Molecular biology", "Biochemistry" ]
2,146,043
https://en.wikipedia.org/wiki/Thermochromic%20ink
Thermochromic ink (also called thermochromatic ink) is a type of dye that changes color in response to a change in temperature. It was first used in the 1970s in novelty toys like mood rings, but has found some practical uses in things such as thermometers, product packaging, and pens. The ink has also found applications within the medical field for specific medical simulations in medical training. Thermochromic ink can also turn transparent when heat is applied; an example of this type of ink can be found on the corners of an examination mark sheet to prove that the sheet has not been edited or photocopied. Composition There are two main variants of thermochromic ink, one composed of leuco dyes and one composed of liquid crystals. For both types of ink, the chemicals need to be contained within capsules around 3 to 5 microns across. This protects the dyes and crystals from mixing with other chemicals that might affect the functionality of the ink. Leuco dyes The leuco dye variant is typically composed of leuco dyes with additional chemicals to add different desired effects. It is the most commonly used type because it is easier to manufacture. They can be designed to react to changes in temperature that range from -15 °C to 60 °C. Most common applications of the ink have activation temperatures at -10 °C (cold), 31 °C (body temperature), or 43 °C (warm). At lower temperatures, the ink appears to be a certain color, and once the temperature increases, the ink becomes either translucent or lightly colored, allowing hidden patterns to be seen. This gives the effect of a change in color, and the process can also be reversed by lowering the temperature again. Liquid crystals Liquid crystals can change from liquid to solid in response to a change in temperature. At lower temperatures, the crystals are mostly solid and hardly reflect any light, causing them to appear black. As the temperature gradually increases, the crystals become more spaced out, causing light to reflect differently and changing the color of the crystals. The temperatures at which these crystals change their properties can range from -30 °C to 90 °C. Applications On June 20, 2017, the United States Postal Service released the first application of thermochromic ink to postage stamps in its Total Eclipse of the Sun Forever stamp to commemorate the solar eclipse of August 21, 2017. When pressed with a finger, body heat turns the black circle in the center of the stamp into an image of the full moon. The stamp image is a photo of a total solar eclipse seen in Jalu, Libya, on March 29, 2006. The photo was taken by retired NASA astrophysicist Fred Espenak, aka "Mr. Eclipse". Medical uses In medical training, thermochromic ink can be used to imitate human blood because it shares blood's color-changing property. It is currently being tested in medical simulations involving extracorporeal membrane oxygenation (ECMO). In these procedures, a change in the color of blood between dark and light red indicates blood oxygenation or deoxygenation, i.e., the oxygen concentration within a person's blood. It is important to accurately identify this change in order to safely and correctly operate the ECMO machines. This has led to simulation-based training (SBT), which allows medical students to run simulations that mimic real ECMO machines before using them in serious situations.
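Before continuing, a minimal sketch of the two-state behaviour described above, using the 31 °C body-temperature activation point as the threshold. The function and its sharp cutoff are my simplifications; real leuco-dye inks change over a temperature range and show hysteresis.

```python
def leuco_ink_state(temp_c, activation_c=31.0):
    """Idealized leuco-dye ink: colored below the activation
    temperature, translucent/lightly colored above it."""
    return "colored" if temp_c < activation_c else "translucent"

for t in (20.0, 31.0, 37.0):
    print(f"{t:.0f} °C -> {leuco_ink_state(t)}")
```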
By using thermochromic ink in these simulations, the color-changing effect can be realistically copied and observed without using real human blood or other costly methods. Artificial blood or animal blood is typically used in these simulations; however, there are some advantages in using thermochromic ink as an alternative. It can be reused for multiple simulations with minimal variance in the outcomes, and it is more cost-effective. There are limitations to this approach, as the ink does not share any other properties with blood, so its only practical use is to observe the change in color of blood. Product packaging Product packaging is an important aspect of maintaining the quality of consumer goods. Modern-day packaging is split into two categories: active packaging and smart packaging. Thermochromic ink has found use in smart packaging, which is the aspect of packaging that deals with monitoring the condition of the products. Since most consumer goods are affected by changes in temperature, using thermochromic ink as an indicator of those temperature changes allows consumers to recognize when the quality of a product has changed. It can also be used to tell consumers the right temperature at which to consume the product. Erasable ink pens In 2006, Japan's Pilot Corporation developed a pen with erasable ink that utilized thermochromic ink. It was composed of a solvent, a colorant, and a resin film-forming agent. At temperatures below 65 °C, the ink stayed in a colored state. Once temperatures went above 65 °C, the ink began to melt and became colorless, creating the effect of erasable ink. The ink was able to return to its colored state by cooling it to below -10 °C. See also Thermochromism Security printing Active packaging References Thermochromism Dyes Spectroscopy Materials science
Thermochromic ink
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,066
[ "Molecular physics", "Spectrum (physical sciences)", "Applied and interdisciplinary physics", "Instrumental analysis", "Chromism", "Materials science", "nan", "Smart materials", "Spectroscopy", "Thermochromism" ]
2,146,106
https://en.wikipedia.org/wiki/Edip%20Y%C3%BCksel
Edip Yüksel (born December 20, 1957) is an American-Kurdish activist and prominent figure in the Quranism movement. Born in Güroymak, Yüksel is the author of more than twenty books on religion, politics, philosophy and law in Turkish. After settling in the United States, where he began his career as a lawyer, he became a colleague and friend of Rashad Khalifa. However, his interpretation of the Qur'an has differed with Khalifa on a number of issues, and his work has represented a new trend within the Quranist movement. Biography Yüksel comes from a Kurdish family who lived in Turkey, and is the brother of Metin Yüksel. He is the author of more than twenty books on religion, politics, philosophy and law in Turkish. He has also written various articles and essays in English. He was a Turkish Islamist and a popular Islamic commentator until the mid-1980s, when he rejected his previous religious beliefs and began using only the Quran as the source of divine law. He became a Quran-only Muslim, also known as a Quranist. However, this movement is very controversial in mainstream Muslim circles, and thus Yüksel met with the rejection and hostility of many religious Islamic authorities in his home country. In 1989 Yüksel was forced to emigrate. He then settled in the United States of America, where he began his career as a lawyer. In the US, he worked with Rashad Khalifa, who claimed to have discovered a mathematical code, also known as Code 19, in the Quran and called on Muslims to return to the Quran alone and to abandon all hadiths. Yüksel is critical of Islamic creationists, such as Harun Yahya. Professor Aisha Musa of Florida International University discusses Yüksel in her book Hadith as Scripture. References External links 1957 births Living people People from Güroymak American people of Kurdish descent Turkish emigrants to the United States Turkish Kurdish people Turkish Quranist Muslims American Quranist Muslims Kurdish scholars Theistic evolutionists Muslim evolutionists
Edip Yüksel
[ "Biology" ]
416
[ "Non-Darwinian evolution", "Theistic evolutionists", "Biology theories" ]
2,146,163
https://en.wikipedia.org/wiki/SN%201572
SN 1572 (Tycho's Star, Tycho's Nova, Tycho's Supernova), or B Cassiopeiae (B Cas), was a supernova of Type Ia in the constellation Cassiopeia, one of eight supernovae visible to the naked eye in historical records. It appeared in early November 1572 and was independently discovered by many individuals. Its supernova remnant has been observed optically but was first detected at radio wavelengths. It is often known as 3C 10, a radio-source designation, although increasingly as Tycho's supernova remnant. Historic description The appearance of the Milky Way supernova of 1572 belongs among the most important observational events in the history of astronomy. The appearance of the "new star" helped to revise ancient models of the heavens and to speed on a revolution in astronomy that began with the realisation of the need to produce better astrometric star catalogues, and thus the need for more precise astronomical observing instruments. It also challenged the Aristotelian dogma of the unchangeability of the realm of stars. The supernova of 1572 is often called "Tycho's supernova", because of Tycho Brahe's extensive work De nova et nullius aevi memoria prius visa stella ("Concerning the Star, new and never before seen in the life or memory of anyone", published in 1573 with reprints overseen by Johannes Kepler in 1602 and 1610), a work containing both Brahe's own observations and the analysis of sightings from many other observers. Comparisons between Brahe's observations and those of Spanish scientist Jerónimo Muñoz revealed that the object was more distant than the Moon. This led Brahe to approach the Great Comet of 1577 as an astronomical body as well. Other Europeans to sight the supernova included Wolfgang Schuler, Christopher Clavius, Thomas Digges, John Dee, Francesco Maurolico and Tadeáš Hájek. In England, Queen Elizabeth had the mathematician and astrologer Thomas Allen come and visit "to have his advice about the new star that appeared in the Swan or Cassiopeia ... to which he gave his judgement very learnedly", as the antiquary John Aubrey recorded in his memoranda a century later. In Ming dynasty China, the star became an issue between Zhang Juzheng and the young Wanli Emperor: in accordance with the cosmological tradition, the emperor was warned to consider his misbehavior, since the new star was interpreted as an evil omen. The more reliable contemporary reports state that the new star itself burst forth soon after November 2, 1572 and by November 11 it was already brighter than Jupiter. Around November 16, 1572, it reached its peak brightness at about magnitude −4.0, with some descriptions giving it as equal to Venus when that planet was at its brightest. In contrast, Brahe described the supernova as "brighter than Venus". The supernova remained visible to the naked eye into early 1574, gradually fading until it disappeared from view. Supernova The supernova was classified as type I on the basis of its historical light curve soon after type I and type II supernovae were first defined on the basis of their spectra. The X-ray spectrum of the remnant showed that it was almost certainly of type Ia, but its detailed classification within the type Ia class continued to be debated until the spectrum of its light at peak luminosity was measured in a light echo in 2008. This gave final confirmation that it was a normal type Ia. The classification as a type Ia supernova of normal luminosity allows an accurate measure of the distance to SN 1572.
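As a hedged illustration of how such a distance estimate works, here is the standard distance-modulus relation in a small Python sketch. Every number below is an assumption made for the example's sake: a typical type Ia peak absolute magnitude near −19.3, the ~−4.0 peak apparent magnitude quoted above, and a guessed 2.0 magnitudes of extinction; the actual published values differ.

```python
def distance_kpc(m_app, m_abs, extinction):
    """Distance modulus m - M - A = 5*log10(d_pc) - 5, solved for d."""
    mu = m_app - m_abs - extinction
    return 10 ** (mu / 5 + 1) / 1000.0  # parsecs -> kiloparsecs

print(round(distance_kpc(-4.0, -19.3, 2.0), 1))  # ~4.6 kpc, within 2-5 kpc
```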
The peak absolute magnitude can be calculated from the B-band decline rate. Given estimates of the peak apparent magnitude and the known extinction, the distance can then be derived. Supernova remnant The distance to the supernova remnant has been estimated at between 2 and 5 kpc (approx. 6,500 and 16,300 light-years), with recent studies suggesting a narrower range of 2.5 to 3 kpc (approximately 8,000 and 9,800 light-years). Tycho's SNR has a roughly spherical morphology and spreads over an angular diameter of about 8 arcminutes. Its physical size corresponds to a radius of the order of a few parsecs. Its measured expansion rate is about 0.11–0.12% per year in radio and X-ray. The average forward shock speed is between 4,000 and 5,000 km/s, dropping to lower speeds when encountering local interstellar clouds. An older source says that the gas shell has reached an apparent diameter of 3.7 arcminutes. Initial radio detection The search for a supernova remnant was futile until 1952, when Robert Hanbury Brown and Cyril Hazard reported a radio detection at 158.5 MHz, obtained at the Jodrell Bank Observatory. This was confirmed, and its position more accurately measured, in 1957 by Baldwin and Edge using the Cambridge Radio Telescope working at metre wavelengths. The remnant was also identified tentatively in the second Cambridge Catalogue of Radio Sources as object "2C 34", and more firmly as "3C 10" in the third Cambridge list. There is no dispute that 3C 10 is the remnant of the supernova observed in 1572–1573. Following a 1964 review article by Minkowski, the designation 3C 10 appears to be that most commonly used in the literature when referring to the radio remnant of B Cas, although some authors use the tabulated galactic designation G120.7+2.1 and many authors commonly refer to it as Tycho's supernova remnant. Because the radio remnant was reported before the optical supernova-remnant wisps were discovered, the designation 3C 10 is used by some to signify the remnant at all wavelengths. X-ray observation An X-ray source designated Cepheus X-1 (or Cep X-1) was detected by the Uhuru X-ray observatory at 4U 0022+63. Earlier catalog designations are X120+2 and XRS 00224+638. Cepheus X-1 is actually in the constellation Cassiopeia, and it is SN 1572, the Tycho SNR. Optical detection The supernova remnant of B Cas was discovered in the 1960s by scientists with a Palomar Mountain telescope as a very faint nebula. It was later photographed by a telescope on the international ROSAT spacecraft. The supernova has been confirmed as Type Ia, in which a white dwarf star has accreted matter from a companion until it approaches the Chandrasekhar limit and explodes. This type of supernova does not typically create the spectacular nebula more typical of Type II supernovae, such as SN 1054, which created the Crab Nebula. A shell of gas is still expanding from its center at about 9,000 km/s. A recent study indicates a rate of expansion below 5,000 km/s. Companion star In October 2004, a letter in Nature reported the discovery of a G2 star, similar in type to our own Sun and named Tycho G. It is thought to be the companion star that contributed mass to the white dwarf that ultimately resulted in the supernova. A subsequent study, published in March 2005, revealed further details about this star: Tycho G was probably a main-sequence star or subgiant before the explosion, but some of its mass was stripped away and its outer layers were shock-heated by the supernova.
Tycho G's current velocity is perhaps the strongest evidence that it was the companion star to the white dwarf, as it is traveling at a rate of 136 km/s, which is more than four times faster than the mean velocity of other stars in its stellar neighbourhood. This find has been challenged in recent years. The star is relatively far away from the center and does not show the rotation that might be expected of a companion star. In Gaia DR2, the star's distance was recalculated, placing it on the lower end of SN 1572's possible range of distances, which in turn lowered the calculated velocity from 136 km/s to only 56 km/s. In literature In the ninth episode of James Joyce's Ulysses, Stephen Dedalus associates the appearance of the supernova with the youthful William Shakespeare, and in the November 1998 issue of Sky & Telescope, three researchers from Southwest Texas State University, Don Olson and Russell Doescher of the Physics Department and Marilynn Olson of the English Department, argued that this supernova is described in Shakespeare's Hamlet, specifically by Bernardo in Act I, Scene i. The supernova inspired the poem "Al Aaraaf" by Edgar Allan Poe. The protagonist in Arthur C. Clarke's 1955 short story "The Star" casually mentions the supernova. It is a major element in Frederik Pohl's spoof science article, "The Martian Star-Gazers", first published in Galaxy Science Fiction Magazine in 1962. See also List of supernova remnants References External links Light curve and spectrum of Tycho's Supernova solstation.com: Tycho's Star The Search for the Companion Star of Tycho Brahe's 1572 Supernova cnn.com: Important days in history of universe Historical supernovae Supernova remnants 1572 1572 in science Tycho Brahe Cassiopeia (constellation) Articles containing video clips Durchmusterung objects
SN 1572
[ "Astronomy" ]
1,972
[ "Historical supernovae", "Cassiopeia (constellation)", "Constellations", "History of astronomy" ]
2,146,220
https://en.wikipedia.org/wiki/Steenrod%20algebra
In algebraic topology, a Steenrod algebra was defined by Henri Cartan to be the algebra of stable cohomology operations for mod p cohomology. For a given prime number p, the Steenrod algebra A_p is the graded Hopf algebra over the field F_p of order p, consisting of all stable cohomology operations for mod p cohomology. It is generated by the Steenrod squares Sq^i introduced by Norman Steenrod for p = 2, and by the Steenrod reduced p-th powers P^i, also introduced by Steenrod, and the Bockstein homomorphism β for p > 2. The term "Steenrod algebra" is also sometimes used for the algebra of cohomology operations of a generalized cohomology theory. Cohomology operations A cohomology operation is a natural transformation between cohomology functors. For example, if we take cohomology with coefficients in a ring R, the cup product squaring operation yields a family of cohomology operations: H^n(X; R) → H^{2n}(X; R), x ↦ x ⌣ x. Cohomology operations need not be homomorphisms of graded rings; see the Cartan formula below. These operations do not commute with suspension—that is, they are unstable. (This is because if Y is a suspension of a space X, the cup product on the cohomology of Y is trivial.) Steenrod constructed stable operations Sq^i for all i greater than zero. The notation Sq and their name, the Steenrod squares, comes from the fact that Sq^n restricted to classes of degree n is the cup square. There are analogous operations for odd primary coefficients, usually denoted P^i and called the reduced p-th power operations: P^i: H^n(X; F_p) → H^{n+2i(p−1)}(X; F_p). The Sq^i generate a connected graded algebra over F_2, where the multiplication is given by composition of operations. This is the mod 2 Steenrod algebra. In the case p > 2, the mod p Steenrod algebra is generated by the P^i and the Bockstein operation β associated to the short exact sequence 0 → Z/p → Z/p² → Z/p → 0. In the case p = 2, the Bockstein element is Sq^1 and the reduced p-th power P^i is Sq^{2i}. As a cohomology ring We can summarize the properties of the Steenrod operations as generators in the cohomology ring of the Eilenberg–Maclane spectrum HF_p, since there is an isomorphism A_p = H^*(HF_p; F_p) ≅ lim_n H^{*+n}(K(F_p, n); F_p) giving a direct sum decomposition of all possible cohomology operations with coefficients in F_p. Note the inverse limit of cohomology groups appears because it is a computation in the stable range of cohomology groups of Eilenberg–Maclane spaces. This result was originally computed by Cartan and Serre. Note there is a dual characterization using homology for the dual Steenrod algebra. Remark about generalizing to generalized cohomology theories It should be observed if the Eilenberg–Maclane spectrum HF_p is replaced by an arbitrary spectrum E, then there are many challenges for studying the cohomology ring E^*(E). In this case, the generalized dual Steenrod algebra E_*(E) should be considered instead because it has much better properties and can be tractably studied in many cases (such as MU). In fact, these ring spectra are commutative and the π_*(E)-bimodules E_*(E) are flat. In this case, there is a canonical coaction of E_*(E) on E_*(X) for any space X, such that this action behaves well with respect to the stable homotopy category, i.e., there is an isomorphism E_*(E) ⊗_{π_*(E)} E_*(X) ≅ π_*(E ∧ E ∧ X), hence we can use the unit of the ring spectrum E to get a coaction of E_*(E) on E_*(X). Axiomatic characterization Steenrod and Epstein showed that the Steenrod squares are characterized by the following 5 axioms: Naturality: Sq^n: H^m(X; Z/2) → H^{m+n}(X; Z/2) is an additive homomorphism and is natural with respect to any f: X → Y, so f^*(Sq^n(x)) = Sq^n(f^*(x)). Sq^0 is the identity homomorphism. Sq^n(x) = x ⌣ x for x ∈ H^n(X; Z/2). If n > deg(x) then Sq^n(x) = 0. Cartan Formula: Sq^n(x ⌣ y) = Σ_{i+j=n} (Sq^i x) ⌣ (Sq^j y). In addition the Steenrod squares have the following properties: Sq^1 is the Bockstein homomorphism β of the exact sequence 0 → Z/2 → Z/4 → Z/2 → 0. Sq^i commutes with the connecting morphism of the long exact sequence in cohomology.
In particular, it commutes with respect to suspension: Sq^n(σx) = σ(Sq^n x), where σ is the suspension isomorphism. They satisfy the Adem relations, described below. Similarly the following axioms characterize the reduced p-th powers for p > 2. Naturality: P^n: H^m(X; Z/p) → H^{m+2n(p−1)}(X; Z/p) is an additive homomorphism and natural. P^0 is the identity homomorphism. P^n is the cup p-th power on classes of degree 2n. If 2n > deg(x) then P^n(x) = 0. Cartan Formula: P^n(x ⌣ y) = Σ_{i+j=n} (P^i x) ⌣ (P^j y). As before, the reduced p-th powers also satisfy the Adem relations and commute with the suspension and boundary operators. Adem relations The Adem relations for p = 2 were conjectured by Wen-tsün Wu and established by José Adem. They are given by Sq^a Sq^b = Σ_c C(b−c−1, a−2c) Sq^{a+b−c} Sq^c for all a, b such that 0 < a < 2b. (The binomial coefficients are to be interpreted mod 2.) The Adem relations allow one to write an arbitrary composition of Steenrod squares as a sum of Serre–Cartan basis elements; a small computational sketch of this reduction appears below, after the section on finite sub-Hopf algebras. For odd p the Adem relations are P^a P^b = Σ_t (−1)^{a+t} C((p−1)(b−t)−1, a−pt) P^{a+b−t} P^t for a < pb, together with a similar relation expressing P^a β P^b in terms of β P^{a+b−t} P^t and P^{a+b−t} β P^t for a ≤ pb. Bullett–Macdonald identities Bullett and Macdonald reformulated the Adem relations as the following identities. For p = 2 put P(t) = Σ_i t^i Sq^i; then the Adem relations are equivalent to P(s² + st) · P(t²) = P(t² + st) · P(s²). For p > 2 put P(t) = Σ_i t^i P^i; then the Adem relations are equivalent to the statement that a corresponding generating-function expression, built from P(t) and the Bockstein operation β, is symmetric in s and t. Geometric interpretation There is a nice straightforward geometric interpretation of the Steenrod squares using manifolds representing cohomology classes. Suppose X is a smooth manifold and consider a cohomology class α represented geometrically as a smooth submanifold f: Y ↪ X. Cohomologically, if we let 1 = [Y] represent the fundamental class of Y then the pushforward map f_*(1) = α gives a representation of α. In addition, associated to this immersion is a real vector bundle called the normal bundle ν_{Y/X} → Y. The Steenrod squares of α can now be understood — they are the pushforward of the Stiefel–Whitney classes of the normal bundle, Sq^i(α) = f_*(w_i(ν_{Y/X})), which gives a geometric reason for why the Steenrod products eventually vanish. Note that because the Steenrod maps are group homomorphisms, if we have a class β which can be represented as a sum Σ α_i where the α_i are represented as manifolds, we can interpret the squares of the classes as sums of the pushforwards of the normal bundles of their underlying smooth manifolds, i.e., Sq^j(β) = Σ_i f_{i*}(w_j(ν_{Y_i/X})). Also, this equivalence is strongly related to the Wu formula. Computations Complex projective spaces On the complex projective plane CP², there are only the following non-trivial cohomology groups, H^0, H^2, and H^4, as can be computed using a cellular decomposition. This implies that the only possible non-trivial Steenrod product is Sq^2 on H^2 since it gives the cup product on cohomology. As the cup product structure on H^*(CP²) is nontrivial, this square is nontrivial. There is a similar computation on higher complex projective spaces, where the non-trivial squares arise from the squaring operations on the cohomology groups through the cup product. In CP² the square can be computed using the geometric techniques outlined above and the relation between Chern classes and Stiefel–Whitney classes; note that a hyperplane CP¹ ⊂ CP² represents the non-zero class in H^2(CP²). It can also be computed directly using the Cartan formula together with the axiom that Sq^n is the cup square on classes of degree n. Infinite Real Projective Space The Steenrod operations for real projective spaces can be readily computed using the formal properties of the Steenrod squares. Recall that H^*(RP^∞; F_2) ≅ F_2[x], where deg(x) = 1. For the operations on H^1 we know that Sq^0(x) = x, Sq^1(x) = x², and Sq^n(x) = 0 for n > 1. The Cartan relation implies that the total square Sq := Sq^0 + Sq^1 + Sq^2 + ⋯ is a ring homomorphism Sq: H^*(X) → H^*(X). Hence Sq(x^n) = (Sq x)^n = (x + x²)^n = Σ_i C(n, i) x^{n+i}. Since there is only one degree (n+i) component of the previous sum, we have that Sq^i(x^n) = C(n, i) x^{n+i}. Construction Suppose that π is a subgroup of the symmetric group on n points, u a cohomology class in H^q(X, B), A an abelian group acted on by π, and c a cohomology class in H^k(π, A). Steenrod showed how to construct a reduced power of u in H^{nq−k}(X, (A ⊗ B ⊗ ⋯ ⊗ B)/π), as follows.
Taking the external product of u with itself n times gives an equivariant cocycle on X^n with coefficients in B ⊗ ⋯ ⊗ B. Choose E to be a contractible space on which G acts freely and an equivariant map from E × X to X^n. Pulling back u^n by this map gives an equivariant cocycle on E × X and therefore a cocycle of (E/G) × X with coefficients in B ⊗ ⋯ ⊗ B. Taking the slant product with c gives a cocycle of X with coefficients in (A ⊗ B ⊗ ⋯ ⊗ B)/G. The Steenrod squares and reduced powers are special cases of this construction where G is a cyclic group of prime order p = n acting as a cyclic permutation of n elements, and the groups A and B are cyclic of order p, so that (A ⊗ B ⊗ ⋯ ⊗ B)/G is also cyclic of order p. Properties of the Steenrod algebra In addition to the axiomatic structure the Steenrod algebra satisfies, it has a number of additional useful properties. Basis for the Steenrod algebra Jean-Pierre Serre (for p = 2) and Henri Cartan (for p > 2) described the structure of the Steenrod algebra of stable mod p cohomology operations, showing that it is generated by the Bockstein homomorphism together with the Steenrod reduced powers, and that the Adem relations generate the ideal of relations between these generators. In particular they found an explicit basis for the Steenrod algebra. This basis relies on a certain notion of admissibility for integer sequences. We say a sequence i_1, i_2, …, i_n is admissible if for each j, we have that i_j ≥ 2i_{j+1}. Then the elements Sq^I = Sq^{i_1} ⋯ Sq^{i_n}, where I is an admissible sequence, form a basis (the Serre–Cartan basis) for the mod 2 Steenrod algebra, called the admissible basis. There is a similar basis for the case p > 2, consisting of the admissible monomials β^{ε_0} P^{s_1} β^{ε_1} ⋯ P^{s_n} β^{ε_n}, such that s_j ≥ p·s_{j+1} + ε_j and ε_j ∈ {0, 1}. Hopf algebra structure and the Milnor basis The Steenrod algebra has more structure than a graded F_p-algebra. It is also a Hopf algebra, so that in particular there is a diagonal or comultiplication map ψ : A → A ⊗ A induced by the Cartan formula for the action of the Steenrod algebra on the cup product. This map is easier to describe than the product map, and is given by ψ(Sq^k) = Σ_{i+j=k} Sq^i ⊗ Sq^j, ψ(P^k) = Σ_{i+j=k} P^i ⊗ P^j, and ψ(β) = β ⊗ 1 + 1 ⊗ β. These formulas imply that the Steenrod algebra is co-commutative. The linear dual of ψ makes the (graded) linear dual A_* of A into an algebra. John Milnor proved, for p = 2, that A_* is a polynomial algebra, with one generator ξ_k of degree 2^k − 1, for every k, and for p > 2 that the dual Steenrod algebra A_* is the tensor product of the polynomial algebra in generators ξ_k of degree 2(p^k − 1) and the exterior algebra in generators τ_k of degree 2p^k − 1. The monomial basis for A_* then gives another choice of basis for A, called the Milnor basis. The dual to the Steenrod algebra is often more convenient to work with, because the multiplication is (super) commutative. The comultiplication for A_* is the dual of the product on A; it is given by ψ(ξ_n) = Σ_{i=0}^{n} ξ_{n−i}^{p^i} ⊗ ξ_i, where ξ_0 = 1, and, if p > 2, ψ(τ_n) = τ_n ⊗ 1 + Σ_{i=0}^{n} ξ_{n−i}^{p^i} ⊗ τ_i. The only primitive elements of A_* for p = 2 are the elements of the form ξ_1^{2^i}, and these are dual to the Sq^{2^i} (the only indecomposables of A). Relation to formal groups The dual Steenrod algebras are supercommutative Hopf algebras, so their spectra are algebra supergroup schemes. These group schemes are closely related to the automorphisms of 1-dimensional additive formal groups. For example, if p = 2 then the dual Steenrod algebra is the group scheme of automorphisms of the 1-dimensional additive formal group scheme x + y that are the identity to first order. These automorphisms are of the form x → x + ξ_1 x² + ξ_2 x⁴ + ξ_3 x⁸ + ⋯. Finite sub-Hopf algebras The Steenrod algebra admits a filtration by finite sub-Hopf algebras. As A_2 is generated by the elements Sq^{2^i}, we can form the subalgebras A(n) generated by the Steenrod squares Sq^1, Sq^2, Sq^4, …, Sq^{2^n}, giving the filtration A(1) ⊂ A(2) ⊂ ⋯ ⊂ A_2. These algebras are significant because they can be used to simplify many Adams spectral sequence computations, such as those for π_*(ko) and π_*(tmf).
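As a brief worked illustration of the Adem relations and the admissible basis described above, the following is a standard small computation, spelled out as an example. Consider the compositions Sq^1 Sq^2 and Sq^2 Sq^2, neither of which is admissible (since 1 < 2·2 and 2 < 2·2). The mod 2 Adem relation gives

\[
Sq^1 Sq^2 = \binom{1}{1}\, Sq^3 = Sq^3, \qquad
Sq^2 Sq^2 = \binom{1}{2}\, Sq^4 + \binom{0}{0}\, Sq^3 Sq^1 = Sq^3 Sq^1 \pmod{2}.
\]

In particular Sq^3 = Sq^1 Sq^2 is decomposable, the smallest instance of the fact, used in the Hopf invariant one theorem below, that Sq^k is decomposable whenever k is not a power of 2.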
Algebraic construction There is also the following algebraic construction of the Steenrod algebra over a finite field F_q of order q. If V is a vector space over F_q then write SV for the symmetric algebra of V. There is an algebra homomorphism P(x) : SV → SV[[x]] with P(x)(v) = v + F(v)x = v + v^q x for v ∈ V, where F is the Frobenius endomorphism of SV. If we put P(x)(f) = Σ_i P^i(f) x^i (for p odd) or P(x)(f) = Σ_i Sq^{2i}(f) x^i (for p = 2), then if V is infinite dimensional these elements generate an algebra isomorphic to the subalgebra of the Steenrod algebra generated by the reduced p-th powers for p odd, or the even Steenrod squares Sq^{2i} for p = 2. Applications Early applications of the Steenrod algebra were calculations by Jean-Pierre Serre of some homotopy groups of spheres, using the compatibility of transgressive differentials in the Serre spectral sequence with the Steenrod operations, and the classification by René Thom of smooth manifolds up to cobordism, through the identification of the graded ring of bordism classes with the homotopy groups of Thom complexes, in a stable range. The latter was refined to the case of oriented manifolds by C. T. C. Wall. A famous application of the Steenrod operations, involving factorizations through secondary cohomology operations associated to appropriate Adem relations, was the solution by J. Frank Adams of the Hopf invariant one problem. One application of the mod 2 Steenrod algebra that is fairly elementary is the following theorem. Theorem. If there is a map S^{2n−1} → S^n of Hopf invariant one, then n is a power of 2. The proof uses the fact that each Sq^k is decomposable for k which is not a power of 2; that is, such an element is a product of squares of strictly smaller degree. Michael A. Mandell gave a proof of the following theorem by studying the Steenrod algebra (with coefficients in the algebraic closure of F_p): Theorem. The singular cochain functor with coefficients in the algebraic closure of F_p induces a contravariant equivalence from the homotopy category of connected p-complete nilpotent spaces of finite p-type to a full subcategory of the homotopy category of E_∞-algebras with coefficients in the algebraic closure of F_p. Connection to the Adams spectral sequence and the homotopy groups of spheres The cohomology of the Steenrod algebra is the E_2 term for the (p-local) Adams spectral sequence, whose abutment is the p-component of the stable homotopy groups of spheres. More specifically, the E_2 term of this spectral sequence may be identified as Ext_{A_p}^{s,t}(F_p, F_p). This is what is meant by the aphorism "the cohomology of the Steenrod algebra is an approximation to the stable homotopy groups of spheres." See also Pontryagin cohomology operation Dual Steenrod algebra Cohomology operation References Pedagogical Characteristic classes – contains more calculations, such as for Wu manifolds Steenrod squares in Adams spectral sequence – contains interpretations of Ext terms and Steenrod squares Motivic setting Reduced power operations in motivic cohomology Motivic cohomology with Z/2-coefficients Motivic Eilenberg–MacLane spaces The homotopy of C-motivic modular forms – relates to motivic tmf References Allen Hatcher, Algebraic Topology. Cambridge University Press, 2002. Available free online from the author's home page. Algebraic topology Hopf algebras
Steenrod algebra
[ "Mathematics" ]
2,920
[ "Fields of abstract algebra", "Topology", "Algebraic topology" ]
2,146,240
https://en.wikipedia.org/wiki/Graphite%20%28smart%20font%20technology%29
Graphite is a programmable Unicode-compliant smart font technology and rendering system developed by SIL International as free software, distributed under the terms of the GNU Lesser General Public License and the Common Public License. Capabilities and comparison to other smart font technologies Graphite is based on the TrueType font format, and adds three of its own tables. It allows for a variety of rendering rules, including ligatures, glyph substitution, glyph insertion, glyph rearrangement, anchoring diacritics, kerning, and justification. Graphite rules may be sensitive to context. For instance, there might be a glyph substitution rule that replaces every non-final s by an ſ. In a Graphite font, all smart rendering information resides within the font file. In order to display the Graphite smart rendering, an application needs only Graphite support, but no built-in knowledge about the writing system's rendering. This makes Graphite especially suited for minority writing systems that cannot rely on applications to provide built-in rendering information. In this regard, Graphite is similar to AAT and different from OpenType, which requires applications to provide built-in rendering information. Graphite support Graphite was originally implemented on Windows. It has been ported to Linux. It is also available on Mac OS X Snow Leopard, although with AAT, macOS already provides a technology suitable for minority scripts. Applications that support Graphite include the SIL WorldPad, XeTeX, OpenOffice.org (since version 3.2, except for the macOS version), and LibreOffice (formerly except for the macOS version; since version 5.3, Graphite is available on all platforms). It was built into Thunderbird 11 and Firefox 11, and was turned on by default from version 22, but was disabled in Firefox version 45.0.1 and re-enabled in version 49.0. See also OpenType Apple Advanced Typography Uniscribe HarfBuzz International Components for Unicode References External links List of Graphite-enabled fonts SIL Language Technology products, which include Graphite and fonts SIL Graphite Sourceforge website Project SILA — Graphite and Mozilla integration project Presentation of Graphite for aKademy 2007, by S Correll SIL Graphite Font Demo for testing browsers Font formats Free typesetting software Text rendering libraries Free software programmed in C++ Software using the GNU Lesser General Public License
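To make the idea of a contextual rule concrete, here is a minimal Python sketch of the long-s substitution mentioned above (replace every non-final s with ſ). It is only an illustration of the kind of contextual substitution a Graphite font can encode; real Graphite rules are compiled into the font's own tables rather than written in application code, and the function below is a hypothetical stand-in, not part of any Graphite API.

# Illustrative only: a contextual glyph-substitution pass over one word,
# mimicking the "replace every non-final s with a long s (ſ)" rule above.
# In Graphite itself, such rules live inside the font file.
def apply_long_s_rule(word: str) -> str:
    out = []
    for i, ch in enumerate(word):
        # "Non-final" here means the 's' is followed by another
        # character of the same word.
        if ch == "s" and i < len(word) - 1:
            out.append("\u017F")  # LATIN SMALL LETTER LONG S (ſ)
        else:
            out.append(ch)
    return "".join(out)

print(apply_long_s_rule("success"))  # prints: ſucceſs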
Graphite (smart font technology)
[ "Technology" ]
511
[ "Computing stubs", "Digital typography stubs" ]
2,146,241
https://en.wikipedia.org/wiki/Hydrobiology
Hydrobiology is the science of life and life processes in water. Much of modern hydrobiology can be viewed as a sub-discipline of ecology, but the sphere of hydrobiology includes taxonomy, economic and industrial biology, morphology, and physiology. The one distinguishing aspect is that all fields relate to aquatic organisms. Most work is related to limnology and can be divided into lotic system ecology (flowing waters) and lentic system ecology (still waters). One of the significant areas of current research is eutrophication. Special attention is paid to biotic interactions in plankton assemblages, including the microbial loop, the mechanisms influencing algal blooms, phosphorus loading, and lake turnover. Another subject of research is the acidification of mountain lakes. Long-term studies are carried out on changes in the ionic composition of the water of rivers, lakes and reservoirs in connection with acid rain and fertilization. One goal of current research is elucidation of the basic environmental functions of the ecosystem in reservoirs, which are important for water quality management and water supply. Much of the early work of hydrobiologists concentrated on the biological processes utilized in sewage treatment and water purification, especially slow sand filters. Other historically important work sought to provide biotic indices for classifying waters according to the biotic communities that they supported. This work continues to this day in Europe in the development of classification tools for assessing water bodies for the EU water framework directive. A hydrobiology technician conducts field analyses. They identify plant and animal species, locate their habitats, and count them. They also identify pollutants and nuisances that can affect aquatic fauna and flora. They take samples and write reports of their observations for publication. A hydrobiology engineer is involved more deeply in the design of the study. They define the intervention protocols and which samples should be taken. They plan and program the study campaigns, and then summarize the results. In the event of pollution, they propose solutions to improve the biological quality of water within the framework of the regulations in force. In the case of complex programs, hydrobiologists can work in a multidisciplinary team with botanists and zoologists. The hydrobiologist works on behalf of large public institutions of a scientific and technological nature (CNRS, INRA, IRD, CIRAD, IRSTEA ...), public institutions (Water Agencies, Regional Directorates for the Environment, the Higher Council of Fisheries, CEMAGREF ...), companies (EDF, Veolia Environnement, Suez Environnement, Saur, ...), local authorities, research departments, and associations (fishing federations, Permanent Centers for Environmental Initiatives ...).
Training and studies The technician in hydrobiology usually holds a two- or three-year post-secondary qualification (bac +2 or bac +3): - DUT in biological engineering, with options in biological and biochemical analyses (ABB) and environmental engineering - BTSA water professions - BTS GEMEAU - water management and control - BTS and regional controls - BTSA Agricultural, Biological and Biotechnological Analyses (ANABIOTEC) - DEUST analysis of biological media - Bachelor's degree in biology The engineer in hydrobiology holds a five-year qualification (bac +5): - engineering school diploma: INA, ENSA, Polytech Montpellier water sciences and technologies - master's degree in environmental sciences or biology (training examples): environmental management and coastal ecology (University of La Rochelle), biology of organisms and populations (University of Burgundy), sciences of continental and coastal environments, Environment, Soils, Waters and Biodiversity (University of Rouen), operation and restoration of continental aquatic environments (University of Clermont-Ferrand), etc. Field of research interests The following are the research interests of hydrobiologists: Acidification impact on lake and reservoir ecosystems Ocean acidification Paleolimnology of remote mountain lakes Molecular ecology, phylogeography and taxonomy of Cladocera Chemical communication in plankton (prey-predator interactions) Biomanipulation of water reservoirs Phosphorus and nitrogen nutrient cycles Organizations American Society of Limnology and Oceanography (ASLO) International Association of Theoretical and Applied Limnology (SIL) American Fisheries Society Freshwater Biological Association, England Marine Biological Laboratory (USA) Australian Society for Fish Biology Fisheries and Marine Institute of Memorial University of Newfoundland Department of Hydrobiology (Charles University, Prague) Dresden University of Technology Institute of Hydrobiology Institute of Hydrobiology and Fishery Science Water Research Institute T.G.M. Hydrobiological Institute, Academy of Sciences of the Czech Republic Research Institute of Fish Culture and Hydrobiology Department of Hydrobiology, Slovak Academy of Sciences, Bratislava, Slovakia Max-Planck-Institut für Limnologie in Plön, Germany CNR-Istituto Italiano di Idrobiologia Institute of Hydrobiology, Chinese Academy of Sciences Hydrobiology Pty Ltd, a Brisbane, Australia-based private consulting company Institute of Zoology and Hydrobiology, University of Tartu, Estonia Department of Hydrobiology, Bulgarian Academy of Sciences Department of General and Applied Hydrobiology, Faculty of Biology, Sofia University "Sveti Kliment Ohridski", Bulgaria Journals Fundamental and Applied Limnology International Review of Hydrobiology Indian Hydrobiology Hydrobiologia The African Journal of Tropical Hydrobiology and Fisheries Review of Hydrobiology The Biological Bulletin Notable researchers Jacques Cousteau Jane Lubchenco See also Aquatic ecosystem Aquatic toxicology Freshwater biology Marine biology Marine life References E.P.H. Best (Editor), Jan P. Bakker (Editor), Netherlands-Wetlands (Developments in Hydrobiology series) (Kluwer Academic Publishers, Dordrecht, 1993, 328 pp.) R.I. Jones (Editor), V. Ilmavirta (Editor), Flagellates in Freshwater Ecosystems (Developments in Hydrobiology series) (Kluwer Academic Publishers, Dordrecht, 1992, 498 pp.) Jürgen Schwoerbel, Methods of Hydrobiology (Freshwater Biology) (Pergamon Press, 1st English ed., 1970, 200 pp.)
Hydrobiologist by Josée Lesparre © CIDJ - 09/2019 Citations Kopáček, Jiří; Kaňa, Jiří; Bičárová, Svetlana; Brahney, Janice; Navrátil, Tomáš; Norton, Stephen A.; Porcal, Petr; Stuchlík, Evžen (2019-09-06). "Climate change accelerates recovery of the Tatra Mountain lakes from acidification and increases their nutrient and chlorophyll a concentrations". Aquatic Sciences. Beamish, Richard J.; Harvey, Harold H. (2011-04-13). "Acidification of the La Cloche Mountain Lakes, Ontario, and Resulting Fish Mortalities". Journal of the Fisheries Board of Canada. Qu, Bin; Zhang, Yulan; Kang, Shichang; Sillanpää, Mika (2019-02-01). "Water quality in the Tibetan Plateau: Major ions and trace elements in rivers of the "Water Tower of Asia"". Science of the Total Environment. Galizia Tundisi, José (2018-08-01). "Reservoirs: New challenges for ecosystem studies and environmental management". Water Security. "Introduction to the EU Water Framework Directive - Environment - European Commission". ec.europa.eu. Retrieved 2022-03-01. Doney, Scott C.; Busch, D. Shallin; Cooley, Sarah R.; Kroeker, Kristy J. (2020-10-17). "The Impacts of Ocean Acidification on Marine Ecosystems and Reliant Human Communities". Annual Review of Environment and Resources. Moser, Katrina A.; Hundey, Elizabeth J.; Sia, Maria E.; Doyle, Rebecca M.; Dunne, Holly; Longstaffe, Fred J. (2020-05-01). "Factors Leading to Increased Algal Production in Mountain Lakes: A Paleolimnological Perspective from the Uinta Mountains, Utah, USA". Bekker, Eugeniya I.; Karabanov, Dmitry P.; Galimov, Yan R.; Haag, Christoph R.; Neretina, Tatiana V.; Kotov, Alexey A. (2018-03-15). "Phylogeography of Daphnia magna Straus (Crustacea: Cladocera) in Northern Eurasia: Evidence for a deep longitudinal split between mitochondrial lineages". Shemi, Adva; Alcolombri, Uria; Schatz, Daniella; Farstey, Viviana; Vincent, Flora; Rotkopf, Ron; Ben-Dor, Shifra; Frada, Miguel J.; Tawfik, Dan S.; Vardi, Assaf (2021-11). "Dimethyl sulfide mediates microbial predator–prey interactions between zooplankton and algae in the ocean". Nature Microbiology. Jůza, Tomáš; Duras, Jindřich; Blabolil, Petr; Sajdlová, Zuzana; Hess, Josef; Chocholoušková, Zdeňka; Kubečka, Jan (2019-10-01). "Recovery of the Velky Bolevecky pond (Plzen, Czech Republic) via biomanipulation – Key study for management". Ecological Engineering. Hecky, R. E.; Bootsma, H. A.; Mugidde, R. M.; Bugenyi, F. W. B. (1996). "Phosphorus Pumps, Nitrogen Sinks, and Silicon Drains: Plumbing Nutrients in the African Great Lakes". The Limnology, Climatology and Paleoclimatology of the East African Lakes. Routledge. External links Hydrobiology website Website for Hydrobiology, Aquacultures, Ichthyology, Water purification and Biological Oceanology - Bulgaria The International Congress on the Biology of Fish Annual Larval Fish Conference Annual Northeast Fish and Wildlife Conference The Waterrose Aquatic Ecology Page Developments in Hydrobiology - Springer book series printing proceedings of international conferences on hydrobiology Aquatic biomes Aquatic ecology Branches of biology
Hydrobiology
[ "Biology" ]
2,072
[ "Aquatic ecology", "Ecosystems", "nan" ]
2,146,253
https://en.wikipedia.org/wiki/Niobium%28V%29%20chloride
Niobium(V) chloride, also known as niobium pentachloride, is a yellow crystalline solid. It hydrolyzes in air, and samples are often contaminated with small amounts of NbOCl3. It is often used as a precursor to other compounds of niobium. NbCl5 may be purified by sublimation. Structure and properties Niobium(V) chloride forms chloro-bridged dimers in the solid state. Each niobium centre is six-coordinate, but the octahedral coordination is significantly distorted. The equatorial niobium–chlorine bond lengths are 225 pm (terminal) and 256 pm (bridging), whilst the axial niobium–chlorine bonds are 229.2 pm and are deflected inwards to form an angle of 83.7° with the equatorial plane of the molecule. The Nb–Cl–Nb angle at the bridge is 101.3°. The Nb–Nb distance is 398.8 pm, too long for any metal–metal interaction. NbBr5, NbI5, TaCl5, TaBr5 and TaI5 are isostructural with NbCl5. Preparation Industrially, niobium pentachloride is obtained by direct chlorination of niobium metal at 300 to 350 °C: 2 Nb + 5 Cl2 → 2 NbCl5 In the laboratory, niobium pentachloride is often prepared from Nb2O5, the main challenge being incomplete reaction, which gives NbOCl3 instead. The conversion can be effected with thionyl chloride: Nb2O5 + 5 SOCl2 → 2 NbCl5 + 5 SO2 It can also be prepared by chlorination of niobium pentoxide in the presence of carbon at 300 °C. Uses Niobium(V) chloride is the main precursor to the alkoxides of niobium, which find uses in sol-gel processing. It is also the precursor to many other Nb-containing reagents, including most organoniobium compounds. In organic synthesis, NbCl5 is a very specialized Lewis acid for activating alkenes in the carbonyl-ene reaction and the Diels-Alder reaction. Niobium chloride can also generate N-acyliminium compounds from certain pyrrolidines, which are substrates for nucleophiles such as allyltrimethylsilane, indole, or the silyl enol ether of benzophenone. References External links Safety information from ChemExper NIST Standard Reference Database Niobium(V) compounds Chlorides Metal halides
Niobium(V) chloride
[ "Chemistry" ]
550
[ "Chlorides", "Inorganic compounds", "Metal halides", "Salts" ]
2,146,454
https://en.wikipedia.org/wiki/Media%20preservation
Preservation of documents, pictures, recordings, digital content, etc., is a major aspect of archival science. It is also an important consideration for people who are creating time capsules, family history, historical documents, scrapbooks and family trees. Common storage media are not permanent, and there are few reliable methods of preserving documents and pictures for the future. Paper/prints (photos) Color negatives and ordinary color prints may fade away to nothing in a relatively short period if not stored and handled properly. This happens even if the negatives and prints are kept in the dark, because ambient light is not the determining factor; heat and humidity are. The color degradation is the result of the dyes used in the color processes. Because color processing results in a less stable image than traditional black-and-white processing, black-and-white pictures from the 1920s are more likely to survive long-term than color films and photographs from after the mid-20th century. Black-and-white photographic films using silver halide emulsions are the only film types that have proven to last in archival storage. The determining factors for longevity include the film base type, proper processing (develop, stop, fix and wash) and proper storage. Early films used a cellulose nitrate base, which was prone to decomposition and highly flammable. Nitrate film was replaced with acetate-based films. These cellulose acetate films were later discovered to outgas acids (a condition also referred to as vinegar syndrome). Acetate films were replaced in the early 1980s by polyester film base materials, which have been determined to be more stable than film stocks with a nitrate or acetate base. Color prints made on most inkjet printers look very good at first, but they have a very short lifespan, measured in months rather than in years. Even prints from commercial photo labs will start to fade in a matter of years if not processed properly and stored in cool, dry environments. Documents/books For documents whose medium is less important than their content, the information can be copied using photocopiers and image scanners. Books and manuscripts can also have their information saved without destruction by using a book scanner. Where the medium itself needs to be preserved, for example if a document is a crayon sketch by a famous artist on paper, a complex process of preservation may be used. Depending on the condition and importance of the item, this can include gluing the media onto more stable media, or protective enclosing of the media. Polyester sleeves, acid-free folders, and pH buffered document boxes are common supportive protective enclosures whose selection must match the media's chemical and physical properties. Other considerations in preserving paper/books are: Damaging light, particularly UV light, which fades and destroys media over time by breaking down the molecules. The atmosphere, which contains small traces of sulfur dioxide and nitric acid that turn media yellow and break down the fibers. Humidity and moisture, which also aid in the breakdown of media. If there is too much, the document can be attacked by bacteria, and if too little, cellulose material breaks down. Temperature, particularly elevated temperatures, which can destroy some media. Low temperatures can cause water to form crystals, which expand and destroy the structure of paper-based documents.
Online photo albums Although there are many websites that allow the upload of photographs and videos, digital preservation for the long-term is still considered an issue. There is a lack of confidence that such websites are capable of storing data for long periods of time (e.g., 50 years) without data degradation or loss. Optical media - CD, DVD, Blu-ray, M-Disc Write-once optical media, such as CD-Rs and DVD-Rs, typically contain an organic dye layer in which data are written by locally changing the dye's transparency along the disc. Conventional CDs and DVDs have a finite shelf-life due to natural degradation of the dye; the newer M-DISC uses inorganic material technology to produce molded DVDs and Blu-rays (up to 3-layer 100 GB BDXL) with a claimed lifespan of 100–1000 years if stored correctly, and most BD- and BDXL-rated readers/writers sold after 2011 enable the higher-power write mode the M-DISC format requires. The National Archives and Records Administration lists published life expectancies of 10 to 25 years or more for normal CDs and DVDs, and conservative life expectancies of between 2 and 5 years. Storage environments, such as temperature and humidity, as well as handling conditions such as frequency of media use and compatibility between the recorder and media, affect media shelf-life. Improvements in media storage and migrations to new recording technologies can make certain formats obsolete within their respective lifespans. Technologists have pointed to internet streaming services, where services such as video-on-demand have contributed to a 33 percent decline in DVD sales over the past 5 years, as a challenge for digital preservation. Magnetic media - video cassettes, tapes, hard drives Magnetic media such as audio and video tape and floppy disks also have limited life spans. Audio and video tapes require specific care and handling to ensure that the recorded information will be preserved. For information that must be preserved indefinitely, periodic transcription from old media to new ones is necessary, not only because the media are unstable but also because the recording technology may become obsolete. Magnetic media also deteriorate naturally, with typical shelf lives between 10 and 20 years. Magnetic tape can degrade from binder hydrolysis or magnetic remanence decay. Binder hydrolysis, also known as sticky-shed syndrome, refers to the breakdown of the binder, or glue, that holds the magnetic particles to the polyester base of the tape. Tapes which have been stored in hot, humid conditions are particularly vulnerable to this phenomenon and may suffer from accelerated degradation. Severe binder hydrolysis can cause the magnetic material to fall off, or shed, from the base, leaving a pile of dust and clear backing. Archivists can bake the tape, which evaporates water molecules on the tape, to temporarily restore the binder before making a copy. Magnetic tape can also be destabilized by magnetic remanence decay, which refers to the weakening of the tape's magnetization over time. This reduces the affected tape's readability, leading to reduced sound clarity and volume or picture hue and contrast. Baking the tape will not restore magnetization. Media at risk include recorded media such as master audio recordings of symphonies and videotape recordings of the news gathered over the last 40 years. Threats to media that must be considered when archiving important record media include accidental erasure, physical loss due to disasters such as fires and floods, and media degradation.
Along with the degradation of the media themselves over the years, the machines available to play back or reproduce the audio sources are becoming archaic. Manufacturers, and their support for these machines (parts, technical updates), have disappeared over the years. Even if the medium is vaulted and archived correctly, the mechanical properties of the machines have deteriorated to the point that they could do more harm than good to the tape being played. Many major film studios are now backing up their libraries by converting them to electronic media files, such as .AIFF or .WAV-based files, via digital audio workstations. That way, even if the digital platform manufacturer goes out of business or no longer supports its product, the files can still be played on any common computer. Now that a digital solution is in place, a detailed process must take place before the final archival product is produced. Sample rates, their conversion, and reference speed are all critical in this process. In floppy disks, the lubricants inside the plastic jackets of many older floppies promote the decay of the magnetic medium. Also, the alignment of the magnetic particles of the disk substrate may gradually degrade, leading to a loss of formatting and data. Early laser disk media were prone to degradation as the layers of the disk substrate were bonded with an adhesive that was vulnerable to decay and would crumble over time. This would lead the different layers of the disk to peel apart, damaging the pitted data surface and rendering the disk unreadable. See also References Museology Preservation (library and archival science) Recording Digital media Conservation and restoration of cultural heritage
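Where collections are migrated to file-based formats such as .WAV, as described above, archivists commonly pair each transfer with a fixity check so that later copies can be verified bit-for-bit. The following short Python sketch illustrates the idea; the file names are hypothetical and the snippet is a generic illustration of checksum-based verification, not a prescribed archival standard.

# Minimal fixity-check sketch: record a checksum when a file is archived,
# then verify it again after any later copy or migration.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large audio masters do not fill memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names, for illustration only.
original = sha256_of("master_tape_transfer.wav")
copy = sha256_of("backup/master_tape_transfer.wav")
print("fixity intact" if original == copy else "copy is corrupt")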
Media preservation
[ "Technology" ]
1,686
[ "Multimedia", "Digital media" ]
2,146,459
https://en.wikipedia.org/wiki/Apple%20Interactive%20Television%20Box
The Apple Interactive Television Box (AITB) was a television set-top box developed by Apple Computer (now Apple Inc.) in partnership with a number of global telecommunications firms, including British Telecom and Belgacom. Prototypes of the unit were deployed at large test markets in parts of the United States and Europe in 1994 and 1995, but the product was canceled shortly thereafter, and was never mass-produced or marketed. Overview The AITB was designed as an interface between a consumer and an interactive television service. The unit's remote control would allow a user to choose what content would be shown on a connected television, and to seek with fast forward and rewind. In this regard it is similar to a modern satellite receiver or TiVo unit. The box would only pass along the user's choices to a central content server for streaming instead of issuing content itself. There were also plans for game shows, educational material for children, and other forms of content made possible by the interactive qualities of the device. Early conceptual prototypes have an unfinished feel. Near-completion units have a high production quality, the internal components often lack prototype indicators, and some units have FCC approval stickers. These facts, along with a full online manual, suggest the product was very near completion before being canceled. Infrastructure Because the machine was designed to be part of a subscription data service, the AITB units are mostly inoperable. The ROM contains only what is required to continue booting from an external hard drive or from its network connection. Many of the prototypes do not appear to even attempt to boot. This is likely dependent on changes in the ROM. The ROM itself contains parts of a downsized System 7.1, enabling it to establish a network connection to the media servers provided by Oracle. The Oracle Media Server (OMS) initially ran on hardware produced by Larry Ellison's nCube Systems company, but was later also made available by Oracle on SGI, Alpha, Sun, SCO, Netware, Windows NT, and AIX systems. These servers also provided the parts of the OS not implemented in the AITB's ROM via the OMS Boot Service. Therefore, an AITB must establish a network connection successfully in order to finish the boot process. Using a command key combination and a PowerBook SCSI adapter, it is possible to get the AITB to boot into a preinstalled System 7.1 through an external SCSI hard drive. In July 2016, images were published on a video game forum that appear to show a Super Nintendo Entertainment System cartridge designed to work with the British Telecom variant of the AITB. The cartridge is labeled "BT GameCart" and includes an 8-pin serial connector designed to connect to the Apple System/Peripheral 8 port on the rear of the box. A BT promotional film for the service trial discusses a way users could download and play Nintendo video games via the system. Specifications The Apple Interactive Television Box is based upon the Macintosh Quadra 605 or LC 475. Because the box was never marketed, not all specifications have been stated by Apple. It supports MPEG-2 Transport containing ISO 11172 (MPEG-1) bit streams, Apple Desktop Bus, RF in and out, S-Video out, RCA audio/video out, an RJ-45 connector for either an E1 data stream on PAL devices or a T1 data stream on NTSC devices, a serial port, and HDI-30 SCSI.
Apple intended to offer the AITB with a matching black ADB mouse, keyboard, Apple 300e CD-ROM drive, StyleWriter printer, and one of several styles of remote controls. The hard drive contains parts of a regular North American System 7.1.1 with Finder, several sockets for network connection protocols, and customized MPEG-1 decoding components for the QuickTime Player software. History A few units contain a special boot ROM which allows the device to boot locally from a SCSI hard drive that has the OS and applications contained within the box; these devices were used primarily by developers inside Apple and Oracle, and for limited demonstration purposes. In normal network use, content and program code were served to the box by Oracle OMS over the network to implement the box's interactivity. A few hundred to a few thousand units were deployed at Disneyland California hotels and provided in-room shopping and park navigation. Approximately 2,500 units were installed and used in consumer homes in England during the second interactive television trial conducted by British Telecom and Oracle, which was in Ipswich, UK. The set-top applications were developed using Oracle's Oracle Media Objects (OMO) product, which is somewhat similar to HyperCard, but was enhanced significantly to operate in a network-based interactive TV environment. See also Apple TV Macintosh TV Apple Pippin IPTV (Internet Protocol Television) References External links Apple Interactive Television History - Computer Town Patent filed by Apple over the set-top box Apple Inc. hardware Set-top box Television technology Discontinued Apple Inc. products
Apple Interactive Television Box
[ "Technology" ]
1,021
[ "Information and communications technology", "Television technology" ]
2,146,589
https://en.wikipedia.org/wiki/Gorongosa%20National%20Park
Gorongosa National Park is at the southern end of the Great African Rift Valley in the heart of central Mozambique, Southeast Africa. The more than 4,000 square km park comprises the valley floor and parts of surrounding plateaus. Rivers originating on nearby Mount Gorongosa water the plain. Seasonal flooding and waterlogging of the valley, which is composed of a mosaic of soil types, create a variety of distinct ecosystems. Grasslands are dotted with patches of acacia trees, savannah, dry forest on sands and seasonally rain-filled pans, and termite hill thickets. The plateaus contain miombo and montane forests and a spectacular rain forest at the base of a series of limestone gorges. This combination of unique features at one time supported some of the densest wildlife populations in all of Africa, including charismatic carnivores, herbivores, and over 500 bird species. But large mammal numbers were reduced by as much as 95% and ecosystems were stressed during the Mozambican Civil War (1977–1992). The Carr Foundation/Gorongosa Restoration Project, a U.S. non-profit organization, has teamed with the Government of Mozambique to protect and restore the ecosystem of Gorongosa National Park and to develop an ecotourism industry to benefit local communities. History Hunting reserve: 1920–1959 The first official act to protect the Gorongosa region of Portuguese Mozambique came in 1920, when the Mozambique Company ordered 1,000 square km set aside as a hunting reserve for company administrators and their guests. Chartered by the government of Portugal, the Mozambique Company controlled all of central Mozambique between 1891 and 1940. In 1935, Jose Henriques Coimbra was named warden and Jose Ferreira became the reserve's first guide. That same year the Mozambique Company enlarged the reserve to 3,200 square km to protect habitat for nyala and black rhino, both highly prized hunting trophies. By 1940 the reserve had become so popular that a new headquarters and tourist camp was built on the floodplain near the Mussicadzi River. It was abandoned two years later due to heavy flooding in the rainy season. Lions then occupied the abandoned building, and it became a popular tourist attraction for many years, known as Casa dos Leões (Lion House). National park: 1960–1980 Many improvements to the new park's trails, roads, and buildings ensued. Between 1963 and 1965 Chitengo camp was expanded to accommodate 100 overnight guests. By the late 1960s, it had two swimming pools, a bar and banquet hall, a restaurant serving 300-400 meals a day, a post office, a petrol station, a first-aid clinic, and a shop selling local handicrafts. The late 1960s also saw the first comprehensive scientific studies of the park, led by Armando Rosinha, Director of Gorongosa, and Kenneth Tinley, an Australian ecologist. In the first-ever aerial survey, Tinley and his team counted about 200 lions, 2,200 elephants, 14,000 African buffalo, 5,500 wildebeest, 3,000 zebras, 3,500 waterbuck, 2,000 impala, 3,500 hippos, and herds of eland, sable antelope and hartebeest numbering more than 500. Tinley also discovered that many people and most of the wildlife living in and around the park depended on one river, the Vundudzi, which originated on the slopes of the nearby Mount Gorongosa. Because the mountain was outside the park's boundaries, Tinley proposed expanding them to include it as a key element in a "Greater Gorongosa Ecosystem" of about 8,200 square kilometers.
He and other scientists and conservationists were disappointed in 1966 when the government reduced the park's area to 3,770 square kilometers. Meanwhile, Mozambique was in the midst of a war for independence launched in 1964 by the Mozambique Liberation Front (Frelimo). Fortunately, the war had little impact on Gorongosa National Park until 1972, when a Portuguese company and members of the Provincial Volunteer Organization were stationed there to protect it. Even then, not much damage occurred, although some soldiers hunted illegally. In 1974, the Carnation Revolution in Lisbon overthrew the Estado Novo regime. When the new Portuguese authorities decided to abdicate power in their overseas territories, Mozambique became an independent republic. In 1976, a year after Mozambique won its independence from Portugal, aerial surveys of the park and adjacent Zambezi River delta counted thousands of elephants in the region and a healthy population of lions, numbering in the hundreds. It was the largest lion population recorded in the greater Gorongosa region to date. Civil war: 1981–1994 In 1977, the People's Republic of Mozambique, under the leadership of Samora Machel, declared itself a Marxist-Leninist state. A rebel army known as RENAMO sprang up to oppose the new government. Feeling threatened by FRELIMO's new one-party government in Mozambique, neighbouring Rhodesia and South Africa began arming and supplying RENAMO. Once Rhodesia became Zimbabwe in 1980, direct support for RENAMO came from South Africa with the intention of destabilizing Machel's government. RENAMO, initially dismissed by Machel as a group of "armed bandits", had developed its war into a full-scale national threat by 1981. In December 1981, Mozambican National Resistance (MNR, or RENAMO) fighters attacked the Chitengo campsite and kidnapped several staff members, including two foreign scientists. The Mozambican Civil War lasted from 1977 to 1992. The violence increased in and around the park after that. In 1983 the park was shut down and abandoned. For the next nine years Gorongosa was the scene of frequent battles between opposing forces. Fierce hand-to-hand fighting and aerial bombing destroyed buildings and roads. The park's large mammals suffered huge losses. Both sides in the conflict slaughtered hundreds of elephants for their ivory, selling it to buy arms and supplies. Because poachers selectively killed tusked animals, roughly half of Gorongosa's female elephants born during this period were tuskless. Hungry soldiers shot many more thousands of zebras, wildebeest, African buffalo, and other ungulates. Lions survived the war, but several species of top carnivore—leopard, African wild dog, and spotted hyena—were driven locally extinct. A cease-fire agreement ended the civil war in 1992, but widespread hunting in the park continued for at least two more years. By that time many large mammal populations—including elephants, hippos, buffalo, zebras, and lions—had been reduced by more than 95 percent. An aerial survey conducted in 1994 over 68 km2 of the park counted just 5 elephants, 6 waterbuck, 3 zebra, 12 reedbuck, and 1 oribi; buffalo and sable were not detected in aerial surveys until 2001, wildebeest until 2007, and eland until 2010. Post-war: 1995–2003 A preliminary effort to rebuild Gorongosa National Park's infrastructure and restore its wildlife began in 1994, when the African Development Bank (ADB) started work on a rehabilitation plan with assistance from the European Union and the International Union for Conservation of Nature (IUCN). Fifty new staff were hired, most of them former soldiers.
Restoration: 2004–present In 2004 the Government of Mozambique and the US-based Carr Foundation agreed to work together to rebuild the park's infrastructure, restore its wildlife populations and spur local economic development—opening an important new chapter in the park's history. Since the beginning of the project, aerial surveys of wildlife have shown sharp increases in the number of large animals. In the aftermath of Cyclone Idai in 2019, park rangers conducted rescue missions using their helicopter, boat, and tractor. According to Gorongosa Project president Gregory Carr, the park was "right in the middle of the impacted area". Roughly half the park was flooded due to the cyclone, but impacts to wildlife were expected to be minimal as the animals would be able to migrate to higher ground. The protection of this area was cited as a reason that the impacts of the flood on the human population were less severe, as the protected wilderness area can moderate the flow of water. In March 2018, a leopard was captured on a park camera for the first time in 14 years, and additional leopards were reintroduced starting in 2020. In July 2018 and November 2019, two packs of African wild dogs from South Africa were reintroduced. Spotted hyena reintroductions began in July 2022. Ecology Geology The park is in a 4,000-square-km section of the Great African Rift Valley system. The Rift extends from Ethiopia to central Mozambique. Massive tectonic shifts began forming the Rift about 30 million years ago. Other warpings, uplifts, and sinkings of the Earth's crust over millennia shaped the plateaus on both sides and the mountain to the west. Mozambique's tropical savanna climate, with an annual cycle of wet and dry seasons, has added another factor to the complex equation: constant change in soil moisture that varies with elevation. The valley is located 21 km west of Mount Gorongosa at 14 m above sea level. Hydrology Gorongosa National Park protects a vast ecosystem defined and shaped by the rivers that flow into Lake Urema. The Nhandungue crosses the Barue Plateau on its way down to the valley. The Nhandue and Mucombeze come from the north. Mount Gorongosa contributes the Vunduzi. Several smaller rivers pour down off the Cheringoma Plateau. Together they comprise the Urema Catchment, an area of about 7,850 square km. Lake Urema is located in the middle of the valley, about three-quarters of the way down from the Park's northern boundary. The Muaredzi River, flowing from the Cheringoma Plateau, deposits sediments near the outlet of the lake, slowing its drainage. This "plug" causes the Urema River to greatly expand in the rainy season. Water that makes its way past this alluvial fan flows down the Urema River to the Pungue and into the Indian Ocean. In the flooded rainy season, water backs up into the valley and out onto the plains, covering as much as 200 square km in many years. During some dry seasons, the lake's waters shrink to as little as 10 square km. This constant expansion and retraction of the floodplains, amidst a patchwork of savanna, woodland, and thickets, creates a complex mosaic of smaller ecosystems that support a greater abundance and diversity of wildlife than anywhere else in the park. Vegetation Scientists have identified three main vegetation types supporting the Gorongosa ecosystem's wealth of wildlife. Seventy-six percent is savanna — combinations of grasses and woody species that favor well-drained soils. Fourteen percent is woodlands — several kinds of forest and thickets.
The rest is grasslands subjected to harsh seasonal conditions that prevent trees from growing. All three types are found throughout the system, with many different sub-types and varieties. Tree cover increased throughout the park in the decades following the Mozambican Civil War, likely due to the dramatic declines of large herbivores such as elephants during that period. Mount Gorongosa has rainforests, montane grasslands, riverine forests along its rivers, and forests and savanna woodlands at lower elevations. Both plateaus are covered with a kind of closed-canopy savanna, widespread in southern Africa, called "miombo", after the Swahili word for the dominant tree, a member of the genus Brachystegia. About 20 percent of the valley's grasslands are flooded much of the year. Mount Gorongosa In July 2010 the government of Mozambique and the Gorongosa Restoration Project (headed by the U.S.-based Carr Foundation) announced that Gorongosa Mountain would be added to the park, bringing its total size to 4,067 km2. This designation has contributed to an ongoing conflict between long-term residents of the mountain and representatives of the park. Wildlife Gorongosa is home to a large diversity of animals and plants—some of which are found nowhere else in the world. This rich biodiversity creates a complex world where animals, plants and people interact. From the smallest insects to the largest mammals, each plays an important role in the Gorongosa ecosystem. The park includes termite mounds used as shade by bushbuck and kudu. Many of the park's large herbivore populations were greatly reduced by years of war and poaching. However, almost all naturally occurring species—including more than 400 kinds of birds and a wide variety of reptiles—have survived. See also Ecotourism in Africa Jen Guyton References External links Pulitzer Center on Crisis Reporting American Greg Carr Describes Why He Is Devoting His Life And Fortune To Gorongosa (Video) National Geographic: "Devastated by war, this African park's wildlife is now thriving - A generation after the civil war, more than 100,000 large animals populate Mozambique's Gorongosa National Park, a rare spot of good news" "How Teeth Became Tusks, and Tusks Became Liabilities". The New York Times. "In Mozambique, a Living Laboratory for Nature's Renewal" The New York Times. Nature: "Upgrading protected areas to conserve wild biodiversity" VIMEO: Girls Club Gorongosa VIMEO: Dominique Gonçalves speaking about Gorongosa at National Geographic Society on Half-Earth Day, 2017 National Geographic - Children living near national parks are healthier, more prosperous Opinion by Thomas L. Friedman. The New York Times Quammen, David (May 2019). "How one of Africa's great parks is rebounding from war". National Geographic. UNDP: Stimulating Growth - Growing coffee to restore the rainforest and lift people out of poverty also reinforces Africa's greatest wildlife restoration initiative Carroll, Sean B. (22 May 2016). "Resurrecting Mozambique's Magnificent Gorongosa". Sierra. Matthews, Cate (2019). "Greatest Places 2019: Gorongosa National Park". Time. National parks of Mozambique Important Bird Areas of Mozambique Zambezian and mopane woodlands Geography of Sofala Province Tourist attractions in Sofala Province Protected areas established in 1920 Ecological restoration
Gorongosa National Park
[ "Chemistry", "Engineering" ]
2,905
[ "Ecological restoration", "Environmental engineering" ]
2,146,686
https://en.wikipedia.org/wiki/Pre-Romanesque%20art%20and%20architecture
The Pre-Romanesque period in European art spans from the emergence of the Merovingian kingdom around 500 AD, or from the Carolingian Renaissance in the late 8th century, to the beginning of the Romanesque period in the 11th century. While the term is typically used in English to refer primarily to architecture and monumental sculpture, this article will briefly cover all the arts of the period. The primary theme during this period is the introduction of classical Mediterranean and Early Christian forms and their absorption into Germanic ones, which fostered innovative new forms. This in turn led to the rise of Romanesque art in the 11th century. In the outline of Medieval art it was preceded by what is commonly called the Migration Period art of the "barbarian" peoples: Hiberno-Saxon in the British Isles and predominantly Merovingian on the Continent. In most of western Europe, the Roman architectural tradition survived the collapse of the empire. The Merovingians (Franks) continued to build large stone buildings like monastery churches and palaces. The unification of the Frankish kingdom under Clovis I (465–511) and his successors corresponded with the need for the building of churches, and especially monastery churches, as these were now the power-houses of the Merovingian church. Two hundred monasteries existed south of the Loire when St Columbanus, an Irish missionary, arrived in Europe in 585. Only 100 years later, by the end of the 7th century, over 400 flourished in the Merovingian kingdom alone. The building plans often continued the Roman basilica tradition. Many Merovingian plans have been reconstructed from archaeology. The description in Bishop Gregory of Tours' History of the Franks of the basilica of Saint-Martin, built at Tours by Saint Perpetuus (bishop 460–490) at the beginning of the period and at the time on the edge of Frankish territory, gives cause to regret the disappearance of this building, one of the most beautiful Merovingian churches, which he says had 120 marble columns, towers at the East end, and several mosaics: "Saint-Martin displayed the vertical emphasis, and the combination of block-units forming a complex internal space and the correspondingly rich external silhouette, which were to be the hallmarks of the Romanesque". The Merovingian dynasty was replaced by the Carolingian dynasty in 752 AD, which led to Carolingian architecture from 780 to 900, and Ottonian architecture in the Holy Roman Empire from the mid-10th century until the mid-11th century. These successive Frankish dynasties were major contributors to Romanesque architecture. Examples of Frankish buildings Merovingian, Carolingian and Ottonian Baptistère de Riez, built in the 4th, 5th and 7th centuries Fréjus Cathedral, circa 450 AD Crypt of Saint-Laurent, Grenoble, circa 500 Aix Cathedral, circa 500, baptistery built by the Merovingians Baptistère Saint-Jean, 507 Baptistère de Venasque, circa 500 Abbey of Saint-Germain-des-Prés, circa 540 Sainte-Radegonde de Poitiers, tomb of St. Radegunda, 587 Jouarre Abbey, 630, Merovingian crypt Kloster Reichenau, 724 Benedictine Convent of Saint John, Müstair, 780 Granusturm, 788, 20-meter-tall tower in Aachen Lorsch Abbey, gateway (c. 800) Palatine Chapel in Aachen (Aix-la-Chapelle) (792–805) Imperial Palace, Ingelheim, 800 Oratory of Bishop Theodulf of Orleans in Germigny-des-Prés, 806 St. Ursmar's Collegiate church, in Lobbes, Belgium (819–823) St.
Michael, Fulda, rotunda and crypt (822) Einhard's Basilica, Steinbach (827) Saint Justinus' church, Frankfurt-Höchst (830) Hildesheim Cathedral, original build (872) Schloss Broich, 883–884, Carolingian fortress Broich Castle, Muelheim on the Ruhr (884) Abbey of Corvey (885) St. George, Oberzell on Reichenau Island (888) St. Georg (Reichenau-Oberzell), 900 St. Johannis (Mainz), 910 Church of St Philibert, Tournus, 950 St. Cyriakus, Gernrode, 969 Ottonian and Holy Roman Empire Mainz Cathedral, begun between 991 and 994, retains some structure of this period. St. Michael's Church, Hildesheim, 1031 Imperial styles Carolingian art Carolingian art spans the roughly 120-year period from about 780 to 900, during Charlemagne's and his immediate heirs' rule, popularly known as the Carolingian Renaissance. Although brief, it was very influential; northern European kings promoted classical Mediterranean Roman art forms for the first time, while also creating innovative new forms such as naturalistic figure line drawings that would have lasting influence. Carolingian churches generally are basilican, like the Early Christian churches of Rome, and commonly incorporated westworks, which are arguably the precedent for the western facades of later medieval cathedrals. An original westwork survives today at the Abbey of Corvey, built in 885. After a rather chaotic interval following the Carolingian period, the new Ottonian dynasty revived Imperial art from about 950, building on and further developing Carolingian style in Ottonian art. Ottonian art Germanic pre-Romanesque art during the 120-year period from 936 to 1056 is commonly called Ottonian art, after the three Saxon emperors named Otto (Otto I, Otto II, and Otto III) who ruled the Holy Roman Empire from 936 to 1002. After the decline of the Carolingian Empire, the Holy Roman Empire was re-established under the Saxon (Ottonian) dynasty. From this emerged a renewed faith in the idea of Empire and a reformed Church, creating a period of heightened cultural and artistic fervour. It was in this atmosphere that masterpieces were created that fused the traditions from which Ottonian artists derived their inspiration: models of Late Antique, Carolingian, and Byzantine origin. Much Ottonian art reflected the dynasty's desire to establish visually a link to the Christian rulers of Late Antiquity, such as Constantine, Theodoric, and Justinian, as well as to their Carolingian predecessors, particularly Charlemagne. Ottonian monasteries produced some of the most magnificent medieval illuminated manuscripts. They were a major art form of the time, and monasteries received direct sponsorship from emperors and bishops, having the best in equipment and talent available. Regional styles British Isles Prior to King Alfred, the dominant art style in England was Hiberno-Saxon culture, which produced Insular art, a fusion of Anglo-Saxon and Celtic techniques and motifs; this had largely ceased in Ireland and Northern England with the Viking invasions. The period from the time of King Alfred (885), with the revival of English culture after the end of the Viking raids, to the early 12th century, when Romanesque art became the new movement, is known as the Anglo-Saxon period proper. Anglo-Saxon art is mainly known today through illuminated manuscripts and metalwork. Croatia In the 7th century the Croats, with other Slavs and Avars, came from Northern Europe to the region where they live today.
The first Croatian churches were built as royal sanctuaries, and the influence of Roman art was strongest in Dalmatia, where urbanization was densest. Gradually that influence waned, and simplifications and alterations of inherited forms, and even original buildings, appeared. All of them (a dozen large ones and hundreds of small ones) were built with roughly cut stone bonded with a thick layer of mortar on the outside. Large churches are longitudinal with one or three naves, like the Church of the Holy Salvation at the spring of the river Cetina, built in the 9th century, along with the Church of the Holy Cross in Nin. The largest and most complicated central-plan church from the 9th century is dedicated to Saint Donatus in Zadar. Altar rails and windows of those churches were highly decorated with shallow string-like interlace ornament called pleter (meaning to plait or weave), because the strands were threaded and rethreaded through themselves. Motifs of those reliefs were taken from Roman art; sometimes figures from the Bible appeared alongside this decoration, like the relief in Holy Nedjeljica in Zadar, where they were subordinated to the pattern. This also happened to engravings in the early Croatian script, Glagolitic. Soon, the Glagolitic writings were replaced with Latin on the altar rails and architraves of old-Croatian churches. From the Crown Church of King Zvonimir (the so-called Hollow Church in Solin) comes an altar board with the figure of a Croatian king on the throne wearing a Carolingian crown, with a servant by his side and a subject bowing to the king. By joining the Hungarian crown in the twelfth century, Croatia lost its full independence, but it did not lose its ties with the south and the west; instead, this marked the beginning of a new era of Central European cultural influence. France After the demise of the Carolingian Empire, France split into a number of feuding provinces, so that, lacking any organized Imperial patronage, French art of the 10th and 11th centuries became localised around the large monasteries, and lacked the sophistication of a court-directed style. Multiple regional styles developed based on the chance availability of Carolingian manuscripts (as models to draw from) and the availability of itinerant artists. The monastery of Saint Bertin became an important centre under its abbot Odbert (986–1007), who created a new style based on Anglo-Saxon and Carolingian forms. The nearby abbey of Saint Vaast created a number of works. In southwestern France, at the monastery of Saint Martial in Limoges, a number of manuscripts were produced around the year 1000, as they were in Albi, Figeac and Saint-Sever-de-Rustan in Gascony. A distinctive style developed in Paris at the abbey of Saint-Germain-des-Prés. In Normandy a new style developed from 975 onward. Italy Southern Italy benefited from the presence and cross-fertilization of the Byzantines, the Arabs, and the Normans, while the north was mostly controlled first by the Carolingians. The Normans in Sicily chose to commission Byzantine workshops to decorate their churches, such as Monreale and Cefalù Cathedrals, where full iconographic programmes of mosaics have survived. Important frescos and illuminated manuscripts were produced. Spain and Portugal The first form of Pre-Romanesque in Spain and Portugal was Visigothic art, which brought the horse-shoe arch to later Moorish architecture and developed a rich tradition of jewellery.
After the Moorish occupation, Pre-Romanesque art was at first confined to the Kingdom of Asturias, the only Christian realm in the area at the time, which reached a high level of artistic refinement. (See Asturian art.) The Christians who lived in Moorish territory, the Mozarabs, created their own architectural and illumination style, Mozarabic art. The best preserved Visigothic monument in Portugal is the Saint Frutuoso Chapel in Braga. See also Asturian architecture References Joachim E. Gaehde (1989). "Pre-Romanesque Art". Dictionary of the Middle Ages. Jacques Fontaine (1995) L'art pré-roman hispanique, Nuit des temps, Editions zodiaque External links El Portal del Arte Románico Visigothic, Mozarabe and Romanesque art in Spain. Medieval architecture Medieval art Architectural history Western art
Pre-Romanesque art and architecture
[ "Engineering" ]
2,401
[ "Architectural history", "Architecture" ]
2,146,848
https://en.wikipedia.org/wiki/Change%20of%20variables
In mathematics, a change of variables is a basic technique used to simplify problems in which the original variables are replaced with functions of other variables. The intent is that when expressed in new variables, the problem may become simpler, or equivalent to a better understood problem. Change of variables is an operation that is related to substitution. However, these are different operations, as can be seen when considering differentiation (chain rule) or integration (integration by substitution). A very simple example of a useful variable change can be seen in the problem of finding the roots of the sixth-degree polynomial $x^6 - 9x^3 + 8 = 0$. Sixth-degree polynomial equations are generally impossible to solve in terms of radicals (see Abel–Ruffini theorem). This particular equation, however, may be written $(x^3)^2 - 9(x^3) + 8 = 0$ (this is a simple case of a polynomial decomposition). Thus the equation may be simplified by defining a new variable $u = x^3$. Substituting $x^3$ by $u$ in the polynomial gives $u^2 - 9u + 8 = 0$, which is just a quadratic equation with the two solutions $u = 1$ and $u = 8$. The solutions in terms of the original variable are obtained by substituting $x^3$ back in for $u$, which gives $x^3 = 1$ and $x^3 = 8$. Then, assuming that one is interested only in real solutions, the solutions of the original equation are $x = 1$ and $x = 2$. Simple example Consider the system of equations $xy + x + y = 71$, $x^2 y + x y^2 = 880$, where $x$ and $y$ are positive integers. (Source: 1991 AIME) Solving this normally is not very difficult, but it may get a little tedious. However, we can rewrite the second equation as $xy(x + y) = 880$. Making the substitutions $s = x + y$ and $t = xy$ reduces the system to $s + t = 71$, $st = 880$. Solving this gives $(s, t) = (16, 55)$ and $(s, t) = (55, 16)$. Back-substituting the first ordered pair gives us $x + y = 16$, $xy = 55$, which gives the solution $(x, y) = (5, 11)$ (or $(11, 5)$). Back-substituting the second ordered pair gives us $x + y = 55$, $xy = 16$, which gives no solutions in positive integers. Hence the solutions that solve the system are $(x, y) = (5, 11)$ and $(11, 5)$. Formal introduction Let $A$, $B$ be smooth manifolds and let $\Phi : A \to B$ be a $C^k$-diffeomorphism between them, that is: $\Phi$ is a $k$ times continuously differentiable, bijective map from $A$ to $B$ with $k$ times continuously differentiable inverse from $B$ to $A$. Here $k$ may be any natural number (or zero), $\infty$ (smooth) or $\omega$ (analytic). The map $\Phi$ is called a regular coordinate transformation or regular variable substitution, where regular refers to the $C^k$-ness of $\Phi$. Usually one will write $x = \Phi(y)$ to indicate the replacement of the variable $x$ by the variable $y$ by substituting the value of $\Phi$ in $y$ for every occurrence of $x$. Other examples Coordinate transformation Some systems can be more easily solved when switching to polar coordinates. Consider for example an equation in $x$ and $y$; this may be a potential energy function for some physical problem. If one does not immediately see a solution, one might try the substitution $(x, y) = \Phi(r, \theta)$ given by $x = r\cos\theta$, $y = r\sin\theta$. Note that if $\theta$ runs outside a $2\pi$-length interval, for example $(0, 2\pi]$, the map $\Phi$ is no longer bijective. Therefore, $\Phi$ should be limited to, for example, $(0, \infty) \times (0, 2\pi]$. Notice how $r = 0$ is excluded, for $\Phi$ is not bijective in the origin ($\theta$ can take any value, the point will be mapped to (0, 0)). Then, replacing all occurrences of the original variables by the new expressions prescribed by $\Phi$ and using the identity $\sin^2\theta + \cos^2\theta = 1$, we get an equation in the new variables $r$ and $\theta$. Now the solutions can be readily found: $\cos\theta = 0$, so $\theta = \pi/2$ or $\theta = 3\pi/2$. Applying the inverse of $\Phi$ shows that this is equivalent to $x = 0$ while $y \neq 0$. Indeed, we see that for $x = 0$ the function vanishes, except for the origin. Note that, had we allowed $r = 0$, the origin would also have been a solution, though it is not a solution to the original problem. Here the bijectivity of $\Phi$ is crucial. The function $r = \sqrt{x^2 + y^2}$ is always positive (for $(x, y) \neq (0, 0)$), hence the absolute values.
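As an illustration, the opening polynomial substitution can be carried out mechanically with a computer algebra system. The sketch below uses Python with SymPy, a choice of tool rather than anything the article prescribes:

```python
# A minimal sketch of the substitution u = x**3 applied to x**6 - 9*x**3 + 8.
import sympy as sp

x, u = sp.symbols('x u')
p = x**6 - 9*x**3 + 8           # the sixth-degree polynomial from the example

q = p.subs(x**3, u)             # becomes the quadratic u**2 - 9*u + 8
roots_u = sp.solve(q, u)        # [1, 8]

# Back-substitute x**3 = u and keep only the real solutions.
roots_x = [r for ru in roots_u
             for r in sp.solve(sp.Eq(x**3, ru), x) if r.is_real]
print(roots_u, roots_x)         # [1, 8] [1, 2]
```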
Differentiation The chain rule is used to simplify complicated differentiation. For example, consider the problem of calculating the derivative $\frac{d}{dx}\sin(x^2)$. Let $y = \sin u$ with $u = x^2$. Then: $\frac{dy}{dx} = \frac{dy}{du}\,\frac{du}{dx} = \cos(u)\cdot 2x = 2x\cos(x^2)$. Integration Difficult integrals may often be evaluated by changing variables; this is enabled by the substitution rule and is analogous to the use of the chain rule above. Difficult integrals may also be solved by simplifying the integral using a change of variables given by the corresponding Jacobian matrix and determinant. Using the Jacobian determinant and the corresponding change of variable that it gives is the basis of coordinate systems such as polar, cylindrical, and spherical coordinate systems. Change of variables formula in terms of Lebesgue measure The following theorem allows us to relate integrals with respect to Lebesgue measure to an equivalent integral with respect to the pullback measure under a parameterization G. The proof is due to approximations of the Jordan content. Suppose that $\Omega$ is an open subset of $\mathbb{R}^n$ and $\Phi : \Omega \to \mathbb{R}^n$ is a diffeomorphism. If $f$ is a Lebesgue measurable function on $\Phi(\Omega)$, then $f \circ \Phi$ is Lebesgue measurable on $\Omega$. If $f \geq 0$ or $f \in L^1(\Phi(\Omega))$, then $\int_{\Phi(\Omega)} f(x)\,dx = \int_{\Omega} f(\Phi(y))\,\lvert\det D\Phi(y)\rvert\,dy$. If $E \subseteq \Omega$ is Lebesgue measurable, then $\Phi(E)$ is Lebesgue measurable, with $m(\Phi(E)) = \int_{E} \lvert\det D\Phi(y)\rvert\,dy$. As a corollary of this theorem, we may compute the Radon–Nikodym derivatives of both the pullback and pushforward measures of $m$ under $\Phi$. Pullback measure and transformation formula The pullback measure in terms of a transformation $\Phi$ is defined as $\Phi^{*}\mu(A) := \mu(\Phi(A))$. The change of variables formula for pullback measures is $\int_{\Phi(\Omega)} g\,d\mu = \int_{\Omega} (g \circ \Phi)\,d(\Phi^{*}\mu)$. Pushforward measure and transformation formula The pushforward measure in terms of a transformation $\Phi$ is defined as $\Phi_{*}\mu(A) := \mu(\Phi^{-1}(A))$. The change of variables formula for pushforward measures is $\int_{\Phi(\Omega)} g\,d(\Phi_{*}\mu) = \int_{\Omega} (g \circ \Phi)\,d\mu$. As a corollary of the change of variables formula for Lebesgue measure, we have the Radon–Nikodym derivative of the pullback with respect to Lebesgue measure, $\frac{d(\Phi^{*}m)}{dm}(y) = \lvert\det D\Phi(y)\rvert$, and the Radon–Nikodym derivative of the pushforward with respect to Lebesgue measure, $\frac{d(\Phi_{*}m)}{dm}(x) = \lvert\det D\Phi^{-1}(x)\rvert$. From these we may obtain the change of variables formula for the pullback measure, $\int_{\Omega} (g \circ \Phi)\,d(\Phi^{*}m) = \int_{\Omega} (g \circ \Phi)\,\lvert\det D\Phi\rvert\,dm$, and the change of variables formula for the pushforward measure, $\int_{\Phi(\Omega)} g\,d(\Phi_{*}m) = \int_{\Phi(\Omega)} g\,\lvert\det D\Phi^{-1}\rvert\,dm$.
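The Jacobian formula above can be checked numerically. The following sketch (an illustration, not part of the original article) integrates the same function over the unit disk once in Cartesian and once in polar coordinates, where the Jacobian determinant of $(r, \theta) \mapsto (r\cos\theta, r\sin\theta)$ is $r$:

```python
# Numerical check of the change-of-variables formula on the unit disk.
import numpy as np
from scipy import integrate

f = lambda y, x: np.exp(-(x**2 + y**2))

# Cartesian coordinates: y runs over the disk slice for each x.
cart, _ = integrate.dblquad(f, -1, 1,
                            lambda x: -np.sqrt(1 - x**2),
                            lambda x: np.sqrt(1 - x**2))

# Polar coordinates: the integrand picks up the Jacobian factor r.
polar, _ = integrate.dblquad(lambda r, t: np.exp(-r**2) * r,
                             0, 2 * np.pi, 0, 1)

print(cart, polar)  # both approach pi * (1 - 1/e) ~ 1.9859
```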
Differential equations Variable changes for differentiation and integration are taught in elementary calculus and the steps are rarely carried out in full. The very broad use of variable changes is apparent when considering differential equations, where the independent variables may be changed using the chain rule or the dependent variables are changed, resulting in some differentiation to be carried out. Exotic changes, such as the mingling of dependent and independent variables in point and contact transformations, can be very complicated but allow much freedom. Very often, a general form for a change is substituted into a problem and parameters picked along the way to best simplify the problem. Scaling and shifting Probably the simplest change is the scaling and shifting of variables, that is, replacing them with new variables that are "stretched" and "moved" by constant amounts. For an nth order derivative, the change $x = a\hat{x} + b$ simply results in $\frac{d^{n}y}{dx^{n}} = \frac{1}{a^{n}}\,\frac{d^{n}y}{d\hat{x}^{n}}$, where $a$ and $b$ are constants. This may be shown readily through the chain rule and linearity of differentiation. This change is very common in practical applications to get physical parameters out of problems; for example, the boundary value problem $\mu \frac{d^{2}u}{dy^{2}} = \frac{dp}{dx}$, $u(0) = u(\delta) = 0$, describes parallel fluid flow between flat solid walls separated by a distance δ; μ is the viscosity and $\frac{dp}{dx}$ the pressure gradient, both constants. By scaling the variables the problem becomes $\frac{d^{2}\hat{u}}{d\hat{y}^{2}} = -1$, $\hat{u}(0) = \hat{u}(1) = 0$, where $\hat{y} = y/\delta$ and $\hat{u} = \frac{\mu}{-\delta^{2}\,dp/dx}\,u$. Scaling is useful for many reasons. It simplifies analysis both by reducing the number of parameters and by simply making the problem neater. Proper scaling may normalize variables, that is, make them have a sensible unitless range such as 0 to 1. Finally, if a problem mandates numeric solution, the fewer the parameters, the fewer the computations. Momentum vs. velocity Consider a system of equations $m\dot{v} = -\frac{\partial V}{\partial x}$, $\dot{x} = v$ for a given function $V(x)$. The mass can be eliminated by the (trivial) substitution $\Phi(p) = p/m$. Clearly this is a bijective map from $\mathbb{R}$ to $\mathbb{R}$. Under the substitution $v = \Phi(p)$ the system becomes $\dot{p} = -\frac{\partial V}{\partial x}$, $\dot{x} = \frac{p}{m}$. Lagrangian mechanics Given a force field $\varphi(t, x)$, Newton's equations of motion are $m\ddot{x} = \varphi(t, x)$. Lagrange examined how these equations of motion change under an arbitrary substitution of variables $x = \Psi(t, y)$. He found that the equations $\frac{d}{dt}\frac{\partial L}{\partial \dot{y}} = \frac{\partial L}{\partial y}$ are equivalent to Newton's equations for the function $L = T - V$, where T is the kinetic, and V the potential energy. In fact, when the substitution is chosen well (exploiting for example symmetries and constraints of the system) these equations are much easier to solve than Newton's equations in Cartesian coordinates. See also Change of variables (PDE) Change of variables for probability densities Substitution property of equality Universal instantiation References Elementary algebra Mathematical physics
Change of variables
[ "Physics", "Mathematics" ]
1,601
[ "Applied mathematics", "Theoretical physics", "Elementary algebra", "Elementary mathematics", "Mathematical physics", "Algebra" ]
2,146,929
https://en.wikipedia.org/wiki/Sydney%20Coal%20Railway
The Sydney Coal Railway is a Canadian short-line railway operating in the eastern part of Cape Breton County, Nova Scotia. SCR operates from the international coaling piers on Sydney Harbour in Sydney to the Lingan Generating Station, a coal-fired electrical generating station near New Waterford. The railway's trackage, the piers, and the generating station are owned by Nova Scotia Power, a subsidiary of Emera Inc. History The railway line was completed in 1895 by the Dominion Coal Company (DOMCO) between Sydney and Louisbourg. The trackage was organized as the Sydney and Louisburg Railway (S&L) in 1910. The S&L, along with other assets of the corporate successor to DOMCO, Dominion Steel and Coal Corporation (DOSCO), were expropriated by the Cape Breton Development Corporation (DEVCO) on March 30, 1968. DEVCO operated the railway as an unincorporated department of its Coal Division; however, it was informally known as the Devco Railway. DEVCO built a coal preparation and coal wash plant and storage facility, along with new locomotive shops, at Victoria Junction, a location midway between Sydney and the Lingan Generating Station near New Waterford. On November 23, 2001, DEVCO closed its last underground coal mine, the Prince colliery, after the company failed to entice any private sector investors to purchase the mine. DEVCO was out of the coal mining business; however, for a period of approximately one month, it was in the coal importation business, with trains operating from the international coaling piers to the storage facility and on to the Lingan Generating Station. The federal government moved swiftly to sell off assets, transferring the mine properties and mineral rights back to the provincial Department of Natural Resources. DEVCO subsequently decommissioned the Victoria Junction coal wash plant and began to immediately prepare remediation of the mine sites. On December 18, 2001 DEVCO sold all surface assets, including the international shipping piers, railway track, railway rights-of-way, locomotives and rolling stock, and a coal storage facility and locomotive shops at Victoria Junction to 510845 New Brunswick Incorporated, a wholly owned subsidiary of Emera, the holding company which owns Nova Scotia Power and operator of the Lingan Generating Station. Emera subsequently contracted the operation of its newly acquired DEVCO surface assets to Logistec Corporation. Logistec sub-contracted operation of the railway to the Société des chemins de fer du Québec (Quebec Railway Corporation), a Quebec-based railway holding company and short-line operating company. The new railway was called Sydney Coal Railway, although ownership of the track and other assets remains with Emera's subsidiary, 510845 New Brunswick Inc. Despite SCR having been created by SCFQ as an operating company in December 2001, the railway itself was actually legally chartered to 510845 N.B. Inc. until January 1, 2003, when the Sydney Coal Railway was formally recognized by the federal government. On 3 November 2008 Logistec announced that it was purchasing the SCR from Quebec Railway Corporation. Current operations Current SCR operations consist of running coal imports which arrive at the international coaling piers on Sydney Harbour by bulk carrier from the United States and South America. Coal is unloaded from ships and stored at the pier, and is then loaded onto trains and delivered directly to the Lingan Generating Station.
Typical trains consist of a pair of ex-DEVCO GMD GP38-2 locomotives and 21 ex-DEVCO Ortner five-bay rapid-discharge hopper cars. The shop facility at Victoria Junction is still in use, but the wash plant and storage facilities are no longer used. SCR also maintains an interchange connection with the North American railway network at Sydney, where it connects to the Cape Breton and Central Nova Scotia Railway, with the latter operator possessing a connection with Canadian National at Truro; however, this connection is currently threatened, as CBNS has discontinued service on the line from Sydney to St. Peter's Junction (near Port Hawkesbury), and will apply to abandon the line on April 1, 2016. References Mining railways Nova Scotia railways Transport in the Cape Breton Regional Municipality Coal in Canada Mining in Nova Scotia
Sydney Coal Railway
[ "Engineering" ]
842
[ "Mining equipment", "Mining railways" ]
2,146,970
https://en.wikipedia.org/wiki/Many-to-many
Many-to-many communication occurs when information is shared between groups. Members of a group receive information from multiple senders. Wikis are a type of many-to-many communication, where multiple editors collaborate to create content that is disseminated among a wide audience. Video conferencing, online gaming, chat rooms, and internet forums are also types of many-to-many communication. References See also Point-to-point (telecommunications) Point-to-multipoint communication Network architecture Information technology management Communication
Many-to-many
[ "Technology", "Engineering" ]
112
[ "Information technology management", "Computer network stubs", "Network architecture", "Computer networks engineering", "Information technology", "Computing stubs" ]
2,147,062
https://en.wikipedia.org/wiki/Ecology%20of%20Bermuda
Bermuda's ecology has an abundance of unique flora and fauna due to the island's isolation from the mainland of North America. With their wide range of endemic species, the islands form a distinct ecoregion, the Bermuda subtropical conifer forests. The variety of species found both on land and in the waters surrounding Bermuda has varying positive and negative impacts on the ecosystem of the island, depending on the species. There are varying biotic and abiotic factors that have threatened and continue to threaten the island's ecology. There are, however, also means of conservation that can be used to mitigate these threats. Setting Located 900 km off the American East Coast, Bermuda is a crescent-shaped chain of 184 islands and islets that were once the rim of a volcano. The islands are slightly hilly rather than having steep cliffs, with the highest point being 79 m. The coast has many bays and inlets, with sandy beaches especially on the south coasts. Bermuda has a semi-tropical climate, warmed by the Gulf Stream current. Bermuda is very densely populated. Twenty of the islands are inhabited. The islands' species descend from wildlife that could fly there or was carried by winds and currents. There are no native mammals other than bats, and only two reptiles, but there are large numbers of birds, plants, and insects. Once on the island, organisms had to adapt to local conditions, such as the humid climate, lack of fresh water, frequent storms, and salt spray. The area of the islands shrank as water levels rose at the end of the Pleistocene epoch, and fewer species were able to survive in the reduced land area. Nearly 8,000 different species of flora and fauna are known from the islands of Bermuda. The number would be considerably higher if microorganisms, cave-dwellers and deep-sea species were counted. Today the variety of species on Bermuda has been greatly increased by introductions, both deliberate and accidental. Many of these introduced species have posed a threat to the native flora and fauna because of competition and interference with habitat. Plants Over 1000 species of vascular plants are found on the islands, the majority of which were introduced. Of the 165 native species, 17 are endemic. Forest cover is around 20% of the total land area, equivalent to 1,000 hectares of forest in 2020, which was unchanged from 1990. At the time of the first human settlement by shipwrecked English sailors in 1593, Bermuda was dominated by forests of Bermuda cedar (Juniperus bermudiana) with mangrove swamps on the coast. More deliberate settlement began after 1609, and colonists began clearing forests for construction and shipbuilding, and to develop agricultural cultivation. By the 1830s, the demands of the shipbuilding industry had denuded the forests, but these recovered in many areas. In the 1940s the cedar forests were devastated by introduced scale insects, which killed roughly eight million trees. Replanting using resistant trees has taken place since then, but the area covered by cedar is only 10% of what it used to be. Another important component of the original forest was Bermuda palmetto (Sabal bermudana), a small palm tree. It now grows in a few small patches, notably at Paget Marsh. Other trees and shrubs include Bermuda olivewood (Cassine laneana) and Bermuda snowberry (Chiococca alba). The climate allows for the growth of other introduced palms such as royal palm (Roystonea spp.)
and coconut palm (Cocos nucifera), although the coconuts seldom fruit properly, due to the relatively moderate temperatures on the island. Bermuda is the farthest north location where coconut palms grow naturally. Remnant patches of mangrove swamp can be found around the coast and at some inland sites, including Hungry Bay Nature Reserve and Mangrove Lake. These are important for moderating the effects of storms and providing transitional habitats. Here black mangrove (Avicennia germinans) and red mangrove (Rhizophora mangle) are the northernmost mangroves in the Atlantic. The inland swamps are particularly interesting as mangroves thrive in salty water; in this case, the saltwater arrives through underground channels rather than the usual tidal wash of coastal mangrove swamps. Areas of peat marsh include Devonshire, Pembroke, and Paget marshes. Bermuda has four endemic ferns: Bermuda maidenhair fern (Adiantum bellum), Bermuda shield fern (Thelypteris bermudiana), Bermuda cave fern (Ctenitis sloanei) and Governor Laffan's fern (Diplazium laffanianum). The latter is extinct in the wild but is grown at Bermuda Botanical Gardens. The endemic flora of the island also include two mosses, ten lichens and forty fungi. Among the many introduced species are the casuarina (Casuarina equisetifolia) and Suriname cherry (Eugenia uniflora). Endemic Bermudiana (Sisyrinchium bermudiana) Darrell's fleabane (Erigeron darrellianus) Bermuda campylopus (moss) (Campylopus bermudianus) Bermuda bean (Phaseolus lignosus) Bermuda spike rush (Eleocharis bermudiana) Bermuda trichostomum (moss) (Trichostomum bermudanum) Governor Laffan's fern (Diplazium laffanianum) Native Forestiera (Forestiera segregata) Lamarcks trema (Trema lamarckiana) Black mangrove (Avicennia nitida) White stopper (Eugenia axillaris) Wild coffee shrub (Psychotria undata) Yellow wood (Zanthoxylum flavum) Land animals Amphibians Bermuda has no native amphibians. A species of toad, cane toad (Rhinella marina), and two species of frog, Antilles coqui (Eleutherodactylus johnstonei), and Eleutherodactylus gossei were introduced by humans through the transportation of orchids to the island prior to the 1900s, and subsequently became naturalized. R. marina and E. johnstonei are common, but E. gossei is thought to have been recently extirpated. They are nocturnal and can often be heard at night in Bermuda. Their songs are most prevalent from April until November. Reptiles Four species of lizard and two species of turtle comprise Bermuda's non-marine reptilian fauna. Of the lizards, the Bermuda rock lizard (Plestiodon longirostris), also known as the rock lizard or Bermuda skink, is the only endemic species. Once very common, the Bermuda skink is critically endangered. The Jamaican anole (Anolis grahami) was deliberately introduced in 1905 from Jamaica and is now by far the most common lizard in Bermuda. The Leach's anole (Anolis leachii) was accidentally introduced from Antigua about 1940 and is now common. The Barbados anole (Anolis extremus) was accidentally introduced about 1940 and is rarely seen. The diamondback terrapin (Malaclemys terrapin) is native to Bermuda. The red-eared slider turtle (Trachemys scripta elegans) was introduced as a pet, but has subsequently become invasive. Mammals All mammals in Bermuda were introduced by humans, except for four species of migratory North American bats of the genus Lasiurus: the hoary bat, eastern red bat, Seminole bat and silver-haired bat. 
Early accounts refer to wild or feral hogs, descendants of pigs left by the Spanish and Portuguese as a food stock for ships stopping at the islands for supplies. The house mouse, brown rat and black rat were accidentally introduced soon after the settlement of Bermuda, and feral cats have become common as another introduced species. Birds Over 360 species of bird have been recorded on Bermuda. The majority of these are migrants or vagrants from North America or elsewhere. Only 24 species breed on the island; 13 of these are thought to be native. One endemic species is the Bermuda petrel or cahow (Pterodroma cahow), which was thought to have been extinct since the 1620s. Its ground-nesting habitats had been severely disrupted by introduced species and colonists had killed the birds for food. In 1951, researchers discovered 18 breeding pairs, and started a recovery program to preserve and protect the species. An endemic subspecies is the Bermuda white-eyed vireo or chick-of-the-village (Vireo griseus bermudianus). The national bird of Bermuda is the white-tailed tropicbird or longtail, which is a summer migrant to Bermuda, its most northerly breeding site. Other native birds include the eastern bluebird, grey catbird and perhaps the common ground dove. The common moorhen is the most common native waterbird; very small numbers of American coot and pied-billed grebe breed. Small numbers of common tern nest around the coast. The barn owl and mourning dove colonized the island during the 20th century, and the green heron has recently begun to breed. Of the introduced birds, the European starling, house sparrow, great kiskadee, rock dove, American crow and chicken are all very numerous and considered to be pests. Other introduced species include the mallard, northern cardinal, European goldfinch and small numbers of orange-cheeked and common waxbills. The yellow-crowned night heron was introduced in the 1970s to replace the extinct native heron. Fossil remains of a variety of species have been found on the island, including a crane, an owl and the short-tailed albatross. Some of these became extinct as the islands' land-mass shrank by nine tenths after the Last Glacial Maximum, while others were exterminated by early settlers. The Bermuda petrel was thought to be extinct until its rediscovery in 1951. Among the many non-breeding migrants are a variety of shorebirds, herons and ducks. In spring many shearwaters can be seen off the South Shore. Over 30 species of New World warbler are seen each year, with the yellow-rumped warbler being the most abundant. The arrival of many species is dependent on weather conditions; low-pressure systems moving across from North America often bring many birds to the islands. Among the rare visitors recorded are the Siberian flycatcher from Asia and the fork-tailed flycatcher and tropical kingbird from South America. Insects Lawrence Ogilvie, Bermuda's agricultural scientist from 1923 to 1928, identified 395 local insects and wrote the Department of Agriculture's 52-page book The Insects of Bermuda, including Aphis ogilviei, which he discovered. Ants There are four ant species found in Bermuda. The African big-headed ant (Pheidole megacephala) and Argentine ant (Linepithema humile) are both invasive to Bermuda. The African ant was first recorded on the island in 1889, and the Argentine ant arrived in Bermuda in the 1940s. These two ants battle for territory and control over the island.
Furthermore, there is the Bermuda ant (Odontomachus insularis), which is indigenous to the island. This ant was initially presumed to be extinct; however, it was re-discovered alive in July 2002. Carpenter ants (Camponotus spp.) are also found in Bermuda. Terrestrial invertebrates More than 1100 kinds of insects and spiders are found on Bermuda, including 41 endemic insects and a possibly endemic spider. Eighteen species of butterfly have been seen; about six of these breed on the islands, including the large monarch and the very common Bermuda buckeye (Junonia coenia bergi). More than 200 moths have been recorded; one of the most conspicuous is Pseudosphinx tetrio, which can reach in wingspan. Bermuda has lost a number of its endemic invertebrates, including the Bermuda cicada (Neotibicen bermudianus), which became extinct when the cedar forests disappeared. Some species feared extinct have been rediscovered, including a Bermuda land snail (Poecilozonites circumfirmatus) and the Bermuda ant (Odontomachus insularis). Marine life Bermuda lies on the western edge of the Sargasso Sea, an area with high salinity, high temperature and few currents. Large quantities of seaweed of the genus Sargassum are present and there are high concentrations of plankton, but the area is less attractive to commercial fish species and seabirds. Greater diversity is present in the coral reefs which surround the island. Marine mammals A variety of whales, dolphins and porpoises have been recorded in the waters around Bermuda. The most common of these is the humpback whale, which passes the islands in April and May during its northward migration. Fish There are many fish species in Bermuda's waters, such as the barracuda, Bermuda chub, bluestriped grunt, hogfish, longspine squirrelfish, various types of parrotfish, smooth trunkfish, and slippery dick, to name a few. Marine invertebrates Sea squirts There are various types of sea squirts, such as the black sea squirt (Phallusia nigra), the purple sea squirt (Clavelina picta), the orange sea squirt (Ecteinascidia turbinata), and the lacy sea squirt (Botrylloides nigrum). Crustaceans There are various types of crabs in Bermuda. There are Sally Lightfoot crabs (Grapsus grapsus), decorator crabs, swimming crabs (Portunidae), spider crabs (Majoidea), and Verrill's hermit crab (Calcinus verrillii). Great land crabs (Cardisoma guanhumi) are hard to find, but present in Bermuda. Finally, there is the Bermuda land crab (Gecarcinus lateralis), which is native, but not exclusive to, Bermuda. Threats and preservation Bermuda was the first place in the Americas to pass conservation laws, protecting the Bermuda petrel in 1616 and the Bermuda cedar in 1622. It has a well-organised network of protected areas including Spittal Pond, marshes in Paget and Devonshire and Pembroke Parishes, Warwick Pond and the hills above Castle Harbour. Only small areas of natural forest remain today; much was cleared since colonisation began in the 17th century, and recovered forest was lost in the 1940s due to insect infestation. The Bermuda petrel and Bermuda skink are highly endangered, and Bermuda cedar, Bermuda palmetto and Bermuda olivewood are all listed as threatened species. Some wild plants, including a spike rush, have disappeared. Introduced plants and animals have had adverse effects on the wildlife of the islands. The thriving tourist industry creates its own challenges to preserve the wildlife and habitat that attract visitors. Parrotfish are crucial to the coral reefs of Bermuda.
Overfishing has caused issues for parrotfish that live in the coastal waters of other islands, such as the U.S. Virgin Islands. Looking at these instances in other habitats can help conservationists prevent the parrotfish population in Bermuda from experiencing decline as well. Birds such as the white-tailed tropicbird (Bermuda longtail) are strongly affected by hurricanes, as the hurricanes harm individuals and destroy their nests, which makes reproduction nearly impossible. Invasive species have been known to use similar nesting sites. Specifically, the rock pigeon often builds its nests within crevices around the island, including on rocky shorelines and cracks in Bermuda's tall cliffs. This is also where the Bermuda longtail nests, so the competition makes reproduction harder for the local species. References Amos, Eric J. R. (1991) A Guide to the Birds of Bermuda, privately published. Bermuda Aquarium, Museum and Zoo Bermuda Biodiversity Project, downloaded 21 February 2007. Dobson, A. (2002) A Birdwatching Guide to Bermuda, Arlequin Press, Chelmsford, UK. Flora of Bermuda (Illustrated) by Nathaniel Lord Britton, Ph.D., Sc.D., LL.D. (Published 1918) Forbes, Keith Archibald (2007) Bermuda's Fauna, downloaded 21 February 2007. Gehrman, Elizabeth (2012) Rare Birds: The Extraordinary Tale of the Bermuda Petrel and the Man Who Brought It Back from Extinction (Beacon Press). Ogden, George (2002) Bermuda A Gardener's Guide, The Garden Club of Bermuda Ogilvie, Lawrence (1928) The Insects of Bermuda Raine, André (2003) A Field Guide to the Birds of Bermuda, Macmillan, Oxford. External links Bermuda Audubon Society Bermuda biodiversity, Flickr Bermuda National Trust Bermuda Government, Department of Environment and Natural Resources Environment of Bermuda Nearctic ecoregions Tropical and subtropical coniferous forests Bermuda
Ecology of Bermuda
[ "Biology" ]
3,441
[ "Biota by country", "Wildlife by country" ]
2,147,102
https://en.wikipedia.org/wiki/Portastudio
Portastudio refers to a series of multitrack recorders produced by TASCAM beginning in 1979 with the introduction of the TEAC 144, the first four-track compact cassette-based recorder. A TASCAM trademark, "portastudio" is commonly used to refer to any self-contained multitrack recorder dedicated to music production. The Portastudio is credited with launching the home recording revolution by making it possible for musicians to easily and affordably record and produce multitrack music at home, and is cited as one of the most significant innovations in music production technology. History Cassette Portastudios The first Portastudio, the TEAC 144, was introduced on September 22, 1979 at the AES Convention in New York City. The 144 combined a 4-channel mixer with pan, treble, and bass on each input with a cassette recorder capable of recording four tracks in one direction at 3¾ inches per second (double the normal cassette playback speed) in a self-contained unit weighing less than 20 pounds at a list price of . The 144 was the first product that made it possible for musicians to affordably record several instrumental and vocal parts on different tracks of the built-in 4-track cassette recorder individually and later blend all the parts together, while transferring them to another standard, two-channel stereo tape deck (remix and mixdown) to form a stereo recording. In 1981, Fostex introduced the first of their "Multitracker" line of multitrack cassette recorders with the 250. In 1982, TASCAM replaced the 144 with the 244 Portastudio, which improved upon the previous design with overall better sound quality and more features, including: parametric EQ, dbx Type II noise reduction, and the ability to record up to four tracks simultaneously. TASCAM continued to develop and release cassette-based portastudio models with different features until 2001, including the "Ministudio" line of portastudios that offered a limited feature set and the ability to run on batteries at even more affordable price points, and the "MIDIStudio" line which added MIDI functionality. Other manufacturers, including Fostex, Yamaha, Akai, and others introduced their own lines of multitrack cassette recorders. Most were four-track recorders, but there were also six-track and even eight-track units. Digital Portastudios In 1997, TASCAM introduced the first digital Portastudio: the TASCAM 564 which recorded to MiniDisc. Later Digital Portastudio models, some with the ability to record 24 or even 32 tracks, utilize CD-R, internal hard drives, or SD cards, and commonly include built-in DSP effects. Impact and legacy The Portastudio, and particularly its first iteration, the TEAC 144, is credited with launching the home recording revolution by making it possible for musicians to easily and affordably record and produce multitrack music themselves wherever they wanted, and is cited as one of the most significant innovations in music production technology. In general, these machines were typically used by amateur and professional musicians to record demos, although some Portastudio projects, most notably Bruce Springsteen's 1982 album Nebraska, have become notable major-label releases. Beginning in the 1990s, cassette-based Portastudios experienced new popularity for lo-fi recording. In 2006, the TEAC Portastudio was inducted into the TECnology Hall of Fame, an honor given to "products and innovations that have had an enduring impact on the development of audio technology." 
In 2021, in conjunction with TASCAM's 50th anniversary, a software plug-in emulation of the Porta One ministudio was released by IK Multimedia. Notable usages Bruce Springsteen recorded acoustic demos for an upcoming album on a TEAC 144 between December 1981 and May 1982, and chose to release those demos over the full-band arrangements later recorded at the Power Station recording studio for his 1982 album Nebraska. Seal recorded the original demo for his multiple Grammy award-winning single Kiss from a Rose on a Tascam 244. William and Jim Reid of The Jesus and Mary Chain used a TASCAM Portastudio to record their first demos sent to Bobby Gillespie and Alan McGee. "Weird Al" Yankovic recorded half of the songs on his 1983 debut album with a TASCAM Portastudio. Depeche Mode member Alan Wilder used a Portastudio to record the tracks that would become the debut album of his solo side project, Recoil, in 1986. Ween recorded their albums The Pod (1991) and Pure Guava (1992) on various TASCAM Portastudio cassette recorders. Elliott Smith recorded his solo debut album Roman Candle on a Portastudio. John Frusciante recorded his first two solo albums, Niandra LaDes and Usually Just a T-Shirt and Smile from the Streets You Hold, on a 424 Portastudio. Wu-Tang Clan's debut studio album was mixed down to a Portastudio 244 after the recording and mixing was completed. Madlib recorded his debut studio album, released under the name of Quasimoto, on a TASCAM Portastudio. Portastatic was named after the Portastudio Mac McCaughan used to record the songs that became its first album. Clive Gregson and Christine Collister recorded their 1987 album Home And Away on a 244 Portastudio. Mac Demarco recorded his 2012 debut mini-LP Rock and Roll Night Club with a TASCAM 244 Portastudio. It made extensive use of the 244's pitch control and used the method of bouncing 3 separate tracks down to a single track. Black Tape for a Blue Girl recorded their 1986 debut The Rope on a Portastudio. See also Multitrack recording References External links The current line of Portastudios from the Tascam website. Audiovisual introductions in 1979 Multitrack recording Audio engineering Tape recording
Portastudio
[ "Technology", "Engineering" ]
1,242
[ "Electrical engineering", "Recording devices", "Audio engineering", "Tape recording" ]
2,147,251
https://en.wikipedia.org/wiki/Viridos%20%28company%29
In September 2021, Synthetic Genomics Inc. (SGI), a private company located in La Jolla, California, changed its name to Viridos. The company is focused on the field of synthetic biology, especially harnessing photosynthesis with microalgae to create alternatives to fossil fuels. Viridos designs and builds biological systems to address global sustainability problems. Synthetic biology is an interdisciplinary branch of biology and engineering, combining fields such as biotechnology, evolutionary biology, molecular biology, systems biology, biophysics, computer engineering, and genetic engineering. Synthetic Genomics uses techniques such as software engineering, bioprocessing, bioinformatics, biodiscovery, analytical chemistry, fermentation, cell optimization, and DNA synthesis to design and build biological systems. The company produces or performs research in the fields of sustainable biofuels, insect-resistant crops, transplantable organs, targeted medicines, DNA synthesis instruments, as well as a number of biological reagents. Core markets SGI mainly operates in three end markets: research, bioproduction and applied products. The research segment focuses on genomics solutions for academic and commercial research organizations. The commercial products and services include instrumentation, reagents, DNA synthesis services, and bioinformatics services and software. In 2015, the company launched the BioXP 3200 system, a fully automated benchtop instrument that produces DNA fragments from many different sources for genomic data. The company's efforts in bio-based production are intended to improve both existing production hosts and develop entirely new synthetic production hosts, with the goal of more efficient routes to bioproducts. SGI has a number of commercial as well as research-and-development-stage programs across a variety of industries, including several research partnerships. History Synthetic Genomics was founded in the spring of 2005 by J. Craig Venter, Nobel Laureate Hamilton O. Smith, Juan Enriquez, and David Kiernan. Venter (and Smith)'s previous company, Celera Genomics, was a driving force in the race to sequence the human genome. The firm takes its name from the phrase synthetic genomics, which is a scientific discipline of synthetic biology related to the generation of organisms artificially using genetic material. Many of SGI's collaborations have been with energy companies. In 2007, SGI worked with BP to commercialize microbial-based processes for increasing the conversion and recovery of subsurface hydrocarbons. In 2009, SGI received funding from ExxonMobil to produce biofuels on an industrial scale using recombinant algae and other microorganisms. The company purchased land in the Imperial Valley in Southern California to produce algae fuel for their collaboration with Exxon Mobil. In 2012, they also signed a collaborative agreement with New England Biolabs to launch the Gibson Assembly Master Mix product for synthetic and molecular biology applications. In 2010, Synthetic Genomics spun off a new subsidiary, Synthetic Genomics Vaccines Inc., to develop next-generation vaccines. In 2014 SGI expanded into the field of organ transplantation with a collaborative agreement with United Therapeutics valued at $50M, and brought in Oliver Fetzer as CEO. References External links Algae biomass producers Biotechnology companies of the United States Genetic engineering Genomics companies Synthetic biology
Viridos (company)
[ "Chemistry", "Engineering", "Biology" ]
659
[ "Synthetic biology", "Biological engineering", "Genetic engineering", "Bioinformatics", "Molecular genetics", "Molecular biology", "Algae biomass producers" ]
2,147,274
https://en.wikipedia.org/wiki/Supercritical%20fluid%20extraction
Supercritical fluid extraction (SFE) is the process of separating one component (the extractant) from another (the matrix) using supercritical fluids as the extracting solvent. Extraction is usually from a solid matrix, but can also be from liquids. SFE can be used as a sample preparation step for analytical purposes, or on a larger scale to either strip unwanted material from a product (e.g. decaffeination) or collect a desired product (e.g. essential oils). These essential oils can include limonene and other straight solvents. Carbon dioxide (CO2) is the most used supercritical fluid, sometimes modified by co-solvents such as ethanol or methanol. Extraction conditions for supercritical carbon dioxide are above the critical temperature of 31 °C and critical pressure of 74 bar. Addition of modifiers may slightly alter this. The discussion below will mainly refer to extraction with CO2, except where specified. Advantages Selectivity The properties of the supercritical fluid can be altered by varying the pressure and temperature, allowing selective extraction. For example, volatile oils can be extracted from a plant with low pressures (100 bar), whereas liquid extraction would also remove lipids. Lipids can be removed using pure CO2 at higher pressures, and then phospholipids can be removed by adding ethanol to the solvent. The same principle can be used to extract polyphenols and unsaturated fatty acids separately from wine wastes. Speed Extraction is a diffusion-based process, in which the solvent is required to diffuse into the matrix and the extracted material to diffuse out of the matrix into the solvent. Diffusivities are much faster in supercritical fluids than in liquids, and therefore extraction can occur faster. In addition, due to the lack of surface tension and negligible viscosities compared to liquids, the solvent can penetrate parts of the matrix inaccessible to liquids. An extraction using an organic liquid may take several hours, whereas supercritical fluid extraction can be completed in 10 to 60 minutes. Limitations The requirement for high pressures increases the cost compared to conventional liquid extraction, so SFE will only be used where there are significant advantages. Carbon dioxide itself is non-polar, and has somewhat limited dissolving power, so cannot always be used as a solvent on its own, particularly for polar solutes. The use of modifiers increases the range of materials which can be extracted. Food grade modifiers such as ethanol can often be used, and can also help in the collection of the extracted material, but this reduces some of the benefits of using a solvent which is gaseous at room temperature. Procedure The system must contain a pump for the CO2, a pressure cell to contain the sample, a means of maintaining pressure in the system and a collecting vessel. The liquid is pumped to a heating zone, where it is heated to supercritical conditions. It then passes into the extraction vessel, where it rapidly diffuses into the solid matrix and dissolves the material to be extracted. The dissolved material is swept from the extraction cell into a separator at lower pressure, and the extracted material settles out. The CO2 can then be cooled, re-compressed and recycled, or discharged to atmosphere. Pumps Carbon dioxide (CO2) is usually pumped as a liquid, usually below 5 °C (41 °F) and a pressure of about 50 bar.
The solvent is pumped as a liquid as it is then almost incompressible; if it were pumped as a supercritical fluid, much of the pump stroke would be "used up" in compressing the fluid, rather than pumping it. For small scale extractions (up to a few grams per minute), reciprocating pumps or syringe pumps are often used. For larger scale extractions, diaphragm pumps are most common. The pump heads will usually require cooling, and the CO2 will also be cooled before entering the pump. Pressure vessels Pressure vessels can range from simple tubing to more sophisticated purpose-built vessels with quick-release fittings. The pressure requirement is at least 74 bar, and most extractions are conducted at under 350 bar. However, sometimes higher pressures will be needed, such as extraction of vegetable oils, where pressures of 800 bar are sometimes required for complete miscibility of the two phases. The vessel must be equipped with a means of heating. Small vessels can be placed inside an oven; larger vessels may use an oil or electrically heated jacket. Care must be taken if rubber seals are used on the vessel, as the supercritical carbon dioxide may dissolve in the rubber, causing swelling, and the rubber will rupture on depressurization. Pressure maintenance The pressure in the system must be maintained from the pump right through the pressure vessel. In smaller systems (up to about 10 mL/min) a simple restrictor can be used. This can be either a capillary tube cut to length, or a needle valve which can be adjusted to maintain pressure at different flow rates. In larger systems a back pressure regulator will be used, which maintains pressure upstream of the regulator by means of a spring, compressed air, or electronically driven valve. Whichever is used, heating must be supplied, as the adiabatic expansion of the CO2 results in significant cooling. This is problematic if water or other extracted material is present in the sample, as this may freeze in the restrictor or valve and cause blockages. Collection The supercritical solvent is passed into a vessel at lower pressure than the extraction vessel. The density, and hence dissolving power, of supercritical fluids varies sharply with pressure, and hence the solubility in the lower density CO2 is much lower, and the material precipitates for collection. It is possible to fractionate the dissolved material using a series of vessels at reducing pressure. The CO2 can be recycled or depressurized to atmospheric pressure and vented. For analytical SFE, the pressure is usually dropped to atmospheric, and the now gaseous carbon dioxide bubbled through a solvent to trap the precipitated components. Heating and cooling This is an important aspect. The fluid is cooled before pumping to maintain liquid conditions, then heated after pressurization. As the fluid is expanded into the separator, heat must be provided to prevent excessive cooling. For small scale extractions, such as for analytical purposes, it is usually sufficient to pre-heat the fluid in a length of tubing inside the oven containing the extraction cell. The restrictor can be electrically heated, or even heated with a hairdryer. For larger systems, the energy required during each stage of the process can be calculated using the thermodynamic properties of the supercritical fluid.
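The heating duty just mentioned can be estimated from fluid property data. A minimal sketch, assuming the open-source CoolProp property library and representative extraction conditions of 40 °C and 300 bar (illustrative values, not taken from the text):

```python
# Rough heating duty per kg of CO2 between pump inlet and extraction vessel.
# CoolProp and the extraction-side conditions are assumptions for illustration.
from CoolProp.CoolProp import PropsSI

T_pump, P_pump = 278.15, 50e5    # pump inlet: ~5 degC liquid CO2 at ~50 bar
T_extr, P_extr = 313.15, 300e5   # assumed extraction condition: 40 degC, 300 bar

h_in = PropsSI('H', 'T', T_pump, 'P', P_pump, 'CO2')   # specific enthalpy, J/kg
h_out = PropsSI('H', 'T', T_extr, 'P', P_extr, 'CO2')  # specific enthalpy, J/kg

# Neglects pump work and heat losses; a first estimate only.
print(f"heating duty ~ {(h_out - h_in) / 1e3:.0f} kJ per kg of CO2")
```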
Simple model of SFE There are two essential steps to SFE: transport (by diffusion or otherwise) of the extractant within the solid particles to the surface, and dissolution in the supercritical fluid. Other factors, such as diffusion into the particle by the supercritical fluid and reversible release such as desorption from an active site, are sometimes significant, but are not dealt with in detail here. Figure 2 shows the stages during extraction from a spherical particle: at the start of the extraction, the level of extractant is equal across the whole sphere (Fig. 2a). As extraction commences, material is initially extracted from the edge of the sphere, and the concentration in the center is unchanged (Fig. 2b). As the extraction progresses, the concentration in the center of the sphere drops as the extractant diffuses towards the edge of the sphere (Fig. 2c). The relative rates of diffusion and dissolution are illustrated by two extreme cases in Figure 3. Figure 3a shows a case where dissolution is fast relative to diffusion. The material is carried away from the edge faster than it can diffuse from the center, so the concentration at the edge drops to zero. The material is carried away as fast as it arrives at the surface, and the extraction is completely diffusion limited. Here the rate of extraction can be increased by increasing the diffusion rate, for example by raising the temperature, but not by increasing the flow rate of the solvent. Figure 3b shows a case where solubility is low relative to diffusion. The extractant is able to diffuse to the edge faster than it can be carried away by the solvent, and the concentration profile is flat. In this case, the extraction rate can be increased by increasing the rate of dissolution, for example by increasing the flow rate of the solvent. The extraction curve of % recovery against time can be used to elucidate the type of extraction occurring. Figure 4(a) shows a typical diffusion-controlled curve. The extraction is initially rapid, until the concentration at the surface drops to zero, and the rate then becomes much slower. The % extracted eventually approaches 100%. Figure 4(b) shows a curve for a solubility-limited extraction. The extraction rate is almost constant, and only flattens off towards the end of the extraction. Figure 4(c) shows a curve where there are significant matrix effects, where there is some sort of reversible interaction with the matrix, such as desorption from an active site. The recovery flattens off, and if the 100% value is not known, then it is hard to tell that extraction is less than complete.
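The two limiting curve shapes described above can be reproduced with a toy model. The time constant and rate below are illustrative assumptions, not measured values:

```python
# Toy extraction curves: diffusion-limited (Fig. 4a-like) rises steeply then
# tails off; solubility-limited (Fig. 4b-like) is nearly linear until complete.
import numpy as np

t = np.linspace(0, 60, 61)          # extraction time, minutes

tau = 10.0                          # assumed diffusion time scale, minutes
diffusion_limited = 100 * (1 - np.exp(-t / tau))

rate = 2.0                          # assumed solubility-limited rate, % per minute
solubility_limited = np.minimum(rate * t, 100)

for m in (0, 10, 20, 40, 60):
    print(f"t = {m:2d} min: diffusion-limited {diffusion_limited[m]:5.1f}%, "
          f"solubility-limited {solubility_limited[m]:5.1f}%")
```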
Optimization The optimum will depend on the purpose of the extraction. For an analytical extraction to determine, say, the antioxidant content of a polymer, the essential factors are complete extraction in the shortest time. However, for production of an essential oil extract from a plant, the quantity of CO2 used will be a significant cost, and "complete" extraction is not required; a yield of 70–80% may be sufficient to provide economic returns. In another case, the selectivity may be more important, and a reduced rate of extraction will be preferable if it provides greater discrimination. Therefore, few comments can be made which are universally applicable. However, some general principles are outlined below. Maximizing diffusion This can be achieved by increasing the temperature, swelling the matrix, or reducing the particle size. Matrix swelling can sometimes be increased by increasing the pressure of the solvent, and by adding modifiers to the solvent. Some polymers and elastomers in particular are swelled dramatically by CO2, with diffusion being increased by several orders of magnitude in some cases. Maximizing solubility Generally, higher pressure will increase solubility. The effect of temperature is less certain, as close to the critical point, increasing the temperature causes decreases in density, and hence dissolving power. At pressures well above the critical pressure, solubility is likely to increase with temperature. Addition of low levels of modifiers (sometimes called entrainers), such as methanol and ethanol, can also significantly increase solubility, particularly of more polar compounds. Optimizing flow rate The flow rate of supercritical carbon dioxide should be measured in terms of mass flow rather than by volume, because the density of the CO2 changes with temperature both before entering the pump heads and during compression. Coriolis flow meters are well suited to such mass-flow measurement. To maximize the rate of extraction, the flow rate should be high enough for the extraction to be completely diffusion limited (but this will be very wasteful of solvent). However, to minimize the amount of solvent used, the extraction should be completely solubility limited (which will take a very long time). Flow rate must therefore be determined depending on the competing factors of time and solvent costs, and also capital costs of pumps, heaters and heat exchangers. The optimum flow rate will probably be somewhere in the region where both solubility and diffusion are significant factors. See also Laboratory equipment Steam distillation Accelerated solvent extraction References Further reading Industrial processes Extraction (chemistry) Microtechnology
Supercritical fluid extraction
[ "Chemistry", "Materials_science", "Engineering" ]
2,416
[ "Extraction (chemistry)", "Materials science", "Microtechnology", "Separation processes" ]
2,147,428
https://en.wikipedia.org/wiki/SOG%20Knife
The SOG Knife was designed for, and issued to, covert Studies and Observations Group personnel during the Vietnam War. It was unmarked and supposedly untraceable to country of origin or manufacture in order to maintain plausible deniability of covert operators in the event of their death or capture. Design The SOG Knife was designed by Benjamin Baker, the Deputy Chief of the U.S. Counterinsurgency Support Office (CISO). A chrome-moly steel known as SKS-3 was chosen for the blade and hardened to a Rockwell hardness of 55-57. The blade pattern featured a convex false edge on the clip point of a Bowie knife. The stacked leather handle was inspired by a Marbles Gladstone Skinning Knife made in the 1920s owned by Baker, into which finger grooves were molded. The blade was typically parkerized or blackened to reduce glare. This was done by applying a dark gun-blue finish (similar to those used on guns) to the SK-3 carbon steel blade. The knife was carried in a leather sheath which contained a sharpening steel or whetstone. The first contract was awarded to the Japanese trading company Yogi Shokai of Okinawa for 1,300 seven-inch blades designated "Knife, indigenous, RECON, blade, w/scabbard & whetstone" at $9.85 each. In 1966, SOG ordered 1,200 sterile knives with six-inch blades and black sheaths, and in March of the following year an additional lot of 3,700 was ordered. This second lot was serial numbered for accountability purposes and was designated "Knife, indigenous, hunting, blade, w/black sheath and whetstone". Further knives were ordered from Japan Sword, Tokyo as well. The orders were actually fulfilled by a number of knifemakers, and as a result, the various lots had minor differences such as blade bluing color and guard color or shape. Although the SOG office based at Kadena and Yogi Shokai were in Okinawa, it is believed that only a major knifemaking source like Seki could have fulfilled all these orders. In 1986, a company named SOG Specialty Knives, based in Santa Monica, California, marketed a knife manufactured in Seki City, Japan, very similar to the original SOG knife. It had a blued SK5 carbon steel blade, was marked with the US Army Special Forces Crest, and was named the "S1 Bowie". It was a replica of the commemorative versions of the original MACV-SOG knives, rather than the actual sterile unmarked knives used in combat. SOG made a version with an Aus8 stainless steel blade and black micarta handle in commemoration of the U.S. Navy SEALs, known as the "SOG S2 Trident". The other Vietnam replica knife, with a distinctive banana-shaped blade, is known as the "Recon Bowie" by SOG. This type of knife was actually the first to go into service in Vietnam. The last replica knife is the "SCUBA/Demo", which was introduced in 2001; it is the rarest knife in this group, as only one true original is reported to exist. It was created for and assigned to the USN Advisory Detachment, which operated coastal gunboats. The S1 and S2 knives were manufactured by Hattori of Seki under contract to SOG Knives USA from 1986 to 2005, after which SOG shifted to manufacturing in Taiwan. Hattori also manufactured the three commemorative SOG bowies for Boker, for sale in the European market. Replicas of the SOG knife have also been made by Al Mar Knives, Ek Knives, Tak Fukuta for Parker, and Strider Knives. SOG also contracted with Kinryu Co. Ltd of Seki, Japan to manufacture the Recon Bowie and the Scuba Demo until 2007. None of these knives are currently in official use by any branch of the US Military.
Original models are extremely valuable collector's items among both knife and militaria collectors. The later replicas are also in high demand by collectors, especially the early ones made in Seki. References External links Gallery of SOG knives SOG S1 Bowie - Information on the replica of the SOG Knife (Manufactured by SOG Specialty Knives) used by MACVSOG Forces in Vietnam Mechanical hand tools Camping equipment Military knives
SOG Knife
[ "Physics" ]
875
[ "Mechanics", "Mechanical hand tools" ]
2,147,511
https://en.wikipedia.org/wiki/Ultrasonic%20motor
An ultrasonic motor is a type of piezoelectric motor powered by the ultrasonic vibration of a component, the stator, placed against another component, the rotor or slider depending on the scheme of operation (rotation or linear translation). Ultrasonic motors differ from other piezoelectric motors in several ways, though both typically use some form of piezoelectric material, most often lead zirconate titanate and occasionally lithium niobate or other single-crystal materials. The most obvious difference is the use of resonance to amplify the vibration of the stator in contact with the rotor in ultrasonic motors. Ultrasonic motors also offer arbitrarily large rotation or sliding distances, while piezoelectric actuators are limited by the static strain that may be induced in the piezoelectric element. One common application of ultrasonic motors is in camera lenses, where they are used to move lens elements as part of the auto-focus system. Ultrasonic motors replace the noisier and often slower micro-motor in this application. Mechanism Dry friction is often used at the contact interface, and the ultrasonic vibration induced in the stator is used both to impart motion to the rotor and to modulate the frictional forces present at the interface. The friction modulation allows bulk motion of the rotor (i.e., for farther than one vibration cycle); without this modulation, ultrasonic motors would fail to operate. Two different ways are generally available to control the friction along the stator-rotor contact interface: traveling-wave vibration and standing-wave vibration. Some of the earliest versions of practical motors in the 1970s, by Sashida, for example, used standing-wave vibration in combination with fins placed at an angle to the contact surface to form a motor, albeit one that rotated in a single direction. Later designs by Sashida and researchers at Matsushita, ALPS, and Canon made use of traveling-wave vibration to obtain bi-directional motion, and found that this arrangement offered better efficiency and less contact interface wear. An exceptionally high-torque 'hybrid transducer' ultrasonic motor uses circumferentially-poled and axially-poled piezoelectric elements together to combine axial and torsional vibration along the contact interface, representing a driving technique that lies somewhere between the standing and traveling-wave driving methods. A key observation in the study of ultrasonic motors is that the peak vibration that may be induced in structures occurs at a relatively constant vibration velocity regardless of frequency. The vibration velocity is simply the time derivative of the vibration displacement in a structure, and is not (directly) related to the speed of the wave propagation within a structure. Many engineering materials suitable for vibration permit a peak vibration velocity of around 1 m/s. At low frequencies — 50 Hz, say — a vibration velocity of 1 m/s in a woofer would give displacements of about 10 mm, which is visible. As the frequency is increased, the displacement decreases, and the acceleration increases. As the vibration becomes inaudible at 20 kHz or so, the vibration displacements are in the tens of micrometers, and motors have been built that operate using 50 MHz surface acoustic waves (SAW), with vibrations of only a few nanometers in magnitude. Such devices require care in construction to meet the necessary precision to make use of these motions within the stator.
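The displacement figures quoted above follow from simple harmonic motion, where a vibration $x(t) = X\sin(2\pi f t)$ has peak velocity $v = 2\pi f X$, so $X = v/(2\pi f)$. The sketch below recomputes the article's order-of-magnitude numbers under that assumption:

```python
# Displacement amplitude implied by a fixed peak vibration velocity of 1 m/s.
import math

v = 1.0                            # peak vibration velocity, m/s
for f in (50.0, 20e3, 50e6):       # woofer, ultrasonic motor, SAW device
    X = v / (2 * math.pi * f)
    print(f"f = {f:>10.0f} Hz -> amplitude ~ {X:.2e} m")
# ~3e-3 m at 50 Hz, ~8e-6 m at 20 kHz, ~3e-9 m at 50 MHz, matching the
# millimetre / tens-of-micrometre / few-nanometre orders quoted above.
```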
More generally, there are two types of motors, contact and non-contact, the latter of which is rare and requires a working fluid to transmit the ultrasonic vibrations of the stator toward the rotor. Most versions use air, such as some of the earliest versions by Hu Junhui. Research in this area continues, particularly in near-field acoustic levitation for this sort of application. (This is different from far-field acoustic levitation, which suspends the object at half to several wavelengths away from the vibrating object.) Applications Canon was one of the pioneers of the ultrasonic motor, and made the "USM" famous in the late 1980s by incorporating it into its autofocus lenses for the Canon EF lens mount. Numerous patents on ultrasonic motors have been filed by Canon, its chief lensmaking rival Nikon, and other industrial concerns since the early 1980s. Canon has not only included an ultrasonic motor (USM) in their DSLR lenses, but also in the Canon PowerShot SX1 IS bridge camera. The ultrasonic motor is now used in many consumer and office electronics requiring precision rotations over long periods of time. The technology has been applied to photographic lenses by a variety of companies under different names. See also Piezoelectric motor Linear actuator Stepper motor Ultrasonic homogenizer References General Certificate of authorship #217509 "Electric Engine", Lavrinenko V., Necrasov M., application #1006424 from 10 May 1965. US Patent #4.019.073, 1975. US Patent #4.453.103, 1982. US Patent #4.400.641, 1982. Piezoelectric motors. Lavrinenko V., Kartashev I., Vishnevskyi V., "Energiya" 1980. V. Snitka, V. Mizariene and D. Zukauskas The status of ultrasonic motors in the former Soviet Union, Ultrasonics, Volume 34, Issues 2–5, June 1996, Pages 247-250 Principles of construction of piezoelectric motors. V. Lavrinenko, "Lambert", 2015, 236p. External links Ultrasonic Actuators, Motors and Sensors page, from NASA JPL Design and performances of high torque ultrasonic motor for application of automobile Design of miniature ultrasonic motors Ultrasonic Lens Motor Micro/Nano Physics Research Laboratory, with research on ultrasonic piezoelectric actuators by Dr James Friend Institute of Piezomechanics, Kaunas University of Technology, Lithuania Disassembly of a Canon EF lens, revealing an ultrasonic motor Research Center for Microsystems and Nanotechnology, KTU, Lithuania Electric motors
Ultrasonic motor
[ "Technology", "Engineering" ]
1,302
[ "Electrical engineering", "Engines", "Electric motors" ]
2,147,685
https://en.wikipedia.org/wiki/Point%20process
In statistics and probability theory, a point process or point field is a set of a random number of mathematical points randomly located on a mathematical space such as the real line or Euclidean space. Point processes on the real line form an important special case that is particularly amenable to study, because the points are ordered in a natural way, and the whole point process can be described completely by the (random) intervals between the points. These point processes are frequently used as models for random events in time, such as the arrival of customers in a queue (queueing theory), of impulses in a neuron (computational neuroscience), particles in a Geiger counter, location of radio stations in a telecommunication network or of searches on the world-wide web. General point processes on a Euclidean space can be used for spatial data analysis, which is of interest in such diverse disciplines as forestry, plant ecology, epidemiology, geography, seismology, materials science, astronomy, telecommunications, computational neuroscience, economics and others. Conventions Since point processes were historically developed by different communities, there are different mathematical interpretations of a point process, such as a random counting measure or a random set, and different notations. The notations are described in detail on the point process notation page. Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process, though it has been remarked that the difference between point processes and stochastic processes is not clear. Others consider a point process as a stochastic process, where the process is indexed by sets of the underlying space on which it is defined, such as the real line or -dimensional Euclidean space. Other stochastic processes such as renewal and counting processes are studied in the theory of point processes. Sometimes the term "point process" is not preferred, as historically the word "process" denoted an evolution of some system in time, so point process is also called a random point field. Mathematics In mathematics, a point process is a random element whose values are "point patterns" on a set S. While in the exact mathematical definition a point pattern is specified as a locally finite counting measure, it is sufficient for more applied purposes to think of a point pattern as a countable subset of S that has no limit points. Definition To define general point processes, we start with a probability space , and a measurable space where is a locally compact second countable Hausdorff space and is its Borel σ-algebra. Consider now an integer-valued locally finite kernel from into , that is, a mapping such that: For every , is a (integer-valued) locally finite measure on . For every , is a random variable over . This kernel defines a random measure in the following way. We would like to think of as defining a mapping which maps to a measure (namely, ), where is the set of all locally finite measures on . Now, to make this mapping measurable, we need to define a -field over . This -field is constructed as the minimal algebra so that all evaluation maps of the form , where is relatively compact, are measurable. Equipped with this -field, then is a random element, where for every , is a locally finite measure over . 
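Concretely, the measure-theoretic construction above formalizes point counting. A hedged sketch of the intended reading (the point labels Xi are illustrative, not from the source):

```latex
% Reading the integer-valued random measure as a point count:
\[
  \xi(\omega)(B) = \#\{\, i : X_i(\omega) \in B \,\},
\]
% with local finiteness meaning \xi(\omega)(B) < \infty
% for every relatively compact Borel set B.
```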
Now, by a point process on S we simply mean an integer-valued random measure (or equivalently, an integer-valued kernel) constructed as above. The most common example for the state space S is the Euclidean space Rn or a subset thereof, where a particularly interesting special case is given by the real half-line [0,∞). However, point processes are not limited to these examples and may among other things also be used if the points are themselves compact subsets of Rn, in which case ξ is usually referred to as a particle process. The name point process can be misleading: it might suggest that ξ is a stochastic process, even though S need not be a subset of the real line. Representation Every instance (or event) of a point process ξ can be represented as a sum of Dirac measures placed at its points, where δ denotes the Dirac measure, n is an integer-valued random variable and the points are random elements of S. If the points are almost surely distinct, then the point process is known as simple. A different but useful representation of an event (an event in the event space, i.e. a series of points) is the counting notation, where each instance is represented as a counting function N(t), a right-continuous step function which takes integer values and gives the number of events in the observation interval (0, t]. It is sometimes denoted by Nt, and N(t1, t2] is used to mean N(t2) − N(t1). Expectation measure The expectation measure Eξ (also known as the mean measure) of a point process ξ is a measure on S that assigns to every Borel subset B of S the expected number of points of ξ in B. That is, Eξ(B) = E(ξ(B)). Laplace functional The Laplace functional of a point process N is a map from the set of all positive valued functions f on the state space of N to [0,∞), defined as the expectation of exp(−∫ f dN). It plays a similar role to the characteristic function of a random variable. One important theorem says that two point processes have the same law if their Laplace functionals are equal. Moment measure The nth power of a point process is defined on product sets by ξn(B1 × ⋯ × Bn) = ξ(B1) ⋯ ξ(Bn). By the monotone class theorem, this uniquely defines the product measure on Sn. The expectation Eξn is called the nth moment measure. The first moment measure is the mean measure. Let S = Rd. The joint intensities of a point process w.r.t. the Lebesgue measure are functions ρ(k) such that for any disjoint bounded Borel subsets B1, …, Bk, E[ξ(B1) ⋯ ξ(Bk)] = ∫B1 ⋯ ∫Bk ρ(k)(x1, …, xk) dx1 ⋯ dxk. Joint intensities do not always exist for point processes. Given that moments of a random variable determine the random variable in many cases, a similar result is to be expected for joint intensities. Indeed, this has been shown in many cases. Stationarity A point process ξ is said to be stationary if ξ + x has the same distribution as ξ for all x. For a stationary point process, the mean measure is Eξ(B) = λ|B| for some constant λ ≥ 0, where |B| stands for the Lebesgue measure of B. This λ is called the intensity of the point process. A stationary point process on Rd has almost surely either 0 or an infinite number of points in total. For more on stationary point processes and random measures, refer to Chapter 12 of Daley & Vere-Jones. Stationarity has been defined and studied for point processes in more general spaces than Rd. Transformations A point process transformation is a function that maps a point process to another point process. Examples We shall see some examples of point processes in Rd. Poisson point process The simplest and most ubiquitous example of a point process is the Poisson point process, which is a spatial generalisation of the Poisson process. 
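Several display formulas in the passage above were lost in extraction. For reference, their standard forms (hedged; notation follows common references such as Daley & Vere-Jones):

```latex
% Representation as a random sum of Dirac measures, and counting notation:
\[
  \xi = \sum_{i=1}^{n} \delta_{X_i}, \qquad N(t) = \xi\big((0, t]\big),
\]
% Expectation (mean) measure:
\[
  (E\xi)(B) = E\big(\xi(B)\big),
\]
% Laplace functional of a point process N:
\[
  \Psi_N(f) = E\!\left[\exp\!\Big(-\!\int_S f(x)\, N(\mathrm{d}x)\Big)\right].
\]
```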
A Poisson (counting) process on the line can be characterised by two properties : the number of points (or events) in disjoint intervals are independent and have a Poisson distribution. A Poisson point process can also be defined using these two properties. Namely, we say that a point process is a Poisson point process if the following two conditions hold 1) are independent for disjoint subsets 2) For any bounded subset , has a Poisson distribution with parameter where denotes the Lebesgue measure. The two conditions can be combined and written as follows : For any disjoint bounded subsets and non-negative integers we have that The constant is called the intensity of the Poisson point process. Note that the Poisson point process is characterised by the single parameter It is a simple, stationary point process. To be more specific one calls the above point process a homogeneous Poisson point process. An inhomogeneous Poisson process is defined as above but by replacing with where is a non-negative function on Cox point process A Cox process (named after Sir David Cox) is a generalisation of the Poisson point process, in that we use random measures in place of . More formally, let be a random measure. A Cox point process driven by the random measure is the point process with the following two properties : Given , is Poisson distributed with parameter for any bounded subset For any finite collection of disjoint subsets and conditioned on we have that are independent. It is easy to see that Poisson point process (homogeneous and inhomogeneous) follow as special cases of Cox point processes. The mean measure of a Cox point process is and thus in the special case of a Poisson point process, it is For a Cox point process, is called the intensity measure. Further, if has a (random) density (Radon–Nikodym derivative) i.e., then is called the intensity field of the Cox point process. Stationarity of the intensity measures or intensity fields imply the stationarity of the corresponding Cox point processes. There have been many specific classes of Cox point processes that have been studied in detail such as: Log-Gaussian Cox point processes: for a Gaussian random field Shot noise Cox point processes:, for a Poisson point process and kernel Generalised shot noise Cox point processes: for a point process and kernel Lévy based Cox point processes: for a Lévy basis and kernel , and Permanental Cox point processes: for k independent Gaussian random fields 's Sigmoidal Gaussian Cox point processes: for a Gaussian random field and random By Jensen's inequality, one can verify that Cox point processes satisfy the following inequality: for all bounded Borel subsets , where stands for a Poisson point process with intensity measure Thus points are distributed with greater variability in a Cox point process compared to a Poisson point process. This is sometimes called clustering or attractive property of the Cox point process. Determinantal point processes An important class of point processes, with applications to physics, random matrix theory, and combinatorics, is that of determinantal point processes. 
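The displayed laws in this section were stripped in extraction. The standard combined form of the two Poisson conditions, and the Cox conditional law, are as follows (a hedged restatement, not verbatim from the source):

```latex
% Homogeneous Poisson point process with intensity \lambda > 0:
% for disjoint bounded B_1, ..., B_k and non-negative integers n_1, ..., n_k,
\[
  P\big(\xi(B_i) = n_i,\ 1 \le i \le k\big)
    = \prod_{i=1}^{k} e^{-\lambda |B_i|}\,\frac{(\lambda |B_i|)^{n_i}}{n_i!}.
\]
% Inhomogeneous case: replace \lambda|B_i| by \int_{B_i} \lambda(x)\,dx.
% Cox process driven by a random measure \Lambda: conditionally on \Lambda,
% \xi(B) ~ Poisson(\Lambda(B)), so the mean measure is E\xi = E\Lambda.
```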
Hawkes (self-exciting) processes A Hawkes process , also known as a self-exciting counting process, is a simple point process whose conditional intensity can be expressed as where is a kernel function which expresses the positive influence of past events on the current value of the intensity process , is a possibly non-stationary function representing the expected, predictable, or deterministic part of the intensity, and is the time of occurrence of the i-th event of the process. Geometric processes Given a sequence of non-negative random variables , if they are independent and the cdf of is given by for , where is a positive constant, then is called a geometric process (GP). The geometric process has several extensions, including the α- series process and the doubly geometric process. Point processes on the real half-line Historically the first point processes that were studied had the real half line R+ = [0,∞) as their state space, which in this context is usually interpreted as time. These studies were motivated by the wish to model telecommunication systems, in which the points represented events in time, such as calls to a telephone exchange. Point processes on R+ are typically described by giving the sequence of their (random) inter-event times (T1, T2, ...), from which the actual sequence (X1, X2, ...) of event times can be obtained as If the inter-event times are independent and identically distributed, the point process obtained is called a renewal process. Intensity of a point process The intensity λ(t | Ht) of a point process on the real half-line with respect to a filtration Ht is defined as Ht can denote the history of event-point times preceding time t but can also correspond to other filtrations (for example in the case of a Cox process). In the -notation, this can be written in a more compact form: The compensator of a point process, also known as the dual-predictable projection, is the integrated conditional intensity function defined by Related functions Papangelou intensity function The Papangelou intensity function of a point process in the -dimensional Euclidean space is defined as where is the ball centered at of a radius , and denotes the information of the point process outside . Likelihood function The logarithmic likelihood of a parameterized simple point process conditional upon some observed data is written as Point processes in spatial statistics The analysis of point pattern data in a compact subset S of Rn is a major object of study within spatial statistics. Such data appear in a broad range of disciplines, amongst which are forestry and plant ecology (positions of trees or plants in general) epidemiology (home locations of infected patients) zoology (burrows or nests of animals) geography (positions of human settlements, towns or cities) seismology (epicenters of earthquakes) materials science (positions of defects in industrial materials) astronomy (locations of stars or galaxies) computational neuroscience (spikes of neurons). The need to use point processes to model these kinds of data lies in their inherent spatial structure. Accordingly, a first question of interest is often whether the given data exhibit complete spatial randomness (i.e. are a realization of a spatial Poisson process) as opposed to exhibiting either spatial aggregation or spatial inhibition. In contrast, many datasets considered in classical multivariate statistics consist of independently generated datapoints that may be governed by one or several covariates (typically non-spatial). 
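The conditional-intensity expressions referenced above take the following usual forms (a sketch; μ is the baseline and ν the excitation kernel described in the text):

```latex
% Hawkes (self-exciting) conditional intensity:
\[
  \lambda(t \mid H_t) = \mu(t) + \sum_{T_i < t} \nu(t - T_i),
\]
% Intensity as an infinitesimal conditional rate, and its compensator:
\[
  \lambda(t \mid H_t) = \lim_{h \downarrow 0}\frac{1}{h}
    P\big(N(t+h) - N(t) = 1 \mid H_t\big), \qquad
  \Lambda(t) = \int_0^{t} \lambda(s \mid H_s)\,\mathrm{d}s.
\]
```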
Apart from the applications in spatial statistics, point processes are one of the fundamental objects in stochastic geometry. Research has also focussed extensively on various models built on point processes such as Voronoi tessellations, random geometric graphs, and Boolean models. See also Empirical measure Random measure Point process notation Point process operation Poisson process Renewal theory Invariant measure Transfer operator Koopman operator Shift operator Notes References Statistical data types Spatial processes
Point process
[ "Mathematics" ]
2,823
[ "Point processes", "Point (geometry)" ]
2,147,801
https://en.wikipedia.org/wiki/Gell-Mann%E2%80%93Nishijima%20formula
The Gell-Mann–Nishijima formula (sometimes known as the NNG formula) relates the baryon number B, the strangeness S, the isospin I3 of quarks and hadrons to the electric charge Q. It was originally given by Kazuhiko Nishijima and Tadao Nakano in 1953, and led to the proposal of strangeness as a concept, which Nishijima originally called "eta-charge" after the eta meson. Murray Gell-Mann proposed the formula independently in 1956. The modern version of the formula relates all flavour quantum numbers (isospin up and down, strangeness, charm, bottomness, and topness) with the baryon number and the electric charge. Formula The original form of the Gell-Mann–Nishijima formula is: Q = I3 + (B + S)/2. This equation was originally based on empirical experiments. It is now understood as a result of the quark model. In particular, the electric charge Q of a quark or hadron particle is related to its isospin I3 and its hypercharge Y via the relation: Q = I3 + Y/2. Since the discovery of charm, top, and bottom quark flavors, this formula has been generalized. It now takes the form: Q = I3 + (B + S + C + B′ + T)/2, where Q is the charge, I3 the 3rd-component of the isospin, B the baryon number, and S, C, B′, T are the strangeness, charm, bottomness and topness numbers. Expressed in terms of quark content, with nq and nq̄ denoting the number of quarks and antiquarks of flavour q, these would become: I3 = (1/2)[(nu − nū) − (nd − nd̄)], B = (1/3)∑q (nq − nq̄), S = −(ns − ns̄), C = +(nc − nc̄), B′ = −(nb − nb̄), T = +(nt − nt̄). By convention, the flavor quantum numbers (strangeness, charm, bottomness, and topness) carry the same sign as the electric charge of the particle. So, since the strange and bottom quarks have a negative charge, they have flavor quantum numbers equal to −1. And since the charm and top quarks have positive electric charge, their flavor quantum numbers are +1. From a quantum chromodynamics point of view, the Gell-Mann–Nishijima formula and its generalized version can be derived using an approximate SU(3) flavour symmetry because the charges can be defined using the corresponding conserved Noether currents. Weak interaction analog In 1961 Glashow proposed that a similar formula would also apply to the weak interaction: Q = T3 + YW/2. Here the charge Q is related to the projection T3 of the weak isospin and the weak hypercharge YW. References Further reading Standard Model
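As a quick worked check of the original formula, using quantum numbers from standard particle-data tables (an illustrative example, not from the source): the proton has I3 = +1/2, B = 1, S = 0, and the K+ meson has I3 = +1/2, B = 0, S = +1, so

```latex
\[
  Q_p = \tfrac{1}{2} + \tfrac{1}{2}(1 + 0) = +1, \qquad
  Q_{K^+} = \tfrac{1}{2} + \tfrac{1}{2}(0 + 1) = +1,
\]
```

both in agreement with the observed charges.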
Gell-Mann–Nishijima formula
[ "Physics" ]
510
[ "Standard Model", "Particle physics" ]
2,147,961
https://en.wikipedia.org/wiki/Invariants%20of%20tensors
In mathematics, in the fields of multilinear algebra and representation theory, the principal invariants of the second rank tensor are the coefficients of the characteristic polynomial , where is the identity operator and are the roots of the polynomial and the eigenvalues of . More broadly, any scalar-valued function is an invariant of if and only if for all orthogonal . This means that a formula expressing an invariant in terms of components, , will give the same result for all Cartesian bases. For example, even though individual diagonal components of will change with a change in basis, the sum of diagonal components will not change. Properties The principal invariants do not change with rotations of the coordinate system (they are objective, or in more modern terminology, satisfy the principle of material frame-indifference) and any function of the principal invariants is also objective. Calculation of the invariants of rank two tensors In a majority of engineering applications, the principal invariants of (rank two) tensors of dimension three are sought, such as those for the right Cauchy-Green deformation tensor which has the eigenvalues , , and . Where , , and are the principal stretches, i.e. the eigenvalues of . Principal invariants For such tensors, the principal invariants are given by: For symmetric tensors, these definitions are reduced. The correspondence between the principal invariants and the characteristic polynomial of a tensor, in tandem with the Cayley–Hamilton theorem reveals that where is the second-order identity tensor. Main invariants In addition to the principal invariants listed above, it is also possible to introduce the notion of main invariants which are functions of the principal invariants above. These are the coefficients of the characteristic polynomial of the deviator , such that it is traceless. The separation of a tensor into a component that is a multiple of the identity and a traceless component is standard in hydrodynamics, where the former is called isotropic, providing the modified pressure, and the latter is called deviatoric, providing shear effects. Mixed invariants Furthermore, mixed invariants between pairs of rank two tensors may also be defined. Calculation of the invariants of order two tensors of higher dimension These may be extracted by evaluating the characteristic polynomial directly, using the Faddeev-LeVerrier algorithm for example. Calculation of the invariants of higher order tensors The invariants of rank three, four, and higher order tensors may also be determined. Engineering applications A scalar function that depends entirely on the principal invariants of a tensor is objective, i.e., independent of rotations of the coordinate system. This property is commonly used in formulating closed-form expressions for the strain energy density, or Helmholtz free energy, of a nonlinear material possessing isotropic symmetry. This technique was first introduced into isotropic turbulence by Howard P. Robertson in 1940 where he was able to derive Kármán–Howarth equation from the invariant principle. George Batchelor and Subrahmanyan Chandrasekhar exploited this technique and developed an extended treatment for axisymmetric turbulence. 
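The principal-invariant expressions referenced above were lost in extraction. For a second-rank tensor A in three dimensions with eigenvalues λ1, λ2, λ3 they take the standard form:

```latex
% Principal invariants of a rank-two tensor A in three dimensions:
\[
  I_1 = \operatorname{tr}\mathbf{A} = \lambda_1 + \lambda_2 + \lambda_3,
\]
\[
  I_2 = \tfrac{1}{2}\big[(\operatorname{tr}\mathbf{A})^2
        - \operatorname{tr}(\mathbf{A}^2)\big]
      = \lambda_1\lambda_2 + \lambda_2\lambda_3 + \lambda_3\lambda_1,
\]
\[
  I_3 = \det\mathbf{A} = \lambda_1\lambda_2\lambda_3.
\]
% Cayley-Hamilton: A^3 - I_1 A^2 + I_2 A - I_3 1 = 0.
% The "main invariants" are the analogous coefficients for the
% deviator dev(A) = A - (I_1/3) 1, which is traceless.
```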
Invariants of non-symmetric tensors A real tensor in 3D (i.e., one with a 3x3 component matrix) has as many as six independent invariants, three being the invariants of its symmetric part and three characterizing the orientation of the axial vector of the skew-symmetric part relative to the principal directions of the symmetric part. For example, if the Cartesian components of are the first step would be to evaluate the axial vector associated with the skew-symmetric part. Specifically, the axial vector has components The next step finds the principal values of the symmetric part of . Even though the eigenvalues of a real non-symmetric tensor might be complex, the eigenvalues of its symmetric part will always be real and therefore can be ordered from largest to smallest. The corresponding orthonormal principal basis directions can be assigned senses to ensure that the axial vector points within the first octant. With respect to that special basis, the components of are The first three invariants of are the diagonal components of this matrix: (equal to the ordered principal values of the tensor's symmetric part). The remaining three invariants are the axial vector's components in this basis: . Note: the magnitude of the axial vector, , is the sole invariant of the skew part of , whereas these distinct three invariants characterize (in a sense) "alignment" between the symmetric and skew parts of . Incidentally, it is a myth that a tensor is positive definite if its eigenvalues are positive. Instead, it is positive definite if and only if the eigenvalues of its symmetric part are positive. See also Symmetric polynomial Elementary symmetric polynomial Newton's identities Invariant theory References Tensors Invariant theory Linear algebra
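The example matrix and its component formulas were lost in extraction, but the general axial-vector relation is standard (the sign convention varies by author; this is one common choice):

```latex
% Axial (dual) vector w of the skew-symmetric part of A:
\[
  w_k = -\tfrac{1}{2}\,\epsilon_{kij} A_{ij}, \qquad
  \tfrac{1}{2}\big(A_{ij} - A_{ji}\big) = -\epsilon_{ijk} w_k,
\]
% so that the skew part acts as W x = w \times x for any vector x.
```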
Invariants of tensors
[ "Physics", "Mathematics", "Engineering" ]
996
[ "Symmetry", "Tensors", "Group actions", "Invariant theory", "Linear algebra", "Algebra" ]
2,148,015
https://en.wikipedia.org/wiki/Spatial%20ecology
Spatial ecology studies the ultimate distributional or spatial unit occupied by a species. In a particular habitat shared by several species, each of the species is usually confined to its own microhabitat or spatial niche because two species in the same general territory cannot usually occupy the same ecological niche for any significant length of time. Overview In nature, organisms are neither distributed uniformly nor at random, forming instead some sort of spatial pattern. This is due to various energy inputs, disturbances, and species interactions that result in spatially patchy structures or gradients. This spatial variance in the environment creates diversity in communities of organisms, as well as in the variety of the observed biological and ecological events. The type of spatial arrangement present may suggest certain interactions within and between species, such as competition, predation, and reproduction. On the other hand, certain spatial patterns may also rule out specific ecological theories previously thought to be true. Although spatial ecology deals with spatial patterns, it is usually based on observational data rather than on an existing model. This is because nature rarely follows set expected order. To properly research a spatial pattern or population, the spatial extent to which it occurs must be detected. Ideally, this would be accomplished beforehand via a benchmark spatial survey, which would determine whether the pattern or process is on a local, regional, or global scale. This is rare in actual field research, however, due to the lack of time and funding, as well as the ever-changing nature of such widely-studied organisms such as insects and wildlife. With detailed information about a species' life-stages, dynamics, demography, movement, behavior, etc., models of spatial pattern may be developed to estimate and predict events in unsampled locations. History Most mathematical studies in ecology in the nineteenth century assumed a uniform distribution of living organisms in their habitat. In the past quarter century, ecologists have begun to recognize the degree to which organisms respond to spatial patterns in their environment. Due to the rapid advances in computer technology in the same time period, more advanced methods of statistical data analysis have come into use. Also, the repeated use of remotely sensed imagery and geographic information systems in a particular area has led to increased analysis and identification of spatial patterns over time. These technologies have also increased the ability to determine how human activities have impacted animal habitat and climate change. The natural world has become increasingly fragmented due to human activities; anthropogenic landscape change has had a ripple-effect impacts on wildlife populations, which are now more likely to be small, restricted in distribution, and increasingly isolated from one another. In part as a reaction to this knowledge, and partially due to increasingly sophisticated theoretical developments, ecologists began stressing the importance of spatial context in research. Spatial ecology emerged from this movement toward spatial accountability; "the progressive introduction of spatial variation and complexity into ecological analysis, including changes in spatial patterns over time". Concepts Scale In spatial ecology, scale refers to the spatial extent of ecological processes and the spatial interpretation of the data. 
The response of an organism or a species to the environment is particular to a specific scale, and it may respond differently at a larger or smaller scale. Choosing a scale that is appropriate to the ecological process in question is very important in accurately hypothesizing and determining the underlying cause. Most often, ecological patterns are a result of multiple ecological processes, which often operate at more than one spatial scale. Through the use of spatial statistical methods such as geostatistics and principal coordinate analysis of neighbor matrices (PCNM), one can identify spatial relationships between organisms and environmental variables at multiple scales. Spatial autocorrelation Spatial autocorrelation refers to the tendency of samples taken close to each other to have more similar values than would be expected by chance alone. When a pair of values located at a certain distance apart are more similar than expected by chance, the spatial autocorrelation is said to be positive. When a pair of values are less similar, the spatial autocorrelation is said to be negative. It is common for values to be positively autocorrelated at shorter distances and negatively autocorrelated at longer distances. This is commonly known as Tobler's first law of geography, summarized as "everything is related to everything else, but nearby objects are more related than distant objects". In ecology, there are two important sources of spatial autocorrelation, which both arise from spatial-temporal processes, such as dispersal or migration: True/inherent spatial autocorrelation arises from interactions among individuals located in close proximity. This process is endogenous (internal) and results in the individuals being spatially adjacent in a patchy fashion. An example of this would be sexual reproduction, the success of which requires the closeness of a male and female of the species. Induced spatial autocorrelation (or 'induced spatial dependence') arises from the species' response to the spatial structure of exogenous (external) factors, which are themselves spatially autocorrelated. An example of this would be the winter habitat range of deer, which use conifers for heat retention and forage. Most ecological data exhibit some degree of spatial autocorrelation, depending on the ecological scale (spatial resolution) of interest. As the spatial arrangement of most ecological data is not random, traditional random population samples tend to overestimate the true value of a variable, or to infer significant correlation where there is none. This bias can be corrected through the use of geostatistics and other more statistically advanced models. Regardless of method, the sample size must be appropriate to the scale and the spatial statistical method used in order to be valid. 
Due to the presence of spatial autocorrelation, in nature gradients are generally found at the global level, whereas patches represent intermediate (regional) scales, and noise at local scales. The analysis of spatial ecological patterns comprises two families of methods: Point pattern analysis deals with the distribution of individuals through space, and is used to determine whether the distribution is random. It also describes the type of pattern and draws conclusions on what kind of process created the observed pattern. Quadrat-density and the nearest neighbor methods are the most commonly used statistical methods. Surface pattern analysis deals with spatially continuous phenomena. After the spatial distribution of the variables is determined through discrete sampling, statistical methods are used to quantify the magnitude, intensity, and extent of spatial autocorrelation present in the data (such as correlograms, variograms, and periodograms), as well as to map the amount of spatial variation. Applications Research Analysis of spatial trends has been used to research wildlife management, fire ecology, population ecology, disease ecology, invasive species, marine ecology, and carbon sequestration modeling using the spatial relationships and patterns to determine ecological processes and their effects on the environment. Spatial patterns have different ecosystem functioning in ecology for examples enhanced productive. Interdisciplinary The concepts of spatial ecology are fundamental to understanding the spatial dynamics of population and community ecology. The spatial heterogeneity of populations and communities plays a central role in such ecological theories as succession, adaptation, community stability, competition, predator-prey interactions, parasitism, and epidemics. The rapidly expanding field of landscape ecology utilizes the basic aspects of spatial ecology in its research. The practical use of spatial ecology concepts is essential to understanding the consequences of fragmentation and habitat loss for wildlife. Understanding the response of a species to a spatial structure provides useful information in regards to biodiversity conservation and habitat restoration. Spatial ecology modeling uses components of remote sensing and geographical information systems (GIS). Statistical tests A number of statistical tests have been developed to study such relations. Tests based on distance Clark and Evans' R Clark and Evans in 1954 proposed a test based on the density and distance between organisms. Under the null hypothesis the expected distance ( re ) between the organisms (measured as the nearest neighbor's distance) with a known constant density ( ρ ) is The difference between the observed ( ro ) and the expected ( re ) can be tested with a Z test where N is the number of nearest neighbor measurements. For large samples Z is distributed normally. The results are usually reported in the form of a ratio: R = ( ro ) / ( re ) Pielou's α Pielou in 1959 devised a different statistic. She considered instead of the nearest neighbors the distance between an organism and a set of pre-chosen random points within the sampling area, again assuming a constant density. If the population is randomly dispersed in the area these distances will equal the nearest neighbor distances. Let ω be the ratio between the distances from the random points and the distances calculated from the nearest neighbor calculations. The α is where d is the constant common density and π has its usual numerical value. 
Values of α less than, equal to or greater than 1 indicate uniformity, randomness (a Poisson distribution) or aggregation respectively. Alpha may be tested for a significant deviation from 1 by computing the test statistic where χ2 is distributed with 2n degrees of freedom. n here is the number of organisms sampled. Montford in 1961 showed that when the density is estimated rather than a known constant, this version of alpha tended to overestimate the actual degree of aggregation. He provided a revised formulation which corrects this error. There is a wide range of mathematical problems related to spatial ecological models, relating to spatial patterns and processes associated with chaotic phenomena, bifurcations and instability. See also Edge effects Spatial analysis Taylor's law References External links Spatial Ecology, hosts software for use in spatial ecological analysis. Spatial Ecology Research Programme at the University of Helsinki Spatial Ecology Lab at the University of Queensland Ecography publishes peer-reviewed articles on spatial ecology. National Center for Ecological Analysis and Synthesis at the University of California, Santa Barbara Spatial Ecology Lab at the University of Alaska, Fairbanks Spatial Ecology wikipedia, online resources for learning spatial ecological analysis and data processing using Open source software. Landscape ecology Biogeography Subfields of ecology Biostatistics Spatial analysis
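A minimal computational sketch of the Clark and Evans test described above (an illustration only: it assumes a rectangular study region and ignores the edge corrections discussed in the later literature; 0.26136 is the standard-error coefficient from the original 1954 derivation):

```python
import numpy as np
from scipy.spatial import cKDTree

def clark_evans(points, area):
    """Clark-Evans aggregation index R and normal-approximation Z score."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    rho = n / area                          # density of organisms
    # Mean observed nearest-neighbour distance (k=2: self plus neighbour).
    dist, _ = cKDTree(points).query(points, k=2)
    r_obs = dist[:, 1].mean()
    r_exp = 1.0 / (2.0 * np.sqrt(rho))      # expected distance under randomness
    se = 0.26136 / np.sqrt(n * rho)         # standard error under randomness
    return r_obs / r_exp, (r_obs - r_exp) / se

# R < 1 suggests clumping, R > 1 uniformity; |Z| > 1.96 is significant at 5%.
rng = np.random.default_rng(0)
R, Z = clark_evans(rng.random((200, 2)), area=1.0)
print(f"R = {R:.3f}, Z = {Z:.2f}")
```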
Spatial ecology
[ "Physics", "Biology" ]
2,151
[ "Spacetime", "Biogeography", "Space", "Spatial analysis" ]
2,148,212
https://en.wikipedia.org/wiki/Sanctum%20sanctorum
The Latin phrase sanctum sanctorum is a translation of the Hebrew term קֹדֶשׁ הַקֳּדָשִׁים (Qṓḏeš HaQŏḏāšîm), literally meaning Holy of Holies, which generally refers in Latin texts to the holiest place of the Ancient Israelites, inside the Tabernacle and later inside the Temple in Jerusalem, but the term also has some derivative use in application to imitations of the Tabernacle in church architecture. The plural form sancta sanctorum is also used, arguably as a synecdoche, referring to the holy relics contained in the sanctuary. The Vulgate translation of the Bible uses sancta sanctorum for the Holy of Holies. Hence the derivative usage to denote the Sancta Sanctorum chapel in the complex of the Archbasilica of Saint John Lateran, Rome. In Hinduism, a temple's innermost part where the Murti of the deity is kept forms the Garbhagriha, also referred to as a sanctum sanctorum. Etymology The Latin word sanctum is the neuter form of the adjective "holy", and sanctorum its genitive plural. Thus the term sanctum sanctorum literally means "the holy [place/thing] of the holy [places/things]", replicating in Latin the Hebrew construction for the superlative, with the intended meaning "the most holy [place/thing]". Use of the term in modern languages The Latin word sanctum may be used in English, following Latin, for "a holy place", or a sanctuary, as in the novel Jane Eyre (1848) which refers to "the sanctum of school room". Romance languages tend to use the form sancta sanctorum, treating it as masculine and singular. E.g., the Spanish dictionary of the Real Academia Española admits sanctasanctórum (without the space and with an accent) as a derivative Spanish noun denoting both the Holy of Holies in the Temple in Jerusalem, any secluded and mysterious place, and something that a person holds in the highest esteem. The term is still often used by Indian writers for the garbhagriha or inner shrine chamber in Hindu temple architecture, after being introduced by British writers in the 19th century. German Catholic processions Some regional branches of the Catholic Church, e. g. Germans, are wont to refer to the Blessed Sacrament, when adored in the tabernacle or in exposition or procession (e.g. on Corpus Christi), as the Holy of Holies. By custom, It is adored with genuflection; with a double genuflection, that is a short moment of kneeling on both knees, if in exposition; in the procession this ritual may be nonrigoristically alleviated, but at least a simple genuflection is appropriate when It is elevated by the priest for blessing or immediately after transsubstantiation. Personnel in uniform — which in Germany includes student corporations — give the military salute when passing by or in the moment of elevation. The "enclosed house" of Hindu temple architecture The garbhagriha in Hindu temple architecture (a shrine inside a temple complex where the main deity is installed in a separate building by itself inside the complex) has also been compared to a "sanctum sanctorum" in texts on Hindu temple architecture, though the Sanskrit term actually means "enclosed house" or "the deep interior of the house". However, some Indian English authors seem to have translated the Sanskrit term literally as "womb house". See also References Vulgate Latin words and phrases Superlatives in religion Sacral architecture
Sanctum sanctorum
[ "Engineering" ]
756
[ "Sacral architecture", "Architecture" ]
2,148,313
https://en.wikipedia.org/wiki/Coxeter%20element
In mathematics, a Coxeter element is an element of an irreducible Coxeter group which is a product of all simple reflections. The product depends on the order in which they are taken, but different orderings produce conjugate elements, which have the same order. This order is known as the Coxeter number. They are named after British-Canadian geometer H.S.M. Coxeter, who introduced the groups in 1934 as abstractions of reflection groups. Definitions Note that this article assumes a finite Coxeter group. For infinite Coxeter groups, there are multiple conjugacy classes of Coxeter elements, and they have infinite order. There are many different ways to define the Coxeter number of an irreducible root system. The Coxeter number is the order of any Coxeter element;. The Coxeter number is where is the rank, and is the number of reflections. In the crystallographic case, is half the number of roots; and is the dimension of the corresponding semisimple Lie algebra. If the highest root is for simple roots , then the Coxeter number is The Coxeter number is the highest degree of a fundamental invariant of the Coxeter group acting on polynomials. The Coxeter number for each Dynkin type is given in the following table: The invariants of the Coxeter group acting on polynomials form a polynomial algebra whose generators are the fundamental invariants; their degrees are given in the table above. Notice that if is a degree of a fundamental invariant then so is . The eigenvalues of a Coxeter element are the numbers as runs through the degrees of the fundamental invariants. Since this starts with , these include the primitive th root of unity, which is important in the Coxeter plane, below. The dual Coxeter number is 1 plus the sum of the coefficients of simple roots in the highest short root of the dual root system. Group order There are relations between the order of the Coxeter group and the Coxeter number : For example, has : Coxeter elements Distinct Coxeter elements correspond to orientations of the Coxeter diagram (i.e. to Dynkin quivers): the simple reflections corresponding to source vertices are written first, downstream vertices later, and sinks last. (The choice of order among non-adjacent vertices is irrelevant, since they correspond to commuting reflections.) A special choice is the alternating orientation, in which the simple reflections are partitioned into two sets of non-adjacent vertices, and all edges are oriented from the first to the second set. The alternating orientation produces a special Coxeter element satisfying where is the longest element, provided the Coxeter number is even. For the symmetric group on elements, Coxeter elements are certain -cycles: the product of simple reflections is the Coxeter element . For even, the alternating orientation Coxeter element is: There are distinct Coxeter elements among the -cycles. The dihedral group is generated by two reflections that form an angle of and thus the two Coxeter elements are their product in either order, which is a rotation by Coxeter plane For a given Coxeter element , there is a unique plane on which acts by rotation by This is called the Coxeter plane and is the plane on which has eigenvalues and This plane was first systematically studied in , and subsequently used in to provide uniform proofs about properties of Coxeter elements. 
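For reference, the relations alluded to above can be written as follows (a hedged summary of standard facts about finite Coxeter groups, with n the rank, N the number of reflections, and d1, …, dn the degrees in the table):

```latex
% Coxeter number from reflections and rank:
\[
  h = \frac{2N}{n}, \qquad N = \frac{nh}{2},
\]
% Eigenvalues of a Coxeter element:
\[
  e^{2\pi i (d_j - 1)/h}, \qquad j = 1, \dots, n,
\]
% Group order from the degrees; the largest degree is h itself:
\[
  |W| = \prod_{j=1}^{n} d_j .
\]
```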
The Coxeter plane is often used to draw diagrams of higher-dimensional polytopes and root systems – the vertices and edges of the polytope, or roots (and some edges connecting these) are orthogonally projected onto the Coxeter plane, yielding a Petrie polygon with -fold rotational symmetry. For root systems, no root maps to zero, corresponding to the Coxeter element not fixing any root or rather axis (not having eigenvalue 1 or −1), so the projections of orbits under form -fold circular arrangements and there is an empty center, as in the diagram at above right. For polytopes, a vertex may map to zero, as depicted below. Projections onto the Coxeter plane are depicted below for the Platonic solids. In three dimensions, the symmetry of a regular polyhedron, with one directed Petrie polygon marked, defined as a composite of 3 reflections, has rotoinversion symmetry , , order . Adding a mirror, the symmetry can be doubled to antiprismatic symmetry, , , order . In orthogonal 2D projection, this becomes dihedral symmetry, , , order . In four dimensions, the symmetry of a regular polychoron, with one directed Petrie polygon marked is a double rotation, defined as a composite of 4 reflections, with symmetry (John H. Conway), (#1', Patrick du Val (1964)), order . In five dimensions, the symmetry of a regular 5-polytope, with one directed Petrie polygon marked, is represented by the composite of 5 reflections. In dimensions 6 to 8 there are 3 exceptional Coxeter groups; one uniform polytope from each dimension represents the roots of the exceptional Lie groups . The Coxeter elements are 12, 18 and 30 respectively. See also Longest element of a Coxeter group Notes References Hiller, Howard Geometry of Coxeter groups. Research Notes in Mathematics, 54. Pitman (Advanced Publishing Program), Boston, Mass.-London, 1982. iv+213 pp. Bernšteĭn, I. N.; Gelʹfand, I. M.; Ponomarev, V. A., "Coxeter functors, and Gabriel's theorem" (Russian), Uspekhi Mat. Nauk 28 (1973), no. 2(170), 19–33. Translation on Bernstein's website. Lie groups Coxeter groups
Coxeter element
[ "Mathematics" ]
1,176
[ "Lie groups", "Mathematical structures", "Algebraic structures" ]
2,148,329
https://en.wikipedia.org/wiki/Mathematical%20universe%20hypothesis
In physics and cosmology, the mathematical universe hypothesis (MUH), also known as the ultimate ensemble theory, is a speculative "theory of everything" (TOE) proposed by cosmologist Max Tegmark. According to the hypothesis, the universe is a mathematical object in and of itself. Tegmark extends this idea to hypothesize that all mathematical objects exist, which he describes as a form of Platonism or Modal realism. The hypothesis has proved controversial. Jürgen Schmidhuber argues that it is not possible to assign an equal weight or probability to all mathematical objects a priori due to there being infinitely many of them. Physicists Piet Hut and Mark Alford have suggested that the idea is incompatible with Gödel's first incompleteness theorem. Tegmark replies that not only is the universe mathematical, but it is also computable. Description Tegmark's MUH is the hypothesis that our external physical reality is a mathematical structure. That is, the physical universe is not merely described by mathematics, but is mathematics — specifically, a mathematical structure. Mathematical existence equals physical existence, and all structures that exist mathematically exist physically as well. Observers, including humans, are "self-aware substructures (SASs)". In any mathematical structure complex enough to contain such substructures, they "will subjectively perceive themselves as existing in a physically 'real' world". The theory can be considered a form of Pythagoreanism or Platonism in that it proposes the existence of mathematical entities; a form of mathematicism in that it denies that anything exists except mathematical objects; and a formal expression of ontic structural realism. Tegmark claims that the hypothesis has no free parameters and is not observationally ruled out. Thus, he reasons, it is preferred over other theories-of-everything by Occam's Razor. Tegmark also considers augmenting the MUH with a second assumption, the computable universe hypothesis (CUH), which says that the mathematical structure that is our external physical reality is defined by computable functions. The MUH is related to Tegmark's categorization of four levels of the multiverse. This categorization posits a nested hierarchy of increasing diversity, with worlds corresponding to different sets of initial conditions (level 1), physical constants (level 2), quantum branches (level 3), and altogether different equations or mathematical structures (level 4). Criticisms and responses Andreas Albrecht of Imperial College in London called it a "provocative" solution to one of the central problems facing physics. Although he "wouldn't dare" go so far as to say he believes it, he noted that "it's actually quite difficult to construct a theory where everything we see is all there is". Definition of the ensemble Jürgen Schmidhuber argues that "Although Tegmark suggests that '... all mathematical structures are a priori given equal statistical weight,' there is no way of assigning equal non-vanishing probability to all (infinitely many) mathematical structures." Schmidhuber puts forward a more restricted ensemble which admits only universe representations describable by constructive mathematics, that is, computer programs; e.g., the Global Digital Mathematics Library and Digital Library of Mathematical Functions, linked open data representations of formalized fundamental theorems intended to serve as building blocks for additional mathematical results. 
He explicitly includes universe representations describable by non-halting programs whose output bits converge after finite time, although the convergence time itself may not be predictable by a halting program, due to the undecidability of the halting problem. In response, Tegmark notes that a constructive mathematics formalized measure of free parameter variations of physical dimensions, constants, and laws over all universes has not yet been constructed for the string theory landscape either, so this should not be regarded as a "show-stopper". Consistency with Gödel's theorem It has also been suggested that the MUH is inconsistent with Gödel's incompleteness theorem. In a three-way debate between Tegmark and fellow physicists Piet Hut and Mark Alford, the "secularist" (Alford) states that "the methods allowed by formalists cannot prove all the theorems in a sufficiently powerful system... The idea that math is 'out there' is incompatible with the idea that it consists of formal systems." Tegmark's response is to offer a new hypothesis "that only Gödel-complete (fully decidable) mathematical structures have physical existence. This drastically shrinks the Level IV multiverse, essentially placing an upper limit on complexity, and may have the attractive side effect of explaining the relative simplicity of our universe." Tegmark goes on to note that although conventional theories in physics are Gödel-undecidable, the actual mathematical structure describing our world could still be Gödel-complete, and "could in principle contain observers capable of thinking about Gödel-incomplete mathematics, just as finite-state digital computers can prove certain theorems about Gödel-incomplete formal systems like Peano arithmetic." In he gives a more detailed response, proposing as an alternative to MUH the more restricted "Computable Universe Hypothesis" (CUH) which only includes mathematical structures that are simple enough that Gödel's theorem does not require them to contain any undecidable or uncomputable theorems. Tegmark admits that this approach faces "serious challenges", including (a) it excludes much of the mathematical landscape; (b) the measure on the space of allowed theories may itself be uncomputable; and (c) "virtually all historically successful theories of physics violate the CUH". Observability Stoeger, Ellis, and Kircher note that in a true multiverse theory, "the universes are then completely disjoint and nothing that happens in any one of them is causally linked to what happens in any other one. This lack of any causal connection in such multiverses really places them beyond any scientific support". Ellis specifically criticizes the MUH, stating that an infinite ensemble of completely disconnected universes is "completely untestable, despite hopeful remarks sometimes made, see, e.g., Tegmark (1998)." Tegmark maintains that MUH is testable, stating that it predicts (a) that "physics research will uncover mathematical regularities in nature", and (b) by assuming that we occupy a typical member of the multiverse of mathematical structures, one could "start testing multiverse predictions by assessing how typical our universe is". Plausibility of radical Platonism The MUH is based on the radical Platonist view that math is an external reality. 
However, Jannes argues that "mathematics is at least in part a human construction", on the basis that if it is an external reality, then it should be found in some other animals as well: "Tegmark argues that, if we want to give a complete description of reality, then we will need a language independent of us humans, understandable for non-human sentient entities, such as aliens and future supercomputers". Brian Greene argues similarly: "The deepest description of the universe should not require concepts whose meaning relies on human experience or interpretation. Reality transcends our existence and so shouldn't, in any fundamental way, depend on ideas of our making." However, there are many non-human entities, plenty of which are intelligent, and many of which can apprehend, memorise, compare and even approximately add numerical quantities. Several animals have also passed the mirror test of self-consciousness. But a few surprising examples of mathematical abstraction notwithstanding (for example, chimpanzees can be trained to carry out symbolic addition with digits, or the report of a parrot understanding a "zero-like concept"), all examples of animal intelligence with respect to mathematics are limited to basic counting abilities. He adds, "non-human intelligent beings should exist that understand the language of advanced mathematics. However, none of the non-human intelligent beings that we know of confirm the status of (advanced) mathematics as an objective language." In the paper "On Math, Matter and Mind" the secularist viewpoint examined argues that math is evolving over time, there is "no reason to think it is converging to a definite structure, with fixed questions and established ways to address them", and also that "The Radical Platonist position is just another metaphysical theory like solipsism... In the end the metaphysics just demands that we use a different language for saying what we already knew." Tegmark responds that "The notion of a mathematical structure is rigorously defined in any book on Model Theory", and that non-human mathematics would only differ from our own "because we are uncovering a different part of what is in fact a consistent and unified picture, so math is converging in this sense." In his 2014 book on the MUH, Tegmark argues that the resolution is not that we invent the language of mathematics, but that we discover the structure of mathematics. Coexistence of all mathematical structures Don Page has argued that "At the ultimate level, there can be only one world and, if mathematical structures are broad enough to include all possible worlds or at least our own, there must be one unique mathematical structure that describes ultimate reality. So I think it is logical nonsense to talk of Level 4 in the sense of the co-existence of all mathematical structures." This means there can only be one mathematical corpus. Tegmark responds that "This is less inconsistent with Level IV than it may sound, since many mathematical structures decompose into unrelated substructures, and separate ones can be unified." Consistency with our "simple universe" Alexander Vilenkin comments that "The number of mathematical structures increases with increasing complexity, suggesting that 'typical' structures should be horrendously large and cumbersome. This seems to be in conflict with the beauty and simplicity of the theories describing our world". 
He goes on to note that Tegmark's solution to this problem, the assigning of lower "weights" to the more complex structures seems arbitrary ("Who determines the weights?") and may not be logically consistent ("It seems to introduce an additional mathematical structure, but all of them are supposed to be already included in the set"). Occam's razor Tegmark has been criticized as misunderstanding the nature and application of Occam's razor; Massimo Pigliucci reminds that "Occam's razor is just a useful heuristic, it should never be used as the final arbiter to decide which theory is to be favored". See also Abstract object theory Anthropic principle Church–Turing thesis Digital physics Pancomputationalism Impossible world Mathematicism Measure problem (cosmology) Modal realism Ontology Permutation City Structuralism (philosophy of science) "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" Hilbert's sixth problem References Sources Our Mathematical Universe: written by Max Tegmark and published on January 7, 2014, this book describes Tegmark's theory. Further reading Schmidhuber, J. (1997) "A Computer Scientist's View of Life, the Universe, and Everything" in C. Freksa, ed., Foundations of Computer Science: Potential - Theory - Cognition. Lecture Notes in Computer Science, Springer: p. 201-08. Tegmark, Max (2014), Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, Woit, P. (17 January 2014), "Book Review: 'Our Mathematical Universe' by Max Tegmark", The Wall Street Journal. Hamlin, Colin (2017). "Towards a Theory of Universes: Structure Theory and the Mathematical Universe Hypothesis". Synthese 194 (581–591). https://link.springer.com/article/10.1007/s11229-015-0959-y External links Jürgen Schmidhuber "The ensemble of universes describable by constructive mathematics." Page maintained by Max Tegmark with links to his technical and popular writings. "The 'Everything' mailing list" (and archives). Discusses the idea that all possible universes exist. Richard Carrier Blogs: Our Mathematical Universe Interview with Sam Harris Tegmark and Harris discuss efficacy of mathematics, multiverses, artificial intelligence. Collection of interviews with Max Tegmark in 'Closer to truth" "Is the Universe made of math?" Excerpt in Scientific American Abstract object theory Metaphysical realism Multiverse Ontology Physical cosmology
Mathematical universe hypothesis
[ "Physics", "Astronomy", "Mathematics" ]
2,651
[ "Astronomical hypotheses", "Astronomical sub-disciplines", "Mathematical Platonism", "Mathematical logic", "Theoretical physics", "Mathematical objects", "Astrophysics", "Computability theory", "Multiverse", "Physical cosmology" ]
2,148,532
https://en.wikipedia.org/wiki/Plastic%20ratio
In mathematics, the plastic ratio is a geometrical proportion close to 1.3247. Its true value is the real solution of the equation x³ = x + 1. The adjective plastic does not refer to the artificial material, but to the formative and sculptural qualities of this ratio, as in plastic arts. Definition Three quantities a > b > c are in the plastic ratio if a/b = b/c = (b + c)/a. The ratio is commonly denoted ρ. It follows that the plastic ratio is found as the unique real solution of the cubic equation x³ − x − 1 = 0. The decimal expansion of the root begins as 1.324717957244746…. Solving the equation with Cardano's formula gives ρ = ∛(1/2 + √69/18) + ∛(1/2 − √69/18) or, using the hyperbolic cosine, ρ = (2/√3)·cosh((1/3)·arcosh(3√3/2)). ρ is the superstable fixed point of the Newton iteration x ↦ (2x³ + 1)/(3x² − 1). The iteration x ↦ √(1 + 1/x) results in the continued reciprocal square root ρ = √(1 + 1/√(1 + 1/√(1 + …))). Dividing the defining trinomial x³ − x − 1 by x − ρ one obtains x² + ρx + ρ⁻¹, and the conjugate elements of ρ are x₁,₂ = (−ρ ± i√(4ρ⁻¹ − ρ²))/2, with |x₁| = |x₂| = ρ^(−1/2). Properties The plastic ratio ρ and golden ratio φ are the only morphic numbers: real numbers x > 1 for which there exist natural numbers m and n such that x + 1 = xᵐ and x − 1 = x⁻ⁿ. Morphic numbers can serve as basis for a system of measure. Properties of ρ (m = 3 and n = 4) are related to those of φ (m = 2 and n = 1). For example, the plastic ratio satisfies the continued radical ρ = ∛(1 + ∛(1 + ∛(1 + …))), while the golden ratio satisfies the analogous φ = √(1 + √(1 + √(1 + …))). The plastic ratio can be expressed in terms of itself as the infinite geometric series ρ = Σₙ₌₀ ρ⁻⁵ⁿ, in comparison to the golden ratio identity φ = Σₙ₌₀ φ⁻²ⁿ. For every integer n one has ρⁿ = ρⁿ⁻² + ρⁿ⁻³. From this an infinite number of further relations can be found. The algebraic solution of a reduced quintic equation can be written in terms of square roots, cube roots and the Bring radical; the plastic ratio itself is a root of the quintic x⁵ − x⁴ − 1 = 0, which factors as (x³ − x − 1)(x² − x + 1). The plastic ratio is the smallest Pisot number. Because the absolute value ρ^(−1/2) of the algebraic conjugates is smaller than 1, powers of ρ generate almost integers. For example, ρ²⁹ differs from the Perrin number 3480 by less than 0.04: after 29 rotation steps the phases of the inward spiraling conjugate pair nearly align with the imaginary axis. The minimal polynomial x³ − x − 1 of the plastic ratio has discriminant −23. The Hilbert class field of the imaginary quadratic field Q(√−23) can be formed by adjoining ρ. With argument τ = (1 + √−23)/2, a generator for the ring of integers of Q(√−23), the plastic ratio appears in a special value of a Dedekind eta quotient. Expressed in terms of the Weber-Ramanujan class invariant Gn, properties of the related Klein j-invariant result in a near identity for e^(π√23). The elliptic integral singular value k₂₃ has a closed form expression (its value is less than 1/3 the eccentricity of the orbit of Venus). Van der Laan sequence In his quest for perceptible clarity, the Dutch Benedictine monk and architect Dom Hans van der Laan (1904-1991) asked for the minimum difference between two sizes, so that we will clearly perceive them as distinct. Also, what is the maximum ratio of two sizes, so that we can still relate them and perceive nearness. According to his observations, the answers are 1/4 and 7, spanning a single order of size. Requiring proportional continuity, he constructed a geometric series of eight measures (types of size) with common ratio ρ. Put in rational form, this architectonic system of measure is constructed from a subset of the numbers that bear his name. The Van der Laan numbers have a close connection to the Perrin and Padovan sequences. In combinatorics, the number of compositions of n into parts 2 and 3 is counted by the nth Van der Laan number. The Van der Laan sequence is defined by the third-order recurrence relation Vₙ = Vₙ₋₂ + Vₙ₋₃ with initial values V₀ = 1, V₁ = 0, V₂ = 1. The first few terms are 1, 0, 1, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 16, 21, 28, 37, 49, 65, 86,... .
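The recurrence above is easy to check numerically. A minimal sketch (Python; the function name is illustrative) that generates Van der Laan numbers and shows consecutive ratios approaching the plastic ratio:

```python
# Van der Laan numbers: V(n) = V(n-2) + V(n-3) with V(0) = 1, V(1) = 0, V(2) = 1.
def van_der_laan(count):
    v = [1, 0, 1]
    while len(v) < count:
        v.append(v[-2] + v[-3])
    return v[:count]

terms = van_der_laan(40)
print(terms[:16])   # [1, 0, 1, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 16, 21, 28]

# Consecutive ratios converge to the plastic ratio, the real root of x**3 = x + 1,
# here computed from Cardano's formula as given above.
rho = ((9 + 69 ** 0.5) / 18) ** (1 / 3) + ((9 - 69 ** 0.5) / 18) ** (1 / 3)
print(rho)                    # 1.3247179572447458 (floating point)
print(terms[-1] / terms[-2])  # approaches rho
```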
The limit ratio between consecutive terms is the plastic ratio. The first 14 indices n for which Vₙ is prime are n = 5, 6, 7, 9, 10, 16, 21, 32, 39, 86, 130, 471, 668, 1264. The last number has 154 decimal digits. The sequence can be extended to negative indices using Vₙ₋₃ = Vₙ − Vₙ₋₂. The generating function of the Van der Laan sequence is given by Σₙ₌₀ Vₙxⁿ = 1/(1 − x² − x³). The sequence is related to sums of binomial coefficients by Vₙ = Σₖ C(k, n − 2k). The characteristic equation of the recurrence is x³ = x + 1. If the three solutions are the real root ρ and the conjugate pair x₁ and x₂, the Van der Laan numbers can be computed with the Binet formula Vₙ = aρⁿ + bx₁ⁿ + cx₂ⁿ, with real a and conjugates b and c. Since |x₁| = |x₂| < 1 and a = (ρ + 1)/(2ρ + 3), the number Vₙ is the nearest integer to aρⁿ for n ≥ 2. Coefficients a = b = c = 1 result in the Binet formula for the related sequence Pₙ = ρⁿ + x₁ⁿ + x₂ⁿ. The first few terms are 3, 0, 2, 3, 2, 5, 5, 7, 10, 12, 17, 22, 29, 39, 51, 68, 90, 119,... . This Perrin sequence has the Fermat property: if p is prime, p divides Pₚ. The converse does not hold, but the small number of pseudoprimes makes the sequence special. The only 7 composite numbers below 10⁸ to pass the test are n = 271441, 904631, 16532714, 24658561, 27422714, 27664033, 46672291. The Van der Laan numbers are obtained as integral powers of a 3 × 3 companion matrix Q of the recurrence, with real eigenvalue ρ. The trace of Qⁿ gives the Perrin numbers. Alternatively, Q can be interpreted as incidence matrix for a D0L Lindenmayer system on the alphabet {a, b, c} with a substitution rule of the form a → b, b → c, c → ab and initiator a. The series of words produced by iterating the substitution have the property that the numbers of letters a, b and c are equal to successive Van der Laan numbers, as are the lengths of the words. Associated to this string rewriting process is a set composed of three overlapping self-similar tiles called the Rauzy fractal, that visualizes the combinatorial information contained in a multiple-generation letter sequence. Geometry Partitioning the square There are precisely three ways of partitioning a square into three similar rectangles: The trivial solution given by three congruent rectangles with aspect ratio 3:1. The solution in which two of the three rectangles are congruent and the third one has twice the side lengths of the other two, where the rectangles have aspect ratio 3:2. The solution in which the three rectangles are all of different sizes and where they have aspect ratio ρ². The ratios of the linear sizes of the three rectangles are: ρ (large:medium); ρ² (medium:small); and ρ³ (large:small). The internal, long edge of the largest rectangle (the square's fault line) divides two of the square's four edges into two segments each that stand to one another in the ratio ρ. The internal, coincident short edge of the medium rectangle and long edge of the small rectangle divides one of the square's other two edges into two segments that stand to one another in the ratio ρ⁴. The fact that a rectangle of aspect ratio ρ² can be used for dissections of a square into similar rectangles is equivalent to an algebraic property of the number ρ² related to the Routh–Hurwitz theorem: all of its conjugates have positive real part. The circumradius of the snub icosidodecadodecahedron for unit edge length likewise has a closed form in terms of the plastic ratio.
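The matrix and trace facts just described can be sketched concretely. The matrix below is one standard companion matrix of x³ = x + 1, chosen here as an assumption; the article's matrix may differ by a change of basis, but the trace identity is basis-independent:

```python
# Companion matrix of x**3 = x + 1 (one standard choice; an assumption here).
Q = [[0, 1, 0],
     [0, 0, 1],
     [1, 1, 0]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_pow(m, n):
    r = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
    for _ in range(n):
        r = mat_mul(r, m)
    return r

def perrin(n):
    # trace of Q**n equals rho**n + x1**n + x2**n, the nth Perrin number
    p = mat_pow(Q, n)
    return p[0][0] + p[1][1] + p[2][2]

print([perrin(n) for n in range(10)])  # [3, 0, 2, 3, 2, 5, 5, 7, 10, 12]

# Fermat property: p divides perrin(p) for every prime p.
for p in (5, 7, 11, 13):
    assert perrin(p) % p == 0
```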
Cubic Lagrange interpolation The unique positive node that optimizes cubic Lagrange interpolation on the interval [−1, 1] can be expressed in closed form through the plastic ratio: the square of the node is the single real root of a cubic polynomial with negative discriminant, which is verified by insertion into the defining equation. With the optimal node set, the Lebesgue function evaluates to the minimal cubic Lebesgue constant at its critical point. The constants are related through an algebraic identity and can be expressed as infinite geometric series. Each term of the series corresponds to the diagonal length of a rectangle with edges in ratio ρ², which results from a relation between powers of ρ with odd exponents. The sequences of rectangles with common shrink rate converge at a single point on the diagonal of a rho-squared rectangle. Plastic spiral A plastic spiral is a logarithmic spiral that gets wider by a factor of ρ for every quarter turn. It is described by the polar equation r(θ) = a·exp(bθ), with initial radius a and parameter b = 2·ln(ρ)/π. If drawn on a rectangle with sides in ratio ρ, the spiral has its pole at the foot of altitude of a triangle on the diagonal and passes through vertices of rectangles with aspect ratio ρ which are perpendicularly aligned and successively scaled by a factor 1/ρ. In 1838 Henry Moseley noticed that whorls of a shell of the chambered nautilus are in geometrical progression: "It will be found that the distance of any two of its whorls measured upon a radius vector is one-third that of the next two whorls measured upon the same radius vector ... The curve is therefore a logarithmic spiral." Moseley thus gave the expansion rate ∜3 ≈ 1.3161 for a quarter turn. Considering the plastic ratio a three-dimensional equivalent of the ubiquitous golden ratio, it appears to be a natural candidate for measuring the shell. History and names The number ρ was first studied by Axel Thue in 1912 and by G. H. Hardy in 1919. French high school student Gérard Cordonnier discovered the ratio for himself in 1924. In his correspondence with Hans van der Laan a few years later, he called it the radiant number (le nombre radiant). Van der Laan initially referred to it as the fundamental ratio, using the plastic number from the 1950s onward. In 1944 Carl Siegel showed that ρ is the smallest possible Pisot–Vijayaraghavan number and suggested naming it in honour of Thue. Unlike the names of the golden and silver ratios, the word plastic was not intended by van der Laan to refer to a specific substance, but rather in its adjectival sense, meaning something that can be given a three-dimensional shape. This, according to Richard Padovan, is because the characteristic ratios of the number, 3/4 and 1/7, relate to the limits of human perception in relating one physical size to another. Van der Laan designed the 1967 St. Benedictusberg Abbey church to these plastic number proportions. The plastic number is also sometimes called the silver number, a name given to it by Midhat J. Gazalé and subsequently used by Martin Gardner, but that name is more commonly used for the silver ratio 1 + √2, one of the ratios from the family of metallic means first described by Vera W. de Spinadel. Gardner suggested referring to ρ as "high phi", and Donald Knuth created a special typographic mark for this name, a variant of the Greek letter phi ("φ") with its central circle raised, resembling the Georgian letter pari ("Ⴔ"). See also Solutions of equations similar to x³ = x + 1: Golden ratio – the only positive solution of the equation x² = x + 1 Supergolden ratio – the only real solution of the equation x³ = x² + 1 Notes References Further reading
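The growth law of the plastic spiral described above can be checked numerically; a small sketch with the parameter b = 2·ln(ρ)/π derived above (the initial radius is an arbitrary assumption):

```python
import math

# Plastic spiral: logarithmic spiral widening by a factor rho per quarter
# turn, r(theta) = a * exp(b * theta) with b = 2 * ln(rho) / pi.
rho = 1.3247179572447460
a = 1.0                       # initial radius (assumed)
b = 2 * math.log(rho) / math.pi

def radius(theta):
    return a * math.exp(b * theta)

# One quarter turn multiplies the radius by rho:
print(radius(math.pi / 2) / radius(0))            # 1.3247...
# One full turn multiplies it by rho**4, close to Moseley's factor of 3:
print(rho ** 4, radius(2 * math.pi) / radius(0))  # 3.0796... for both
```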
External links Plastic rectangle and Padovan sequence at Tartapelago by Giorgio Pietrocola. The digital study room of Dom Hans van der Laan at The Van der Laan Archives. Cubic irrational numbers Mathematical constants History of geometry Integer sequences Composition in visual art
Plastic ratio
[ "Mathematics" ]
2,291
[ "Sequences and series", "Integer sequences", "Mathematical structures", "History of geometry", "Recreational mathematics", "Mathematical objects", "Combinatorics", "nan", "Geometry", "Mathematical constants", "Numbers", "Number theory" ]
2,148,548
https://en.wikipedia.org/wiki/Uniformization%20%28set%20theory%29
In set theory, a branch of mathematics, the axiom of uniformization is a weak form of the axiom of choice. It states that if R is a subset of X × Y, where X and Y are Polish spaces, then there is a subset f of R that is a partial function from X to Y, and whose domain (the set of all x such that f(x) exists) equals {x ∈ X : ∃y ∈ Y (x, y) ∈ R}. Such a function f is called a uniformizing function for R, or a uniformization of R. To see the relationship with the axiom of choice, observe that R can be thought of as associating, to each element x of X, a subset of Y. A uniformization of R then picks exactly one element from each such subset, whenever the subset is non-empty. Thus, allowing arbitrary sets X and Y (rather than just Polish spaces) would make the axiom of uniformization equivalent to the axiom of choice. A pointclass Γ is said to have the uniformization property if every relation R in Γ can be uniformized by a partial function in Γ. The uniformization property is implied by the scale property, at least for adequate pointclasses of a certain form. It follows from ZFC alone that Π¹₁ and Σ¹₂ have the uniformization property. It follows from the existence of sufficient large cardinals that Π¹₂ₙ₊₁ and Σ¹₂ₙ₊₂ have the uniformization property for every natural number n. Therefore, the collection of projective sets has the uniformization property. Every relation in L(R) can be uniformized, but not necessarily by a function in L(R). In fact, L(R) does not have the uniformization property (equivalently, L(R) does not satisfy the axiom of uniformization). (Note: it's trivial that every relation in L(R) can be uniformized in V, assuming V satisfies the axiom of choice. The point is that every such relation can be uniformized in some transitive inner model of V in which the axiom of determinacy holds.) References Set theory Descriptive set theory Axiom of choice
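In the notation above, the requirement on f can be stated compactly; a sketch in standard set-theoretic notation (X, Y, R, f are as defined above):

```latex
% f uniformizes R \subseteq X \times Y:
f \subseteq R,
\qquad
\operatorname{dom}(f) = \{\, x \in X : \exists y \in Y \ (x, y) \in R \,\},
\qquad
\forall x \in \operatorname{dom}(f)\colon \ (x, f(x)) \in R .
```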
Uniformization (set theory)
[ "Mathematics" ]
398
[ "Set theory", "Mathematical logic", "Mathematical axioms", "Axiom of choice", "Axioms of set theory" ]
2,148,549
https://en.wikipedia.org/wiki/Vowel%E2%80%93consonant%20synthesis
Vowel–consonant synthesis is a type of hybrid digital–analogue synthesis developed and employed by the early Casiotone keyboards. It employs two digital waveforms, which are mixed and filtered by a static lowpass filter, with different filter positions selected for use according to presets. The filters are modeled on the frequencies present in the human vocal tract, hence the name given by Casio technicians during the research and development process. The waveforms are stored and unalterable without considerable modification, such as the addition of a computer or microcontroller, to deliver alternative control data to the sound synthesis chip. References Sound synthesis types Japanese inventions
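A rough illustration of the scheme described above: two stored waveforms are mixed and then shaped by a fixed one-pole lowpass. Every name, waveform, and the filter choice here is an illustrative assumption, not Casio's actual design:

```python
import math

SR = 16000  # sample rate in Hz (assumed)

def wavetable(harmonics, size=256):
    # a stored single-cycle waveform built from a few harmonics
    return [sum(a * math.sin(2 * math.pi * (h + 1) * i / size)
                for h, a in enumerate(harmonics)) for i in range(size)]

wave_a = wavetable([1.0, 0.0, 0.3])  # first stored waveform (illustrative)
wave_b = wavetable([0.5, 0.4, 0.2])  # second stored waveform (illustrative)

def render(freq, seconds, mix=0.5, cutoff=1200.0):
    """Mix two wavetables, then apply a static one-pole lowpass filter."""
    n = int(SR * seconds)
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / SR)  # filter coefficient
    out, y, phase = [], 0.0, 0.0
    step = freq * len(wave_a) / SR
    for _ in range(n):
        i = int(phase) % len(wave_a)
        x = (1 - mix) * wave_a[i] + mix * wave_b[i]  # mix the two waveforms
        y += alpha * (x - y)                          # static lowpass stage
        out.append(y)
        phase += step
    return out

samples = render(220.0, 0.1)  # 0.1 s of a 220 Hz tone
```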
Vowel–consonant synthesis
[ "Technology" ]
130
[ "Computing stubs", "Computer science", "Computer science stubs" ]
2,148,630
https://en.wikipedia.org/wiki/Zimelidine
Zimelidine (INN, BAN; brand names Zimeldine, Normud, Zelmid) was one of the first selective serotonin reuptake inhibitor (SSRI) antidepressants to be marketed. It is a pyridylallylamine, and is structurally different from other antidepressants. Zimelidine was developed in the late 1970s and early 1980s by Arvid Carlsson, who was then working for the Swedish company Astra AB. It was invented following a search for drugs with structures similar to brompheniramine (it is a derivative of brompheniramine), an antihistamine with antidepressant activity. Zimelidine was first sold in 1982. While zimelidine had a very favorable safety profile, within a year and a half of its introduction, rare case reports of Guillain–Barré syndrome emerged that appeared to be caused by the drug, prompting its manufacturer to withdraw it from the market. After its withdrawal, it was succeeded by fluvoxamine and fluoxetine (derived from the antihistamine diphenhydramine) in that order, and the other SSRIs. Mechanism of action The mode of action is a strong reuptake inhibition of serotonin from the synaptic cleft. Postsynaptic receptors are not acted upon. Other uses Zimelidine was reported by Montplaisir and Godbout to be very effective for cataplexy in 1986, back when this was usually controlled by tricyclic antidepressants, which often had anticholinergic effects. Zimelidine was able to improve cataplexy without causing daytime sleepiness. Side effects Most often reported were: Dry mouth, dryness of pharyngeal and nasal membranes Increased sweating (hyperhidrosis) Vertigo Nausea Interactions MAO inhibitors — severe or life-threatening reactions possible See also Ranitidine RTI-353 Triprolidine SB-649,915 References Alkene derivatives Dimethylamino compounds Antidepressants Drugs developed by AstraZeneca Hepatotoxins 4-Bromophenyl compounds 3-Pyridyl compounds Selective serotonin reuptake inhibitors Withdrawn drugs
Zimelidine
[ "Chemistry" ]
468
[ "Drug safety", "Withdrawn drugs" ]
2,148,656
https://en.wikipedia.org/wiki/Packet%20writing
Packet writing (or incremental packet writing, IPW) is an optical disc recording technology used to allow write-once and rewritable CD and DVD media to be used in a similar manner to a floppy disk from within the operating system. Details Packet writing allows users to create, modify, and delete files and directories on demand without the need to burn a whole disc. Packet writing technology achieves this by writing data in incremental blocks rather than in a single block. Deleting files and directories of a CD-R using packet writing technology does not recover the space occupied by these objects but, rather, they are simply marked as being deleted (making them effectively hidden). Similarly, changes to files cause new instances to be created instead of replacing the original files. Because of this, the available space on a non-rewritable medium using packet writing technology will decrease every time its content is modified. The most common file system for packet writing systems is UDF. Due to the characteristics of optical rewritable media such as CD-RWs and DVD-RWs, the ability of data sectors to hold their contents diminishes when changing them frequently (since re-crystallized alloy de-crystallizes). To cope with this the packet writing system can remap bad sectors with good sectors as required. These bad sectors cannot be recovered by formatting the media. Software Packet writing is most popularly implemented by Microsoft since Windows Vista, where it is referred to as the Live File System. Software implementing packet writing includes: Nero InCD Drive Letter Access Linux (since 2.6.8) References Optical disc authoring Optical computer storage media
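The free-space bookkeeping described above can be illustrated with a toy model. This is a sketch of the general idea only, not of UDF or of any real driver, and all names are hypothetical:

```python
class WriteOncePacketDisc:
    """Toy model of packet writing on write-once media: every change
    appends a new packet; 'deleting' only hides a directory entry, so
    free space shrinks monotonically."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.used = 0
        self.directory = {}  # name -> (start_block, size, hidden)

    def write(self, name, size_blocks):
        if self.used + size_blocks > self.capacity:
            raise IOError("disc full")
        # a modified file gets a fresh instance; the old data stays on disc
        self.directory[name] = (self.used, size_blocks, False)
        self.used += size_blocks

    def delete(self, name):
        start, size, _ = self.directory[name]
        self.directory[name] = (start, size, True)  # marked hidden only

    def free_blocks(self):
        return self.capacity - self.used  # never grows back

disc = WriteOncePacketDisc(100)
disc.write("a.txt", 10)
disc.delete("a.txt")
print(disc.free_blocks())  # 90 -- the space is not recovered
```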
Packet writing
[ "Technology" ]
345
[ "Multimedia", "Computing stubs", "Optical disc authoring", "Computer hardware stubs" ]
2,148,734
https://en.wikipedia.org/wiki/Drum%20replacement
Drum replacement is the practice, in modern music production, of an engineer or producer recording a live drummer and replacing (or adding to) the sound of a particular drum with a pre-recorded sample. For example, a drummer might play a beat, whereupon the engineer might then replace all of the snare hits with the sound of a hand-clap. It is considered by some to be one of the most arcane practices of the modern music production industry and is an example of the considerable influence of computers in modern music, even in genres not strictly classified as "electronic music." Origins The practice is an extension of the recording techniques of the 1970s through to the 1980s, wherein the constant search for better or "more perfect" sound led to a variety of techniques being tested, including the extensive use of drum machines. Among these techniques was drum replacement, which was pioneered by producer Roger Nichols while in the studio with Steely Dan in the late '70s, and has grown in both popularity and complexity since. One of the most common uses of this technique is the replacing of every snare hit in a performance (which may or may not sound subjectively "good") with an "ideal" snare drum hit. Should the decision be made to use drum replacement techniques, the actual implementation of the practice usually falls to an audio engineer during the mixing stage. Association Drum replacing is often mentioned, along with autotune, harmonizers, and advanced compressors, as being symptomatic of the "artificial nature" of modern western music by certain critics. Some critics suggest that the practice defeats the purpose of having a live drummer as opposed to a drum machine, since the result is effectively exactly the same as what a drum machine would produce if the drum machine had a custom sample recorded for it by the engineer. Others laud it as one of the subtleties of studio technique, used by engineers to give their craft more complexity in an increasingly automated world. References External links An example of drum replacement software - Slate Digital TRIGGER An example of drum replacement software - Drumagog Audio engineering
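The basic idea can be sketched as threshold-based transient detection followed by sample overlay. Real drum-replacement tools detect transients far more robustly; everything here (names, threshold, hold window) is an illustrative assumption:

```python
def replace_hits(track, sample, threshold=0.6, hold=32):
    """Naive drum replacement: find peaks above `threshold` in a mono
    track (floats in [-1, 1]) and mix the replacement sample in at
    each detected hit."""
    out = list(track)
    i = 0
    while i < len(track):
        if abs(track[i]) >= threshold:
            for j, s in enumerate(sample):   # overlay the sample
                if i + j < len(out):
                    out[i + j] += s
            i += max(hold, len(sample))      # skip past this hit
        else:
            i += 1
    return out
```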
Drum replacement
[ "Engineering" ]
422
[ "Electrical engineering", "Audio engineering" ]
2,148,918
https://en.wikipedia.org/wiki/Anionic%20addition%20polymerization
In polymer chemistry, anionic addition polymerization is a form of chain-growth polymerization or addition polymerization that involves the polymerization of monomers initiated with anions. The type of reaction has many manifestations, but traditionally vinyl monomers are used. Often anionic polymerization involves living polymerizations, which allows control of structure and composition. History As early as 1936, Karl Ziegler proposed that anionic polymerization of styrene and butadiene by consecutive addition of monomer to an alkyl lithium initiator occurred without chain transfer or termination. Twenty years later, living polymerization was demonstrated by Michael Szwarc and coworkers. In one of the breakthrough events in the field of polymer science, Szwarc elucidated that electron transfer occurred from the radical anion of sodium naphthalene to styrene. This resulted in the formation of an organosodium species, which rapidly added styrene to form a "two-ended living polymer." In an important aspect of his work, Szwarc employed the aprotic solvent tetrahydrofuran. Being a physical chemist, Szwarc elucidated the kinetics and the thermodynamics of the process in considerable detail. At the same time, he explored the structure-property relationship of the various ion pairs and radical ions involved. This work provided the foundations for the synthesis of polymers with improved control over molecular weight, molecular weight distribution, and architecture. The use of alkali metals to initiate polymerization of 1,3-dienes led to the discovery by Stavely and co-workers at Firestone Tire and Rubber Company of cis-1,4-polyisoprene. This sparked the development of commercial anionic polymerization processes that utilize alkyllithium initiators. Roderic Quirk won the 2019 Charles Goodyear Medal in recognition of his contributions to anionic polymerization technology. He was introduced to the subject while working in a Phillips Petroleum lab with Henry Hsieh. Monomer characteristics Two broad classes of monomers are susceptible to anionic polymerization. Vinyl monomers have the formula CH2=CHR; the most important are styrene (R = C6H5), butadiene (R = CH=CH2), and isoprene (R = C(Me)=CH2). A second major class of monomers are acrylic monomers, such as acrylonitrile, methacrylate, cyanoacrylate, and acrolein. Other vinyl monomers include vinylpyridine, vinyl sulfone, vinyl sulfoxide, and vinyl silanes. Cyclic monomers Many cyclic compounds are susceptible to ring-opening polymerization, including epoxides, cyclic trisiloxanes, some lactones, lactides, cyclic carbonates, and amino acid N-carboxyanhydrides. In order for polymerization to occur with vinyl monomers, the substituents on the double bond must be able to stabilize a negative charge. Stabilization occurs through delocalization of the negative charge. Because of the nature of the carbanion propagating center, substituents that react with bases or nucleophiles either must not be present or be protected. Initiation Initiators are selected based on the reactivity of the monomers. Highly electrophilic monomers such as cyanoacrylates require only weakly nucleophilic initiators, such as amines, phosphines, or even halides. Less reactive monomers such as styrene require powerful nucleophiles such as butyl lithium. Initiators of intermediate strength are used for monomers of intermediate reactivity such as vinylpyridine. The solvents used in anionic addition polymerizations are determined by the reactivity of both the initiator and nature of the propagating chain end.
Anionic species with low reactivity, such as those derived from heterocyclic monomers, can use a wide range of solvents. Initiation by electron transfer Initiation of styrene polymerization with sodium naphthalene proceeds by electron transfer from the naphthalene radical anion to the monomer. The resulting radical dimerizes to give a disodium compound, which then functions as the initiator. Polar solvents are necessary for this type of initiation both for stability of the anion-radical and to solvate the cation species formed. The anion-radical can then transfer an electron to the monomer. Initiation can also involve the transfer of an electron from the alkali metal to the monomer to form an anion-radical. Initiation occurs on the surface of the metal, with the reversible transfer of an electron to the adsorbed monomer. Initiation by strong anions Nucleophilic initiators include covalent or ionic metal amides, alkoxides, hydroxides, cyanides, phosphines, amines and organometallic compounds (alkyllithium compounds and Grignard reagents). The initiation process involves the addition of a neutral (B:) or negative (:B−) nucleophile to the monomer. The most commercially useful of these initiators have been the alkyllithium initiators. They are primarily used for the polymerization of styrenes and dienes. Monomers activated by strong electron-withdrawing groups may be initiated even by weak anionic or neutral nucleophiles (i.e. amines, phosphines). The most prominent example is the curing of cyanoacrylate, which constitutes the basis for superglue. Here, only traces of basic impurities are sufficient to induce an anionic addition polymerization or zwitterionic addition polymerization, respectively. Propagation Propagation in anionic addition polymerization results in the complete consumption of monomer. This stage is often fast, even at low temperatures. Living anionic polymerization Living anionic polymerization is a living polymerization technique involving an anionic propagating species. Living anionic polymerization was demonstrated by Szwarc and co-workers in 1956. Their initial work was based on the polymerization of styrene and dienes. One of the remarkable features of living anionic polymerization is that the mechanism involves no formal termination step. In the absence of impurities, the carbanion would still be active and capable of adding another monomer. The chains will remain active indefinitely unless there is inadvertent or deliberate termination or chain transfer. This gave rise to two important consequences: The number average molecular weight, Mn, of the polymer resulting from such a system can be calculated from the amount of consumed monomer and the initiator used for the polymerization, as the degree of polymerization is the ratio of the moles of the monomer consumed to the moles of the initiator added: Mn = Mo·[M]o/[I] (a worked numeric example is given at the end of this entry), where Mo = formula weight of the repeating unit, [M]o = initial concentration of the monomer, and [I] = concentration of the initiator. All the chains are initiated at roughly the same time. The final result is that the polymer synthesis can be done in a much more controlled manner in terms of the molecular weight and molecular weight distribution (Poisson distribution). The following experimental criteria have been proposed as a tool for identifying a system as a living polymerization system. Polymerization proceeds until the monomer is completely consumed and resumes upon further monomer addition. Constant number of active centers or propagating species.
Poisson distribution of molecular weight. Chain end functionalization can be carried out quantitatively. However, in practice, even in the absence of terminating agents, the concentration of the living anions will reduce with time due to a decay mechanism termed spontaneous termination. Consequences of living polymerization Block copolymers Synthesis of block copolymers is one of the most important applications of living polymerization as it offers the best control over structure. The nucleophilicity of the resulting carbanion will govern the order of monomer addition, as the monomer forming the less nucleophilic propagating species may inhibit the addition of the more nucleophilic monomer onto the chain. An extension of the above concept is the formation of triblock copolymers where each step of such a sequence aims to prepare a block segment with predictable, known molecular weight and narrow molecular weight distribution without chain termination or transfer. Sequential monomer addition is the dominant method, although this simple approach suffers some limitations. Alternative strategies enable the synthesis of linear block copolymer structures that are not accessible via sequential monomer addition. For common A-b-B structures, sequential block copolymerization gives access to well defined block copolymers only if the crossover reaction rate constant is significantly higher than the rate constant of the homopolymerization of the second monomer, i.e., kAB >> kBB. End-group functionalization/termination One of the remarkable features of living anionic polymerization is the absence of a formal termination step. In the absence of impurities, the carbanion would remain active, awaiting the addition of new monomer. Termination can occur through unintentional quenching by impurities, often present in trace amounts. Typical impurities include oxygen, carbon dioxide, or water. Intentional termination allows the introduction of tailored end groups. Living anionic polymerization allows the incorporation of functional end-groups, usually added to quench polymerization. End-groups that have been used in the functionalization with α-haloalkanes include -NH2, -OH, -SH, -CHO, -COCH3, -COOH, and epoxides. An alternative approach for functionalizing end-groups is to begin polymerization with a functional anionic initiator. In this case, the functional groups are protected since the chain ends of the anionic polymer are strongly basic. This method leads to polymers with controlled molecular weights and narrow molecular weight distributions. Additional reading Cowie, J.; Arrighi, V. Polymers: Chemistry and Physics of Modern Materials; CRC Press: Boca Raton, FL, 2008. References Polymerization reactions
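A worked numeric example of the molecular-weight relation Mn = Mo·[M]o/[I] given above (the concentrations are illustrative assumptions, not values from the text):

```python
# Worked example of Mn = Mo * [M]o / [I] for living anionic styrene
# polymerization (illustrative numbers).
Mo = 104.15            # g/mol, formula weight of the styrene repeat unit
monomer_conc = 1.0     # mol/L, initial monomer concentration [M]o
initiator_conc = 0.01  # mol/L, initiator concentration [I]

degree_of_polymerization = monomer_conc / initiator_conc  # Xn = 100
Mn = Mo * degree_of_polymerization
print(Mn)  # 10415.0 g/mol
```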
Anionic addition polymerization
[ "Chemistry", "Materials_science" ]
2,161
[ "Polymerization reactions", "Polymer chemistry" ]
2,148,933
https://en.wikipedia.org/wiki/Social%20psychiatry
Social psychiatry is a branch of psychiatry that studies how the social environment impacts mental health and mental illness. It applies a cultural and societal lens on mental health by focusing on mental illness prevention, community-based care, mental health policy, and societal impact of mental health. It is closely related to cultural psychiatry and community psychiatry. Social psychiatry research is interdisciplinary by nature. It takes an epidemiological research approach and involves collaboration between psychiatrists and social scientists across sociology, anthropology, and social psychology. It has been associated with the development of community-based care and therapeutic communities, and emphasizes the effect of socioeconomic factors on mental illness. Social psychiatry can be contrasted with biopsychiatry, which focuses on genetics, brain neurochemistry and medication. Social psychiatry has influenced U.S. social policy and social movements, including the community mental health movement and the era of deinstitutionalization. History Early 20th century The term “social psychiatry” can be traced back to 1903 in a paper by German psychiatrist Georg Ilberg, ‘Soziale Psychiatrie.’ In it, Ilberg defined social psychiatry as the study of factors that affect the mental health of populations and of ways to prevent mental illness in society. Ilberg argued that there were many factors that influenced mental health, but the majority of mental illnesses were hereditary. In 1911, German psychiatrist Max Fischer defined social psychiatry as "the act of providing psychiatric care outside of asylums", and advocated for the creation of welfare centers to deliver psychiatric care outside of asylums. At this time in Germany, social psychiatry emphasized protection of the general public from those deemed mentally ill and 'antisocial.' United States mental health community movement The mental hygiene movement in the United States marked a shift from individual responsibility for mental health to how public health and society could promote it. In 1909, the National Committee for Mental Hygiene (now called Mental Health America) was created to focus on mental illness prevention and mental health promotion. In 1915, the National Committee for Mental Hygiene administered a series of social surveys that explored mental illness outside of asylums and institutions. These surveys uncovered the extent of mental health challenges in society, and led to the development of community-based mental hygiene clinics. A psychiatrist, social workers, and a psychologist staffed these clinics, and provided outpatient services and public health and educational initiatives to prevent mental illness. This would later become a key component of the mental health community movement, which influenced the deinstitutionalization era. 1930s-1940s Prior to World War II, the majority of research related to social psychiatry focused on the impact of urbanization on serious psychiatric disorders like schizophrenia. One of the first social psychiatric studies, ‘Mental Disorders in Urban Areas: An Ecological Study of Schizophrenia and Other Psychoses’, challenged this focus. The study, published in 1939 by University of Chicago sociologists Robert Faris and Warren Dunham, applied a social sciences methodological approach to the study of mental illness.
Their findings introduced the concept of social isolation and poverty as factors in mental illness, whereas existing research had primarily focused on urban versus rural environments. This study captured the attention of United States politicians and policy officials and helped usher in a wave of policy reforms in the coming years. World War II Social psychiatry turned its focus to veteran mental health as a result of World War II. The experiences of soldiers at war and coming home inspired psychiatrists to study the epidemiology of mental illness and the factors that exacerbate it. In particular, psychiatrists in the United Kingdom and United States began to study the impact of the environment on mental illness in soldiers and helped usher in the community mental health movement. ‘Therapeutic communities’ started forming across the UK, and community mental health services for outpatient care grew in number across the United States. In 1946, the United States passed the National Mental Health Act, citing the declining mental health of veterans during World War II. This act provided federal funding for prevention and treatment of mental illness. A direct result of this act was the creation of the National Institute of Mental Health (NIMH) in 1949. NIMH was formed to shift care from psychiatric hospitals to community-based services. 1950s-1960s As the community mental health movement gained traction, President John F. Kennedy signed the Community Mental Health Act in 1963, which provided $2.9 billion to build community mental health centers across the country. This ushered in the deinstitutionalization era, which marked widespread closure of state psychiatric hospitals in favor of community mental health services. Lyndon B. Johnson’s War on Poverty, declared in his State of the Union in 1964, further advanced social psychiatry in United States public policy. It justified more spending on social welfare programs and community mental health centers. However, its focus on the ‘culture of poverty’ rather than poverty itself led to criticism among social scientists and psychiatrists. The decline of social psychiatry The shifting terrain of American politics impacted the influence of social psychiatry at a national level. When President Nixon took office in 1969, he dismantled many of the social welfare programs implemented under the War on Poverty. In addition, America’s involvement in the Vietnam War pulled attention away from domestic affairs and reallocated social welfare spending to war efforts. The 1960s marked several movements that questioned mainstream psychiatry, which social psychiatry had become part of as a result of the policy enacted from its research. Movements like anti-psychiatry, radical psychiatry, and the psychiatric survivors movement protested psychiatric treatments like lobotomies, ECT, and insulin shock therapy. Although social psychiatry was not involved in these therapies, its mainstream status as a field resulted in eroded trust among psychotherapists and psychiatrists. Biological psychiatry as a field was rising in popularity during these counter-culture movements, further eroding social psychiatry as a field. Technological advancements in brain imaging techniques influenced this shift in focus and provided genetic and biological explanations for psychiatric disorders, rather than social explanations.
These advancements in neurology, psychopharmacology, and genetics fueled the rise of pharmacological drugs like Prozac, for mental disorders and deemphasized the need for psychotherapy. By the 1980s, biological psychiatry had overtaken social psychiatry as the premier mode of research in mental illness. Core theories and concepts Social psychiatry emphasizes how social, cultural, and environmental factors influence mental health and illness. It focuses on understanding and addressing the social determinants of mental health, the role of relationships and community in psychological well-being, and the prevention and treatment of mental disorders within broader social contexts. Psychobiology Psychobiology, a term first coined by Adolf Meyer in the early 20th century, refers to an interdisciplinary approach to understanding behavior and mental health by integrating biological, psychological, and social factors. Meyer, often considered the father of modern American psychiatry, advocated for a holistic perspective that examined the interplay between an individual’s biological constitution, psychological experiences, and social environment. This approach was unique because it diverged from models that focused exclusively on either biological or psychodynamic explanations for mental illness. It emphasized the importance of context, life history, and adaptability in understanding human behavior. Psychobiology laid the groundwork for recognizing the role of environmental and relational factors in mental health. By framing psychiatric disorders as dynamic processes influenced by life events and social interactions, psychobiology inspired approaches that consider patients within their broader environment. This influenced later theories, including Harry Stack Sullivan's interpersonal theory and the study of social determinants of health. It also reinforced the need for interdisciplinary collaboration between psychiatry and sociology, anthropology, and public health. Interpersonal theory Harry Stack Sullivan's interpersonal theory emphasizes the role of interpersonal relationships in shaping personality development and mental health, arguing that individuals' personalities are formed and expressed within the context of their social interactions. In his book The Interpersonal Theory of Psychiatry (1955), Sullivan argued that psychiatric disorders are best understood through interpersonal interactions, not just internal conflicts. By integrating social and cultural influences with biological and intra-psychic models, Sullivan believed it was crucial to examine societal structures and interpersonal systems in order to address mental health challenges. Sullivan proposed that personality develops through relationships and that disruptions in these interactions often underlie psychological distress. He introduced the idea that the "self" is shaped by social experiences and outlined a developmental framework linking psychological well-being to navigating interpersonal challenges at different life stages, such as trust in infancy and intimacy in adolescence. Interpersonal theory helped advance social psychiatry by emphasizing the significance of social and interpersonal factors in mental health, shifting the focus from purely biological explanations to a more holistic understanding of mental illness. 
By incorporating interpersonal dynamics and social influences into psychiatric theory, Sullivan shifted the field toward a more holistic understanding of mental health, paving the way for innovations such as family therapy, community mental health programs, and the exploration of social determinants of health. Biopsychosocial model The biopsychosocial model, developed by George Engel in 1977, integrates biological, psychological, and social factors to provide a comprehensive understanding of mental health. Social psychiatry builds on this framework to design interventions around community-based care and mental illness prevention. Social determinants of mental health Social psychiatry emphasizes how different societal and environmental factors influence mental health and contribute to psychiatric disorders. Below are some of the core factors the field has identified. Housing and Urbanization Housing is recognized as a fundamental determinant of mental health, serving as both a basic human need and a stabilizing factor in people’s lives. Social psychiatry holds that housing instability, such as frequent moves, evictions, or homelessness, generates stress, disrupts social support networks, and carries a higher risk of psychiatric disorders. Poor living conditions, including overcrowding, unsafe environments, and exposure to hazards like mold or lead, further exacerbate mental health challenges, particularly in children. Residential segregation, often resulting from systemic issues like redlining and gentrification, concentrates marginalized communities in under-resourced areas, perpetuating mental health disparities. Conversely, stable and affordable housing provides psychological safety, fostering a sense of control and security that protects mental well-being. Poverty Social psychiatry views poverty as a critical determinant of mental health, emphasizing its role in creating chronic stress, limited access to resources, and systemic barriers to care. Research has shown that these barriers can generate psychological distress and increase vulnerability to conditions like depression, anxiety, and substance abuse. Poverty often leads to social isolation, a key topic in social psychiatry. Faris and Dunham’s 1939 study was among the first to identify social isolation as a determinant of mental health. Childhood poverty, social psychiatry argues, can impair development and is associated with higher risk for adverse mental health outcomes. In the medical community, poverty is considered an Adverse Childhood Experience (ACE). Class and Socioeconomic Status Social psychiatry examines class and socioeconomic status (SES) as factors that shape mental health by influencing access to power, privilege, and resources. Research shows that employment instability, low-paying jobs, and poor working conditions are linked to higher rates of psychological distress. In his book The Impact of Inequality: How to Make Sick Societies Healthier, British epidemiologist Richard G. Wilkinson outlines how lower-class individuals can experience "status anxiety" or humiliation tied to social stratification, which impacts mental health. Class disparities can also shape perceptions and treatment of mental illness, with working-class populations often encountering greater stigma and fewer resources. Social psychiatry also emphasizes the importance of intersectionality; the interplay of class, race, and gender can amplify risks for mental illness.
Education Social psychiatry views education as both a pathway to improved mental health and a source of stress or inequity. Higher levels of education often correlate with better mental health outcomes due to increased economic opportunities, problem-solving skills, and social mobility.  A 2022 study, "Mental health effects of education", found that an extra year of education was associated with lower rates of depression and anxiety among high school students, highlighting that the impact was even stronger for women and individuals in rural communities. According to social psychiatry, disparities in educational quality and access mirror broader socioeconomic and racial inequities, perpetuating cycles of disadvantage that affect mental health. Negative school environments—characterized by bullying, exclusion, or lack of culturally responsive curricula—can harm students' mental well-being, particularly those from minority or marginalized groups. Race and ethnicity Social psychiatrists have done research on how race and ethnicity influences mental health, particularly in the context of systemic racism, migration, immigration, and globalization. Experiences of discrimination and institutional bias in areas like housing, employment, and healthcare contribute to chronic stress and poorer mental health outcomes for marginalized racial groups. Immigrants and racial minorities may also face acculturation stress and identity conflicts, further affecting their mental well-being. However, some communities develop strong cultural or social bonds that act as protective factors against the effects of racism, a key area of interest in social psychiatry. Racial disparities in mental health care access, often influenced by cultural stigma, exacerbate untreated conditions and highlight the need for culturally informed interventions. Social psychiatry leverages these insights to advocate for policies that promote housing security, equitable education, anti-discrimination measures, and economic redistribution. These systemic changes aim to address the root causes of mental health disparities and improve overall population mental health. Gender Epidemiological studies have consistently found that women experience more mental health disorders compared to men. Depression, post-traumatic stress disorder (PTSD), anxiety, and eating disorders are among the mental health disorders women experience at a higher frequency than men. The American Psychiatric Association attributes some of these to higher social risk factors. Women experience more poverty compared to men, they are more likely to be victims of violence, and they earn less income than men. The Stress-Vulnerability Model The Stress-Vulnerability Model explains the development and progression of mental health disorders as a result of an interaction between an individual’s biological or psychological vulnerabilities and external stressors. Vulnerabilities can include genetic predispositions, neurobiological abnormalities, or personality traits, while stressors refer to environmental and social factors such as trauma, poverty, discrimination, or interpersonal conflict. The model outlines how mental health problems progress to full-blown disorders when stressors exceed an individual’s coping resources and resilience. In social psychiatry, the Stress-Vulnerability Model provides a framework for understanding how social and environmental factors contribute to mental illness. 
This perspective shifts the focus from purely biological causes to the broader social context, recognizing that reducing external stressors and enhancing social support can mitigate mental health risks. Community-based care and prevention Social psychiatry advocates for community-based care and preventive measures to address mental health issues, rather than solely relying on traditional hospital-based care. Psychiatrist Gerald Caplan laid the foundation for preventive mental health care in his 1964 book, Principles of Preventive Psychiatry, where he argues that early intervention in community settings can reduce mental illness stigma and promote mental health. By shifting the focus from individual pathology to the social context, community-based care promotes recovery, reduces stigma, and improves overall well-being. Community-based care offers a range of services, including early intervention, crisis intervention, medication management, therapy, and rehabilitation. These services are delivered in various settings, such as clinics, schools, workplaces, and community centers. By providing accessible and culturally competent care, community-based programs aim to reduce disparities in mental health care and improve outcomes for individuals with mental illness. These programs often involve collaboration between healthcare providers, social workers, educators, and community members to create supportive environments and empower individuals to build resilience. Cultural norms, values, and beliefs Social psychiatry acknowledges that cultural norms, values, and beliefs shape the expression, diagnosis, and treatment of mental illnesses. This perspective advocates for culturally competent care to adjust for biases in psychiatry and public health services. In this sense, social psychiatry mirrors cultural psychiatry by emphasizing how mental illnesses and psychiatric disorders vary across cultural contexts. Cultural psychiatry outlines how different cultures view mental health differently and how that can deter people from seeking help. Social network and social support theories Social psychiatry holds that social networks, support, and communities positively influence mental health and wellbeing. Leveraging social network theory and social support models, it emphasizes the importance of fostering strong social ties and support systems to increase resilience and wellbeing. The field also focuses on the negative effects of social isolation, arguing that social isolation is a key contributor to mental illnesses. This phenomenon, social psychiatrists argue, is closely tied to poverty and urbanization. Research Social psychiatry integrates different social factors of mental health to advocate for systemic changes, such as improving living conditions, addressing discrimination, and ensuring equitable access to mental health care. By combining quantitative and qualitative methods, social psychiatry research aims to identify effective interventions, reduce stigma, and promote mental health equity. Epidemiology Social psychiatry relies on epidemiological data to understand the distribution and determinants of mental health. This involves studying the patterns of mental illness within populations, including prevalence and incidence rates. Researchers often analyze risk factors and protective factors associated with mental disorders. Life Events Research Life events research investigates how significant life events, such as loss, trauma, or major life transitions, can impact mental health.
It is one example of a longitudinal approach to social psychiatry. By following individuals over time, researchers can identify the long-term consequences of early life experiences and social factors on mental health. Landmark studies Several landmark studies have significantly contributed to the field of social psychiatry. Faris and Dunham's Chicago study (1939) This 1939 study, ‘Mental Disorders in Urban Areas: An Ecological Study of Schizophrenia and Other Psychoses’, was one of the first studies to link mental health issues with social and environmental factors, shifting the focus from purely biological explanations to sociological ones. University of Chicago sociologists Robert E. L. Faris and H. Warren Dunham aimed to examine the relationship between urban environmental factors and rates of mental illness, particularly schizophrenia. Using psychiatric hospital admission records in Chicago, they mapped the distribution of schizophrenia and other psychoses across different neighborhoods, correlating mental illness prevalence with socioeconomic conditions. They found higher rates of schizophrenia in areas with significant poverty, high population turnover, and social disintegration, particularly in inner-city neighborhoods. Mental illness rates decreased as neighborhoods became more stable and affluent. This study laid the foundation for future research on social determinants of mental health and legitimized the role of social science methods in psychiatric research. It also influenced public health policies over the next few decades, including the War on Poverty and the community health movements. Hollingshead and Redlich's New Haven study (1958) Social Class and Mental Illness was a collaboration between psychiatrist Frederick Redlich and sociologist August Hollingshead. It explored the relationship between social class and mental illness in New Haven, Connecticut, focusing on disparities in access to and types of mental health care. Using a classification system that divided participants into five social classes, Hollingshead and Redlich analyzed patterns of mental illness diagnoses, treatment settings, and care quality. They found that individuals in lower social classes experienced higher rates of mental illness but were more likely to receive custodial care, while those in upper classes accessed psychotherapy and higher-quality treatments. By highlighting the systemic inequities in mental health care based on social class, this study contributed to social psychiatry by emphasizing the need for equitable mental health services and policy reform to reduce inequities in psychiatric care. Midtown Manhattan study (1962) Mental Health in the Metropolis, published in 1962, was one of the earliest urban studies to systematically document the social factors of mental health. The study, conducted by two sociologists, one anthropologist, and two psychiatrists, explored the prevalence of mental disorders in urban Midtown Manhattan. This study provided critical evidence of how urban environments impact mental well-being, and emphasized the significance of cultural diversity and social stressors in shaping mental health. It highlighted that social factors like poverty were more influential on mental health than simply living in urban environments. Leighton and Murphy’s Stirling County study (1948 - present) The Stirling County Study is a longitudinal investigation of the social determinants of mental illness in a rural Canadian community.
Sociologist and psychiatrist Alexander Leighton and psychiatric epidemiologist Jane Murphy founded the study in 1948, and it is still in effect today. It uses community surveys, interviews, and follow-ups to gather comprehensive data on mental health and social conditions of individuals in the Stirling county community. The long-term nature of this study provided valuable insights for social psychiatry by demonstrating that mental illness did exist in rural communities, not just urban ones. The study revealed that mental illness rates were significantly influenced by social factors, including economic deprivation, social isolation, and family dynamics. Its long-term perspective provided insights into how changing social and economic conditions affect mental health over time. This research also highlighted the unique challenges of mental health in rural settings and helped influence the community mental health center movement. Current work Social psychiatry can be most effectively applied in helping to develop mental health promotion and prevent certain mental illnesses by educating individuals, families, and societies. Facilitating the social inclusion of people with mental health problems is a major focus of modern social psychiatry. Social psychiatry research today spans many topics, including the effect of the pandemic, social media, race, and poverty on mental health. It also researches social causes and implications of common mental disorders like depression, anxiety, eating disorders, and substance abuse. Modern social psychiatry research continues to address a wide range of topics, including: Impact of the COVID-19 pandemic on mental health How technology and social media impacts mental health How race, poverty, and socioeconomic status impact mental health Social determinants of substance use disorders Development and evaluation of culturally competent interventions References External links https://web.archive.org/web/20050327051651/http://www.sanctuaryweb.com/main/social_psychiatry.htm http://library.cpmc.columbia.edu/hsl/archives/findingaids/opler.html Faculty of Rehabilitation and Social Psychiatry of the Royal College of Psychiatrists in the UK. Social psychiatry and public mental health: present situation and future objectives. Time for rethinking and renaissance? Psychiatric specialities Social psychology Sociology Schizophrenia
Social psychiatry
[ "Biology" ]
4,608
[ "Behavioural sciences", "Behavior", "Sociology" ]
2,148,941
https://en.wikipedia.org/wiki/Four%20Symbols
[Image: Neidan illustration of "Bringing Together the Four Symbols", from the 1615 Xingming guizhi]

The Four Symbols are mythological creatures appearing among the Chinese constellations along the ecliptic, and viewed as the guardians of the four cardinal directions. These four creatures are also referred to by a variety of other names, including "Four Guardians", "Four Gods", and "Four Auspicious Beasts". They are the Azure Dragon of the East, the Vermilion Bird of the South, the White Tiger of the West, and the Black Tortoise (also called "Black Warrior") of the North. Each of the creatures is most closely associated with a cardinal direction and a color, but also additionally represents other aspects, including a season of the year, an emotion, a virtue, and one of the Chinese "five elements" (wood, fire, earth, metal, and water). Each has been given its own individual traits, origin story, and reason for being. Symbolically, and as part of spiritual and religious belief and meaning, these creatures have been culturally important across countries in the Sinosphere.

History

Depictions of mythological creatures clearly ancestral to the modern set of four creatures have been found throughout China. Currently, the oldest known depiction was found in 1987 in a tomb in Xishuipo in Puyang, Henan, which has been dated to approximately 5300 BC. In the tomb, labeled M45, immediately adjacent to the remains of the main occupant to the east and west were found mosaics made of clam shells and bones forming images closely resembling the Azure Dragon and White Tiger, respectively.

The modern standard configuration was settled much later, with variations appearing throughout Chinese history. For example, the Rong Cheng Shi manuscript recovered in 1994, which dates to the Warring States period (c. 475–221 BCE), gives five directions rather than four and places the animals differently. According to that document, Yu the Great gave directional banners to his people, marked with the following insignia: the north with a bird, the south with a snake, the east with the sun, the west with the moon, and the center with a bear.

The Chinese classic Book of Rites mentions the Vermilion Bird, Black Tortoise (Dark Warrior), Azure Dragon, and White Tiger as heraldic animals on war flags; they were the names of asterisms associated with the four cardinal directions: South, North, East, and West, respectively.

In Taoism, the Four Symbols have been assigned human identities and names: the Azure Dragon is named Meng Zhang, the Vermilion Bird is called Ling Guang, the White Tiger Jian Bing, and the Black Tortoise Zhi Ming. The Japanese equivalents, in corresponding order, are Seiryū (east), Suzaku (south), Byakko (west), and Genbu (north).

The colours associated with the four creatures can be said to match the colours of soil in the corresponding areas of China: the bluish-grey water-logged soils of the east, the reddish iron-rich soils of the south, the whitish saline soils of the western deserts, the black organic-rich soils of the north, and the yellow soils of the central loess plateau.

In I Ching

A chapter in the I Ching describes the origins of the Four Symbols.

Correspondence with the Five Phases

These mythological creatures have also been syncretized into the Five Phases system (Wuxing).
The Azure Dragon of the East represents Wood, the Vermilion Bird of the South represents Fire, the White Tiger of the West represents Metal, and the Black Tortoise (or Black Warrior) of the North represents Water. In this system, the fifth principle, Earth, is represented by the Yellow Dragon of the Center.

In popular culture

The Four Symbols are represented in a line of skins inspired by them for characters of the first-person shooter Overwatch. In the game's 2018 Chinese New Year (Year of the Dog) event, playable characters Zarya, Mercy, Pharah, and Genji received cosmetic skins based on the Black Tortoise (Xuanwu), the Vermilion Bird (Zhuque), the Azure Dragon (Qinglong), and the White Tiger (Baihu), respectively.

In the Nintendo DS games Pokémon Black and White, Tornadus, Thundurus, and Landorus have forms that are inspired by the Vermilion Bird, the Azure Dragon, and the White Tiger, respectively. Later, Pokémon Legends: Arceus completed the set with Enamorus, which is inspired by the Black Tortoise.

See also

References

Chinese astrological signs
Chinese constellations
Chinese legendary creatures
Chinese mythology
Cultural lists
Four Symbols
[ "Astronomy" ]
987
[ "Chinese constellations", "Constellations" ]
2,148,946
https://en.wikipedia.org/wiki/Ride%20height
Ride height or ground clearance is the amount of space between the base of an automobile tire and the lowest point of the automobile, typically the bottom exterior of the differential housing (even though the lower shock mounting point may be lower); or, more properly, the shortest distance between a flat, level surface and the lowest part of a vehicle other than those parts designed to contact the ground (such as tires, tracks, skis, etc.). Ground clearance is measured with standard vehicle equipment, and for cars, is usually given with no cargo or passengers.

Function

Ground clearance is a critical factor in several important characteristics of a vehicle. For all vehicles, especially cars, variations in clearance represent a trade-off between handling, ride quality, and practicality. A higher ride height and ground clearance means that the wheels have more vertical room to travel and absorb road shocks. The car is also more capable of being driven on roads that are not level without scraping against surface obstacles and possibly damaging the chassis and underbody. With a higher ride height, however, the center of mass of the car is higher, which makes for less precise and more dangerous handling characteristics (most notably, the chance of rollover is higher). Higher ride heights also typically adversely affect aerodynamic properties. This is why sports cars typically have very low clearances, while off-road vehicles and SUVs have higher ones.

Example ride heights

A road car usually has a ride height around , while an SUV usually lies around . Two well-known extremes are the Ferrari F40 with a ride height and the Hummer H1 with a ride height. The table below provides average ride heights for different car types which were available on the market in India in 2020:

Specialized uses

Underslung frame

Some cars have used underslung frames to achieve a lower ride height and the consequent improvement in center of gravity. The 1905–14 cars of the American Motor Car Company are one example.

Self-leveling

Self-leveling suspension systems are designed to maintain a constant ride height regardless of load. The suspension detects the load via mechanical or electronic means and raises or lowers the vehicle by inflating or deflating cylinders in the suspension to adjust the chassis height. Vehicles not equipped with self-leveling will pitch down at one end when laden; this adversely affects ride, handling, and aerodynamic properties.

Height adjustable

Some modern automobiles (such as the Audi Allroad Quattro and Tesla Model S) have height adjustable suspension, which can vary the ride height by adjusting the hydropneumatic suspension or air suspension. This adjustment can be automatic, depending on road conditions, and/or follow settings selected by the driver.

Adjustable shock absorber

Other, simpler suspension systems, such as coilover springs, offer a way of manually adjusting ride height (and often, spring stiffness) by compressing the spring in situ, using a threaded shaft and adjustable knob or nut.

Aftermarket

Lowering a car's suspension is a common and relatively inexpensive aftermarket modification. Many car enthusiasts prefer the more aggressive look of a lowered body, and there is an easily realized improvement in car handling from the lower center of gravity. Most passenger cars are produced such that one or two inches of lowering will not significantly increase the probability of damage.
On most automobiles, ride height is modified by changing the length of the suspension springs; this is the essence of many aftermarket suspension kits supplied by manufacturers such as KW, Eibach, and H&R. Lifted trucks are also popular with truck owners, who often upsize their wheels and tires when lifting their vehicles.

Military

For armored fighting vehicles (AFVs), ground clearance presents an additional factor in a vehicle's overall performance: a lower ground clearance means that the vehicle's hull sits lower to the ground and is thus harder to spot and harder to hit. The final design of any AFV reflects a compromise between being a smaller target on one hand, and having greater battlefield mobility on the other. Very few AFVs have top speeds at which car-like handling becomes an issue, though rollovers can and do occur. By contrast, an AFV is far more likely to need high ground clearance than a road vehicle.

Trucks

18-wheel tractor-trailers also have to take into consideration the ground clearance of both the tractor and, especially, the trailer in certain areas of uneven terrain, such as raised railroad crossings. Their extremely long wheelbase means that such terrain could catch the undercarriage of the trailer in the wide space between the axles, potentially leaving the truck stuck with no means to extricate itself.

Buses

In some areas buses are required to have a ground clearance of at least . Too much ride height can give the vehicle an excessively high center of gravity, which could cause it to be unstable or even flip.

Types of ground clearance

Axle clearance
Colloquially referred to as differential clearance or diff clearance. The distance from the bottom exterior of the axle housing or the bottom exterior of the differential housing, whichever is lower, to the ground.

Suspension clearance
The distance between the bottom of suspension components and the ground. In vehicles with independent suspension this is typically the distance between the bottom of the lower control arm and the ground.

Running clearance
The distance between the bottom of the lowest sprung mass and the ground.

See also

Approach and departure angles
Body lift
Breakover angle
Center of mass, automotive applications
Clearance
Height adjustable suspension
Loading gauge
Speed bump
Suspension lift
Lowrider
Turning diameter

References

Automotive engineering
Road safety
Ride height
[ "Engineering" ]
1,100
[ "Automotive engineering", "Mechanical engineering by discipline" ]
2,149,045
https://en.wikipedia.org/wiki/Development%20geography
Development geography is a branch of geography concerned with the standard of living and quality of life of a region's human inhabitants. In this context, development is a process of change that affects peoples' lives. It may involve an improvement in the quality of life as perceived by the people undergoing change. However, development is not always a positive process. Gunder Frank commented on the global economic forces that lead to the development of underdevelopment; this is covered in his dependency theory.

In development geography, geographers study spatial patterns in development. They try to identify characteristics by which development can be measured, looking at economic, political, and social factors. They seek to understand both the geographical causes and consequences of varying development. Studies compare More Economically Developed Countries (MEDCs) with Less Economically Developed Countries (LEDCs). Variations within countries are also examined, such as the differences between northern and southern Italy (the Mezzogiorno).

Quantitative indicators

Quantitative indicators are numerical indications of development. Economic indicators include GNP (Gross National Product) per capita, unemployment rates, energy consumption, and the percentage of GNP in primary industries. Of these, GNP per capita is the most used, as it measures the value of all the goods and services produced in a country, excluding those produced by foreign companies, hence measuring the economic and industrial development of the country.

However, using GNP per capita also has many problems. First, it does not take into account the distribution of money, which can often be extremely unequal, as in the UAE, where oil money has been collected by a rich elite and has not flowed to the bulk of the population. Second, GNP does not measure whether the money produced is actually improving people's lives, and this is important because in many MEDCs there are large increases in wealth over time but only small increases in happiness. Third, the GNP figure rarely takes into account the unofficial economy, which includes subsistence agriculture and cash-in-hand or unpaid work, and which is often substantial in LEDCs. In LEDCs it is often too expensive to accurately collect this data, and some governments intentionally or unintentionally release inaccurate figures. In addition, the GNP figure is usually given in US dollars, which due to changing currency exchange rates can distort the money's true street value, so it is often converted using purchasing power parity (PPP), in which the actual comparative purchasing power of the money in the country is calculated.

Other indicators

Social indicators in general include access to clean water and sanitation (which indicate the level of infrastructure developed in the country) and the adult literacy rate, measuring the resources the government has to meet the needs of the people. Demographic indicators include the birth rate, death rate and fertility rate, which indicate the level of industrialization. Health indicators (a sub-factor of demographic indicators) include nutrition (calories per day, calories from protein, percentage of population with malnutrition), infant mortality and population per doctor, which indicate the availability of healthcare and sanitation facilities in a country. Environmental indicators include how much a country does for the environment.
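The PPP adjustment described above is simple arithmetic: divide income in local currency by a PPP conversion factor (local currency units per international dollar) rather than by the market exchange rate. The following Python sketch uses entirely hypothetical country names, incomes, and conversion factors — none are real statistics — purely to illustrate how far the two conversions can diverge.

```python
# Hypothetical illustration of market-rate vs. PPP conversion of GNP per capita.
# All figures are invented for the example, not real statistics.

# GNP per capita in local currency units (LCU), per person
gnp_per_capita_lcu = {"Country A": 45_000, "Country B": 1_200_000}

# Market exchange rate: LCU per US dollar (hypothetical)
market_rate = {"Country A": 1.0, "Country B": 80.0}

# PPP conversion factor: LCU per "international dollar" (hypothetical).
# A factor lower than the market rate means local prices are cheaper, so
# the same income buys more than the exchange rate alone suggests.
ppp_factor = {"Country A": 1.0, "Country B": 25.0}

for country in gnp_per_capita_lcu:
    nominal = gnp_per_capita_lcu[country] / market_rate[country]
    ppp = gnp_per_capita_lcu[country] / ppp_factor[country]
    print(f"{country}: ${nominal:,.0f} at market rates, ${ppp:,.0f} PPP-adjusted")
```

Here Country B's income converts to $15,000 at market rates but $48,000 in PPP terms; the PPP figure is the better guide to what incomes actually buy locally.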
Composite indicators

The table above compares the GDP per capita and HDI in three select countries. In this instance, Gross domestic product (GDP) is used instead of Gross national product (GNP). The difference between the two terms is that GDP refers to all finished services and goods produced physically within a country, while GNP refers to all finished services and goods owned by a country's citizens, whether or not those goods are produced in that country.

PQLI

Other composite measures include the PQLI (Physical Quality of Life Index), a precursor to the HDI which used the infant mortality rate instead of GNP per capita and rated countries from 0 to 100. It was calculated by assigning each country a score of 0 to 100 for each indicator compared with other countries in the world; the average of these three numbers gives the PQLI of a country.

HPI

The HPI (Human Poverty Index) is used to calculate the percentage of people in a country who live in relative poverty. In order to better differentiate the number of people in abnormally poor living conditions, the HPI-1 is used in developing countries, and the HPI-2 is used in developed countries. The HPI-1 is calculated based on the percentage of people not expected to survive to 40, the adult illiteracy rate, the percentage of people without access to safe water and health services, and the percentage of children under 5 who are underweight. The HPI-2 is calculated based on the percentage of people who do not survive to 60, the adult functional illiteracy rate, and the percentage of people living below 50% of median personal disposable income.

GDI

The GDI (Gender-related Development Index) measures gender equality in a country in terms of life expectancy, literacy rates, school attendance, and income.

Qualitative indicators

Qualitative indicators include descriptions of living conditions and people's quality of life. They are useful in analyzing features that are not easily calculated or measured in numbers, such as freedom, corruption, or security, which are largely non-material benefits.
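To make the composite-indicator arithmetic above concrete, the sketch below builds a PQLI-style index: each of three indicators is scaled to 0–100 relative to the other countries in the set, and the three scores are averaged. The countries and figures are entirely hypothetical, and min–max scaling is only one plausible reading of "a score of 0 to 100 for each indicator compared with other countries".

```python
# Hypothetical PQLI-style composite index: scale each indicator to 0-100
# relative to the other countries, then take the plain average.
# All figures are invented for the example, not real statistics.

indicators = {
    # country: (literacy rate %, infant mortality per 1,000, life expectancy)
    "Country A": (99.0, 4.0, 81.0),
    "Country B": (75.0, 45.0, 63.0),
    "Country C": (55.0, 90.0, 52.0),
}

def scale(value, worst, best):
    """Map value onto 0-100, where `worst` -> 0 and `best` -> 100."""
    return 100.0 * (value - worst) / (best - worst)

lit = [v[0] for v in indicators.values()]
imr = [v[1] for v in indicators.values()]
life = [v[2] for v in indicators.values()]

for country, (l, m, e) in indicators.items():
    scores = (
        scale(l, min(lit), max(lit)),    # higher literacy is better
        scale(m, max(imr), min(imr)),    # lower infant mortality is better
        scale(e, min(life), max(life)),  # higher life expectancy is better
    )
    pqli = sum(scores) / 3               # the composite is the plain average
    print(f"{country}: PQLI = {pqli:.1f}")
```

Note how the direction of each indicator matters: infant mortality is inverted so that the worst observed value maps to 0 and the best to 100, mirroring the other two indicators.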
Geographic variations in development

There is considerable spatial variation in development rates. The most famous pattern in development is the North–South divide, which separates the rich North, or developed world, from the poor South. This line of division is not as straightforward as it sounds and splits the globe into two main parts; it is also known as the Brandt Line. The "North" in this divide is regarded as being North America, Europe, Russia, South Korea, Japan, Australia, New Zealand and the like; the countries within this area are generally the more economically developed. The "South" therefore encompasses the remainder of the Southern Hemisphere, mostly consisting of LEDCs. Another possible dividing line is the Tropic of Cancer, with the exceptions of Australia and New Zealand.

It is critical to understand that the status of countries is far from static, and the pattern is likely to become distorted with the fast development of certain southern countries, many of them NICs (Newly Industrialised Countries) including India, Thailand, Brazil, Malaysia, Mexico and others. These countries are experiencing sustained fast development on the back of growing manufacturing industries and exports. Most countries are experiencing significant increases in wealth and standard of living. However, there are unfortunate exceptions to this rule. Notably, some of the former Soviet Union countries have experienced major disruption of industry in the transition to a market economy. Many African nations have recently experienced reduced GNPs due to wars and the AIDS epidemic, including Angola, Congo, Sierra Leone and others. Arab oil producers rely very heavily on oil exports to support their GDPs, so any reduction in oil's market price can lead to rapid decreases in GNP. Countries which rely on only a few exports for much of their income are very vulnerable to changes in the market value of those commodities and are often derogatorily called banana republics. Many developing countries do rely on exports of a few primary goods for a large amount of their income (coffee and timber, for example), and this can create havoc when the value of these commodities drops, leaving them with no way to pay off their debts.

Within countries, the pattern is that wealth is more concentrated around urban areas than rural areas. Wealth also tends towards areas with natural resources or areas that are involved in tertiary (service) industries and trade. This leads to a gathering of wealth around mines and financial centres such as New York, London and Tokyo.

Geography can also affect economic development in a number of ways. Analysis of current data sets shows three significant implications of geography for developing nations. First, access to sea routes is important; this has been noted as far back as Adam Smith. Sea travel is much cheaper and faster than land travel, leading to a wider and quicker dissemination of both resources and ideas, both of which are integral to economic stimulus. Second, geography dictates the prevalence of disease: for example, the World Health Organization estimates roughly 300–500 million new cases of malaria every year, and malaria is largely associated with nations that have struggled to achieve sound economic development. Not only does disease decrease labor productivity, but it also changes the age structure of a country: as adults die from disease, fertility rises to keep up with the high death rates, and the population leans heavily toward children. High fertility lowers the quality of life for each child, due to a decrease in the resources allocated to each of them, and also decreases labor productivity for women. The third way geography affects development is through agricultural productivity. Temperate regions have shown the highest output of major grains; regions such as the African savanna yield comparatively little value for the labor cost. Low agricultural output means that a larger portion of the population must spend its efforts in agriculture, leading to slower urban development. This, in turn, discourages technological advance: an essential source of development for the twenty-first century.

Barriers to international development

Geographers, along with other social scientists, have recognized that certain factors present in a given society may impede the social and economic development of that society.
Factors which have been identified as obstructing the economic and social welfare of developing societies include:

Lack of education
Lack of healthcare
Pervasiveness of intoxicating drugs
Weak political, social, and economic institutions
Ineffective taxation
Environmental degradation
Lack of religious/gender/racial/sexual freedoms
Indebtedness
Protectionist barriers to trade
Foreign aid
Dependence upon primary resource exports
Unequal distribution of wealth
Inhospitable climate

Effective governments may address many barriers to economic and social development; however, in many instances this is challenging due to the path dependency societies develop regarding many of these issues. Some barriers to development may be impossible to address, such as climatic barriers. In these cases, societies must evaluate whether such climatic barriers dictate that a given settlement must be relocated in order to enjoy greater economic development. Many scholars agree that foreign aid provided to developing nations is ineffective and in many instances counterproductive. This is due to the manner in which foreign aid changes the incentives for productivity in a given developing society, and the manner in which it has the tendency to corrupt the governments responsible for its allocation and distribution. Cultural barriers to development, such as discrimination based on gender, race, religion, or sexual orientation, are challenging to address in certain oppressive societies, though recent progress has been significant in some societies. While the aforementioned barriers to economic growth and development are most prevalent in the less developed economies of the world, even the most developed economies are plagued by select barriers to development, such as drug prohibition and income inequality.

Aid

MEDCs (More Economically Developed Countries) can give aid to LEDCs (Less Economically Developed Countries). There are several types of aid:

Governmental (bilateral) aid
International organizational (multilateral) aid, e.g. the World Bank
Voluntary aid from individuals, often mediated through NGOs
Short-term/emergency aid (humanitarian assistance)
Long-term/sustainable aid
Non-governmental organization (NGO) aid

Aid can be given in several ways: through money, materials, or skilled people (e.g. teachers). Aid has advantages. Short-term or emergency aid mostly helps people in LEDCs survive a natural disaster (earthquake, tsunami, volcanic eruption, etc.) or a human one (civil war, etc.). Aid also helps the recipient country (the country that receives aid) become more developed. However, aid also has disadvantages. Often aid does not even reach the poorest people, and often money gained from aid is used to build infrastructure (bridges, roads, etc.) which only the rich can use. Also, the recipient country can become more dependent on aid from the donor country (the country giving aid).

Whilst the above conception of aid has been the most pervasive within development geography work, it is important to remember that the aid landscape is far more complex than one-directional flows from 'developed' to 'developing' countries. Development geographers have been at the forefront of research that aims to understand both the material exchanges and the discourse surrounding 'South–South' development cooperation.
'Non-traditional' foreign aid from Southern, Middle Eastern and post-Socialist states (those outside the Development Assistance Committee (DAC) of the OECD) provide alternative development discourses and approaches to that of the mainstream Western model. Development geographers seek to examine the geopolitical drivers behind the aid donor programmes of "LEDCs", as well as the discursive symbolic repertoires of non-DAC donor states. Two illustrative examples of the complex aid landscape are that of China, which has been active as an aid donor throughout the latter half of the twentieth century but published its first report on foreign aid policy as recently as 2011 and India, an often cited aid recipient, but which has had donor programmes to Nepal and Bhutan since the 1950s. See also Cultural geography Environmental determinism References Notes Allen J. Scott & Gioacchino Garofoli (2007) Development on the Ground. Routledge, London. Social and Spatial Inequalities International development Economic geography Human geography Geography
Development geography
[ "Environmental_science" ]
2,677
[ "Environmental social science", "Human geography" ]
2,149,148
https://en.wikipedia.org/wiki/Municipal%20solid%20waste
Municipal solid waste (MSW), commonly known as trash or garbage in the United States and rubbish in Britain, is a waste type consisting of everyday items that are discarded by the public. "Garbage" can also refer specifically to food waste, as in a garbage disposal; the two are sometimes collected separately. In the European Union, the semantic definition is 'mixed municipal waste,' given waste code 20 03 01 in the European Waste Catalog. Although the waste may originate from a number of sources that have nothing to do with a municipality, the traditional role of municipalities in collecting and managing these kinds of waste has produced the particular etymology 'municipal.'

Composition

The composition of municipal solid waste varies greatly from municipality to municipality, and it changes significantly with time. In municipalities which have a well-developed waste recycling system, the waste stream mainly consists of intractable wastes such as plastic film and non-recyclable packaging materials. At the start of the 20th century, the majority of domestic waste (53%) in the UK consisted of coal ash from open fires. In developed areas without significant recycling activity, it predominantly includes food wastes, market wastes, yard wastes, plastic containers and product packaging materials, and other miscellaneous solid wastes from residential, commercial, institutional, and industrial sources. Most definitions of municipal solid waste do not include industrial wastes, agricultural wastes, medical waste, radioactive waste or sewage sludge. Waste collection is performed by the municipality within a given area. The term residual waste relates to waste left from household sources containing materials that have not been separated out or sent for processing.

Waste can be classified in several ways, but the following list represents a typical classification:

Biodegradable waste: food and kitchen waste, green waste, paper (most can be recycled, although some difficult-to-compost plant material may be excluded)
Recyclable materials: paper, cardboard, glass, bottles, jars, tin cans, aluminum cans, aluminum foil, metals, certain plastics, textiles, clothing, tires, batteries, etc.
Inert waste: construction and demolition waste, dirt, rocks, debris
Electrical and electronic waste (WEEE) – electrical appliances, light bulbs, washing machines, TVs, computers, screens, mobile phones, alarm clocks, watches, etc.
Composite wastes: waste clothing, Tetra Pak food and drink cartons, waste plastics such as toys and plastic garden furniture
Hazardous waste: including most paints, chemicals, tires, batteries, light bulbs, electrical appliances, fluorescent lamps, aerosol spray cans, and fertilizers
Toxic waste: including pesticides, herbicides, and fungicides
Biomedical waste: expired pharmaceutical drugs, etc.

For example, typical municipal solid waste in China is composed of 55.9% food residue, 8.5% paper, 11.2% plastics, 3.2% textiles, 2.9% wood waste, 0.8% rubber, and 18.4% non-combustibles.

Components of solid waste management

The municipal solid waste industry has four components: recycling, composting, disposal, and waste-to-energy via incineration. There is no single approach that can be applied to the management of all waste streams; therefore the Environmental Protection Agency, a U.S. federal government agency, developed a hierarchy ranking strategies for managing municipal solid waste.
The waste management hierarchy is made up of four levels, ordered from most to least preferred based on environmental soundness: source reduction and reuse; recycling or composting; energy recovery; treatment and disposal.

Collection

The functional element of collection includes not only the gathering of solid waste and recyclable materials, but also the transport of these materials, after collection, to the location where the collection vehicle is emptied. This location may be a materials processing facility, a transfer station or a landfill disposal site.

Waste handling and separation, storage and processing at the source

Waste handling and separation involves activities associated with waste management until the waste is placed in storage containers for collection. Handling also encompasses the movement of loaded containers to the point of collection. Separating different types of waste components is an important step in the handling and storage of solid waste at the source of collection.

Segregation, processing and transformation of solid wastes

The types of means and facilities that are now used for the recovery of waste materials that have been separated at the source include kerbside collection, drop-off, and buy-back centres. The separation and processing of wastes that have been separated at the source, and the separation of commingled wastes, usually occur at a materials recovery facility, transfer stations, combustion facilities, and treatment plants.

Transfer and transport

This element involves two main steps. First, the waste is transferred from a smaller collection vehicle to larger transport equipment. The waste is then transported, usually over long distances, to a processing or disposal site.

Disposal

Today, the disposal of wastes by landfilling or landspreading is the ultimate fate of all solid wastes, whether they are residential wastes collected and transported directly to a landfill site, residual materials from materials recovery facilities (MRFs), residue from the combustion of solid waste, compost, or other substances from various solid waste processing facilities. A modern sanitary landfill is not a dump; it is an engineered facility used for disposing of solid wastes on land without creating nuisances or hazards to public health or safety, such as insect problems or the contamination of groundwater.

Reusing

In recent years, environmental organizations such as Freegle or The Freecycle Network have been gaining popularity for their online reuse networks. These networks provide a worldwide online registry of unwanted items, which would otherwise be thrown away, for individuals and nonprofits to reuse or recycle. This free Internet-based service reduces landfill pollution and promotes the gift economy.

Landfills

Landfills are created by land dumping. Land dumping methods vary; most commonly, waste is dumped en masse into a designated area, usually a hole or sidehill. After the waste is dumped, it is compacted by large machines. When the dumping cell is full, it is "sealed" with a plastic sheet and covered in several feet of dirt. This is the primary method of dumping in the United States because of the low cost and abundance of unused land in North America. Landfills are regulated in the US by the Environmental Protection Agency, which enforces standards provided in the Resource Conservation and Recovery Act, such as requiring liners and groundwater monitoring.
This is because landfills pose the threat of pollution and can contaminate groundwater. The signs of pollution are effectively masked by disposal companies, and it is often hard to see any evidence. Usually, landfills are surrounded by large walls or fences hiding the mounds of debris, and large amounts of chemical odor-eliminating agents are sprayed in the air surrounding landfills to hide the evidence of the rotting waste inside.

Energy generation

Municipal solid waste produces enormous amounts of methane, a potent greenhouse gas; however, nearly 90% of these methane emissions could be avoided with existing technologies. Municipal solid waste can also be used to generate energy because of the lipid content present within it; many MSW products can be converted into clean energy if the lipid content can be accessed and utilized. Several technologies have been developed that make the processing of MSW for energy generation cleaner and more economical than ever before, including landfill gas capture, combustion, pyrolysis, gasification, and plasma arc gasification.

While older waste incineration plants emitted high levels of pollutants, recent regulatory changes and new technologies have significantly reduced this concern. United States Environmental Protection Agency (EPA) regulations in 1995 and 2000 under the Clean Air Act have succeeded in reducing emissions of dioxins from waste-to-energy facilities by more than 99 percent below 1990 levels, while mercury emissions have been reduced by over 90 percent. The EPA noted these improvements in 2003, citing waste-to-energy as a power source "with less environmental impact than almost any other source of electricity".

See also

:Category:Waste by country
Garbology (study of modern refuse and trash)
List of waste management acronyms
MSW/LFG (municipal solid waste and landfill gas)
Methanol fuel#History and production
Sewage
Waste management
Waste minimisation
Global waste trade

References

Further reading

Waste
Municipal solid waste
[ "Physics" ]
1,697
[ "Materials", "Waste", "Matter" ]